Interesting academic papers about comment sections


A paper on the impact of anonymity, tested on TechCrunch comments in 2011, comparing Disqus and Facebook:

Omernick, E. & Sood, S.O. (2013). The Impact of Anonymity in Online Communities. 2013 International Conference on Social Computing (SocialCom)


Here’s a paper analyzing 126 submissions on how to improve comment spaces on news articles: From Public Spaces to Public Sphere, by Rodrigo Zamith & Seth C. Lewis.

“Four main themes emerged in the submissions: a need to (1) better organize content, (2) moderate content more effectively, (3) unite disjointed discourse, and (4) increase participation while promoting diversity. We find in these proposed solutions the possibility for relatively low-cost, easy-to-build systems that could moderate comments more efficiently while also facilitating more civil, cohesive, and diverse discourse; however, we also find the lingering danger of designing new systems that could perpetuate old problems such as fragmentation, filter bubbles, and homogenization.”


In search of online deliberation: Towards a new method for examining the quality of online discussions


Plenty of great papers are cited in our blog post on the real name fallacy by @natematias.


Oh hey, how about our own paper with the Engaging News Project at UT-Austin?

Comment Section Survey Across 20 News Sites, Natalie Jomini Stroud, Emily Van Duyn, Alexis Alizor, Alishan Alibhai, & Cameron Lang


Slovenia has a law that makes the editor-in-chief legally responsible for comments on their site. (More on that here: Shutting down onsite comments: a comprehensive list of all news organisations.) Here’s a paper about how that could be challenged by European human rights laws:


Anika Gupta’s Masters thesis from MIT, Towards a better inclusivity: online comments and community at news organizations


Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions, by Justin Cheng, Michael Bernstein, Cristian Danescu-Niculescu-Mizil & Jure Leskovec


The New Governors: The People, Rules, and Processes Governing Online Speech, by Kate Klonick, Harvard Law Review

This one is about moderation practices at Twitter, Facebook, and YouTube. Also contains a good summary of the history of Section 230 of the CDA, as well as the history of moderation on those platforms.


What we talk about when we talk about talking: Ethos at work in an online community, PhD thesis by Quinn Warnick, Iowa State University 2010

  • A virtual ethnography of Metafilter


The Bag of Communities: Identifying Abusive Behavior Online with Preexisting Internet Data
A smart way to overcome the frequent issue of lack of training data for your machine learning system.
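The core idea, roughly, is to borrow labels from preexisting communities whose norms are already known instead of hand-labeling comments from your own site. A minimal sketch of that idea follows; the corpora, words, and the simple Naive Bayes classifier here are toy stand-ins for illustration, not the paper’s actual datasets, features, or models:

```python
# Sketch of a "bag of communities" approach: train on comments from
# communities already known to be abusive or benign, then classify
# unlabeled comments from a target community. Toy data throughout.

from collections import Counter
import math

# Labeled-by-community training data (hypothetical examples).
abusive_corpus = ["you are an idiot", "get lost you moron", "idiot troll"]
benign_corpus = ["great article thanks", "interesting point well made", "thanks for sharing"]

def word_counts(corpus):
    """Aggregate word counts across all documents in a corpus."""
    counts = Counter()
    for doc in corpus:
        counts.update(doc.split())
    return counts

def log_likelihood(comment, counts, total, vocab_size):
    """Naive Bayes log-likelihood with add-one smoothing."""
    return sum(
        math.log((counts[w] + 1) / (total + vocab_size))
        for w in comment.split()
    )

def classify(comment, abusive_counts, benign_counts):
    """Label a comment by whichever community corpus it better resembles."""
    vocab = set(abusive_counts) | set(benign_counts)
    la = log_likelihood(comment, abusive_counts, sum(abusive_counts.values()), len(vocab))
    lb = log_likelihood(comment, benign_counts, sum(benign_counts.values()), len(vocab))
    return "abusive" if la > lb else "benign"

abusive_counts = word_counts(abusive_corpus)
benign_counts = word_counts(benign_corpus)

# Unlabeled comments from the target community.
print(classify("you idiot", abusive_counts, benign_counts))                 # abusive
print(classify("thanks interesting article", abusive_counts, benign_counts))  # benign
```

The point of the design is that the expensive part, labeling, is inherited for free from the source communities; only the final classifier is applied to the target site.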

Conflict in Comments: Learning but Lowering Perceptions, with Limits
Participants reported learning most from comments containing constructive conflict.


“This is a Throwaway Account”: Temporary Technical Identities and Perceptions of Anonymity in a Massive Online Community
Alex Leavitt, Annenberg School for Communication & Journalism, University of Southern California


Constructing the cyber-troll: Psychopathy, sadism, and empathy
Natalie Sest and Evita March, Federation University, School of Health Science and Psychology, Australia

• Trolling is an online antisocial behaviour with negative psychological outcomes.
• Current study predicted trolling perpetration from gender and personality.
• Trolls more likely to be male with high levels of trait psychopathy and sadism.
• Trolls have lower affective empathy, and psychopathy moderates cognitive empathy.
• Results have implications for establishing education and prevention programs.


A low standard of comments increases reading among those who already participate, but “non-users are even more frustrated by the low quality of discussions. They consider such participation activities to be a waste of time, and are not willing to register.”
User comments: motives and inhibitors to write and read
Nina Springer, Ines Engelmann & Christian Pfaffinger

Discussion of how comment incivility can affect how reliable you find the article:
The “Nasty Effect:” Online Incivility and Risk Perceptions of Emerging Technologies
Ashley A. Anderson, Dominique Brossard, Dietram A. Scheufele, Michael A. Xenos, Peter Ladwig

Suggests that frequent commenters are more civil than occasional ones:
Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments
Kevin Coe, Kate Kenski, Stephen A. Rains

How incivility reduces trust in the information:
The Role of Civility and Anonymity on Perceptions of Online Comments
Joseph Graf, Joseph Erba & Ren-Whei Harn

Have comments become so bad that their mere presence makes people trust journalism less?
Effects of civility and reasoning in user comments on perceived journalistic quality
Fabian Prochazka, Patrick Weber & Wolfgang Schweiger


A paper from Danielle Citron and Benjamin Wittes suggesting a revision to Section 230 of the Communications Decency Act, the part of U.S. law that protects American companies from being held liable for the contents of user-submitted comments.

They suggest dividing websites into Good Samaritans (who make efforts to remove harmful content, and should receive immunity through the law) and Bad Samaritans (who don’t).


Outnumbered But Well-Spoken: Female Commenters in the New York Times
By Emma Pierson
Department of Statistics, Oxford University & Stanford University
CSCW '15 Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing
Vancouver, BC, Canada, March 14–18, 2015

Using eight months of online comments on New York Times articles, we find that only 28% of commenters of identifiable gender are female, but that their comments receive more recommendations from other readers…We discuss the implications of these gender differences for democratic discourse and suggest ways to increase gender parity.



Discussion quality diffuses in the digital public square
By George Berry, Cornell University and Sean J. Taylor, Facebook

A paper on sorting comments. It’s about how Facebook compared ‘Most recent’ comments with ‘Social Feedback’ comments (Most Liked, etc.) to see how people engaged with each.


You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech

ESHWAR CHANDRASEKHARAN, Georgia Institute of Technology
UMASHANTHI PAVALANATHAN, Georgia Institute of Technology
ANIRUDH SRINIVASAN, Georgia Institute of Technology
ADAM GLYNN, Emory University
JACOB EISENSTEIN, Georgia Institute of Technology
ERIC GILBERT, University of Michigan

This paper looks at the success of Reddit’s closing of communities filled with hate speech in reducing hate speech across the platform.

Key points:

The empirical work in this paper suggests that when narrowly applied to small, specific groups, banning deviant hate groups can work to reduce and contain the behavior.

The banning of r/fatpeoplehate and r/CoonTown led to the rise of alternatives on Voat, for example, where the core group of users from Reddit reorganized. For instance, in another ongoing study, we observed that 1,536 r/fatpeoplehate users have exact-match usernames on Voat. The users of the Voat equivalents of the two banned subreddits continue to engage in racism and fat-shaming [22, 45]. In a sense, Reddit has made these users (from banned subreddits) someone else’s problem. To be clear, from a macro perspective, Reddit’s actions likely did not make the internet safer or less hateful. One possible interpretation, given the evidence at hand, is that the ban drove the users from these banned subreddits to darker corners of the internet.


Classification and Its Consequences for Online Harassment: Design Insights from HeartMob

LINDSAY BLACKWELL, University of Michigan School of Information
JILL DIMOND, Sassafras Tech Collective
SARITA SCHOENEBECK, University of Michigan School of Information
CLIFF LAMPE, University of Michigan School of Information

We examine systems of classification enacted by technical systems, platform policies, and users to demonstrate how 1) labeling serves to validate (or invalidate) harassment experiences; 2) labeling motivates bystanders to provide support; and 3) labeling content as harassment is critical for surfacing community norms around appropriate user behavior… we argue that fully addressing online harassment requires the ongoing integration of vulnerable users’ needs into the design and moderation of online platforms.


Not Funny? The Effects of Factual Versus Sarcastic Journalistic Responses to Uncivil User Comments
Marc Ziegele, Pablo B. Jost

Incivility in user comments on news websites has been discussed as a significant problem of online participation. Previous research suggests that news outlets should tackle this problem by interactively moderating uncivil postings and asking their authors to discuss in a more civil manner. We argue that this kind of interactive comment moderation as well as different response styles to uncivil comments (i.e., factual vs. sarcastic) differently affect observers’ evaluations of the discussion atmosphere, the credibility of the news outlet, the quality of its stories, and ultimately observers’ willingness to participate in the discussions. Results from an online experiment show that factual responses to uncivil comments indirectly increase participation rates by suggesting a deliberative discussion atmosphere. In contrast, sarcastic responses indirectly reduce participation rates due to a decrease in the credibility of the news outlet and the quality of its stories. Sarcastic responses however increase the entertainment value of the discussions.