The Real Name Fallacy


First published on our blog

By J. Nathan Matias

People often say that online behavior would improve if every comment system forced people to use their real names. It sounds like it should be true – surely nobody would say mean things if they faced consequences for their actions?

Yet the balance of experimental evidence over the past thirty years suggests that this is not the case. Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment.

We need to change our entire approach to the question. Our concerns about anonymity are overly simplistic; system design can’t solve social problems without actual social change.

Why Did We Think That Anonymity Was The Problem?

The idea that anonymity is the real problem with the internet is based in part on misreadings of theories formed more than thirty years ago.

In the early 1980s, many executives were unsure if they should allow employees to use computers and email. Managers worried that allowing employees to communicate across the company would enable labor organizing, conflict, and inefficiency by replacing formal corporate communication with informal digital conversations.

As companies debated email, a group of social psychologists led by Sara Kiesler published experiments and speculations on the effects of “computer-mediated communication” in teams. Their articles inspired decades of valuable research and offered an early popular argument that anonymity might be a source of social problems online.

In one experiment, the researchers asked computer science students who were complete strangers to make group decisions about career advice. They hosted deliberations around a table, through anonymous text chat, or through chat messages that displayed names. They also compared real-time chat to email. They found that while online decisions were more equitable, the decisions also took longer. Students also used more swear words and insults in chat conversations on average. But the researchers did not find a difference between the anonymous and non-anonymous groups [19].

Writing about unanswered questions for future research, Kiesler speculated in 1984 that since computers included less information on social context, online communications might increase social conflict and disputes with employers [13]. As Kiesler’s speculations became cited thousands of times, her call for more research was often taken as scientific fact. Her later, correlational findings were also misinterpreted as true effects [20]. Along the way, Kiesler’s nuanced appeal for changes in social norms was lost and two misconceptions became common:

(a) that social problems can be attributed to the design of computer systems, and

(b) that anonymity is to blame.

These ideas aren’t reflected in the research. In 2016, a systematic review of 16 lab studies by Guanxiong Huang and Kang Li of Michigan State University found that on average, people are actually more sensitive to group norms when they are less identifiable to others [11].

While some non-causal studies have found associations between anonymity and disinhibited behavior, this correlation probably results from the technology choices of people who are already intending conflict or harm [12]. Under lab conditions, people do behave somewhat differently in conversations under different kinds of social identifiability, something psychologists call a “deindividuation” effect.

Despite the experimental evidence, the misconception of online anonymity as a primary cause of social problems has stuck. Since the 1980s, anonymity has become an easy villain to blame for whatever fear people hold about social technology, even though lab experiments now point in a different direction.

Nine Key Facts on Anonymity and Social Problems Online

Beyond the lab, what else does research tell us about information disclosure and online behavior?

Roughly half of US adult victims of online harassment already know who their attacker is, according to a nationally representative study by Pew’s Maeve Duggan in 2014 [6]. The study covered a range of behaviors from name calling to threats and domestic abuse. Even if harassment related to protected identities could be “solved” in one effort to move to “real names,” more than half of US harassment victims, over 16 million adults, would be unaffected.

Conflict, harassment, and discrimination are social and cultural problems, not just online community problems. In societies including the US, where violence and mistreatment of women, people of color, and marginalized people are common, we can expect similar problems in people’s digital interactions [1]. Lab and field experiments continue to show the role that social norms play in shaping individual behavior; if the norms favor harassment and conflict, people will be more likely to follow them. While most research and design focuses on changing the behavior of individuals, we may achieve better results by focusing on changing climates of conflict and prejudice [17,16].

Revealing personal information exposes people to greater levels of harassment and discrimination. While there is no conclusive evidence that displaying names and identities will reliably reduce social problems, many studies have documented the problems it creates. When people’s names and photos are shown on a platform, people who provide a service to them – drivers, hosts, buyers – reject transactions from people of color and charge them more [9,5,8]. Revealing marital status on DonorsChoose caused donors to give less to students with women teachers, in fields where women were a minority [18]. Gender- and race-based harassment are only possible if people know a person’s gender and/or race, and real names often give strong indications of both. Requiring people to disclose that information forces those risks upon them.

Companies that store personal information for business purposes also expose people to potentially serious risks, especially when that information is leaked. In the early 2010s, poorly-researched narratives about the effects of anonymity led to conflicts over real-name policies known as the “Nymwars.” This provided the justification for advertising-based business models that collect ever more of people’s personal information in the name of reducing online harm. Several high-profile breaches of websites have since demonstrated the risks of trusting companies with your personal information.

We also have to better understand if there is a trade-off between privacy and resources for public safety. Since platforms that collect more personal information have high advertising revenues, they can hire hundreds of staff to work on online safety. Paradoxically, platforms that protect people’s identities have fewer resources for protecting users. Since it’s not yet possible to compare rates of harassment between platforms, we cannot know which approach works best on balance.

It’s not just for trolls: identity protections are often the first line of defense for people who face serious risks online. According to a US nationally representative report by the Data & Society Institute, 43% of online harassment victims have changed their contact information and 26% disconnected from online networks or devices to protect themselves [15]. When people do withdraw, they are often disconnected from the networks of support they need to survive harassment. Pseudonymity is a common protective measure. One study on the reddit platform found that women, who are more likely to receive harassment, also use multiple pseudonymous identities at greater rates than men [14].

Requirements of so-called “real names” misunderstand how people manage identity across multiple social contexts, exposing vulnerable people to risks. In the book It’s Complicated, danah boyd shares what she learned by spending time with American teenagers, who commonly manage multiple nickname-based Facebook accounts for different social contexts [24]. Requiring a single online identity can collapse those contexts in embarrassing or damaging ways. In one story, boyd describes a college admissions officer who considered rejecting a black applicant after seeing gang symbols on the student’s social media page. The admissions officer hadn’t considered that the symbols might not have revealed the student’s intrinsic character; posting them might have been a way to survive in a risky situation. People who are exploring LGBTQ identities often manage multiple accounts to prevent disastrous collapses of context, safety practices that some platforms disallow [7].

Clear social norms can reduce problems even when people’s names and other identifying information aren’t visible. Social norms are our beliefs about what other people think is acceptable, and norms aren’t de-activated by anonymity. We learn them by observing other people’s behavior and being told what’s expected [2]. Earlier this year, I supported a 14-million-subscriber pseudonymous community in testing the effect of rule-postings on newcomer behavior. In preliminary results, we found that posting the rules at the top of a discussion caused first-time commenters to follow the rules 7 percentage points more often on average, from 75% to 82%.
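
If you want to run this kind of check on your own community’s data, a minimal sketch of the arithmetic looks like the snippet below. The counts are hypothetical placeholders rather than the study’s actual sample sizes, and the real analysis involved the full experimental design rather than a single test like this.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Compare two observed proportions with a pooled two-proportion z-test."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical counts: 2,000 first-time commenters per condition, with
# rule-following rates of 75% (control) and 82% (rules posted)
diff, z, p = two_proportion_ztest(1500, 2000, 1640, 2000)
print(f"difference: {diff:.3f}, z = {z:.2f}, p = {p:.4f}")
```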

People sometimes reveal their identities during conflicts in order to increase their influence and gain approval from others on their side. News comments, algorithmic trends, and other popular conversations often become networked battlegrounds, connected to existing conflict and discussions in other places online. Rather than fresh discussions whose norms you can establish, these conversations attract people who already strongly identify with a position and behavior elsewhere, which means that these large-scale struggles are very different from the small, decision-making meetings tested in anonymity lab experiments. Networks of “counterpublics” are common in democracies, where contention is a basic part of the political process [25,26,27]. This means that when people with specific goals try to reframe the politics of a conversation, they may gain more influence by revealing their pre-existing social status [28,29]. For example, in high-stakes discussions like government petitions, one case study from Germany found that aggressive commenters were more likely to reveal their identity than stay anonymous, perhaps in hopes that the comments would be more influential [30].

Abusive communities and hate groups do sometimes attempt to protect their identities, especially in cultures that legally protect groups while socially sanctioning them. But many hate groups operate openly in the attempt to seek legitimacy [4]. Even in pseudonymous settings, illegal activity can often be traced back to the actors involved, and companies can be compelled by courts to share user information, in the few jurisdictions with responsive law enforcement.

Yet law is reactive and cannot respond to escalating risks until something happens. In pseudonymous communities that organize to harm others, social norms are no help because they encourage prejudice and conflict. Until people in those groups break the law, the only people capable of intervening are courageous dissenters and platform operators [3].

Four Lessons For Designers and Communities

Advocates of real-name policies understand the profound value of working on preventing problems, even if the balance of research does not support their beliefs. Designers can become seduced by the technology challenges of detecting and responding to problems; we need to stop playing defense.

Designers need to see beyond cultural assumptions. Many of the lab experiments on “flaming,” “aggression,” and anonymity were conducted among privileged, well-educated people in institutions with formal policies and norms. Such people often believe that problem behaviors are non-normative. But prejudice and conflict are common challenges that many people face every day, problems that are socially reinforced by community and societal norms. Any designer who fails to recognize these challenges could unleash more problems than they solve.

Designers need to acknowledge that design cannot solve harassment and other social problems on its own. Preventing problems and protecting victims is much harder without the help of platforms, designers, and their data science teams. Yes, some design features do expose people to greater risks, and some kinds of nudges can work when social norms line up. But social change at any scale takes people, and we need to apply a similar depth of thought and resources to social norms as we do to design.

Finally, designers need to commit to testing the outcomes of efforts at preventing and responding to social problems. These are big problems, and addressing them is extremely important. The history of social technology is littered with good ideas that failed for years before anyone noticed.

Removing anonymity seemed, on the surface, like a good idea, but published research from the field and the lab has shown its ineffectiveness. By systematically evaluating your design and social interventions, you too can add to public knowledge on what works, and increase the likelihood that we can learn from our mistakes and build better systems.


J. Nathan Matias is a PhD candidate at the MIT Media Lab Center for Civic Media and an affiliate at the Berkman-Klein Center at Harvard. He conducts independent, public interest research on flourishing, fair, and safe participation online.


Beyond anonymity, if you are interested in learning more about what to do about social problems online, check out the online harassment resource guide to academic research, the list of resources at the FemTechNet Center for Solutions to Online Violence, and a report I facilitated on high-impact questions and opportunities for online harassment research and action. See also my recent article on the role of field experiments to monitor, understand, and establish social justice online.
[1] Sarah Banet-Weiser and Kate M. Miltner. #MasculinitySoFragile: culture, structure, and networked misogyny. Feminist Media Studies, 16(1):171-174, January 2016.

[2] Robert B. Cialdini, Carl A. Kallgren, and Raymond R. Reno. A focus theory of normative conduct: A theoretical refinement and reevaluation of the role of norms in human behavior. Advances in Experimental Social Psychology, 24:201-234, 1991.

[3] Danielle Keats Citron and Helen L. Norton. Intermediaries and hate speech: Fostering digital citizenship for our information age. Boston University Law Review, 91:1435, 2011.

[4] Jessie Daniels. Cyber Racism: White Supremacy Online and the New Attack on Civil Rights. Rowman & Littlefield Publishers, 2009.

[5] Jennifer L. Doleac and Luke C. D. Stein. The visible hand: Race and online market outcomes. The Economic Journal, 123(572):F469-F492, 2013.

[6] Maeve Duggan. Online Harassment. Pew Research Center, October 2014.

[7] Stefanie Duguay. “He has a way gayer Facebook than I do”: Investigating sexual identity disclosure and context collapse on a social networking site. New Media & Society, September 2014.

[8] Benjamin G. Edelman, Michael Luca, and Dan Svirsky. Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment. SSRN Scholarly Paper ID 2701902, Social Science Research Network, Rochester, NY, January 2016.

[9] Yanbo Ge, Christopher R. Knittel, Don MacKenzie, and Stephen Zoepf. Racial and gender discrimination in transportation network companies. Technical report, National Bureau of Economic Research, 2016.

[10] Arlie Russell Hochschild. The Managed Heart: Commercialization of Human Feeling. University of California Press, Berkeley, third edition, updated with a new preface, 1983.

[11] Guanxiong Huang and Kang Li. The Effect of Anonymity on Conformity to Group Norms in Online Contexts: A Meta-Analysis. International Journal of Communication, 10, January 2016.

[12] Adam N. Joinson. Disinhibition and the Internet. In Psychology and the Internet: Intrapersonal, Interpersonal, and Transpersonal Implications, pages 75-92, 2007.

[13] Sara Kiesler, Jane Siegel, and Timothy W. McGuire. Social psychological aspects of computer-mediated communication. American Psychologist, 39(10):1123-1134, 1984.

[14] Alex Leavitt. “This is a Throwaway Account”: Temporary Technical Identities and Perceptions of Anonymity in a Massive Online Community. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 317-327. ACM, 2015.

[15] Amanda Lenhart, Michele Ybarra, Kathryn Zickuhr, and Myeshia Price-Feeney. Online Harassment, Digital Abuse, and Cyberstalking in America. Report, Data & Society Institute, November 2016.

[16] Elizabeth Levy Paluck. The dominance of the individual in intergroup relations research: Understanding social change requires psychological theories of collective and structural phenomena. Behavioral and Brain Sciences, 35(6):443-444, 2012.

[17] Elizabeth Levy Paluck and Donald P. Green. Prejudice reduction: What works? A review and assessment of research and practice. Annual Review of Psychology, 60:339-367, 2009.

[18] Jason Radford. Architectures of Virtual Decision-Making: The Emergence of Gender Discrimination on a Crowdfunding Website. arXiv preprint arXiv:1406.7550, 2014.

[19] Jane Siegel, Vitaly Dubrovsky, Sara Kiesler, and Timothy W. McGuire. Group processes in computer-mediated communication. Organizational Behavior and Human Decision Processes, 37(2):157-187, 1986.

[20] Lee Sproull and Sara Kiesler. Reducing Social Context Cues: Electronic Mail in Organizational Communication. Management Science, 32(11):1492-1512, November 1986.

[21] Tiziana Terranova. Free labor: Producing culture for the digital economy. Social Text, 18(2):33-58, 2000.

[22] Kathi Weeks. Life within and against work: Affective labor, feminist critique, and post-Fordist politics. Ephemera, 7(1):233-249, 2007.

[23] JoAnne Yates. Control through Communication: The Rise of System in American Management, volume 6. JHU Press, 1993.

[24] danah boyd. It’s Complicated: The Social Lives of Networked Teens. Yale University Press, 2014.

[25] Nancy Fraser. Rethinking the public sphere: A contribution to the critique of actually existing democracy. Social Text, (25/26):56-80, 1990.

[26] Catherine R. Squires. Rethinking the black public sphere: An alternative vocabulary for multiple public spheres. Communication Theory, 12(4):446-468, 2002.

[27] Michael Warner. Publics and counterpublics. Public Culture, 14(1):49-90, 2002.

[28] Christian von Sikorski. The Effects of Reader Comments on the Perception of Personalized Scandals: Exploring the Roles of Comment Valence and Commenters’ Social Status. International Journal of Communication, 10, 2016.

[29] Robert D. Benford and David A. Snow. Framing processes and social movements: An overview and assessment. Annual Review of Sociology, 26:611-639, 2000.

[30] Katja Rost, Lea Stahel, and Bruno S. Frey. Digital social norm enforcement: Online firestorms in social media. PLoS ONE, 11(6):e0155923, 2016.



Nice article, thanks.

Roughly half of US adult victims of online harassment already know their attacker

I propose changing “already know their attacker” to “already know who their attacker is”. There’s a minor, but crucial difference.

Also there’s an instance of “to to” where only one “to” was meant.

Your friendly anonymous proofreader.


Fixed and changed, thank you!


Thank you for doing this. This has been driving me crazy for years. Glad to see someone else tackling it.


You’re very welcome! There’s still hope yet in commentary and community on news sites. In fact, we believe it’s essential.


This is such a great piece. That this myth is so stubbornly persistent is truly mystifying to me. Now I have a simple link to give out in response, at least.


Thanks Bassey. We commissioned it exactly for that reason - to have a link to share whenever people raise this persistent myth…


The sociologist Harry T Dyer has written a thoughtful response to this piece discussing how anonymity can contribute to creating an environment ripe for abuse. It’s definitely worth reading.


We’re working on tools which will ideally make commenters feel safer in online spaces, but how do you test the worst case scenario?

For every feature we create, we ask ourselves how it could go wrong or be used in a way other than intended. Empathizing and getting honest responses (in real time) are crucial to a feature’s success (or failure). In order to test our work fully, does this mean we have to recreate crisis scenarios for testing? If so, how do we do this in a way which will not harm our testers?

For example, during a team exercise, we were trying to empathize with engagement editors who sometimes ban commenters. We were very careful about the effect this exercise could have on team members given the sensitive nature of the comments included.


I have used my real name in comments for thirty years or so. And my real email address. However, I sometimes wonder if someone who disagrees with me might track me down. And sometimes I have had the feeling that prospective employers have searched my online comments, giving them information about my opinions that is none of their business.


I was one of the founding engineers on the Second Life virtual world system. We frequently had the “real names” discussion internally. I came down on the side of real names, but what I really wanted was stable pseudonyms.

Second Life does not use real names. Historically real names were actually impossible - new users could pick an arbitrary first name but had to select from a curated list of available last names. We frequently had new users complain that they couldn’t use their real name.

Second Life is a “sandbox” world – there aren’t many constraints on what you can and can’t do. This makes it easy for a malicious user to spoil the experience for others. For example, griefers could (and did) come to new user areas and create clouds of flying penises. Many new users found that offensive. 🙂

Griefers would get banned, but would quickly return with a new account and a new name. The clever ones would also generate a fresh IP address, so we had very little to use to figure out that it was the same person.

Among the engineers, I think I was the minority in believing that using real names might be helpful for dealing with abuse. I thought it would help reduce the frequency with which people created throwaway accounts in order to misbehave. That might have been naive. Over the years I’ve come to believe the problem wasn’t real name vs. pseudonym, it was the ease with which a user could create a new identity and be freed from the consequences of their prior malicious behavior.

I would love a mechanism by which no one has to know your real name, but everyone can tell that today’s user “Bob” is the same person as yesterday’s user “Alice”. (Let it expire after a while – you can decide to be Bob today, but for a month we’ll still let everyone know you used to be Alice.)
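
To make that idea concrete, here is a rough sketch of one way a platform could implement that kind of expiring linkability: derive a short tag from the account and the current time window, so posts from the same account stay visibly linked for about a month and then the link fades. The key, the function names, and the 30-day window are illustrative assumptions, not anything Second Life or any other platform actually does.

```python
import hmac
import hashlib
from datetime import date

SERVER_SECRET = b"rotate-and-store-me-securely"  # hypothetical server-side key

def linkage_tag(account_id: str, day: date, window_days: int = 30) -> str:
    """Return a short tag that stays constant for the same account within a
    rolling window and then changes, so "Bob" today is visibly the same person
    as "Alice" yesterday for roughly a month, without exposing the account."""
    epoch = day.toordinal() // window_days  # which ~30-day window this day falls in
    message = f"{account_id}:{epoch}".encode()
    digest = hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()
    return digest[:8]  # truncated, non-reversible tag shown next to posts

# Whatever display names the user picks, posts in the same window share a tag.
print(linkage_tag("account-12345", date(2017, 1, 10)))
print(linkage_tag("account-12345", date(2017, 1, 15)))  # same window, same tag
print(linkage_tag("account-12345", date(2017, 3, 1)))   # later window, new tag
```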

Aside: If I was designing a social service today I would require people to make a small upfront payment, which would require a real-world payment instrument. This would massively decrease the number of people who tried the service (goodbye network effects!) but it also would make policing much easier. If a user misbehaved, I could ban their payment instrument in addition to their account. Yes, once they ran out of personal credit cards they could find stolen cards online, but that would make it harder (and illegal) to continuously churn through accounts. It might be an interesting research area to look at abuse behavior in paid vs. free services.


Thanks James (if indeed that is your real name 🙂). I remember those early days of Second Life! I also remember the flying penises…

I agree with the idea of persistent pseudonyms. I also might suggest that the problem there wasn’t just the ease of creating new identities - it was the ease with which people could be disruptive with new accounts in a highly visible and annoying way with little recourse. The open sandbox is a terrific concept; however, it can never exist apart from non-virtual social realities.

I think that persistent pseudonyms that gain some kind of social capital are the best solution for most systems - give people something to lose that they’ve built up within the community itself, and welcome new members with careful onboarding, rewards, and attention that in some ways counterbalances their early lack of social capital.

As an aside, I’ve seen some people advocate for the use of the blockchain in creating persistent identity across sites without giving up a real-world identity. The potential downside there of course being that once you’ve attached a sufficient number of services to it, the metadata may quickly become enough to identify users anyway.

Also IIRC, Twitter right now asks certain users (verified, persistent abusers) to give their cell phone number, as a way of attaching some higher cost to churning through accounts. Certainly harder to change your cell than to get a low-cost prepaid credit card.

Where do you work now?


Great question! I think we may be using the word “test” in different ways. I’m not talking about a sandbox kind of system, where you introduce problems into a controlled lab environment in order to observe how people respond. Instead, I’m talking about a field experiment or A/B test, where instead of deploying a system fully, you might partially deploy it in a way that allows you to observe the effect.

Out in the wider world of a platform in production, people are already exposed to risks. And if you think something might benefit people (or reduce risks), but you’re not sure, you can ensure that you’re making the most helpful decision by testing the feature or system out in the field. And when the stakes/risks are higher, I believe that the obligation to experiment is also higher. If that ethical scale applies to testing life-saving drugs and suicide prevention interventions, I would argue that it applies no less to online risks.
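
To illustrate what a partial deployment can look like in practice, here is a minimal sketch of deterministic bucketing, where a stable hash of the user ID decides who sees a new feature and who stays in the control group. The 10% share, the experiment name, and the function are illustrative assumptions rather than a description of any particular platform’s tooling.

```python
import hashlib

def assign_condition(user_id: str, experiment: str, treatment_share: float = 0.10) -> str:
    """Deterministically assign a user to "treatment" or "control".

    Hashing the user ID together with the experiment name gives every user a
    stable bucket, so the same person always sees the same condition and the
    two groups can be compared afterwards."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to a number in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

# Example: expose roughly 10% of commenters to a hypothetical new safety feature.
for uid in ["user-001", "user-002", "user-003"]:
    print(uid, assign_condition(uid, "rules-at-top-2017"))
```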

What do we do about designs that aren’t yet in production or haven’t reached enough scale to be testable? I think there are two options, aside from all the great focus groups, surveys, and other engagement strategies:

  • design with an eye to what other research has found. That’s also why I think it’s imperative for us to do more research with platforms that already have scale, and why we’re working on CivilServant, which supports users in conducting their own experiments and contributing to open knowledge on what works in online moderation, without having to ask platforms for permission
  • Mechanical Turk and other microwork systems. Here it’s especially important to be careful, since you are de facto exposing people to an interaction they wouldn’t have if you weren’t paying them

Overall though, I think the answer to how to test things in the worst case scenario is to be a designer or organization that runs toward a crisis rather than away from it. If we work hard to establish systems of support for people who are at risk, or find the people who already do, we then have an ethical justification and perhaps moral obligation to make that support as good as it possibly can be.

Great question, Egrdina!


While I agree with most of the individual points, I completely disagree with the conclusion. The pivot is trust.

Ironically, to make this comment, I had to join “Coral” community. The first guideline for this community is “trust”.

You cannot trust an anonymous poster. And folk who aim to corrupt make full use of that, telling “just-so” stories anonymously. Our current history is overflowing with this sort.

To establish trust, you need a clear history. Using your real name is a means to that end … but not the only means.

In principle, I liked the notion of allowing anonymous posts, to protect good folk who might be harassed. In reality, anonymous posts were mostly used by the least honorable to turn the common dialog into a swamp.

Trust matters.

In the recent intensely polarizing elections, a few of my online friends promised to unfriend anyone who disagreed on a menu of points. I had to say on some of those points, I differed. So far, they have all refused to unfriend me, as they trust there is always measured reason in my responses.

The embarrassing truth is that I am a stereotype … or maybe an archetype. For more than a decade I have found that folk around me tend to react with respect and trust. I am an older white male - a father (and grandfather). For some reason, I engender automatic trust … read Malcolm Gladwell’s “Blink” book … and the common folk who make this instant judgement are not wrong. By my nature, I will always try to honor the trust given.

Trust matters. I wrote on this some time ago.


Nowadays I work at Google, on Chromebooks. No more social for me. 🙂

I was thinking about paid vs. free again… I recall that one of my coworkers said he subscribes to the NY Times online, which apparently gives him access to a special cooking site. He remarked that the comment threads there were amazingly civil. Perhaps it’s just the sort of people who like to cook, but perhaps it’s because they paid.

In the Second Life days we used to display the “age” of an account fairly prominently in the user profile. It was sometimes helpful in figuring out who was churning through accounts. If a user was “too good” at using the system for a 1-day-old account, then you knew something was up.

For more typical text-based comment systems (which I guess is a focus for Coral?) I suspect things like badges (like your moderator badge) for long-time users, paid users, etc. might help people sort out who to trust.


This is incredibly helpful. Thank you! I think this point can’t be said enough: “That’s also why I think it’s imperative for us to do more research with platforms that already have scale…” I would love to see publishers conducting more (and in some places, conducting even a small amount of research is more than what they’re currently doing) research in the community arena, and I’m glad Coral is helping newsrooms try to do this.


Hi Dreaded,

Are you saying that pseudonymity is ok, and pure anonymity isn’t? A lot of what I took @natematias to be talking about was real names vs unknown identities. If you’re proposing pseudonymity, then I’m all for that - and that can still protect people from harassment, if the tools and moderation don’t otherwise enable it.

And I hear you on trust - that’s the name of one of our products. It relies on persistent identity (pseudonymity is fine) to function.

What spaces do you think have the best online dialog? What are their policies around names?


These are great points!

Subject matter is certainly no guarantee of civility:

Badges are an interesting solution though not everywhere finds them appropriate:

There’s also a counterpoint to letting only those who pay comment: banning them for bad behavior might have financial consequences in losing long-time subscribers. As soon as money is involved, people can get very defensive about their rights.

The age of an account is an interesting measure. Here in Discourse there are different levels that enable you to unlock certain features such as skipping pre-moderation when you reach them. I think that’s a pretty great model.

What did you do in Second Life when a new account seemed to be particularly skilled? Did you see patterns of actions that gave away who was behind them?


Great point. It’s a real problem - I wonder how often people’s Facebook posts show up when someone is considering hiring them, and how often they don’t get hired as a result. How far does our right to privacy online go?


Heh, end with a question, keep posters engaged. 🙂

In Second Life, a “too good” user was someone with a recent account (days old), a blank profile, but who had in-world currency or sophisticated items in their inventory. It meant they transferred currency and items from their other account (or their friend’s account) to their new griefer account.

Also, to echo an earlier commenter – I almost didn’t post here because of the overhead of signing up as a member of the Coral community. And as a new user my last two comments went straight to moderation, which chafes. As a former system operator, I totally get it. But as a new user, it feels terrible. Sadly, I don’t have a solution. It would be nice to be able to transfer some (but only some) reputation from system A to system B, kinda like “this guy may not know anything about your site’s content area, and hence might not be ‘trusted’, but at least he was civil in posts on our system.”

Fun research area: Do people who get moderated/banned in system A (subreddit?) very frequently get moderated/banned in system B? Or are people often civil on topic A but uncivil on topic B?