When a commenter really crosses a line, what do you do?


#1

Hi, I help manage Slate’s community. I’m wondering what steps others take when a user really crosses a line, especially in cases where you have information about their real identity. For example, a user who has been banned for repeatedly violating your community’s rules but is nonetheless able to create new accounts because they know how to circumvent an IP ban. Or a user who threatens violence.

Who do you contact, and in what order? The police? The internet service provider? An organization?

Thanks for your thoughts.


#2

Thanks Jeff - great question that touches on some really important areas. A few of us on our side - @sydette, @gregbarber, and I - huddled together and came up with this response. We’d love your thoughts - and also those of anyone else here. What did we miss?

We’ve divided up your question into specific types of user actions, which demand different kinds of responses. Some of these responses include features that we already built into our Talk platform. But first, some ways we think Community Managers can plan ahead.

**General preventative measures to improve behavior and response time**

These won’t deter the most extreme users, but can make a big difference in many new users’ behavior.

  1. Set the first few comments from a new user to premod (if your system allows)

  2. Encourage users to report/flag bad behavior through onboarding and messaging. (Ideally, have a system like Talk that accounts for unreliable flaggers; see the sketch after this list.)

  3. Highlight good contributions - modeling for users that there are ways to get the newsroom’s attention other than bad behavior

  4. Create a list of places you can point users to - e.g. Crash Override, Heartmob, Trollbusters - to get support if they are being targeted.
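
To make item 2 a little more concrete, here is a minimal sketch of how a system might weight flags by each flagger’s track record, so that one or two unreliable flaggers can’t bury a comment on their own. This is purely illustrative: the type names, fields, and thresholds below are assumptions, not Talk’s actual code or API.

```typescript
// Hypothetical shapes; a real platform's data model will differ.
interface FlagReport {
  flaggerId: string;
  commentId: string;
}

interface FlaggerHistory {
  flagsUpheld: number;    // flags a moderator later agreed with
  flagsDismissed: number; // flags a moderator rejected
}

// Estimate how reliable a flagger has been so far.
// Unknown or brand-new flaggers get a neutral 0.5 until they build a record.
function flaggerAccuracy(history: FlaggerHistory | undefined): number {
  if (!history) return 0.5;
  const total = history.flagsUpheld + history.flagsDismissed;
  return total === 0 ? 0.5 : history.flagsUpheld / total;
}

// Sum the weighted flags on a comment and escalate it to the moderation
// queue once the weighted score crosses a threshold (assumed here to be
// 1.0, i.e. roughly one reliable flagger or several unreliable ones).
function shouldEscalate(
  flags: FlagReport[],
  histories: Map<string, FlaggerHistory>,
  threshold = 1.0,
): boolean {
  const score = flags.reduce(
    (sum, flag) => sum + flaggerAccuracy(histories.get(flag.flaggerId)),
    0,
  );
  return score >= threshold;
}
```

The point is simply that a flag from someone whose reports are routinely dismissed should count for less than a flag from someone whose reports are routinely upheld.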


**i) Situation: A user is being targeted**
  1. Contact the person being targeted and ask what they would like to happen. They might have good reasons for being very wary of the police. Work with them on your proposed solutions. Keep them informed of any developments.

  2. Make a public statement about what’s happening, that it’s not OK, and what you’re doing about it. Also enlist the community to tell you if the person responsible comes back again.

  3. If the person keeps coming back with new accounts, try slowing down how quickly new users can post; at the very least, set new users’ comments to premod for their first comment or two.

  4. Contact the person doing it and ask what is going on with them. There is often a triggering reason that made them first act this way. See if that can be addressed, or at least acknowledged.


**ii) Situation: A journalist/member of your team is being targeted**
  1. Contact the journalist being targeted. If it seems to be a specific and genuine threat, make sure they’re safe and that they know what to do if the person tries to call or come to the office. Inform security at the office. Offer to let them work from home, or cover hotel costs, if they feel genuinely targeted.

  2. Work with them on your proposed solutions. Keep them informed of any developments.

  3. Make sure the journalist isn’t expected to read their own comments at this time. See if you or one of your team can give specific attention to comments on their piece for a period of time.

  4. Contact the police if the journalist agrees, and you believe there is a genuine threat of harm.

  5. Contact the person doing it and ask what is going on with them. There is often a triggering reason that made them first act this way. See if that can be addressed, or at least acknowledged.


**iii) Situation: General, repeated non-targeted abuse**
  1. Set things to premod, or, with our system, set Trust filters for premod (only commenters with sufficiently good/long histories avoid premod; see the sketch after this list)

  2. Look at their history and look for patterns. How do you know each new account is them again? Is there something you can do to route comments matching those patterns to premod?

  3. Contact the person doing it and ask what is going on with them. There is often a triggering reason that made them first act this way. See if that can be addressed, or at least acknowledged.

  4. Encourage the trusted community to use Ignore/Mute functions (where available - we have it in Talk), and to contact you privately if a particularly egregious user seems to have returned with a new account.
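
To illustrate item 1, here is a rough sketch of what a Trust-style premod rule can look like: a commenter’s posts skip premoderation only after they have enough approved comments and a low enough rejection rate. Again, the record shape and thresholds are assumptions for illustration, not Talk’s real Trust implementation.

```typescript
// Hypothetical per-commenter record; a real system tracks more signals.
interface CommenterRecord {
  approvedComments: number;
  rejectedComments: number;
}

// Assumed thresholds: at least 5 approved comments, and fewer than
// 20% of their comments rejected, before they skip premoderation.
const MIN_APPROVED = 5;
const MAX_REJECTION_RATE = 0.2;

function requiresPremod(record: CommenterRecord): boolean {
  // New or low-history commenters always go through premod.
  if (record.approvedComments < MIN_APPROVED) return true;
  const total = record.approvedComments + record.rejectedComments;
  const rejectionRate = total === 0 ? 0 : record.rejectedComments / total;
  // Commenters with too many rejections fall back into premod.
  return rejectionRate >= MAX_REJECTION_RATE;
}

// Example: a brand-new account stays in premod; an established,
// mostly-approved commenter does not.
console.log(requiresPremod({ approvedComments: 0, rejectedComments: 0 }));  // true
console.log(requiresPremod({ approvedComments: 12, rejectedComments: 1 })); // false
```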



Was that helpful? How does that match the policies you have in place? What did we miss?


#3

> Was that helpful? How does that match the policies you have in place? What did we miss?

Very helpful, thanks! Apologies for the slow reply. Really appreciate the brainstorm.

We do employ a number of these strategies, although technical limitations put a handful of them out of reach.

This particular case concerned an untargeted but unambiguous threat of violence, and from our review of the user’s commenting history it appeared that their rhetoric had escalated over time. It felt appropriate to notify some external authority (an ISP, the police, etc.). Fortunately this doesn’t come up often, but as a result I wasn’t sure of the best way to handle it. I’m reluctant to offer specifics, but we did end up reaching out to law enforcement. I don’t know whether that was effective, but it seemed like the best available option.


#4

Did you also inform the offending user that you had done so?