I wanted to share a quick update on our research into analytics dashboards. These are some of the early findings about the practices, needs, and wants related to analytics that we’ve identified in brainstorm sessions with moderators and editorial staff from a few news orgs.
We’ve gotten a good idea about which types of data to prioritize for exploratory dashboards. The staffers want moderation accuracy, actionability, and the ability to find awesome comments more easily. They need to be able to work quickly, and need meaningful, relevant information about how users participate. Many editorial staffers and journalists don’t have time to explore dashboards and pull out interesting trends; for those folks, it’d be better to have the good stuff pushed to them via relevant notifications. Based on the brainstorms so far, we have a lot of ideas about what those notifications might include. The next step is to settle on detailed design decisions.
The more in-depth version
We have an interesting opportunity to make free software that actually works for newsrooms and end users. Through some lightweight research interviews, I learned from editorial staff who are involved in community moderation and full-time moderators.
I spoke with staffers at four news organizations about how they currently deal with moderation, and what analytics could be helpful to them in their work. The staff included (1) three full-time moderators and the deputy community moderator at The New York Times, (2) the Comments Editor, Audience Editor, and Social Media Editor for the National desk at The Washington Post, (3) the Engagement Editor at Racked (Vox) and (4) the Web editor at Willamette Week. I met individually with most of the staff; with the folks from the Post, I had the good fortune of holding an informal “focus group” with all three editors.
Roughly half of the conversations were guided by a critical brainstorm session around the above dashboard mockups. The dashboard designs inspired a lot of helpful ideas, but it wasn’t always possible to use them due to a combination of choppy wi-fi connections, preferences for phone calls, and time constraints during the research. In those instances, we defaulted to brainstorming based on the challenges they ran into in routine work.
The brainstorms were informal; we spoke in half-hour or one-hour blocks. I took notes during each interview, and identified recurring themes in the notes based on their current work practices. I also identified potential product features we brainstormed together that people expressed interest in seeing (“wants”) and, more fundamentally, the needs associated with their current practices.
Of course, I don’t believe this approach gives us a representative idea about how analytics fit into moderation. But it does give us a bit of insight into the needs and practices of day-to-day work for editorial and moderation staff in the comments, and how we can leverage practical analytics to help.
A bit about the staffers
Some staffers focus on moderation full-time (in this sample, at The New York Times) while others do this work in combination with other editorial tasks related to publishing articles for the web. These tasks vary from day to day, but include working with journalists on optimizing stories for the homepage and social media, or writing articles themselves. For editorial staff, moderation is typically one of many tasks, and they tend to take a broader strategic view of analytics to understand audience engagement. Moderators tend to focus primarily on working through flagged comments, as well as identifying and responding to unusual behavior.
Wants (in no particular order)
Like many news orgs, The New York Times uses a system that automatically flags or removes comments that contain curse words and slurs. However, sometimes the offending terms are subtle and slip through. Having ways to find (or automate finding) other forms of questionable content would be helpful, as would ways to manually tinker with the automated filters.
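To make the “manually tinker with filters” idea concrete, here’s a minimal sketch of what a hand-tunable word filter could look like. This is entirely hypothetical (the rule names, terms, and actions are all placeholders, not anyone’s actual filter list); the point is just that the rules are plain data a moderator could edit without touching code.

```python
# Hypothetical sketch of a manually tunable word filter.
# The rule table is plain data, so moderators can adjust terms directly.
import re

FILTER_RULES = {
    "remove": {"badword", "slurword"},  # placeholder banned terms
    "flag": {"idiot", "moron"},         # borderline terms, queue for human review
}

def check_comment(text):
    """Return 'remove', 'flag', or 'approve' for a single comment."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    for action in ("remove", "flag"):   # harshest action wins
        if words & FILTER_RULES[action]:
            return action
    return "approve"
```

A comment containing only borderline terms would be flagged for review rather than removed outright, which matches the staffers’ point that subtle cases need a human look.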
We need analytics that connect directly with the tools to take action. Right now, many editorial staffers already have analytics dashboards, but they’re not always useful because it’s not clear what to do with the interesting trends they’ve identified.
Finding the good stuff.
The moderation teams and editorial staff agreed that they spend most of their moderation time on unwanted content. Editorial staffers, in particular, wanted to know how to get at the most compelling, interesting, or insightful comments. Right now, the primary way they do that is by reading through an enormous number of comments. It takes more effort than it should to find the good stuff and share it with journalists.
Need for Speed.
Moderation staff need to be able to work through flags, and identify unwanted content quickly. In some organizations, this process is partially automated by algorithms. For example, the web editor at Willamette Week uses a tool called Civil, which uses commenters’ ratings of other comments to algorithmically determine the quality of posts, and to automate decisions about how to deal with those posts. Journalists are similarly overbooked with different tasks, and may not be able to spend much time looking at analytics.
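As a rough illustration of how rating-driven automation like this can work, here’s a minimal sketch. To be clear, this is not Civil’s actual algorithm; the scoring rule (mean rating with a minimum number of raters) and the thresholds are assumptions made up for the example.

```python
# Hypothetical rating-based triage, loosely in the spirit of tools like
# Civil: commenters rate posts 1-5, and posts with enough ratings get an
# automated decision. All thresholds here are invented for illustration.

def quality_score(ratings, min_raters=3):
    """Mean of the 1-5 ratings, or None if too few people have rated."""
    if len(ratings) < min_raters:
        return None
    return sum(ratings) / len(ratings)

def auto_decision(ratings, low=2.0, high=4.0):
    """Map a comment's ratings to an automated moderation action."""
    score = quality_score(ratings)
    if score is None:
        return "needs_review"  # not enough signal; a human decides
    if score < low:
        return "hide"
    if score > high:
        return "feature"
    return "keep"
```

The appeal for overbooked staff is that only the “needs_review” bucket demands their time.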
Editorial staff sometimes look to dashboards for broader strategic trends among users on their site, and described using dashboards to get snapshots of users’ activities.
We only want analytics that are going to be directly meaningful to newsrooms. We need to give relevant information to staff when they need it.
What exactly does this mean?
One of the main things we need to do is figure out what information will be most relevant to journalists and editorial staff, and push only the most relevant information to them when appropriate. The most obvious way to do that is not with dashboards, but with metric-driven notifications. A lot of interesting ideas for specific features came out of the conversations. To name a few…
For journalists and web editorial staff
- Receive notifications for comments on articles they’ve just made live, and information about the comments they should pay attention to. We might be able to do that by surfacing comments that look similar to previous comments that the moderators have chosen as exemplary in the past. Nick Diakopoulos has done a bit of related research here on the comments in the NYT Picks.
- Receive notifications when they get an influx of traffic from one location or another (e.g., their front page).
- Receive warnings about topics that are likely to require a high level of moderation, based on previous similar articles.
- Receive notices when the system detects spammy behavior. These would pair well with batched moderation decisions applied to similar or identical comments posted multiple times within the same article, or across multiple articles.
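One way to back those spam notices with batched decisions is to fingerprint comment text and group near-identical submissions, wherever they were posted. A minimal sketch, where the normalization rule (lowercase, collapsed whitespace) and the repeat threshold are assumptions:

```python
# Hypothetical sketch: group near-identical comments posted repeatedly,
# so a moderator can approve or remove the whole group in one action.
import hashlib
from collections import defaultdict

def fingerprint(text):
    # Lowercase and collapse whitespace so trivial edits still match.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def find_repeats(comments, threshold=3):
    """comments: list of (comment_id, article_id, text) tuples.
    Returns groups of comment ids whose text repeats >= threshold times,
    across articles as well as within one."""
    groups = defaultdict(list)
    for cid, _article, text in comments:
        groups[fingerprint(text)].append(cid)
    return [ids for ids in groups.values() if len(ids) >= threshold]
```

A real system would likely use fuzzier matching than an exact hash, but exact duplicates already cover the common copy-paste spam case.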
This is not entirely about analytics, but it’d also be nice to have…
- The ability to control word filters.
- Simply minimizing the number of clicks to accomplish a moderation task (e.g., through batch assignments of moderation decisions) would be a huge time saver, giving them more time to look at the good stuff. There are some interesting paths forward here. For example, if moderators have to provide a reason for removing a comment, we can use machine learning to predict whether a future comment is likely to be removed, and let moderators glance at the comments to decide whether the algorithm was right before confirming a batch removal.
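A very simple version of that prediction step is sketched below: rank incoming comments by word overlap (Jaccard similarity) with comments moderators have already removed, then let a moderator glance at the top candidates before confirming a batch. A real system would use a trained classifier rather than raw overlap, and all the sample comments here are invented for illustration.

```python
# Hypothetical sketch of batch-removal triage: score new comments by
# word overlap with previously removed ones, then surface the highest
# scorers for a quick human confirmation before batch removal.

def words(text):
    return set(text.lower().split())

def removal_score(comment, removed_comments):
    """Highest Jaccard similarity between a comment and any removed one."""
    c = words(comment)
    best = 0.0
    for past in removed_comments:
        p = words(past)
        if c | p:
            best = max(best, len(c & p) / len(c | p))
    return best

# Illustrative data: past removals and a new moderation queue.
removed = ["buy cheap pills now", "you are all sheep wake up"]
queue = ["buy cheap pills here now", "thoughtful piece, thank you"]
ranked = sorted(queue, key=lambda c: removal_score(c, removed), reverse=True)
```

The moderator stays in the loop: the score only orders the queue, and nothing is removed until a human confirms the batch.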
This is quick and dirty research. Still, it’s proved very informative for our purposes on such a short timeline. The next steps are to discuss the specific resulting design decisions with the team. If you’re interested in chatting about this, please reach out - we’d love your thoughts and feedback.