Comment sections and message boards epitomize the Internet at large: they can be chaotic, free-wheeling, and, sometimes, forums for meaningful exchange. One website is perhaps more dedicated than any other to harnessing content and commentary from its users.
Reddit styles itself as the “Front Page of the Internet.” The site is composed of “subreddits,” individual message boards dedicated to niche topics, including a handful of science-related discussions. In each subreddit, users vote on posts and comments to decide which appear at the top of the page. In theory, it is a democratic system for promoting the best content.
The largest of the science-centric subreddits is r/Science. In recent years, r/Science has become a reliable source for thoughtful, detailed discussions of scientific issues. Users post links to recent papers or news stories, and lively conversation ensues in the comments of any given thread.
Today, r/Science counts 14 million subscribers. The misinformation, trash-talking, and obscenity that plague forums elsewhere on the Internet are not found within the subreddit’s jurisdiction. In an age in which information found online holds growing sway over public opinion, and when discourse all too often runs off the rails of civility, r/Science is an exception to the norm.
But it wasn’t always so. Nate Allen began moderating the subreddit in early 2012, when it was a typical, unruly message board. By day, Allen is a research chemist at Sigma-Aldrich. In his spare time, he has led the transformation of r/Science into its current well-kept state.
Over the course of his and his co-moderators’ experiment in creating fertile ground for online discussions about science, Allen has gathered countless insights about moderating comments, building community, and bridging the divide between scientists and a science-interested public. The Scientist recently spoke with Allen about the growth and goals of r/Science.
The Scientist: What was r/Science like when you started moderating?
Nate Allen: When I started, the moderating team was informal and not very organized. We were barely able to keep up with the comments and weren’t able to do anything else. When you have a forum with four or five million subscribers and you have four people, all of whom have jobs and everything, well, you get overwhelmed very quickly. We just tried to remove the most heinous things.
[At the time, most content on the page] was vaguely science-related. People posted big science news and talked about it, but the comment sections often were heated arguments or people posting memes. . . . It was just not worth looking at.
TS: What did you and the other moderators envision as the future of the subreddit, and how did you go about making changes?
NA: At the time, we had limited resources of people and time. So we could not moderate all the comments. Instead, we just moderated the top ones.
Some years ago, Popular Science turned off comments on their articles, and there was a specific reason they did it. A research article came out [that concluded] people’s interpretation of scientific results was highly influenced by comments. You can have a paper that says one thing and a comment that says this is all a hoax, and people would suddenly cast the paper in doubt.
So we realized that if we were going to have these comments, we had a responsibility . . . to make sure they were adding to the value of the article, not causing misinformation, because that does a disservice to the general population and to science itself.
We needed to remove the hate speech [and] massively off-topic posts, [but] we didn’t have enough moderators. At the time, a program [called “AutoModerator”] was coming out that . . . did text-recognition [and] responded to specific text strings. That way, if there was something in a comment that you knew made that comment bad, you could just have the program automatically remove it. So that was the first step.
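The filtering Allen describes amounts to automatic removal of any comment containing known-bad text strings. The real AutoModerator is configured through site-specific rules, and the subreddit’s actual pattern list is private; the sketch below is only an illustration of the general idea in Python, using made-up example patterns.

```python
import re

# Hypothetical patterns for illustration only -- the real
# subreddit's filter list is not public.
BANNED_PATTERNS = [
    r"\bhoax\b",
    r"\bfake news\b",
]

COMPILED = [re.compile(p, re.IGNORECASE) for p in BANNED_PATTERNS]

def should_remove(comment: str) -> bool:
    """Return True if the comment matches any banned text string."""
    return any(p.search(comment) for p in COMPILED)

# A removal pass over a batch of incoming comments:
comments = [
    "Interesting result; the methodology seems solid.",
    "This study is a HOAX pushed by special interests.",
]
removed = [c for c in comments if should_remove(c)]
kept = [c for c in comments if not should_remove(c)]
```

As Allen notes below, a filter like this catches explicit strings but not context or implication, which is why manual review remained necessary.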
NA: We did some testing on it, and found that the number of false removals was very low. We took random samplings [of removed comments] and reviewed them to see if each was something we would have removed ourselves, and it was accurate more than 95 percent of the time.
But we still had a problem. It’s really hard to write [a program] that grasps context, implication, or anything like that. We were concerned about people presenting non-scientific [evidence], people who were climate-deniers, creationists, anti-vaxxers, things like that. That’s pretty hard to pick out via language filter. So we [manually removed comments] every day. In a month, AutoModerator would have about 10,000 actions, and I would have about 13,000.
We wanted to start up AMAs [“Ask Me Anything”—Q&A threads]. We wanted to get big-name scientists to come and talk on Reddit directly . . . but in order to do that, I had to give a guarantee that they would have a civil conversation that would be on-topic and valuable.
TS: You were hoping r/Science could rise above the typical standards of Internet discourse to meet the standards of professional discourse. How did the first AMAs go?
NA: They went really well. I was able to learn from other AMAs on Reddit to know what to expect and how to prepare for them. In fact I over-prepared, and we had a lot of people watching over [the comments]. It turns out you can get professors who really like to talk about their subject matter to the general public but don’t have a lot of opportunity to do so. We got a lot of positive feedback.
See “Self-treating ALS AMA,” led by The Scientist Senior Editor Jef Akst
TS: Were other subreddits or Internet communities trying to make similar improvements to their comment sections?
NA: No other subreddit [as large as r/Science] was doing it. It’s a challenge . . . but we [now] have about 1,250 moderators. That’s how we do it. We built a community of people who have the best interests of science at heart, and are willing to do something about it. It turns out there are a lot of good people in the world.
TS: How do you vet Reddit users to become moderators?
NA: The process . . . involves a few tests that people don’t know they’re taking, to find out whether or not they’re of the proper mentality to be a moderator. I’m being deliberately vague about that, because if people know what we do, they can game the system. But we’re looking for very specific things.
TS: How effective are you now at removing bad comments?
NA: We have a big enough moderation team that people are just always reading the comments, so the chances are pretty good that someone recognizes something bad and removes it. . . . The wing-nut conspiracy theorists, people pushing agendas, and purveyors of hate speech have stopped trying to infiltrate us. We’ve read in other subreddits how they’ve given up, because they see that their comments get removed.
[This] is important, because bad comments tend to breed more bad comments. When a user sees a comment, they assume that defines the level of decorum in that group. If one person is screaming, then other people will start screaming, and pretty soon everyone is screaming and no one can talk.
TS: This feels like a timely lesson about democracy.
NA: Yeah, it is. We get accused of censorship a lot, as you might guess, since we remove so much junk material. “Why don’t you just let the votes decide?” is a question we hear a lot. But that only works at a very small scale, where you have a self-selecting group of people that . . . values the community.
I use this analogy: if you have 10 people in a room having a conversation, and two of those people happen to be on opposite sides of the issue and start yelling at each other, what happens to the other eight people? They shut up and leave the room. What’s happened is, those two people censored eight people. By removing those two people . . . we enable the other eight people to have freedom of speech. So we’re stopping the extremes from censoring the middle.
TS: Why do people visit r/Science?
NA: I think people come to r/Science because they want to cut through all of the hype and all of the sensationalism that they find in the journalistic world. They want to hear what other Redditors [Reddit users] think of the science. Some people just use it as a news feed for recent science. But [other] people count on anonymous Reddit users as an independent source of information.
Part of it is that people don’t trust authority, and journalists are seen as pushing their biases. That’s not necessarily true, and commenters have biases too. But for some reason, people will believe anonymous internet comments more than they will believe press releases from the CDC [US Centers for Disease Control and Prevention]—which should be shocking to anybody who believes in the CDC. It’s the CDC, right? But Internet commenters are sometimes more believed.
However, studies have shown that if there are two anonymous commenters, and one says they are a doctor, people will believe the doctor’s comment over the other. So there’s a way to diffuse the anonymous-commenter problem, which is to have vetted anonymous commenters, and that’s what we do [through verified user credentialing].
If you’re on Reddit, you see other Redditors as your peer group, so the social effect is to believe they’re telling you the truth. . . . You can see some back-and-forth [between users] and decide for yourself. If one person is pushing an agenda, someone else can call them on it. That gives people the impression that this is a real-time check on moneyed interests influencing the general public, and I think that impression is correct more often than not.