Some surprising findings about content moderation

On Tuesday, the Supreme Court held oral arguments for a case that could substantially alter the internet. While the case was specifically focused on who should be held liable for automatic recommendations, the justices will ultimately be deciding how platforms manage content moderation and misinformation.

While there are a lot of unanswered questions in content moderation, a paper we just published is one of the few rigorous studies of fact-checking that could settle some of them. It offers two striking insights.

First, we find that crowdsourced fact-checking, much like Twitter’s Birdwatch program, is incredibly effective. Overall, it does a better job than relying on the platform alone to police content.

Second, we find that platforms should focus moderation efforts on policing content, rather than policing individuals. In other words, suspending accounts probably does more harm than good.

Nearly everyone agrees that misinformation is a problem. Upward of 95% of people cite it as a challenge when accessing news or other information. But there is little agreement on the right mix of policies to balance the need for context against the risk of needlessly censoring content.

Rightly, everyone is concerned that platforms are picking and choosing which content to flag. At the same time, previous work from our lab has shown that social media platforms have a vested interest in policing misinformation: if companies want to promote user engagement and connection, they need to address it.

On top of this, measuring how misinformation affects real-world events is a tough empirical challenge for researchers. The Experimental Economics Lab, which is a part of the Center for Growth and Opportunity (CGO) at Utah State University, was set up specifically to understand these tangled questions.

This study, which is part of a larger research program on misinformation, was set up to evaluate fact-checking policies in a controlled laboratory experiment. Importantly, the decisions made by participants affect the amount of money they earn, so misinformation has real consequences. The study was also structured to allow people to interact with others over multiple rounds via a messaging system that replicates a platform. While no study is perfect, ours comes as close as possible to approximating real-world decision-making on platforms.

We tested three kinds of fact-checking scenarios. In the first, individuals could fact-check information shared by other group members, but they had to pay a small fee to do so. In the second, fact-checking was placed in the hands of the platform, which checked content at random. Finally, we tested a combination of the two: both individual and platform fact-checking.

Posting misinformation had two consequences. First, if misinformation was identified, it was flagged so that participants knew it was false. Second, users found to have posted misinformation were automatically fact-checked in the following round.

The results are remarkable. It is widely assumed that peer-to-peer monitoring, especially when users must pay to fact-check content, would lead to bad outcomes. On the contrary, we find that this approach yields better outcomes than relying on the platform alone.

We also found that adding platform moderation to this peer-to-peer approach has only a small additional benefit.
