Meta: Fact-Checking is So 2024

The company is abandoning fact-checking services in favor of community notes

Facts have become an increasingly rare commodity in recent years, so much so that it’s nearly impossible to get even a handful of seemingly reasonable people to agree on something as simple as how weather works or whether germs are real (they are). This is especially true on social media platforms, where the goal is likes and engagement rather than informed, thoughtful discourse.

Identifying facts may become even more difficult across the platforms operated by Meta, including Facebook, Instagram, and Threads. On Monday, Meta CEO Mark Zuckerberg announced that the company plans to end its third-party fact-checking program in the United States in favor of a community notes model like the one used by X. Rather than relying on professional, third-party fact-checking services, as it does now, Meta plans to use community notes, which let other users on the platform add context to a post or flag it as hateful, misleading, or otherwise harmful. The goal, Meta officials said, is to “return to the commitment to free expression” that Zuckerberg outlined in a speech in 2019.

“We will end the current third party fact checking program in the United States and instead begin moving to a Community Notes program. We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see. We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing – and one that’s less prone to bias,” said Joel Kaplan, Chief Global Affairs Officer at Meta.
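X has published the ranking algorithm behind its Community Notes system, and its core idea is “bridging”: a note is surfaced as helpful only when raters who usually disagree with one another both rate it highly. The algorithm approximates this by factoring each rating into a viewpoint-alignment term and a viewpoint-independent helpfulness term. The toy sketch below illustrates that idea with a simple matrix factorization; the data, factor dimensions, learning rate, and threshold are all invented for illustration, and Meta has not said what its own implementation will look like.

```python
import numpy as np

# Toy illustration of "bridging-based" note scoring, loosely modeled on the
# approach X documents for Community Notes: each rating is factored into a
# viewpoint-agreement term (user factor x note factor) plus a note intercept.
# A note counts as broadly helpful only when its intercept is high, i.e.,
# when raters agree *despite* differing viewpoints.

rng = np.random.default_rng(0)

# ratings[u, n] = +1 (helpful), -1 (not helpful), NaN (no rating)
ratings = np.array([
    [ 1.0,  1.0, np.nan],   # user 0
    [ 1.0,  1.0, -1.0],     # user 1
    [ 1.0, -1.0,  1.0],     # user 2
    [ 1.0, -1.0, np.nan],   # user 3
])
# Note 0 draws agreement from everyone; notes 1 and 2 split raters into camps.

n_users, n_notes = ratings.shape
user_f = rng.normal(0, 0.1, n_users)   # one-dimensional viewpoint factor per user
note_f = rng.normal(0, 0.1, n_notes)   # viewpoint factor per note
note_b = np.zeros(n_notes)             # note intercept: viewpoint-independent helpfulness
lr, reg = 0.05, 0.1

for _ in range(2000):                  # plain SGD over the observed ratings
    for u in range(n_users):
        for n in range(n_notes):
            r = ratings[u, n]
            if np.isnan(r):
                continue
            err = r - (note_b[n] + user_f[u] * note_f[n])
            note_b[n] += lr * (err - reg * note_b[n])
            user_f[u] += lr * (err * note_f[n] - reg * user_f[u])
            note_f[n] += lr * (err * user_f[u] - reg * note_f[n])

# Notes whose intercept clears a (made-up) threshold are surfaced as helpful.
for n in range(n_notes):
    status = "helpful" if note_b[n] > 0.5 else "not surfaced"
    print(f"note {n}: intercept={note_b[n]:.2f} -> {status}")
```

The key property is that a polarizing note can earn a high average rating within one camp and still receive a low intercept, because the viewpoint factors, rather than the intercept, absorb camp-driven agreement.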

Meta is also removing some restrictions on topics such as immigration and gender identity that generate heated discussions online.

The changes have drawn criticism from people who say the moves will harm users in minority and LGBTQ+ communities. Regulators have also taken notice. The Irish Times reported that the Irish media regulator, Coimisiún na Meán, plans to meet with Meta officials about the changes, although they apply only to users in the U.S. for the time being.

“Coimisiún na Meán will engage with Meta on their decision relating to Community Standards and the impact this might have on EU users,” a spokesman for the commission said, according to the paper.

Content moderation has proven to be one of the more complex and thorny problems for platform providers to address. All of the popular social media platforms have struggled with it, trying various approaches over time, including human, automated, and community-based systems. Each has its limitations and is prone to mistakes, both under-moderating and over-moderating content, and that is one of the things Meta claims it is trying to address with its new policies.

“Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn’t have been. So, we’re going to continue to focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams. For less severe policy violations, we’re going to rely on someone reporting an issue before we take any action,” Kaplan said.
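In practical terms, Kaplan is describing a two-tier triage policy: automated scanning keeps triggering action on its own for high-severity categories, while everything else waits for a user report before being reviewed. A minimal sketch of that routing logic might look like the following; the category list, score threshold, and field names are assumptions for illustration, not Meta’s actual systems.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Severity(Enum):
    HIGH = auto()   # e.g., terrorism, child exploitation, drugs, fraud, scams
    LOW = auto()    # lesser policy violations

@dataclass
class Post:
    post_id: int
    category: Severity
    classifier_score: float   # confidence from an automated scanner (assumed field)
    user_reported: bool

def needs_review(post: Post, threshold: float = 0.9) -> bool:
    """Return True if the post should enter the moderation queue."""
    if post.category is Severity.HIGH:
        # High-severity categories: act on the automated scan alone.
        return post.classifier_score >= threshold
    # Low-severity categories: wait for a user report before taking any action.
    return post.user_reported

posts = [
    Post(1, Severity.HIGH, 0.95, user_reported=False),  # queued proactively
    Post(2, Severity.LOW, 0.95, user_reported=False),   # ignored until reported
    Post(3, Severity.LOW, 0.40, user_reported=True),    # queued via user report
]
for p in posts:
    print(p.post_id, "review" if needs_review(p) else "no action")
```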

Meta plans to roll these changes out over the next few months. How they work in practice remains to be seen, and more modifications will likely follow. But there is already no shortage of concerns about the new system.

“Content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well, involving millions of difficult decisions. On the one hand, Meta has been over-moderating some content for years, resulting in the suppression of valuable political speech. On the other hand, Meta's previous rules have offered protection from certain types of hateful speech, harassment, and harmful disinformation that isn't illegal in the United States,” wrote David Greene and Jillian York of the Electronic Frontier Foundation (EFF).