Users will fact-check Instagram and Facebook. Let’s just hope it works

Meta's Risky Gamble: Replacing Fact-Checkers with Community Notes
The Shift to Crowdsourced Fact-Checking
Meta recently abandoned its third-party fact-checking program, opting instead for a crowdsourced approach called Community Notes. Similar to X's system, Community Notes relies on user contributions to flag and contextualize misinformation across Facebook, Instagram, and Threads.
To contribute, users must be US citizens over 18, possess a verified phone number, and have an account older than six months. Crucially, published notes require a consensus among contributors, aiming for a semblance of objectivity.
"Enough contributors must agree that a community note is helpful before it can be published on a post," Meta explains. This crowdsourced fact-checking system is expected to roll out in the coming months.
The Effectiveness of Community Notes: A Slippery Slope?
With a looser content moderation policy and severed ties with independent fact-checking organizations in the US, Meta is betting heavily on the efficacy of Community Notes. However, this reliance raises concerns.
A misinformation expert at a leading organization that was previously part of Meta's fact-checking program expressed skepticism to Digital Trends. Speaking anonymously, the expert pointed to the failure of similar systems on X in key markets such as India.
Furthermore, the expert, a member of Poynter's International Fact-Checking Network (IFCN), disputed Meta's claims of political bias among professional fact-checkers, pointing to the strict non-partisanship and transparency standards the network imposes on its members.
Ironically, a study analyzing over 1.2 million notes on X in 2024 found professional fact-checkers to be the top contributors. "Users frequently rely on fact-checking organizations when proposing Community Notes," the study revealed.
The Potential Fallout of Meta's Decision
Meta's move has sparked debate. Was it driven by user interests or political expediency? While an official answer remains elusive, the potential consequences are far-reaching, particularly for vulnerable minority groups.
"Recent content policy announcements by Meta pose a grave threat to vulnerable communities globally and drastically increase the risk that the company will yet again contribute to mass violence and gross human rights abuses," warned Amnesty International.
Jamie Krenn, an associate professor at Columbia University and Sarah Lawrence College, told Digital Trends that Meta's platforms risk becoming breeding grounds for misinformation, leaving users susceptible to manipulation.
"Expertise in fact-checking plays a crucial role in maintaining this balance, ensuring that free speech is supported by a foundation of truth. Without it, we risk confusing opinion with fact—a dangerous precedent for any society," cautioned Krenn.
A Fundamentally Problematic Solution?
While the concept of Community Notes isn't inherently flawed, its implementation presents significant challenges. The very issues that led Meta to abandon professional fact-checking—namely, alleged bias—could plague the crowdsourced system.
Even X chief Elon Musk has admitted as much, writing: "Unfortunately, [Community Notes] is increasingly being gamed by governments & legacy media. Working to fix this." This raises concerns about platform manipulation and censorship.
Moreover, the reliability of sources cited in Community Notes remains a critical issue. How will Meta determine credibility, especially in diverse markets with varying media landscapes? Will audience engagement figures influence decisions, potentially favoring biased but popular sources?
The success of Community Notes hinges on Meta's ability to address these fundamental challenges and prevent its platforms from becoming amplifiers of misinformation.