Meta has announced the end of its third-party fact-checking programme across all its platforms, including Facebook, Instagram, and Threads.
To replace it, the company is introducing a new programme called Community Notes.
Community Notes, which is also currently used by X (formerly Twitter), is designed to require agreement between people with a range of perspectives to help prevent biased ratings. This is a significant shift in Meta's approach to content moderation: to promote freedom of expression, the company is relaxing the majority of the rules it deems contrary to free expression, rules that it says have been putting users in "Facebook Jail."
The independent third-party fact-checking programme was introduced across all Meta platforms so that "independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read." However, the company found that the fact-checkers brought their own biases to what they chose to fact-check and how.
As a means of ensuring free communication, Meta decided to dismantle the complex system that manages content on its platforms, which had become increasingly complicated to enforce.
According to Meta, "in December 2024, millions of pieces of content were removed every day," yet the company estimates that one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated its policies).
Meta says it is removing a number of restrictions on topics like immigration, gender identity, and gender that are the subject of frequent political discourse and debate, observing that such things "can be said on TV or the floor of Congress, but not on our platforms." However, the changes might take weeks to be implemented.
The company, which has until now used automated systems to scan for all policy violations, acknowledges that this approach produced many mistakes, censoring content that should not have been removed. It reiterates its focus on tackling "illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams." For less severe policy violations, it will rely on users reporting an issue before taking any action.
Meta is working on ways to make account recovery more straightforward and is testing facial recognition technology. It has begun using AI large language models (LLMs) to provide a second opinion on some content before taking enforcement action and has decided to add extra staff to its account recovery team. Multiple reviewers must now reach a determination before a post is taken down.
According to Meta, “these are all steps to ensure that the commitment to free expression that Mark Zuckerberg set out in his Georgetown speech is upheld.”
In Mark Zuckerberg's 2019 speech at Georgetown University, he argued that free expression has always been the driving force of American society and of societies around the world, highlighting that restricting speech only empowers existing power structures rather than the people.
"Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that's dangerous," Zuckerberg said.
Meta intends to be transparent about how different viewpoints inform the notes displayed in its apps and is working on the right way to share this information. It has encouraged people to sign up through its various platforms, Facebook, Instagram, and Threads, for the opportunity to be among the first contributors to the programme as it becomes available.
The rollout will begin in the United States (US) and continue as the year progresses.