Free Speech, Really?


Free expression has never been absolute, Meta CEO Mark Zuckerberg said in his 2019 Georgetown address.

In a move that could reshape how online speech is managed, Meta recently announced that it is integrating large language models (LLMs) into its content moderation strategy. “We’ve started using AI LLMs to provide a second opinion on some content before we take enforcement actions,” read a blog post by the company.

For a social media giant that is perpetually under public scrutiny, this is a bold attempt to address long-standing complaints of overreach and inconsistency in enforcement.

In a recent video, Meta chief Mark Zuckerberg unveiled sweeping changes to the platform’s strategy. One of the most striking was a significant reduction in the number of human fact-checkers across all Meta platforms. As part of the downsizing, human fact-checkers are being shifted from their California base to Texas.

Zuckerberg has long championed free expression as a cornerstone of progress, and according to the company, this move is in keeping with that stance.

In his 2019 Georgetown address, he argued that empowering people to voice their ideas not only drives innovation but also challenges existing power structures. Yet, even as he spoke of the virtues of free speech, he warned that too much moderation could tilt the scales of power, stifling diverse voices and diminishing democratic discourse.

As the industry embraces AI, with talk of AGI on the horizon, Meta’s decision to put AI and collective intelligence at the heart of moderation decisions is a futuristic one. By using LLMs as a second opinion before enforcement actions, the company claims it can refine its approach, reduce wrongful takedowns, and temper the frustration users often feel when their content is censored.

AI LLMs as a Second Opinion

At the heart of this strategy is Meta’s deployment of AI-driven LLMs to review flagged content. 

These models, capable of sifting through massive data troves in seconds, are designed to identify subtle nuances and policy violations that human moderators might overlook. For users, this means fewer errors and more fairness—at least, that’s the goal. 

It’s a vision of moderation that aims not only to minimise wrongful takedowns but also to improve the accuracy of enforcement decisions. At its core, Meta’s use of AI for second opinions is an experiment in trust: trust in technology, in the community, and in the company’s overall commitment to self-regulation.
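What such a pipeline might look like is easy to sketch. The snippet below is a minimal, hypothetical illustration of an “LLM second opinion” gate, not Meta’s actual system: the function names, policy labels, and threshold are invented, and the model call is stubbed out.

```python
# Hypothetical sketch of an "LLM second opinion" gate before enforcement.
# None of these names come from Meta; the LLM call is a stand-in for any
# chat-completion-style API.

from dataclasses import dataclass


@dataclass
class FlaggedPost:
    post_id: str
    text: str
    policy: str              # e.g. "hate speech", "spam" (example labels)
    classifier_score: float  # confidence of the first-pass classifier


def ask_llm_for_second_opinion(post: FlaggedPost) -> bool:
    """Ask an LLM whether the post really violates the named policy.

    In a real system this would call a hosted model; here we only show
    the prompt shape and return a placeholder verdict.
    """
    prompt = (
        f"Policy: {post.policy}\n"
        f"Post: {post.text}\n"
        "Does this post violate the policy? Answer VIOLATES or OK."
    )
    # response = llm_client.complete(prompt)  # stand-in for a real API call
    response = "OK"  # placeholder so the sketch runs without a model
    return response.strip().upper().startswith("VIOLATES")


def should_enforce(post: FlaggedPost, threshold: float = 0.9) -> bool:
    """Act only when the first-pass flag is confident AND the LLM agrees."""
    if post.classifier_score < threshold:
        return False  # low-confidence flags could go to human review instead
    return ask_llm_for_second_opinion(post)


if __name__ == "__main__":
    post = FlaggedPost("123", "Example borderline post", "hate speech", 0.95)
    print("enforce" if should_enforce(post) else "do not enforce")
```

The design choice worth noting is that, in this framing, the LLM never acts alone: it can only confirm or veto an existing flag, which is what the phrase “second opinion” appears to mean in Meta’s announcement.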

But the question remains: Can AI deliver on this promise without falling prey to its own flaws? 

The Bias Problem 

Despite their impressive capabilities, LLMs have their limitations. Research has shown that these models often reflect the biases of their creators. Major AI systems—whether from OpenAI, Anthropic, Google, or others—have all been called out for exhibiting ideological leanings.

Andrej Karpathy, a former OpenAI researcher, has explained the issue by noting that “LLMs model token patterns without genuine understanding”, which makes them prone to echoing the biases embedded in their training data. For a system tasked with ensuring fairness, this is a troubling flaw.

xAI CEO Elon Musk has also voiced his concerns, warning that LLMs could exhibit ideological biases, including what he termed a “far-left” leaning. Meanwhile, computer scientist Grady Booch has criticised these models as “unreliable narrators,” capable of producing outputs that are not only deceptive but potentially toxic.

Baybars Orsek, VP of fact-checking at Logically, calls for the industry to unite around sustainable, technology-driven solutions that prioritise accuracy, accountability, and measurable impact. While he sees merit in efforts like community notes, he argues in favour of a professionalised approach to fact-checking. “A professional fact-checking model—enhanced by AI and rigorous methodologies—remains the most effective solution for addressing these issues at scale,” Orsek said. 

These concerns highlight a significant challenge for Meta: if the tools intended to enhance fairness are themselves fundamentally flawed, can they genuinely be relied upon to moderate speech impartially and equitably?

Meta Still Bets Big on Co-Intelligence

Beyond AI, Meta is taking a leaf out of X’s playbook by adopting a decentralised moderation approach with its Community Notes model.

This model, first introduced by former Twitter CEO Jack Dorsey in 2021, relies on users to add context to flagged content, providing a more collaborative approach to moderation. Under Elon Musk’s ownership, Community Notes became a key feature, earning praise for its ability to scale moderation while capturing a diversity of perspectives. Musk himself has commended Zuckerberg for bringing the model to Meta’s platforms. 

The strength of Community Notes lies in its decentralised nature. By drawing on millions of users, it incorporates a wide range of viewpoints, making moderation more representative and less prone to the pitfalls of top-down decision-making.
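The ranking logic behind Community Notes is more sophisticated than this in practice, but the core intuition, that a note should only surface if people who usually disagree with each other both find it helpful, can be sketched in a few lines. The example below is a toy illustration with invented names and thresholds, not the production algorithm.

```python
# Toy illustration of the "bridging" idea: surface a note only when raters
# from different viewpoint clusters both tend to rate it helpful.
# Cluster labels and the threshold are invented for the example.

from collections import defaultdict


def note_is_helpful(ratings: list[tuple[str, bool]], min_per_cluster: int = 2) -> bool:
    """ratings: (rater_cluster, found_helpful) pairs, e.g. ("A", True).

    Returns True only if at least `min_per_cluster` raters in every cluster
    that rated the note marked it helpful, so one side alone cannot push a
    note through.
    """
    helpful = defaultdict(int)
    clusters_seen = set()
    for cluster, found_helpful in ratings:
        clusters_seen.add(cluster)
        if found_helpful:
            helpful[cluster] += 1
    return bool(clusters_seen) and all(
        helpful[c] >= min_per_cluster for c in clusters_seen
    )


if __name__ == "__main__":
    ratings = [("A", True), ("A", True), ("B", True), ("B", True), ("B", False)]
    print(note_is_helpful(ratings))  # True: both clusters found the note helpful
```

The real system ranks notes with richer statistical models over rating data, but the gate illustrated here, agreement across otherwise disagreeing groups, is what makes the approach less prone to top-down or one-sided decision-making.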


Aditi Suresh

I hold a degree in political science, and am interested in how AI and online culture intersect. I can be reached at [email protected]
