
Meta's move away from fact-checking could allow more false or misleading content, expert says

Credit: Julio Lopez from Pexels

Meta's move away from fact-checking in content moderation practices could potentially allow more hate speech or mis- or disinformation, a Northeastern University social media expert says.

Meta is adopting a model similar to the one used by X, called Community Notes.

John Wihbey, an associate professor of media innovation and technology at Northeastern University, sees the move as the company repositioning itself ahead of President-elect Donald Trump's inauguration. But third-party fact-checking, while difficult to scale on a platform with billions of users, "is an important symbol of commitment to trust and safety and information integrity," Wihbey says.

It is "dangerous," he says, to break from those norms at a moment when "the winds of authoritarian populism are blowing across the globe."

In a video message, Meta founder and CEO Mark Zuckerberg described the shift as part of an effort to "get back to our roots around free expression," noting, among other things, that the company's fact-checking system has resulted in "too many mistakes and too much censorship."

He also cited the 2024 election, describing the election of Trump as a "cultural tipping point" toward "once again prioritizing speech."

On X, the Community Notes model uses crowdsourced input from users to fact-check posts, usually in the form of added context. Wihbey described Zuckerberg's announcement as confusing, noting that Meta's third-party fact-checkers played a minimal role in day-to-day moderation when compared to its sophisticated algorithmic tools, which can sometimes result in false positives or negatives.

As part of the policy shift, Meta says it will scale back its content moderation algorithms.

In addition, the company says that it wants to pivot to a more laissez-faire approach to civic or political content after tightening controls in recent years to curb the spread of mis- and disinformation.

"In recent years we've developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content," the company said in a statement. "This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable."

The policy changes could have downstream effects—not only in the U.S., but elsewhere around the world, warns Wihbey, whose forthcoming book, "Governing Babel: The Debate over Social Media Platforms and Free Speech—and What Comes Next," delves into content moderation and free speech.

"The analogy I would use is that they're simultaneously standing down the police while opening up the floodgates for crime," Wihbey says. "And that's a dangerous mix.

"I think it's a real step backwards in terms of trust and safety and platform integrity," he adds. "At the same time, I understand pragmatically why Mark Zuckerberg and their leadership are moving in this direction."

Still, Wihbey described the policy reversal as short-sighted because it is too narrowly focused on the U.S. market. Meta's products, he notes, are used worldwide—accounting for about $134.9 billion in global revenue.

"The Meta product suite, which is Facebook, Instagram and WhatsApp Messenger, is very, very important to political debate, human rights work and journalism, so this is going to have a lot of second-order consequences beyond the shores of the United States," Wihbey says.

Wihbey notes that, with the U.S. potentially set to ban TikTok, other countries may feel emboldened to shut out U.S.-based platforms.

With the policy yet to be implemented, it remains to be seen how these changes will play out. Wihbey says that X's Community Notes model suffers from many of the same issues that third-party fact-checkers do, including problems of scale, timeliness and user consensus across party lines.

"The big rubric here is counterspeech, which is: How do you create or present additional information—content labels, for example—or different points of view around a particular claim?" Wihbey says.

"I would prefer to see mixed regimes, where you have both professionals and ordinary users, but that you also enlist experts who bring real knowledge," he adds.

Another possibility, Wihbey says, is that Meta could simultaneously be working on AI-based solutions that may serve to fill in the gaps.

"The real tale will be the very powerful, invisible ways in which they use AI tools to try to keep a lid on problems, but also still allow for expression to flourish on the platform," Wihbey says. "That's a tricky balance."

This story is republished courtesy of Northeastern Global News news.northeastern.edu.

Citation: Meta's move away from fact-checking could allow more false or misleading content, expert says (2025, January 8) retrieved 24 January 2025 from https://phys.org/news/2025-01-meta-fact-false-content-expert.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

