The purchase of Twitter by Elon Musk, right-hand man of the next US president, Donald Trump, turned the platform the billionaire renamed X into a jungle without rules in the name of a supposed freedom of expression. A study by City St George's School of Science and Technology, University of London, covering nine countries including Spain, found that in just two years X has become the network of political abuse, where adversaries, dissidents, or moderates are relegated to being treated as "enemies." Meta's networks (Facebook, Instagram, and Threads) are following suit by removing fact-checking and relaxing content moderation. "The consequences of this decision will be an increase in harassment, hate speech, and other harmful behavior on platforms with billions of users," warns Alexios Mantzarlis, director of Safety, Trust and Protection in the technology area at Cornell University. Few organizations defend the measure.
The Cornell researcher, who participated in the international fact-checking network, highlights that the drift of Meta, founded and directed by Mark Zuckerberg, is twofold: not only does it stop verifying data to identify hoaxes, delegating that control to users, but it also opens the door to the content most likely to generate hate speech. This is confirmed by the multinational's new director of global affairs, Joel Kaplan: "We are getting rid of a series of restrictions on issues such as immigration, sexual identity, and gender that are the subject of frequent speeches and political debates. It is not fair that things can be said on television or in Congress, but not on our platforms."
Mantzarlis is highly critical of the move: "In addition to ending the fact-checking program, Zuckerberg has also announced a more lax approach to content moderation, so Meta will not proactively search for potentially harmful content across a wide range of domains. Depending on how it is applied, the consequences of this decision will be an increase in harassment, hate speech, and other harmful behavior."
Meta has argued that content moderation and the fact-checking program have involved “censorship,” biases, and limits on freedom of expression, something that the international network of fact-checkers flatly denies.
The Cornell researcher also rejects these arguments and maintains that Mark Zuckerberg has eight years of data supporting the anti-hoax program and its benefits in controlling unwanted messages. "However," he laments, "instead of sharing hard evidence, he has chosen to imitate Elon Musk and promise freedom of expression for all."
"The [moderation] program was by no means perfect, and fact-checkers have undoubtedly gotten some percentage of their labels wrong [3.5%, according to Meta's audits]. But we must be clear that Zuckerberg's initiative to get rid of fact-checkers is a decision of politics, not of policy."
This statement by Mantzarlis refers to Meta's radical turn after Trump's election and the appointment of Musk as the next president's right-hand man. In fact, X has been one of the first to welcome its competitor into a social communication ecosystem in which populism has grown around the world.
Angie Drobnic Holan, journalist, writer and member of the International Fact-Checking Network, agrees with the Cornell professor: “It is unfortunate that this decision comes as a result of extreme political pressure from a new administration and its supporters.”
For Drobnic Holan, fact-checkers have been impartial and transparent, so questioning that objectivity, in her opinion, "comes from those who feel they should be able to exaggerate and lie without refutation or contradiction."
"This decision [the removal of fact-checking] will harm social media users who seek accurate and reliable information to make decisions about their daily lives and interactions with friends and family. Fact-checking journalism has never censored or removed publications; it has added information and context to controversial claims and debunked false content and conspiracy theories," concludes Drobnic Holan.
Also speaking out against the measure is Tal-Or Cohen Montemayor, founder and director of CyberWell, an organization that fights online hate and specializes in combating anti-Semitism. In her opinion, Meta’s decision represents “the intentional deterioration of the best practices of trust and security” in an environment where the specialist observes “growing evidence of how hate speech, inflammatory content and harassment cause damage in the real world.”
"The change [at Meta] means one thing, very much in line with the trend in both the quantity and quality of content that we have seen on X since Musk acquired Twitter: more hate speech, more politicized content, more niches, and less effective responses from the platforms," adds Cohen.
The founder of CyberWell also rejects Meta's argument about facilitating the right to opinion and eliminating alleged censorship: "This is not a victory for freedom of expression. The only way to avoid censorship and data manipulation by any government or corporation would be to institute legal requirements and reforms on Big Tech so that they modify social networks and comply with transparency requirements. The answer cannot be less responsibility and less investment on the part of the platforms," she concludes.
In favor of Meta
Although most social communication specialists are against the new measure, Meta's decision has had some support, apart from the welcome from X, its competitor in the network market, prompted by Meta's alignment with Elon Musk's ideology.
Along these lines, the conservative organization Foundation for Individual Rights and Expression (FIRE) has celebrated Zuckerberg's decision: "Meta is giving its users what they want: a social media platform that doesn't suppress political content or use top-down fact-checkers. These changes will hopefully result in less arbitrary moderation decisions and greater freedom of expression on Meta's platforms," the foundation states.
FIRE advisor Ari Cohn argues that the decision is in accordance with the First Amendment of the United States Constitution, which guarantees freedom of expression. For this member of the group, Meta's measure "protects the editorial choices of social media companies over the content of their platforms." "It's good that they voluntarily try to reduce bias and arbitrariness when deciding what content they host, especially when they promise users a culture of free speech, as Meta does," Cohn argues.
Cycle of hate and outrage
However, various studies indicate that this conception of freedom of expression without moderation generates a vicious circle that promotes the most "outrageous" content and relegates the most "trustworthy" content, hence the prediction by most network experts of an increase in hate speech and misinformation.
That is the conclusion of a study published in Science, which warns that social media posts containing misinformation provoke more "moral outrage (a mix of disgust and anger)" than posts with reliable information, and that outrage facilitates the spread of fake news because users are "more likely to share it without reading it, to reinforce their moral positions or their loyalty to political groups," explains Killian L. McLoughlin of the Department of Psychology at Princeton University.
This is where the networks' automated, traffic-seeking content selection comes into play. "Given that outrage is associated with greater online engagement, misinformation that evokes outrage is likely to spread further in part due to algorithmic amplification of engaging content," the researchers write.