Meta inaugurated the end of fact-checking on its networks, which it will implement this year in the United States, with a hoax of its own, spread by its new head of global affairs, Joel Kaplan, and by the founder of the social network, Mark Zuckerberg. Both executives justified scrapping the anti-hoax program by arguing that objective verification of content is a form of censorship and that the professionals in charge of it introduce their own biases. This is false: fact-checkers do not censor or remove content, they only warn of falsehoods, and they do not introduce bias, since their work follows an objective methodology. The elimination of the service has drawn a cascade of criticism from communication experts, and in Europe there are demands for strict application of the Digital Services Act (DSA) to preserve content moderation.
“We have reached a point where there are too many errors and too much censorship,” says Zuckerberg, CEO of Meta, to justify eliminating the moderation program. He adds: “The fact-checkers have simply been too politically biased and have destroyed more trust than they have created.” Kaplan echoes him: “Too much harmless content is censored, too many people are unfairly locked up in ‘Facebook jail,’ and we are often too slow to respond when this happens.”
Clara Jiménez Cruz, fact-checker, co-founder and CEO of Maldita.es and president of the European Fact-checking Standards Network (EFCSN), rejects the premises: “It is a lie that we are censors. What fact-checking does is add verified information and facts to public discourse. In the case of the collaboration with Meta [with whom they signed a contract to continue their work in Europe days before the company announced the program’s elimination in the United States], it means telling them what those verified data and facts are so that the company, by its own decision and in accordance with a program it designed, decides what to do.”
Jiménez Cruz insists that the objective is to warn users “that they are consuming misinformation so that they can decide whether to keep reading it or share it, but in no case to delete or remove content.”
The expert in hoax identification also categorically denies that the professionals’ work introduces bias. “We fact-checkers undergo, every year or every two years depending on the organization, an examination carried out by independent experts who review the way we make decisions, what we verify, how we do it, and how we decide. Our investigations are unbiased, accurate, and transparent, and we are subject to methodologies and standards that are reviewed every year and that we comply with. Accusing us of introducing political bias is misplaced.”
Meta’s own evaluations, made before the decision to eliminate fact-checking after Donald Trump’s victory in the last elections, highlighted the effectiveness of the very service it is now dropping. During the 2024 European Parliament elections, the company noted that 68 million pieces of content on Facebook and Instagram were labeled after fact-checking, and that 95% of users chose not to view the flagged content after seeing the warning.
Meta, as the EFCSN recalls, has also praised the virtues of the program it is now abandoning: “We know it is working and people find value in the warning screens we apply to content after it has been rated by a fact-checking partner.”
Nor are the errors that Zuckerberg and Kaplan cite to discredit the service massive: according to Meta’s own transparency report, they amount to 3.15%.
Reactions
The European network of fact-checkers harshly criticizes a decision that brings Meta’s social platforms closer to Elon Musk’s X. The replacement of professional fact-checking with a community-based system has also alarmed communication experts.
Lisa Fazio, associate professor of psychology and human development at Vanderbilt University, warns that this collective tool is “insufficient to determine what information is true” and considers it too slow and inefficient: “This system ignores a large amount of fake content. Politicized misinformation is rarely detected because not everyone will agree that it is false.”
Syracuse University professor Roy Gutterman, director of the Tully Center for Free Speech, agrees: “The Meta community policing itself probably won’t decrease the amount of misinformation and disinformation floating around on social networks.”
Gordon Pennycook, professor of psychology and researcher on social networks and hoaxes at Cornell University, joins them: “In an ecosystem where misinformation is having a great influence, collaborative fact-checking will simply reflect the erroneous beliefs of the majority. I support the use of collective fact-checking, but eliminating third-party (professional) verification seems like a big mistake to me.”