Changes to a social network’s moderation policies can play a huge role in what its users see. This fact, already known, has now been measured for the first time with data provided by a platform itself: labeled misinformation on Facebook went from 50 million views in July 2020 to practically zero in November, in the days leading up to that year’s US elections.
This temporary change was the result of extreme moderation measures known as “break the glass” measures, which were intended to limit the viral spread of problematic content in the midst of the electoral campaign. And they succeeded. This year, however, after the change of president in the White House and with Mark Zuckerberg dining with Donald Trump at his residence, Meta says it has learned its lesson about excessive moderation: “We know that when we apply our rules we make too many mistakes, which limits the freedom of expression we want to allow. Too often we remove or limit harmless content and unfairly punish too many people,” wrote Nick Clegg, president of Global Affairs at Meta, on December 3.
The problem with these practices is that they depend solely on the will of Meta and its leaders, who in turn answer to their own interests, not the public interest of a democracy: “The concern is that these companies ignore the general public interest in order to fulfill their objective of making money in the short term, and that can also involve serving those in power without it being noticed,” says David Lazer, co-author of the research published in Sociological Science and professor at Northeastern University in Boston (USA). The article belongs to the series of studies that a group of researchers began to publish in the journals Science and Nature in the summer of 2023, which found, for example, that misinformation was widely consumed by people with right-wing ideology. The publication of this new research four years after the events it analyzes is due to the lengthy review and acceptance processes of scientific journals.
Each network is a world of its own
This unprecedented access to the inner workings of a social network makes it possible to test how each platform’s rules and moderation influence how content is distributed and goes viral. The characteristics of a network, such as how Pages and Groups work in Facebook’s case, and its moderation rules are two key elements. And both can potentially carry more weight than the algorithm itself in determining what information users see.
Along with this finding, the research also confirms something already pointed out by previous studies: the distribution of misinformation depends on a small group of very disciplined users. “The result that only about 1% of users are responsible for disseminating the majority of messages labeled as disinformation is consistent with what other research has found on other platforms and at other times,” says Sandra González Bailón, professor at the University of Pennsylvania (USA) and also a co-author of the article. “We can generalize this pattern, according to which a minority is responsible for the majority of the problematic content circulating on the networks,” she adds.
Beyond that detail, it is difficult to say whether the behavior described in this article would be the same on other networks. X, Instagram and TikTok each have their own characteristics that produce different dynamics, in addition to their own moderation.
The ability to influence elections
This month Meta revealed that, in 2024 alone, it has taken down around 20 covert influence operations organized by governments. Russia remains the main culprit, followed by Iran and China. None of these efforts were successful, according to the company.
The ability to influence a country’s democratic process is far greater from the management offices of these networks. The problem is that the current may flow in favor of one ideology at one moment and then reverse. The recent cases of X and Telegram, shaped first by their owners and then by pressure from the authorities in Brazil and France, show that the only reasonable solution is for the networks to be more transparent about their measures.
“At this point it is very difficult to assume that the platforms are going to do what is best to protect democratic processes, coexistence, or a rational and deliberative exchange of opinions,” says González Bailón. “Research, and access to data, is the only way to determine how platforms control the flow of information. Our work makes it very clear that they have the ability to control it. It is a source of power that, in open societies, should not be allowed to be exercised in the dark,” she adds. The researchers’ hope is that regulation will force platforms to open up access to this data. Europe is attempting to do so with its Digital Services Act, which for now has no equivalent in the US.