On January 1, 2025, the terms of service of Instagram and Facebook will be updated. LinkedIn's were updated on November 20, X changed its terms without warning, and other social networks will likely follow suit. One common motivation is to incorporate into the terms of use a framework for the generative artificial intelligence (AI) tools specific to each social network.
This is not about using ChatGPT or Google Gemini to generate content and then posting it on social networks. In this case, it is Instagram, Facebook and LinkedIn themselves that offer their own artificial intelligence systems, integrated within the platforms and easily accessible to users as just another feature. Yet all three social networks place responsibility on the user if they share content generated with the platform's own AI that turns out to be inaccurate or even offensive.
This occurs even as the platforms admit that the answers offered by their generative AIs may be erroneous or misleading, an intrinsic limitation of this type of technology. The terms of service of Meta AI, present on Facebook and Instagram, state: “The accuracy of its content [the content generated with Meta AI] cannot be guaranteed, including Responses, which may be unpleasant or upsetting.”
In its renewed terms of use, LinkedIn notes that the content generated by its generative AI functionality “may be inaccurate or incomplete, delayed, misleading, or not suitable for your purposes.” It asks the user to review and edit the generated content before sharing it, and adds that “you are responsible for ensuring that [the content] adheres to our ‘Professional Community Policies’, including not sharing misleading information.”
For Sara Degli-Esposti, CSIC researcher and author of the book The Ethics of Artificial Intelligence, the position outlined leaves no room for doubt: “This policy is along the lines of: ‘we don’t know what can go wrong, and anything that goes wrong is the user’s problem.’ It’s like telling you that they are going to give you a tool that they know may be defective.”
LinkedIn’s AI is used to generate texts that will later be published on the social network; for now it is only available in English and for paying users. Meta AI, on Instagram and Facebook, can be used to write a message, ask it questions (even in a group chat), ask it to modify a photo, or generate an image from scratch. It is not yet available in the European Union.
“The fundamental issue is that functionalities are being offered here through tools that have not been fully tested and, in fact, the testing will be done by the users themselves,” says Degli-Esposti. “In a way, they subtly admit that they are making a tool available to you while clarifying that it may still have problems, which is like saying it is still in the development phase. They should be telling you that you are assuming an additional risk.”
The terms of Meta AI make a veiled allusion to the fact that generative artificial intelligence is still in an incipient phase, although it is framed positively. “AIs are a new technology and continue to improve,” the conditions of use read, concluding with a warning: “We cannot guarantee that they are secure, that they will never suffer errors or that they will operate without interruptions, delays or imperfections.” In another section, the text directly addresses the user: “You acknowledge and accept that you, not Meta, are responsible for your use of the content generated by the AI based on your Instructions.”
These are concepts that may be clear to an advanced user of generative AI systems, but not to everyone. “The key is the current lack of culture and education about generative AI: how we obtain information from it, how it should be verified, and how we should approach it,” says Javier Borràs, a CIDOB researcher specialized in the intersection between technology and democracy. “These systems, by their very nature, don’t give you true or false answers. They offer a result based on a statistical prediction drawn from all the data they have. They don’t distinguish between true and false; they offer you a probability. This knowledge is not widespread among users.”
Seeking a trained and informed user
The ethical dilemma arises from making easily accessible generative AI tools available to the mass public on social networks. Should they not be offered at all? Borràs points out that users would simply turn to third-party systems instead. “Maybe what they should do [the social networks] is make it clear, whenever you obtain a result, that the results may be inaccurate and must be verified. The user should have a constant reminder that this can happen, appearing with every response they get,” says the CIDOB researcher.
In the English version, Meta AI displays a fine-print disclaimer under the prompt bar: “Messages generated with Meta AI may be inaccurate or inappropriate. Get more information,” with the latter phrase linking to further details. Meanwhile, the terms of use remind users: “If you plan to use the Responses [from Meta AI] for any reason, it will be your sole responsibility to verify them.”
One concern about introducing generative AI tools accessible to social media users is the spread of misinformation, a problem these platforms have amplified for years. However, it is not clear that AI had a major impact in this field during 2024, a critical election year given the number of elections held around the planet. Borràs does not believe that social media tools have a greater impact than third-party systems might.
It is a debate in which individual responsibility comes into play. Degli-Esposti notes that, from an ethical point of view, there is another perspective that foregrounds this aspect: “The author is the one who provides the prompt to the system. This means that an element of autonomy is maintained on the part of the user, who guides the AI in its generation and decides whether to keep the final product.”
The counterargument is that social networks benefit when users use their generative AI: competitively, by training their algorithms, and even economically. After all, the more content that is generated and shared, the more advertising can be introduced into the platform, the main source of income for social networks.
“A process of educating users would be needed, so that they have a culture of how generative AI works and the risks it carries. And the companies that profit from it should have the responsibility of being part of this process,” says Borràs. He adds that training should go further, reaching the educational system and the business community: a formula for all of us to use generative AI systems with confidence.