In a series of posts published on Threads, Adam Mosseri, head of Instagram, raised a crucial issue for the digital age: the growing difficulty of distinguishing reality from fiction online as AI-generated content proliferates. Mosseri notes that AI can now produce images and videos realistic enough to easily deceive users. Faced with this challenge, he stresses the responsibility of internet platforms to clearly label AI-generated content, while admitting that some of it will inevitably escape such labels.
For this reason, Mosseri calls for a multi-pronged approach that combines labeling with contextual information about a piece of content's source. "We need to provide users with the tools to evaluate the reliability of what they see online," Mosseri says, effectively proposing a system that lets users check the origin and reputation of the accounts sharing content. This approach, reminiscent of participatory moderation models such as Community Notes on X and YouTube or Bluesky's customizable filters, is one possible way to counter the spread of misinformation and misleading content.
At the moment, Meta's platforms offer no tools of this type, but recent statements from the company point to significant changes in its content-management policies. It is not clear whether Meta intends to adopt solutions like those Mosseri describes, though the company is known to have drawn inspiration from Bluesky in the past. The issue Mosseri raises is a fundamental one: in a world increasingly permeated by AI, online trust must be earned and preserved through transparency, accountability, and collaboration between platforms and users.