In an unexpected move, Meta has decided to halt its plans to use data from European Facebook and Instagram users to train its artificial intelligence. The decision appears to be a direct consequence of the criticism and pressure exerted by consumer and privacy protection bodies in the European Union and the European Economic Area (EEA).
The controversy gained traction when the Irish Data Protection Commission (DPC) and the European Center for Digital Rights (Noyb) questioned the legitimacy of the “legitimate interests” basis invoked by Meta for the processing of such data. The DPC had initially approved Meta’s plans, but quickly backtracked in the face of widespread criticism.
Meta had planned to roll out new AI features in the EU, promising benefits such as personalized stickers for chats and stories and an advanced virtual assistant, and arguing that the use of the data was necessary to reflect and interpret “the different cultures and languages of the European communities that will use these services”. However, the proposed method of data processing raised significant concerns about compliance with the General Data Protection Regulation (GDPR).
In particular, criticism focused on Meta’s attempt to avoid requiring users’ explicit consent, opting instead for an opt-out mechanism that would remain available only until June 26. Critics saw this approach as an attempt to use “dark patterns” to collect as much data as possible before the cutoff took effect.
The decision to suspend these plans was welcomed by Noyb, which nonetheless remains vigilant. Max Schrems, president of Noyb, expressed satisfaction with Meta’s move but stressed that the legal battle will continue until the suspension is reflected in official and binding changes to Meta’s privacy policy.