Dear Mr Macron, Mrs Meloni, Mr Scholz and Mr Sánchez [as holder of the rotating presidency]:
We are at a critical point in the life of the proposed AI Regulation (AI Act). In the trilogue phase, this regulation is threatened by what we consider to be misguided opposition from your governments' representatives, in favor of self-regulation by the companies developing foundational AI models (such as ChatGPT and Bard). Under that approach, such companies would adhere to their own codes of conduct rather than being regulated directly by official bodies. This change of approach is delaying the adoption of the AI Regulation, which is especially concerning given the upcoming EU Parliament elections scheduled for June. More seriously, it could undermine the Regulation's effectiveness and pose severe risks to the rights of European citizens and to European innovation. Rather than a self-regulatory approach, we urge all parties involved in the trilogue to approve the AI Regulation as soon as possible. Below we set out three key reasons to support its adoption in its original form.
Companies should not make the rules themselves
Codes of conduct, even when mandatory, are insufficient and often ineffective. When companies self-regulate, they can prioritize their profits over public safety and ethical concerns. It is also unclear who would oversee the development and application of these codes of conduct, how, and with what degree of accountability. This approach rewards companies that take risks by not investing time and resources in robust codes of conduct, to the detriment of those that do comply.
This is also detrimental to the AI industry, as it leaves companies uncertain whether their products and services will be allowed on the market and whether they may face fines after commercialization. Such uncertainties may then have to be remedied with additional rules after the Regulation has already been approved, thereby limiting parliamentary debate. Finally, if each company or sector writes its own rules, the result can only be a confusing patchwork of standards, which increases the oversight burden on regulators and makes it harder for companies to comply with the codes, hampering both innovation and compliance. This runs counter to one of the fundamental objectives of the AI Regulation: to harmonize rules across the EU.
EU leadership in AI regulation
Current opposition from France, Italy and Germany to regulating foundational AI models jeopardizes the EU’s leadership in AI regulation. The EU has been at the forefront, advocating for the development of regulations that ensure technology is safe and fair for all. But this advantage could be lost if remaining regulatory challenges are not quickly and successfully addressed. An indecisive EU will lose its competitive advantage against countries like the US or China. European citizens run the risk of using AI products regulated according to values and agendas not aligned with European principles.
The cost of not regulating AI
Delaying AI regulation has significant costs. Without common standards, citizens are vulnerable to AI applications that do not serve the public interest. This lack of regulation opens the door to potential misuse and abuse of AI technologies. The consequences are serious and include privacy violations, bias, discrimination, and threats to national security in critical areas such as healthcare, transportation, and law enforcement. From an economic point of view, unregulated AI applications can distort competition and market dynamics, creating an uneven playing field in which only powerful, well-funded companies will triumph. It is a mistake to think that regulation goes against innovation: only through regulation, and therefore fair competition, can innovation flourish, for the benefit of markets, societies and the environment. Only with better regulation can more innovation be achieved.
In conclusion, the AI Regulation is more than just a law. It is a statement about what values we, as Europeans, want to promote and what kind of society we want to build. It embodies and reinforces the identity and reputation of the EU. It highlights the credibility of the EU and its leading role in the global AI community.
For all these reasons – five years after the publication of AI4People’s Ethical Framework for a Good AI Society, which guided the initial work of the European Commission’s High-Level Expert Group on AI – we urge the EU institutions and Member States to find a compromise that preserves the integrity and ambition of the AI Regulation. Let this legislation be a beacon of responsible and ethical AI governance, serving as a global example for others to follow.
The letter is signed by:
Luciano Floridi, Founding Director of the Center for Digital Ethics at Yale University and President of Atomium-EISMD.
Michelangelo Baracchi Bonvicini, First President of the Scientific Committee of the AI4People Institute and President of the AI4People Institute.
Raja Chatila, Professor Emeritus of Artificial Intelligence, Robotics and Computer Ethics at Sorbonne University.
Patrice Chazerand, Director of Public Affairs at the AI4People Institute and former Director of Digital Public Affairs for Europe.
Donald Combs, Vice President and Dean of the School of Health Professions at Eastern Virginia Medical School.
Bianca De Teffe’ Erb, Director of Data Ethics and AI at Deloitte.
Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University and Member of the United Nations High-Level Advisory Body on Artificial Intelligence.
Rónán Kennedy, Associate Professor, Faculty of Law, University of Galway.
Robert Madelin, President of the Advisory Board of the AI4People Institute.
Claudio Novelli, Postdoctoral Researcher at the Department of Legal Studies of the University of Bologna and International Fellow of the Digital Ethics Center (DEC) of Yale University.
Burkhard Schafer, Professor of Computational Legal Theory at the University of Edinburgh.
Afzal Siddiqui, Professor in the Department of Computer and Systems Sciences at Stockholm University.
Sarah Spiekermann, President of the Institute for IS and Society at the Vienna University of Economics and Business.
Ugo Pagallo, Full Professor of Jurisprudence, Legal Theory and Legal Informatics at the Department of Law of the University of Turin.
Cory Robinson, Professor of Communication Design and Information Systems at Linköping University.
Elisabeth Staudegger, Professor of Legal Informatics and IT Law (Information Technology Law), Head of the Legal and IT Department of the Institute of Legal Foundations at the University of Graz.
Mariarosaria Taddeo, Professor of Digital Ethics and Defense Technologies at the Oxford Internet Institute, University of Oxford.
Peggy Valcke, Professor of Law and Technology at the Catholic University of Louvain and Vice Dean of Research at the Faculty of Law and Criminology of Louvain.