Years ago, one of the first directors of the Generalitat's Department of Social Rights received a family who had been denied a subsidy for families with children with disabilities. The family exceeded the income limit for the subsidy, and the system automatically rejected the application. "If we exceed the income limit, it is because we work to save money that will protect our son when we are no longer here to take care of him," the parents protested. The conversation ended with the department rectifying its decision and finally paying the aid after learning of the family's situation.
The story is still remembered in the halls of the Catalan Data Protection Authority (APDCAT), and that backdrop has prompted the entity to try to humanize algorithms and artificial intelligence. On Tuesday, the agency presented in the Parliament of Catalonia a methodology, pioneering in Europe, to "avoid discrimination and biases" in cases where artificial intelligence plays a fundamental role in decision-making. "We have to supervise artificial intelligence systems," says Meritxell Borràs, president of the APDCAT.
Given the unstoppable advance of AI, the European Council approved last year the entry into force of a regulation that requires an impact assessment on fundamental rights for products and services that use artificial intelligence. However, as the APDCAT sees it, "there are no guidelines in Europe on how to do it." The Catalan proposal sets out a specific method to quantify this impact through a matrix model.
According to risk theory, the impact of these systems rests on two key dimensions, probability and severity, and their combination yields an AI risk index. Based on this index, technology suppliers must intervene to mitigate the risk until it is "residual." "Having automated decisions made about people on the basis of AI raises various concerns about the logic of artificial intelligence itself, and it is important to know the side effects of this technology," the authority insists.
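The article does not detail how the APDCAT matrix is built; the following Python sketch is only an illustration, under the assumption of a simple probability-times-severity matrix, of how a risk index might be computed and then banded until it reaches a "residual" level. The scales, thresholds, and band names are chosen here purely for the example.

```python
from enum import IntEnum


class Probability(IntEnum):
    """Illustrative 4-point probability scale (assumed, not from the APDCAT method)."""
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3
    ALMOST_CERTAIN = 4


class Severity(IntEnum):
    """Illustrative 4-point severity scale for harm to fundamental rights."""
    LIMITED = 1
    SIGNIFICANT = 2
    SEVERE = 3
    CRITICAL = 4


# Risk matrix: each (probability, severity) cell maps to a numeric risk index.
# Here a plain product is used; a real matrix may weight cells differently.
RISK_MATRIX = {(p, s): p * s for p in Probability for s in Severity}


def risk_band(score: int) -> str:
    """Translate the numeric risk index into a qualitative band (thresholds assumed)."""
    if score <= 2:
        return "residual"
    if score <= 6:
        return "moderate"
    if score <= 9:
        return "high"
    return "unacceptable"


def assess(probability: Probability, severity: Severity) -> str:
    """Look up the risk index for a harm scenario and return its band."""
    return risk_band(RISK_MATRIX[(probability, severity)])


# Example: the same harm before and after a mitigation that lowers its probability.
print(assess(Probability.LIKELY, Severity.SIGNIFICANT))  # "moderate" -> keep mitigating
print(assess(Probability.RARE, Severity.SIGNIFICANT))    # "residual" -> acceptable
```

The point of the sketch is the workflow the article describes: score each potential harm, combine the two dimensions into an index, and iterate on mitigations until every scenario falls into the residual band.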
The limits on AI are meant to anticipate possible harm to users, something that has already forced several administrations to act. In 2020, a Dutch court found that the country's government had used an analysis system to track possible fraud against the State that "stigmatized and discriminated against" citizens; and in New York (United States), the use of algorithms in hiring processes was regulated in 2023 to avoid "racial or sexist prejudices." Also in the US, a 2021 analysis by MIT Technology Review concluded that the Compas system, widely used in the country to inform judges about prisoners' risk of recidivism, discriminates against certain minorities, especially Black people.
Experts also warn about processes where the AI comes from abroad. "Models trained outside Europe that arrive on our continent do not carry the best guarantees for deployment here, because they were not designed with a European context in mind," says Alessandro Vencher, professor at the Polytechnic University of Turin (Italy) and one of the developers of the Catalan methodology.