Society risks repeating with artificial intelligence (AI) the mistake it made with social networks and the internet in general, The Lancet warns today: by the time its effects, above all on the mental health of minors, begin to be studied scientifically and acted upon based on the evidence, young people will already have adopted the new technology. “If we do not learn from past experiences, we may find ourselves in a similar situation in ten years (…), absorbed by another cycle of media panic and failing to make AI safe and beneficial for children and adolescents,” warns Karen L. Mansfield, a research psychologist at the University of Oxford, in the journal. The same issue also examines one of the risks that is already well established: every year, one in 12 children worldwide is a victim of sexual abuse or exploitation.
“On the Internet you can find every bad thing you can imagine, trafficking in anything. It existed before the Internet, but the Internet has been a platform for its global promotion and dissemination,” explains Jorge B., known online as @NoobInTheNet, who began exploring the inner workings of the web at just 12 years old. Thanks to his parents’ supervision and initial guidance from the cybersecurity company Kaspersky, he was able to avoid its darkest paths. He is now 21 and works in the IT department of a multinational insurance company.
Most serious of all, Jorge B. explains, these doors to harm for children and adolescents are not hard to reach; quite the opposite. “The worst thing you can find on the Internet is not on a hard-to-find deep web. The best way to hide something is to leave it visible to everyone. You will find it on the surface, in the most inappropriate place and at the most inappropriate time,” he says of childhood and adolescence, when, in his words, “you lack a backpack of experience and a certain technological maturity.”
The worst thing you can find on the internet is not on a hard-to-find deep web. The best way to hide something is to leave it visible to everyone.
Jorge B., @NoobInTheNet
Among the most common dangers of AI, the young specialist highlights identity theft through fake content, a practice that predominantly targets girls and young women. Mansfield’s research points to further examples, especially those with a direct impact on mental health. It identifies as potentially harmful “human-like functions”, such as AI agents, and systems “producing images and video content convincing enough to be indistinguishable from authentic content [deepfakes and disinformation], which could influence children’s emotions and behavior.” The psychologist also points to “content recommendation systems and online diagnostic tools for depression, anxiety or eating disorders, which are increasingly used for self-diagnosis.”
“With human-like AI enhancing or moderating online interactions, the range of potential benefits and harms for children and adolescents is more diverse than social media and online gaming alone have ever been,” Mansfield warns.
Marc Rivero, Lead Security Researcher at Kaspersky and not involved in the study, reaches a similar conclusion: “Artificial intelligence is transforming the digital experience of children and adolescents, but it can also have a negative impact on their mental and emotional health. By personalizing the content they see and suggesting interactions, AI can expose them to inappropriate materials or online groups that promote illegal activities. These influences can increase anxiety, isolation or even lead to risky behavior in the digital environment. Therefore, it is key to protect minors through early digital education, the use of appropriate supervision tools, such as parental controls, and the establishment of an open dialogue with children to teach them to navigate safely and responsibly in the digital world.”
By personalizing the content they see and suggesting interactions, AI can expose children and adolescents to inappropriate materials or online groups that promote illegal activities.
Marc Rivero, Lead Security Researcher at Kaspersky
The warning from Rivero and the Oxford researchers is especially relevant because, according to the Kaspersky report Being online: children and parents on the Internet, “the majority of children have access to technology from an early age: almost half of Spanish minors (47%) have their first contact with an internet-connected device before turning 7, 24.5% of Spanish parents never talk to their children about the dangers of the digital environment, and 75% acknowledge that their child did not have sufficient knowledge to use the Internet safely.”
All the specialists consulted agree on three keys: training, information and research, so that minors approach AI resources with the strongest possible defenses. Without these, later regulations or bans on the technology are ineffective.
Child abuse
This has been demonstrated by one of the worst scourges of the Internet, where rules and limits have not prevented one in every 12 children in the world from suffering sexual exploitation or abuse each year. That is the conclusion of research led by Deborah Fry, professor of Child Protection Research at the University of Edinburgh, also published today in The Lancet.
Like the Oxford study, this work also highlights the danger posed by the rise of AI. “Emerging technologies, based on advances in both hardware and software, are drawing on decades of AI research. The ways in which young people interact with it are constantly changing, and many experts predict human-like AI within this decade,” Fry warns in the study.
The child protection specialist’s team points out the limitations of research based on screen time or social network use, and it is also skeptical of measures tied to those two aspects: “Time limits [on use] and age limits [for accessing applications] shift responsibility away from the need to regulate harmful content, placing the onus instead on parents and guardians or on the mass integration of untested age estimation technologies, which have been deemed to present privacy and security risks.”
The researchers also point to the lack of uniform, comparable data for addressing a global problem; there is not even a shared set of criteria for what counts as sexual abuse and online crime.