In the middle of summer, discreetly but relentlessly, the big platforms have been announcing improvements to protect minors. Instagram, YouTube and TikTok have unveiled a series of changes that coincide with the entry into force this Thursday of a new code for age verification in the United Kingdom, and with a clear regulatory trend toward demanding greater responsibility from platforms. In Spain, about 70% of those under 15 had a mobile phone in 2018, according to data from the INE.
The platforms do not admit that their new measures are a response to direct pressure from legislation. But it is obvious that the big technology companies are moving in the same direction as regulators, albeit at a different pace and with different priorities. Google, for example, told this newspaper that it supports the efforts of the British and other data protection authorities to keep children safer on the internet but insists that many of its changes go beyond what any regulation requires.
Among the specific measures, YouTube will begin to set the most private upload option by default for users aged 13 to 17, and TikTok will do something similar. Adolescents who want everyone to see their videos will have to change the setting expressly. TikTok will also warn users that others can download their videos and distribute them on other networks. Both platforms will also limit late-night notifications about new videos for younger users. Instagram will likewise make accounts private by default for those under 16, to make it harder for unknown users to find them and to prevent adults from sending them private messages. The Facebook-owned photo network will also limit advertisers’ options for personalizing advertising aimed at minors. The general minimum age to use these networks is 13.
“The new code may encourage companies to take some kind of action, but ultimately it has been created by the regulator as an extension of data protection law,” says Michael Veale, professor of Digital Law and Regulation at University College London. “For me, it would be very fragile in court. Platforms may find that it encourages them to make moves they already had planned, but if they believed it threatened their business models, they would take it to court and could actually win,” he adds.
The prominence of the British legislation is obvious, but other countries, such as Ireland, already have their own in motion. It is a growing trend. In its response to this newspaper, Instagram highlighted that several regulatory frameworks on data protection and the safety of minors, some new and some still in draft, inform its work. Among them, in addition to the new British code, are the United Nations Convention on the Rights of the Child, the Irish Fundamentals for minors and the European Audiovisual Media Services Directive.
But how old are you?
Despite these developments, one of the great mysteries of the internet age, present since its beginning, looms over the debate on protecting minors: who is on the other side of the screen. The famous cartoon from the 90s, in which a dog sitting at a computer says “on the internet, nobody knows you’re a dog,” still holds. Platforms can ask users their age, and users can simply lie.
The most recent novelty is that Instagram has announced it will ask users who have not provided their date of birth to do so. If, after a while, the user still refuses, they will not be able to continue using the network. In the post announcing the change, the company admits that it is very easy to lie: “We are developing new systems to deal with it,” it says. The great remedy is Facebook’s favorite tool for solving its worst problems, such as moderating illegitimate content or fake accounts: artificial intelligence. Facebook has designed models that draw on signals such as the ages mentioned when users wish someone a “happy birthday,” or the date of birth that someone has given in another of the company’s apps, such as WhatsApp or Facebook itself.
Age verification systems are already widespread; there is even an Age Verification Providers Association (AVPA, based in London). “Some platforms say they use artificial intelligence to detect children who may have inflated their age to open an account, but that technology is only beginning to develop,” says Alastair Graham, co-chair of the Association. “This still allows 44% of children between 8 and 11 years old in the United Kingdom to use social networks, according to data from the regulator [Ofcom].”
The Association has an obvious interest in promoting its methods. According to Graham, the risks the youngest face on social networks (chatting with adults, pornography, content that can trigger eating disorders) require “more effective forms of age verification,” which can range from estimation techniques such as facial analysis to passport scanning, methods already used by adults for matters such as “buying alcohol, vaping products or accessing online casinos.”
Today, a selfie is enough for software to estimate the age of a face, with a reasonable margin of error. That would allow platforms to request proof of age from those under 18: if the software is asked to flag everyone it estimates to be under 23 (a 5-year margin), only 0.25% of under-18s would slip through, according to a report by Yoti, one of the Association’s companies dedicated to age verification. In other words, roughly 1 in 400 under-18s would pass as 23 or older in the software’s estimation.
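The logic of that buffer can be sketched in a few lines. This is an illustration only, not Yoti’s actual system: the function name, the threshold constants and the example ages are all hypothetical, chosen to match the 5-year margin described above.

```python
# Illustrative sketch of the "buffer age" rule described above: if a facial
# age-estimation model can be off by up to ~5 years, asking for proof of age
# from everyone it estimates to be under 23 catches nearly all under-18 users.
# (Names and numbers are hypothetical, based on the figures in the article.)

ADULT_AGE = 18                               # legal threshold being enforced
BUFFER_YEARS = 5                             # assumed margin of error of the estimator
CHALLENGE_BELOW = ADULT_AGE + BUFFER_YEARS   # ask for ID below this estimated age

def needs_proof_of_age(estimated_age: float) -> bool:
    """Return True if the user should be asked for proof of age (ID, passport...)."""
    return estimated_age < CHALLENGE_BELOW

# A 17-year-old whose face is overestimated by 4 years is still challenged:
print(needs_proof_of_age(17 + 4))  # True: 21 < 23
# Only an under-18 overestimated by more than the buffer slips through:
print(needs_proof_of_age(17 + 6))  # False: 23 is not < 23
```

The trade-off is visible in the second case: the wider the buffer, the fewer minors slip through, but the more adults are inconvenienced with ID checks, which is where the 0.25% figure from the report comes in.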
Why haven’t platforms already adopted such methods? Graham believes they are delaying the inevitable: “We are reaching a tipping point, where the accumulated weight of all the legislation persuades platforms to bite the bullet and apply a standards-based, independent verification process that protects privacy,” he says. “It is obvious that they are buying time,” he adds.
Not biometrics again
However, this new step is not so simple. Other organizations have deep doubts about the method: what happens to the biometric data stored by the little-known companies that verify ages? Even if they discard the photos themselves, “commercial systems retain the digital image, perhaps not a picture but the unique identifier used for comparison. It doesn’t matter that the identifier has no eyes or ears; it is still uniquely you,” says Jen Persson, director of the children’s rights NGO Defend Digital Me.
Persson proposes broadening the way we think and, above all, being vigilant about hasty solutions. “Children who want to access places where adults think they shouldn’t be will do so, both online and off. Using high-risk and increasingly invasive technologies such as biometrics for trivial matters on the web is a decision that, as a society, we will regret for years to come,” she says.
The remedy may be worse than the disease, warns Persson, who is concerned about the new code’s demand for better age verification. “Its approach to asking for age risks introducing or exacerbating the very problems the code wants to solve: discrimination, damage to privacy, restrictions on participation, and the processing of excessive and unnecessary additional personal data about children and their relationships,” she says.
Persson recalls that the code grew out of concern about the treatment of children’s “data” and possible profiling, not out of the “protection” of minors as such. “A constructive discussion on the technical implications of internet architecture and human rights in the broad sense is being lost amid the alarmism around the issue,” she adds.
You can follow EL PAÍS TECNOLOGÍA on Facebook and Twitter, or sign up here to receive our weekly newsletter.