‘Harmful’ and ‘likely’: one-third of UK children have adult social media accounts

Children appear to be unfazed by regulation, technology or age-appropriate design codes, simply lying about their age to open online accounts. Research commissioned by Ofcom, the UK’s communications regulator, found that 32 percent of British children aged eight to 17 have lied to create an adult account, and 47 percent of eight-to-15-year-olds have a user age of 16 or over, reports the BBC.
The news comes as the draft Online Safety Bill increasingly focuses on the country’s children and the Age-Appropriate Design Code starts to gain traction.
Instagram is piloting Yoti’s face-based age verification for U.S. users who try to change their age from under 18 to 18 or over, as well as vouching by existing adult users. UK children, by contrast, can simply enter a date of birth of their choosing. Ofcom found that 60 percent of children under 13 who have social media accounts do so through their own profiles, rather than that of an older relative, for example. This rises to 77 percent among children aged eight to 17. Two-thirds were helped by a parent or guardian to do this, reports TechCrunch.
‘Harmful’
The Ofcom research comes as the UK’s Online Safety Bill, five years in the (re)making, undergoes fresh “tweaks” after new offences were added and politicians tried to add a new category of “legal but harmful” content. New Prime Minister Liz Truss has said: “What I want to make sure is we protect the under-18s from harm, but we also make sure free speech is allowed, so there may be some tweaks required.”
The latest secretary of state responsible for the bill has said that the “legal but harmful” provisions will not apply to adults, as that could impact free speech, as reported by TechCrunch. The bill is thus becoming increasingly focused on making the internet safer for children rather than for all users.
‘Likely’
The bill is firmly in the spotlight as it follows the UK’s Children’s Code (or Age-Appropriate Design Code), which is being modified and adopted around the world. Its language encompassing all sites “likely to be accessed by children” is proving seismic.
“The fact that the definition of ‘likely’ is vague does not mean that the need for regulation is vague,” said Professor Sonia Livingstone of the London School of Economics and Political Science during the Privacy, Children and Access to Services session at PrivacyNama, a summit held by Indian media outlet MediaNama.
“Children clearly are likely to access Instagram, Instagram clearly harvests their data in ways that are not a hundred percent compliant with data protection and not always safe.”
Livingstone noted that “it’s kind of early days” for the code, but big platforms have been making changes: despite “egregious actions before” on children’s safety, they “took the code as their moment to act.”
“Even when a law sounds very positive, the implementation is lacking,” said the academic and campaigner.
“It has shifted the attention from services which are directly intended for children to services which are ‘likely to be accessed by children’ and that language of ‘likely to be accessed’ is very powerful.”
The code and Online Safety Bill have caught the attention of civil society in India. Aparajita Bharati of YLAC and The Quantum Hub said there is too much reliance on parental consent in India, and believes politicians should start work on a children’s bill so that, once the country’s Data Protection Authority is in place, focus can shift to protecting children.
Compatriot Nivedita Krishna of Pacta reminded participants that most children in India access the internet through their parents’ devices, echoing earlier incarnations of the UK bill: “It is important to ensure that the internet is first safe for everyone to use and then you come to addressing the safety of children. If the internet is not safer for everyone, then certainly children are not safe.”
Livingstone, a member of the euCONSENT project for online age verification technologies, hopes for trusted, non-commercial intermediaries to conduct age checks rather than the platforms themselves. As child protection online continues to develop, the professor also hopes that the algorithms which push content to children will be covered, not just the content itself.
‘Invasive’
Meanwhile in California, concerns are being raised around the word “likely” after Governor Gavin Newsom signed Assembly Bill (AB) 2273, the California Age-Appropriate Design Code, into law.
Critics there are concerned that the vague language could lead to “invasive” age verification requirements for a large number of sites, reports Reason, “the nation’s leading libertarian magazine.”
The California law applies to any “business that provides an online service, product, or feature likely to be accessed by children.” Companies must undertake a Data Protection Impact Assessment to judge whether their products could harm children, and must attempt to determine the age of users; if a user is found to be a minor, they must apply a list of measures such as not collecting or sharing the child’s data.
Reason quotes organizations struggling with interpreting the “best interests of children.”
It finds that age verification or estimation approaches such as Instagram’s are invasive: “Ironically, online businesses could soon enact invasive age-verification requirements to comply with what is supposedly a digital privacy law.”