
Kids are bypassing weak age assurance measures in droves, and it has to change: Ofcom

Numbers show nearly a quarter of 8-17-year-olds say they’re 18 to create social media accounts

When Instagram says only eight percent of its users are between 13 and 16, it neglects to factor in that many of them are lying. A new Ofcom survey shows that 22 percent of eight-to-17-year-olds claim to be 18 or over on social media apps, easily bypassing weak self-declared age assurance methods.

The UK regulator’s Online Nation report says that “one in five 8-15 year-olds have a user age of at least 18 on a social media platform,” but there are “indications that there are more frequent efforts from services to verify their date of birth.”

Research conducted in August 2024 suggests that “22 percent of 8-17-year-olds (and 20 percent of 8-15s) with a social media profile on at least one of the platforms listed in our study, have a user profile age of at least 18, meaning they are at greater risk of seeing adult content.”

Efforts to apply effective age verification measures to adult content sites and social media will be familiar to anyone following biometrics and digital identity trends globally. And while it is hardly shocking that kids can easily add an extra decade to their age to register for a social media account, the numbers are likely to provide more ammunition for regulators who argue that the harms of social media to young people are clear.

The stat loses some of its teeth when Ofcom reveals that “among online teens, generally offensive or ‘bad’ language was the most prevalent potential harm encountered.”

Ofcom to wield hammer against firms that violate Online Safety Act

That said, there are real dangers in online trends promoting dangerous stunts (which boys are more likely to encounter) and in how social media shapes ideas about body image (typically for girls). And tragic deaths resulting from suicide and self-harm content were part of what kicked off the wider age assurance debate to begin with.

The BBC quotes Ian McCrae, director of market intelligence at Ofcom, who says “platforms need to do much, much more to know the age of their children online.” Ofcom intends to ensure that happens, as it moves to enforce the Online Safety Act (OSA). McCrae says firms that do not comply could face fines of up to 10 percent of their global revenue.

For context, the annual revenue of huge porn streaming site Pornhub is estimated to be up to $97 billion.

The UK government has also hinted that it could follow Australia, which is about to impose a social media ban for users under 16.

Concerns about deepfakes, disinformation spur new committee

Other new research from Ofcom highlights the impact of deepfakes on the online environment.

“Understanding misinformation: an exploration of UK adults’ behaviour and attitudes” says four in ten UK adults surveyed believe they had encountered misinformation or deepfake content in the previous four weeks.

A release says “men, young adults, people from higher socio-economic backgrounds, minority ethnic and LGB+ groups, as well as those with mental health conditions are more likely to say they have come across misinformation.” However, only about 30 percent of people feel they can “confidently judge whether an image, audio or video has been generated by AI.”

Politics is the typical arena for deepfake-driven mis- or disinformation, and most people (seven out of ten) encounter it online.

It all adds up to an online world in which trust in institutions has eroded, traditional news media is viewed as suspect, and paranoia has driven three in ten people to believe “there is a single group of people who secretly control the world together.” Deepfake detection is progressing. But so is generative AI.

One thing is for sure: people are worried. Nine in ten of those who encounter misinformation say they are concerned about its societal impact.

To help assuage these fears, Ofcom has created a new Advisory Committee on Misinformation and Disinformation, which will advise the regulator on media literacy, transparency and how providers of regulated services should deal with disinformation and misinformation.

The committee will be chaired by Ofcom board member Richard Allan, aka Lord Allan of Hallam, a specialist in UK technology policy. It is recruiting specialist experts, with a view to finalizing members in early 2025.
