Children, interoperability, billion-dollar fines and browser biometrics: the Online Safety Bill

‘If age verification becomes the new cookie pop-up, it is doomed’
The UK government’s effectively crowd-sourced draft Online Safety Bill has taken yet another twist. Gone, under internal government pressure over free speech, is the provision to regulate “legal but harmful” online material, which would have forced platforms simply to remove it.

The result could present an opportunity for biometric age verification and digital identity providers: because the Bill now treats children and adults differently, platforms will have to tell children apart from adults.

Or, to protect free speech in the UK, should the country simply adopt the European Digital Identity?

The Bill covers a wide range of highly significant areas such as end-to-end encryption. Media reports have unsurprisingly flocked to the child protection elements and, as these are potentially to be addressed by biometrics, so has Biometric Update.

Terms and conditions and newly problematic children

The updated Online Safety Bill, increasingly comparable to the children’s game in which one player writes the opening line of a story, folds the page over out of sight and passes it on for the next player to write the following sentence, will be unfurled in Parliament next week. It is hoped to become law by summer 2023.

If passed and enacted without further change, it would be down to content providers to police what appears on their platforms by enforcing their own terms and conditions. Previous iterations of the Bill (elements of which have been on the legal back burner since 2017) would have allowed ministers to decide what was “legal but harmful” – offensive but not a criminal offense.

The draft goes halfway in some respects. Entire categories of content, such as material promoting self-harm, will be made illegal.

The Bill now relies on content platforms to decide how they manage non-criminal but offensive material. If they ban legal but harmful content in their own terms, their policing of their platforms will be monitored and enforced by the UK media regulator Ofcom, which needs around three years to be ready.

If they do not ban something offensive-but-non-criminal in their Ts and Cs, users are free to post and share it – but only if they are adults. The Bill states that children must not see or receive harmful content, reports the BBC.

Ofcom will be empowered to fine companies up to 10 percent of their worldwide revenues for any breach of the Bill, Michelle Donelan, secretary of state for culture, told the BBC’s Today Programme. The latest draft therefore means the authorities will not stipulate what is legal but harmful, but will have more power to force a platform to follow its own rules – at least where children are concerned.

As Ofcom itself has reported, children lie about their age: because they can and because they want to. Ofcom’s research found that 32 percent of British children aged eight to 17 have lied to create an adult account, and that 47 percent of eight- to 15-year-olds have user ages of 16 or over.

Platforms will have to publish how they will enforce age limits, though how they plan to do this – and avoid fines which, at up to 10 percent of worldwide revenues, could run to billions of dollars for the largest platforms – is yet to emerge.

“We’re not saying you have to use X specific tech because it would be out of date by next week. This Bill has to last the test of time,” Michelle Donelan told the Today Programme.

“What we are saying is you have to use a range of age assurance technology or age verification technology, but whatever you do, you’ve got to make sure you know the age of these users to know whether they’re 14 or 45.”

Instagram has begun using technology from the UK firm Yoti to undertake facial analysis to estimate user ages in certain circumstances and in certain countries.
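For a sense of how an estimation-based gate differs from a hard check, here is a minimal sketch, in Python, of decision logic that admits users whose estimated age clears the limit by a safety margin and escalates borderline cases to stronger verification. The thresholds and function names are invented for illustration; this is not Yoti’s or Instagram’s actual implementation.

    # Hypothetical decision logic for gating on a facial age ESTIMATE.
    MIN_AGE = 18   # age the platform must enforce
    BUFFER = 5     # margin, in years, to absorb estimation error

    def admit(estimated_age: float) -> str:
        """Decide access from an age estimate, escalating borderline cases."""
        if estimated_age >= MIN_AGE + BUFFER:
            return "allow"    # clearly adult: the estimate alone suffices
        if estimated_age <= MIN_AGE - BUFFER:
            return "deny"     # clearly under-age
        return "verify"       # borderline: require a document or other check

    print(admit(31.2))  # allow
    print(admit(19.0))  # verify -> falls back to stronger verification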

Are we ready for age verification?

With the government declining, on free speech grounds, to ban legal-but-harmful content outright for adults while still wanting to protect children, technology for determining user age is one possibility. Total withdrawal from the UK by tech platforms may prove simpler.

Biometric Update asked Iain Corby, executive director of the Age Verification Providers Association, the now-global trade body for suppliers of technology for online age checks, to explain the opportunities and issues for age verification presented by the latest draft Bill.

Corby is involved with euCONSENT, an approach for interoperable age verification at the browser level, successfully piloted in the EU and UK.

BU: How big an opportunity is this for UK tech providers?

IC: Largely as a result of the requirements for age checks for online pornography in the Digital Economy Act in 2017, the UK had a head start in developing privacy-preserving age verification technology, and is still home to the leading specialist suppliers.

But as other countries press on with enforcement of new regulations, the UK is in danger of being overtaken. We are now a truly global trade body, with 27 members from three continents.

BU: How do you see this in terms of site-by-site verification, or a push for something overall, such as digital ID or euCONSENT-type approaches?

IC: If age verification becomes the new cookie pop-up, it is doomed. Users will demand the ability to use one check across multiple sites. So we either crack interoperability or, regardless of the fact it would be an enormous increase in their existing market dominance, the large global platforms will step in to deliver age checks as part of their “log-in with [social media platform X]” offer.

That leaves advertisers, for example, beholden to these platforms if they want to reach an adult-only audience. Competition authorities are well aware of this risk.

Or we can use the new European Digital Identity and other government-issued proofs each time we wish to access the internet – but many would see that as an unwelcome move, regardless of the promises the state may make that only age attributes are shared and that there would be no surveillance of citizens’ online activity.

Some still see this as a risk with private providers of age verification services, but users will still have a choice of suppliers, there would be competition in which trust will be an important determining factor, and independent regulators will be supervising closely.

So, while today compliance is being achieved site-by-site, a more integrated network solution must be the main prediction for progress in the age verification market in 2023.
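To make the “one check across multiple sites” idea concrete, the sketch below shows a signed, reusable age token that any participating site could validate without re-checking the user. It assumes a shared HMAC key purely for brevity (a real scheme would use per-provider public-key signatures) and is an illustration rather than the euCONSENT protocol itself. Note that the token carries only an age attribute – no name or date of birth.

    import base64, hashlib, hmac, json, time

    PROVIDER_KEY = b"demo-key"  # assumption: real schemes use public keys

    def issue_token(over_18: bool, ttl_seconds: int = 86400) -> str:
        # The AV provider signs an age attribute and an expiry, nothing else.
        claims = {"over_18": over_18, "exp": int(time.time()) + ttl_seconds}
        body = json.dumps(claims).encode()
        sig = hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()
        return base64.b64encode(body).decode() + "." + sig

    def validate_token(token: str) -> bool:
        # Any participating site checks the signature and expiry only.
        body_b64, sig = token.rsplit(".", 1)
        body = base64.b64decode(body_b64)
        expected = hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, sig):
            return False
        claims = json.loads(body)
        return claims["over_18"] and claims["exp"] > time.time()

    token = issue_token(over_18=True)  # the user is checked once...
    print(validate_token(token))       # ...and accepted across many sites: True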

BU: Are there any plans for adopting approaches developed for euCONSENT?

IC: The euCONSENT consortium donated the intellectual property created by the project to a new non-profit organisation which is being created to take forward the work to deliver an interoperable, global network of age assurance providers.

We have approached the European Commission, some Member States and the UK government to seek grant funding for what is both a public good and a natural monopoly. We see this improvement to the user experience, which will otherwise be highly disrupted by age checks, as a critical success factor for the extension of age assurance demanded by existing and forthcoming legislation around the world.

BU: Are there any improvements in biometric age estimation?

IC: Machine learning technology inherently improves over time as it analyses an ever-larger set of training data. An interesting development is the attention regulators such as ICO [Information Commissioner’s Office, the UK data regulator] and Ofcom are now paying to how we measure the accuracy of age assurance technologies in a consistent manner.

The ICO recently published a research paper it commissioned from the Age Check Certification Scheme on this question. We are working with both the ICO and Ofcom to host a workshop for platforms in the New Year to explain the thinking behind this work and discuss the opportunities it presents for clarity and consistency of regulation, technical specification and the procurement of solutions.
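As a hedged illustration of what consistent measurement could involve, the sketch below computes two metrics a regulator might plausibly standardise for an age estimator – mean absolute error and the rate at which minors are wrongly passed as adults – over invented data:

    # Invented ground-truth and estimated ages, for illustration only.
    true_ages      = [12, 14, 16, 17, 19, 22, 25, 34, 45]
    estimated_ages = [14, 13, 18, 16, 21, 21, 27, 31, 44]

    mae = sum(abs(t - e) for t, e in zip(true_ages, estimated_ages)) / len(true_ages)

    THRESHOLD = 18  # the age the check is meant to enforce
    false_pass = sum(1 for t, e in zip(true_ages, estimated_ages)
                     if t < THRESHOLD and e >= THRESHOLD)
    minors = sum(1 for t in true_ages if t < THRESHOLD)

    print(f"MAE: {mae:.2f} years")                    # average estimation error
    print(f"False-pass rate: {false_pass}/{minors}")  # minors passed as adults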

BU: Are there any mechanisms for referring close biometric estimations (e.g., 12 years old when a site requires a user to be 13) to another form of verification?

IC: Typically, the challenge would come from a 14-year-old who had failed an estimation process checking she was at least 13, and wanted to prove her age a different way. The lack of accessible databases of record with children’s ages remains a challenge. In some EU countries, this data is more easily accessible, and we hope that by demonstrating the benefits of using such data, other states will realise that the benefits of allowing access, under appropriate controls, outweigh the perceived risks.

One-way blind checks are a good example of how this can be achieved without exposing children’s sensitive data.
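Corby does not specify the mechanism, but one plausible form of a one-way blind check is sketched below with illustrative names: the verifier hashes the user-supplied details locally and the record-holder answers only yes or no against a pre-hashed index, so raw child data never crosses the wire in either direction.

    import hashlib

    def one_way(name: str, dob: str) -> str:
        # Normalise then hash, so the query cannot be read as plaintext.
        # (A production scheme would add salting or keyed hashing to
        # resist brute-force guessing of low-entropy details.)
        return hashlib.sha256(f"{name.lower().strip()}|{dob}".encode()).hexdigest()

    # Record-holder's side: a pre-hashed index of children confirmed 13+.
    hashed_records = {one_way("Alice Example", "2009-05-01")}

    def record_holder_check(query_hash: str) -> bool:
        # Returns a boolean only; the record itself is never exposed.
        return query_hash in hashed_records

    # Verifier's side: hash locally, transmit only the digest.
    print(record_holder_check(one_way("Alice Example", "2009-05-01")))  # True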

Inclusivity is also important to us as an industry, so it should not only be those lucky enough to hold a passport or be on an official database who can successfully “appeal” a false negative age estimation. The ‘vouching’ process already available for applications for UK Proof of Age Standard Scheme (PASS) cards will be an increasingly important plan B for those of any age without official documents. This allows teachers, doctors and other professionals to provide a reference directly to proof of age issuers.

BU: Are there any discussions among members about reverifying users whose account age approaches 18 years old?

IC: The industries most interested in this today are those which advertise age-restricted products. High fat, salt and sugar food manufacturers face a complete ban on digital advertising because neither they nor the major platforms could guarantee children would not be exposed to their ads.

So, the platforms are under pressure to create a curated audience of age-verified adults, removing the risk that children who lied about their age to open an account before they were 13 are later misclassified as adults prematurely.

Ofcom research showed one third of children already have social media profiles that falsely record them as adults. We are talking to the alcohol, gambling, vaping, cosmetics and other age-restricted sectors about how they articulate their need for age-assured audiences to a specified level of confidence based on international standards. A report published today by the ASA [Advertising Standards Authority] reinforces the need.
