
Opinions on UK Online Safety Act emphasize importance of enforcement

OSA age assurance laws have public support, but regulations are too easy to ignore

Online safety legislation is making headlines around the world. But in places where laws have taken effect, are they proving to be effective?

This is the big question for policymakers, and stakeholders are weighing in. Independent, not-for-profit organization Internet Matters has published a report on the UK’s Online Safety Act (OSA), asking that very question through surveys and focus groups conducted soon after the children’s safety protections came into force: has Britain’s law made children safer online?

The answer is, yes and no, with a concerning tilt toward the latter. Per the report, “questions remain about whether current regulations are sufficiently robust, effectively enforced, and adaptable to an evolving digital landscape.” Moreover, “children continue to encounter harmful content at concerning rates, and age checks to manage their experiences online – while widespread – are often seen as easy to circumvent.”

It’s not all bad news. “New laws are making safety measures more prominent across children’s online spaces, with parents and children largely welcoming these changes,” the report finds. Safety features, including parental controls, are more visible and easy to use. Methods for age assurance that are “seen as easy to complete” include “uploading a government ID document, facial age estimation and using a third-party app.”

Fifty-three percent of children reported being asked to verify their age online in a two-month period across a range of platforms, for both new and existing accounts. “Popular platforms and websites where children had been asked to age verify recently included TikTok, Google/YouTube, and Roblox.”

“Where children encountered age verification, 37 percent report using facial age estimation, while others verified their age through a third-party app (24 percent) or government ID (22 percent).”

A key finding is that kids support the changes, and are asking for them. “Children understood that these checks exist to support their safety and accepted that being unable to access certain content or features on a platform likely meant it was not suitable for them,” the report says.

“Where children have noticed new safety features or changes to functionalities, their views are broadly positive, particularly towards improved blocking and reporting processes,” which have the support of 90 percent of respondents. “They also view safety measures such as restrictions on contacting certain people (77 percent) and limits on access to functions like livestreams or comments (74 percent) as a good thing.”

Time is of the essence

And yet, says Internet Matters, the OSA is a leaky boat, which has “not delivered the step change needed to meaningfully improve children’s online safety and wellbeing.”

“Many of the issues most important to families, such as managing the amount of time children spend online and the risks of AI, remain unaddressed. As a result, families continue to shoulder the responsibility of keeping children safe online.” Just 22 percent of parents and 31 percent of children believe the government is doing enough to protect children online.

Circumventing age assurance is not daunting for children. Forty-six percent believe age checks are easy to bypass, with only 17 percent saying they are difficult to fool. “Methods discussed include using a fake birthday, a Virtual Private Network or submitting a video of another person’s face, or even a character, to trick platforms into estimating an older age.” A third of children say they have bypassed age checks; however, this evidently includes self-declaration.

On the issue of social media, opinions are mixed. Rather than a so-called ban for kids under a certain age, respondents preferred “stronger enforcement of the OSA, stricter age-checks and restricting harmful features.” AI is a growing concern, as chatbots usher a slew of new psychological impacts into the mix.

But the most critical issue for parents is time. “Changes introduced under the OSA are broadly welcomed, but do not address what children and parents describe as their most immediate, day-to-day concern: the amount of time children spend online,” says the report. “Alongside this, children frequently encounter AI-generated videos and images, some of which are difficult to identify as artificial, raising concerns about misinformation and inappropriate content.”

As recommendations go, age checks must be “highly effective” and “robust.” Platforms should adhere to principles prioritizing safety by design, age appropriate content, risk-based approaches, and increased media literacy.

Finally – and perhaps most importantly – there must be effective enforcement, accountability and leadership. “Regulation of online services will only be effective if it is backed by robust enforcement and accountability. Government must ensure existing legislation is properly enforced and hold both regulators and platforms to account where it is not.”

If time is the greatest concern, a place to start in addressing how much time kids spend on their devices is to regulate, or ban, design and engineering choices that purposefully make products addictive. As many critics of age assurance technology have argued, this may be the closest lawmakers can come to legislating the core problem, rather than relying on age checks: Instagram need not be shut down, with the free speech furor that would cause. Instead, regulators could ban specific addictive algorithms, design patterns and mechanics, such as endless scroll or tailored feeds.

Whether this approach would achieve the same outcome, though, hinges on the same condition: to be effective, these laws must be enforced.

Better age checks are available, says Verifymy’s Andy Lulham

Commenting on the report, Andy Lulham, COO of provider Verifymy, says “this glimpse into young people’s experiences online tells us just how far we’ve come since the Online Safety Act came into force last year, and how far we have yet to go if we want to properly protect children online.”

“The report includes examples of children bypassing age checks by self-reporting their date of birth – otherwise known as self-declaration – incorrectly. This is a method that Ofcom has already ruled insufficient. It also describes crude attempts at tricking facial age estimation systems, such as drawing on facial hair to appear older. These anecdotes expose weak age checks that would almost certainly fail to meet Ofcom’s criteria for being highly effective.”

“The good news is that highly effective methods like email-based checks, ID scans or robust facial age estimation meet Ofcom’s standards, are easy to deploy at scale and have already proven effective in keeping children away from age-inappropriate content.”

Ofcom not equipped to do what people think it should

An editorial piece in Tech Policy Press also questions the efficacy of the UK’s Online Safety Act, for much the same reason.

Contributing editor Mark Scott, who previously served as a member of the Online Information Advisory Committee for UK regulator Ofcom, writes of “a growing divide between Ofcom, whose ranks have swollen by more than 500 officials in the last three years to meet the regulatory demands of implementing the complex online safety rules, and lawmakers and advocates who say the agency is not doing enough to hold Big Tech accountable.”

“The legislation is primarily focused on holding tech companies accountable for their existing trust and safety policies,” Scott writes. “It also does not give regulators direct power to intervene on specific pieces of potentially problematic content, no matter the clamor for action from advocates or politicians.”

“That nuance – in which Ofcom has statutory powers to hold companies accountable for their policies, not individual content decisions – is getting drowned out by British politicians’ eagerness to clap back against what they perceive as failures by these companies to keep locals safe online.”

Besides which, rapid change in the tech sector can muddy what, exactly, Ofcom is responsible for, as demonstrated by the jurisdictional questions raised when Ofcom launched an investigation into Grok AI’s rampant sharing of sexualized deepfake images of people, including children.

So, should Ofcom be concerned with upcoming elections in the UK, amid fears that mis- or disinformation campaigns might sway the vote? Again, it is a question of scope. “The Act does not explicitly identify misinformation or disinformation as specific harms that need to be addressed,” says the piece. “However, where such content amounts to a relevant offence, or intersects with a type of content set out in the Act that is harmful to children, the duties on providers will apply.”
