One, two, Ofcom’s coming for you; three, four, EU’s enforcing more

Regulators look to put the fear in platforms with progress on enforcement actions

It’s almost Halloween, which means the boogeyman is coming to get you – this year especially, if you are an online platform whose monster is global online safety regulation. Regulators from the EU and UK are on the prowl this season, seeking out those who violate freshly staked laws and hexing them with disciplinary fines.

EC deploys teams for macro, micro enforcement

A release from the European Commission (EC) says it has taken further steps to shield children and teenagers from online risk under the Digital Services Act (DSA).

“First, the Commission has sent information requests to Snapchat, YouTube, Apple App Store and Google Play to understand the measures these companies have in place to protect minors on their services,” it says – its first investigatory step following the adoption of the Guidelines on the Protection of Minors in July 2025.

While the Commission grills the big names, the European Board for Digital Services’ Working Group for the protection of minors has “agreed to take action to ensure compliance with the DSA by smaller online platforms, in coordination with the competent authorities.”

The two-pronged approach sees the Commission mustering special units for enforcement, enabling regulators to develop and share common tools and to ensure consistency across the EU.

In tandem with its enforcement activity, the Commission is publishing the second blueprint for an EU age verification solution, which “introduces the use of passports and identity cards as onboarding methods, as well as support for the Digital Credentials API.” It also intends to set up an advisory panel to explore “the best approach for the European Union regarding safe online experience for minors on social media services.”

Ofcom sinks its fangs into 4chan with £20K fine

An update from Ofcom outlines the moves the UK regulator has made since March 2025, when the first of its online safety codes became enforceable. In that time, Ofcom has launched five enforcement programmes and opened 21 investigations into the providers of 69 sites and apps.

Of those investigations, 11 are addressed in the October update. They include tackling the distribution of child sexual abuse material (CSAM), monitoring services that have taken steps to stop UK users from accessing them and “clamping down on providers that ignore legally-binding information requests.”

The latter could well stir up more transatlantic discord, since it targets notorious online forum 4chan, which has already sent one inflammatory legal notice to Ofcom, accusing it of violating U.S. First Amendment rights.

4chan is apparently uninterested in engaging Ofcom at all: the platform has reportedly refused information requests seeking a copy of its illegal harms risk assessment and its qualifying worldwide revenue. For the cold shoulder, Ofcom has fined 4chan 20,000 pounds (about 26,645 dollars), and beginning October 14 “will also impose a daily penalty of 100 pounds per day (about 130 dollars), for either 60 days or until 4chan provides us with this information, whichever is sooner.”

Joining 4chan in “similar failures” are file-sharing service Im.ge and pornography service provider AVS Group Ltd. AVS is also in the crosshairs for “failing to comply with its duty to put highly effective age checks in place to protect children from encountering pornography.”

Youngtek Solutions Ltd. could be next; Ofcom is expanding the scope of its investigation into the pornography service provider on the same grounds.

On the CSAM file, the targets are file sharing services through which illegal content is distributed, such as Krakenfiles, Nippydrive, Nippyshare and Nippyspace. Ofcom notes that some services have chosen to avoid the risk of noncompliance by geoblocking UK IP addresses outright. “This has significantly reduced the likelihood that people in the UK will be exposed to any illegal or harmful content,” says Ofcom – and, as such, it is enough for the regulator to take the pressure off.

Investigations into Nippybox and Yolobit continue. And, while a forum promoting suicide has also geoblocked UK IP addresses in response to Ofcom’s actions, it “remains on Ofcom’s watchlist” and the investigation remains open “while we check that the block is maintained and that the forum does not encourage or direct UK users to get around it.”

Meanwhile, having identified “serious compliance concerns” with two file sharing services – 1Fichier.com and Gofile.io – Ofcom reports that enforcement activity prompted them to deploy perceptual hash-matching technology, “a powerful automated tool that can detect and swiftly remove CSAM before it spreads further.”

“This is one of the core safety measures set out in our illegal harms Codes, and its adoption marks a significant step forward in reducing the availability of this egregious material online.”
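Ofcom doesn’t say which hash-matching system the two services deployed – vetted tools such as PhotoDNA, matched against curated hash lists, are the industry standard – but the underlying idea can be illustrated with a toy “average hash.” The Python sketch below is purely illustrative, under the assumption that images have already been decoded to an 8x8 grayscale thumbnail; the function names are hypothetical, not drawn from any real deployment.

```python
# Toy sketch of perceptual hash matching. Assumes images are already decoded
# to 64 grayscale values (an 8x8 thumbnail, 0-255); real systems use far more
# robust, independently vetted algorithms and curated hash lists.

def average_hash(pixels: list[int]) -> int:
    """One bit per pixel: 1 if the pixel is brighter than the mean, else 0."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_list(pixels: list[int], known_hashes: set[int],
                       threshold: int = 5) -> bool:
    """Flag an upload whose hash lands within `threshold` bits of any hash on
    the known list, so resized or re-encoded copies still match."""
    h = average_hash(pixels)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

The tolerance threshold is the point: a cryptographic hash changes completely if a single pixel changes, while a perceptual hash moves only a few bits, which is how such tools can detect known material “before it spreads further” even after copies are resized or re-encoded.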

Instagram teen accounts not protecting kids, says Meta whistleblower

Tussling with file sharing sites and fringe death cults is one thing; holding Silicon Valley to account is a task of a different magnitude. But some are trying. A new study conducted by Meta whistleblower Arturo Béjar, the Molly Rose Foundation, Fairplay, ParentsSOS and Cybersecurity for Democracy calls out Meta for lying about what Instagram’s automated Teen Accounts settings actually do, claiming that the feature is “abjectly failing to keep young people safe despite Meta’s PR claims.”

A release from the Molly Rose Foundation, a UK nonprofit that campaigns for internet safety, says systematic review of Instagram’s list of teen safety features found that “less than 1 in 5 are fully functional and two-thirds (64 percent) are either substantially ineffective or no longer exist.”

The findings suggest that users of Teen Accounts are able to view content that promotes suicide, self-harm and eating disorders – and are even getting autocomplete suggestions on where to find it. Messaging fails to filter out “grossly offensive and misogynistic” content. And, perhaps worst of all, “Instagram’s algorithm incentivises children under-13 to perform risky sexualised behaviours for likes and views and encourages them to post content that received highly sexualised comments from adults.”

The report does not directly mention biometric facial age estimation, which Yoti provides to Instagram as part of the Teen Accounts program. Instead, it analyzed 47 safety tools listed in Meta’s policy, assigning 30 of them a red rating (non-existent or ineffective), nine a yellow (reduced harm, but with limitations) and eight a green (fully functional).

Its sole rating for age verification as a safety measure gives a yellow to “new ways to verify people’s age on Instagram, including privacy preserving selfie videos.”

“Age assurance kicks in if you try to amend the age on a Teen Account,” the assessment says. But “it is extremely difficult to report someone who you suspect to be aged under 13, with a complicated and extended reporting flow, friction by design.”

Meta could fix age verification problem if it wanted to: report

Ultimately, “Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors” is less about Meta’s tech than its intentions. The research found that “Instagram’s own design features undermine the effectiveness of their own safety tools” – which means its PR campaigns are mostly style, with very little substance.

Arturo Béjar says: “Meta consistently makes promises about Teen Accounts, consciously offering peace of mind for parents by seemingly addressing their top concerns including that Instagram protects teens from sensitive or harmful content, inappropriate contact, harmful interactions, and gives control over teens’ use.

“Parents should know, the Teen Accounts charade is made from broken promises. Kids, including many under 13, are not safe on Instagram. This is not about bad content on the internet, it’s about careless product design.”

The report’s call to action is for U.S. Congress to pass the Kids Online Safety Act (KOSA). “Time and time again, Meta has proven they simply cannot be trusted,” says an introduction by Ian Russell, chair of the Molly Rose Foundation, whose daughter Molly’s suicide prompted the charity’s creation. “To prevent future tragedies, we need real regulation. In the U.S., that means passing new legislation like the Kids Online Safety Act, which would require social media companies to prevent and mitigate the harms to young people caused by platform design. In the UK, that means strengthening the existing Online Safety Act to compel companies to systematically reduce the harm their platforms cause by compelling their services to be safe by design.”

Fairplay Executive Director Josh Golin echoes the sentiment. “Enough is enough. Congress must pass the bipartisan Kids Online Safety Act now, and the Federal Trade Commission should hold Meta accountable for deceiving parents and teens.”
