UK OSA moves past age assurance into censorship with self-harm content designation

The UK government continues to build out the Online Safety Act (OSA), this week announcing tighter legal requirements for platforms to locate and remove material that encourages or assists serious self-harm. The change means that content previously subject only to age assurance rules is now being made illegal outright, and platforms will be responsible for intercepting and removing it before it reaches any user, child or adult.
The move is just the first in what is expected to be a series of amendments from new Technology Secretary Liz Kendall toughening the OSA. Ofcom is expected to publish a register of regulated services soon, MLex reports, after the government’s approach to categorization was upheld in court.
Self-harm content to be reclassified as a priority offense
A release from the Department for Science, Innovation and Technology (DSIT) says the OSA will be amended to classify self-harm content as a “priority offense.” It specifies that, while the measure is partly to protect children from content that promotes suicide, eating disorders and “online challenges or hoaxes that may encourage someone to take part in an activity that could cause them harm,” it also aims to help adults with mental health challenges avoid bogus medical advice and potential triggers.
“Vile content that promotes self-harm continues to be pushed on social media and can mean potentially heart-wrenching consequences for families across the country,” says Kendall – apparently unafraid to pick up where outgoing Tech Secretary Peter Kyle left off. “Our enhanced protections will make clear to social media companies that taking immediate steps to keep users safe from toxic material that could be the difference between life and death is not an option, but the law.”
DSIT says the amendment imposes the “strongest possible legal protections, compelling platforms to use cutting-edge technology to actively seek out and eliminate this content before it can reach users and cause irreparable harm.” This suggests a market need for content moderation software that can automate the detection and takedown process.
Verifymy says tech is ready, if Ofcom is willing to act
Andy Lulham, COO at content-moderation provider Verifymy, says “the good news is that the technology is here” – even if it needs a little help from its human friends.
“Today’s content moderation technology is sophisticated and improving every day. AI can detect potentially harmful content and flag it in near real time, making pre-upload checks a distinct possibility. But content moderation remains a complex and nuanced process. This is why the most effective approach combines advanced technology with human expertise, ensuring speed, scalability and sensitivity.”
“The UK government is pressing ahead with a preventative, proactive model of content moderation that will be broadly welcomed, and will reduce the amount of harmful content online. Now, Ofcom must ensure platforms live up to their duty of care and prioritise the safety of their users accordingly.”
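The hybrid model Lulham describes – AI flagging content in near real time at upload, with uncertain cases escalated to human moderators – can be sketched roughly as follows. This is an illustrative sketch only: the `score_content` stub, the thresholds and the `Decision` type are assumptions for demonstration, not any vendor's actual API, and real systems would use trained classifiers rather than keyword matching.

```python
from dataclasses import dataclass

# Assumed thresholds for the sketch, not regulatory or vendor values.
BLOCK_THRESHOLD = 0.90   # high-confidence harmful: remove pre-upload
REVIEW_THRESHOLD = 0.50  # uncertain: escalate to a human moderator

@dataclass
class Decision:
    action: str   # "publish", "human_review", or "block"
    score: float

def score_content(text: str) -> float:
    """Stand-in for an AI harm classifier returning a score in [0, 1].
    Real deployments would call a trained model, not keyword matching."""
    flagged_terms = {"self-harm", "harmful-challenge"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.5)

def moderate(text: str) -> Decision:
    """Pre-upload check: block clear violations automatically,
    route borderline cases to humans for speed plus sensitivity."""
    score = score_content(text)
    if score >= BLOCK_THRESHOLD:
        return Decision("block", score)
    if score >= REVIEW_THRESHOLD:
        return Decision("human_review", score)
    return Decision("publish", score)
```

The design choice worth noting is the two-threshold split: automation handles the unambiguous cases at scale, while the middle band preserves the human judgment that, as Lulham notes, nuanced moderation still requires.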
With files from Chris Burt.
Article Topics
age verification | children | Ofcom | Online Safety Act | UK age verification | VerifyMy