
UK faces down threat of deepfakes that demean, defraud, disinform

From exploitative ‘deep nudes’ to rampant disinformation, the world is getting faker

New research from Ofcom reveals just how prevalent deepfakes have become in the UK and beyond. According to the online regulator, 43 percent of people over the age of 15 say they have seen at least one deepfake online in the last six months. Worryingly, the number rises to 50 percent among children aged 8-15. Explicit content is a growing problem, as pornographic deepfakes proliferate, causing real-life harms.

More insights on deepfakes have emerged from the recent Westminster Media Forum, where the focus was on tackling the spread of disinformation and the malicious uses of deepfakes. Stakeholders and policymakers gathered to discuss current and emerging disinformation trends in the UK amidst “heightened concerns among regulators and affected parties.”

Sexual deepfakes cause real damage to women whose images have been stolen

Accurate deepfake numbers are difficult to track, but Ofcom says that from the “evidence available it appears one of the most common forms of deepfakes shared online is nonconsensual intimate content.” According to a deepfake abuse landscape analysis by digital rights advocacy group My Image, My Choice (MIMC), there are now over 276,000 videos of this type circulating on the most popular deepfake sites, with over 4.2 billion total views. In 2023, more deepfake abuse videos were posted than every other year combined.

Ofcom’s new discussion paper, “Deepfake Defences: Mitigating the Harms of Deceptive Deepfakes,” says that one in seven adults who say they’ve seen deepfake content report seeing sexual deepfakes, or “deep nudes.” Just under two thirds of those say the sexually explicit deepfake was of a celebrity or public figure. Fifteen percent say it was of someone they know. Six percent say the deepfake they saw was a fake version of themselves. Seventeen percent believed it depicted a person under the age of 18.

Ofcom says evidence shows the vast majority of sexually explicit deepfakes are of women, many of whom suffer from PTSD or anxiety as a result of being targeted. Deepfakes, says the regulator, are “already doing serious harm to ordinary individuals – whether that is by being featured in nonconsensual sexual deepfake videos or falling victim to deepfake romance scams and fraudulent adverts.”

Generative AI is playing an intensifying role, making deepfakes easier and cheaper to produce than ever before. But it is also a key tool in the deepfake defense arsenal, especially as advances in synthetic identities help developers create data sets that are comprehensive and fair.

Defense against deepfakes falls into four broad categories: Ofcom

In terms of a defensive strategy, Ofcom identifies four “broad categories of intervention” that are available to at-risk users and organizations: prevention, embedding, detection and enforcement.

Prevention “involves efforts to block the creation of harmful deepfakes, with model developers introducing safeguards and adjusting their technology to make it more difficult to create harmful content.”
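
At its crudest, such a safeguard is a filter sitting in front of the generation request. The Python sketch below is a deliberately minimal illustration, with invented term lists and function names; production safeguards rely on trained classifiers and layered policy engines rather than keyword matching.

```python
# Illustrative sketch only: the simplest form of the safeguard Ofcom describes,
# a filter in front of a generation endpoint. Real safeguards use trained
# classifiers and layered policy checks, not keyword lists.

BLOCKED_TERMS = {"undress", "nudify", "remove clothing"}  # hypothetical policy list


def allow_generation(prompt: str) -> bool:
    """Return False if the prompt matches a blocked pattern."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


if __name__ == "__main__":
    print(allow_generation("a watercolor of a lighthouse at dusk"))  # True
    print(allow_generation("nudify this photo of my classmate"))     # False
```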

Embedding “entails attaching information to content to indicate its origins.” Examples include invisible watermarks, provenance metadata and labels on AI-generated content.
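
As a rough illustration of what “attaching information” can mean in practice, here is a minimal Python sketch that binds a provenance record to a piece of content. The field names are invented for this example; real provenance schemes such as C2PA define cryptographically signed manifests.

```python
# A minimal sketch of the embedding idea: bind a provenance record to content
# so platforms can surface its origin. Field names here are invented; real
# schemes such as C2PA define cryptographically signed manifests.

import hashlib
import json
from datetime import datetime, timezone


def make_provenance_record(content: bytes, generator: str) -> str:
    """Build a record tying a content hash to its claimed origin."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content
        "generator": generator,                         # e.g. the GenAI model used
        "created": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,                           # the label platforms can show
    }
    return json.dumps(record, indent=2)


if __name__ == "__main__":
    image_bytes = b"\x89PNG..."  # stand-in for real image data
    print(make_provenance_record(image_bytes, generator="example-image-model"))
```

An unsigned record like this can be stripped or forged trivially, which is one reason real standards sign the manifest and why embedding is only one of Ofcom’s four categories rather than a complete answer.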

Detection “means using tools to reveal the origins of content, regardless of whether information has been attached to it in the ways described above.” Deepfake detection and liveness detection are increasingly core tools for biometrics, digital identity and cybersecurity providers.
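
To give a flavor of what “revealing the origins of content” can mean at the signal level, the toy Python sketch below computes one classic, now largely dated, cue: the share of an image’s energy in high spatial frequencies, where some generators leave upsampling artifacts. It is an illustrative statistic, not how commercial detectors work.

```python
# A toy detection heuristic, for illustration only: some early deepfake
# detectors looked for the unusual high-frequency energy that generator
# upsampling leaves behind. Commercial tools use trained models plus
# liveness checks, not a single statistic like this.

import numpy as np


def high_freq_ratio(gray_image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of the image's spectral energy above a radial frequency cutoff."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # low-frequency heavy
    noisy = rng.normal(size=(64, 64))                                 # flat spectrum
    print(f"smooth image ratio: {high_freq_ratio(smooth):.3f}")
    print(f"noisy image ratio:  {high_freq_ratio(noisy):.3f}")
```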

Finally, enforcement “involves setting and communicating clear rules about the types of synthetic content that can be created using GenAI models and related tools, as well as about the types of content that can be shared on online platforms. It may also involve acting against users that breach those rules, for example by taking down content and suspending or removing accounts.” Which is to say, make rules and enforce them.

Ofcom’s website has the full deepfake defense report.

Westminster forum considers policy solutions to range of deepfake harms

Conversations from the Westminster Media Forum reflect similar concerns and a related effort to ensure regulations are tough enough to smother the deepfake threat. Representatives from media, tech, law and regulation weigh in on the disinformation threat to pillars of our society, such as trust in a fair and free press, a functional economy and the democratic process.

A dominant theme emerges: the necessity of formal rules and frameworks for AI. To quote presenter Tami Hoffman, director of news distribution and commercial innovation at major UK media outlet ITN, “the current situation is that we can’t rely on technology to solve this. It’s an issue of policy and the law and regulation, and a framework that pulls it all together.” That, she says, is up to policymakers.

Hoffman argues that “hallucination is not a bug, but a feature” of AI platforms that generate deepfake content. LLMs, or large language models, she says, are, in essence, probability machines. And “with any form of probability, there is always a percentage chance that you won’t get the preferred outcome. This is exacerbated by the economics of the sector, because in the race to secure market dominance and first mover advantage, we’ve seen tech companies release an update to the public and then develop it on the fly. These models are being road tested in the real world rather than in a lab.”
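
Her “probability machine” description maps directly onto how generation proceeds: the model scores candidate next tokens and then samples one. The minimal sketch below uses made-up scores rather than a real model, but it shows why a nonzero chance of the dispreferred outcome is baked into sampling.

```python
# Hoffman's "probability machines" point in miniature: a language model scores
# candidate next tokens, and sampling can pick an unlikely one. The scores
# below are made up for illustration; no real model is involved.

import math
import random


def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax the token scores, then draw one token at random."""
    max_s = max(logits.values()) / temperature
    weights = {tok: math.exp(s / temperature - max_s) for tok, s in logits.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point edge case fallback


if __name__ == "__main__":
    random.seed(1)
    logits = {"Paris": 5.0, "London": 2.0, "a banana": 0.5}  # hypothetical scores
    picks = [sample_next_token(logits, temperature=1.5) for _ in range(20)]
    print(picks)  # mostly "Paris", occasionally the improbable answer
```

Lowering the temperature concentrates probability on the likeliest token, but short of disabling sampling altogether, the tail never fully disappears.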

She recommends, as a thought experiment, “swapping out tech companies for car manufacturers or baby food producers testing their product on an unselected public instead of in the lab.” The refrain is familiar: Big data firms are poor shepherds of their own compliance. “We cannot let tech companies write the rules for their own businesses.”

Hoffman posits a focus on four priority concepts to shape policy: safety, promoting public service broadcasters who can serve as verified sources of information, copyright protection from data harvesting, and developing a code of practice for all public institutions.

Tech industry perspective puts onus of responsibility on regulators, users

Javahir Askari, policy manager of digital regulation for trade association techUK, brings the industry perspective: never too gloomy, focusing on opportunities as well as challenges, and ultimately pushing responsibility toward everyone but tech firms.

“There are a significant number of both opportunities and also threats that deepfakes pose to democracy, citizens and the economy,” Askari says. She argues that the deepfake threat is often sensationalized, saying “there hasn’t yet been a large-scale threat across the board, and I think it offers a really good opportunity for us across sectors to try and establish both defensive strategies, but also proactive tools.”

Askari pushes for more international cooperation by policymakers to quash deepfakes, and an “online safety sandbox” for collaborative exchange among developers. “There are areas of best practice which would aid the identification of authentic media and would really address the rising threat of disinformation while still allowing companies to innovate in the content provenance space,” she says.

Finally, she says, people will simply need to get better at knowing what’s real and what’s not. “One of the things that techUK would really call for is better cross-collaborative working around media literacy, whether that’s an integral part of children’s education in schools, or even for adults – teaching the UK’s population how to be more media literate.”

Kidron addresses three questions from youth excited, troubled about AI

Event moderator Baroness Beeban Kidron, expert advisor to the UN Secretary-General’s High-Level Advisory Body on Artificial Intelligence and chair of the 5Rights Foundation (among other titles), believes young people are already well aware of the potential and dangers of AI, and relays three core questions that young people have asked her about the technology.

“First is that, given that young people get most of their information online, should we balance increasingly untrustworthy material with tougher rules on anonymity, so it’s easier to match mis- and disinformation with those who post it?”

Second, “should the creators of foundation models, the products and services built from them, and those that distribute and those that use them all have a separate and specific legal responsibility for what they enable, distribute and produce?”

And finally, a question that points back to the Ofcom research: “given that girls suffer disproportionately from abuse online and the rise in Andrew Tate-type attitudes and deepfakes, et cetera, should misogyny be a standalone offense?”

Can regulation possibly keep up with the rapid evolution of generative AI models driven by a ravenous tech sector? Will biometrics and AI tools be able to defeat AI fraud in the great deepfake detection battle? Who has the final responsibility to ensure the world is not flooded with misinformation and disinformation, sexually exploitative deepfake content and an endless parade of talking virtual heads?

The answers are works-in-progress. Of course, you could always ask ChatGPT.
