Deepfake ecosystem develops around apps, services as detection fights to keep pace
Deepfakes are the topic du jour in the biometrics and identity verification industries, which are increasingly involved in the global effort to detect deepfakes and prevent the serious harms they can cause to individuals and social structures. Convening a number of experts from industry and government, the Detecting Deepfakes Summit is the latest forum to peer under the hood of the deepfake problem and ask how the world can adequately prepare to face the threat.
“This is such a difficult problem,” says panelist Kay Chopard, executive director of the Kantara Initiative. The issue is global and touches everything from the courts and the world of organized crime to Hollywood stars to the online lives of women and children being victimized by abusers.
Among regulators, researchers and private firms, efforts are ongoing to contain the spread of harmful deepfake content. But often, a deepfake only needs to be shared for a brief time to reach many people, and its impact can reverberate even after it is taken down. Meanwhile, standards and legislation lag behind the technological development of generative AI and other tools for creating ever-more sophisticated deepfakes.
“We really have to have ways to come together,” Chopard says. “We have to find ways to agree.”
Finding consensus on a topic as complex and dynamic as deepfake abuse is difficult. Money and resources earmarked for defense and deepfake detection are not proportional to the investments being made on the criminal side. As the curve of deepfakes appearing in daily life rises steeply while the accompanying regulatory curve climbs far more slowly, the problem will only become more urgent.
Survivor voices underline seriousness of deepfake threat
Jodi Leedham of Refuge, which assists women who have suffered domestic abuse, lays out the reality of what it means to speak of a “deepfake problem.” Viral examples like images of the Pope in a puffer jacket might be seen as harmless. But, says Leedham, “96 percent of deepfakes are porn.” And some research suggests virtually all of those pornographic deepfakes depict women. Which is to say, deepfakes are specifically a threat to women’s rights. “We can’t escape or move away from the fact that we are seeing this as a gendered issue.”
Platforms that host deepfakes often make it difficult for victims to have exploitative content taken down. That’s a particular problem when time is of the essence, Leedham says. A deepfake can make the rounds quickly, and even minimal exposure can lead to damage to a survivor’s mental health, reputation and online life. At that point, “the horse has left the barn.”
Jess Rose Smith from Ofcom says the UK communications regulator noticed public concern about deepfakes growing around publicized cases such as the Taylor Swift deepfakes. But it "also noticed that while many well-known cases featured public figures, deepfakes were already starting to do serious harm to ordinary individuals" – mainly with sexually explicit content.
Deepfake content moderation at scale a ‘major, major issue’
The critical issue of time also applies in broader social contexts, such as elections. Ami Kumar, whose company Contrails.ai worked on the recent Indian election, notes that since many young voters get their news from social media, deepfakes have become a much bigger political problem. Furthermore, the proliferation of deepfakes-as-a-service is creating scale, and the deluge of fake content is overwhelming moderation measures.
“Content moderation of deepfakes at scale is a major, major issue,” he says. “Because once the file has been consumed, the impact has been made. The seed of the idea has reached your mind.” Even political deepfakes that were debunked before the Indian election, Kumar says, “still remain as a point of conversation in most of India. The impact still lives on to this day.”
Detection must occur before the deepfake reaches hundreds of thousands of eyes, “pretty much at the level where the user is uploading it.” Yet while public content platforms can police the sharing of deepfake content, encrypted private messenger apps are a different story.
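Kumar's point about catching content at the upload stage suggests a simple architectural pattern: score a file before it is distributed, and block or escalate it before it can spread. The sketch below is a minimal illustration of such an upload-time gate. The function names, thresholds and the detect_synthetic stub are all hypothetical, standing in for a real forensic model or vendor detection API rather than describing any panelist's actual system.

```python
# Minimal sketch of an upload-time moderation gate: score content
# *before* distribution, not after it has already spread.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class UploadDecision:
    allowed: bool
    score: float   # estimated probability the file is synthetic, 0.0-1.0
    reason: str


def detect_synthetic(file_bytes: bytes) -> float:
    """Hypothetical detector stub.

    A production system would run a trained deepfake-detection model
    (e.g. one tuned to GAN/diffusion artifacts) or check provenance
    metadata; this stub only illustrates the interface.
    """
    return 0.0  # placeholder score


def gate_upload(file_bytes: bytes,
                block_threshold: float = 0.9,
                review_threshold: float = 0.6) -> UploadDecision:
    """Decide at upload time: block, hold for human review, or allow."""
    score = detect_synthetic(file_bytes)
    if score >= block_threshold:
        return UploadDecision(False, score, "blocked: likely deepfake")
    if score >= review_threshold:
        return UploadDecision(False, score, "held for human review")
    return UploadDecision(True, score, "allowed")


if __name__ == "__main__":
    print(gate_upload(b"example video bytes"))
```

The two thresholds reflect the trade-off moderation teams face: a lower review threshold catches more borderline content before it spreads, at the cost of more human review work and false positives.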
Defending integrity of age assurance tech
In tandem with deepfakes that mimic or generate face biometrics, there is the threat of AI-enabled voice agents and other audio deepfakes. Kay Chopard notes the increasing prevalence of the so-called “grandparent call,” in which voice cloning is used to scam elders out of money.
The diversity and complexity of the deepfake ecosystem will increase over time, as will its profitability. Ofcom has developed a tripartite taxonomy for deepfakes, classifying them into those that demean, defraud or disinform. Rose Smith says organized crime is a particularly powerful driver of increasingly lucrative deepfake fraud, but that sexually explicit deepfakes have become much easier for anyone to create using so-called “nudify” apps.
Meanwhile, Iain Corby of the Age Verification Providers Association (AVPA) points out that while most people associate age assurance with the job of keeping children out of restricted spaces, it also serves the opposite purpose: making sure adults aren’t pretending to be children in order to lure kids online.
The sinister gravity of such deepfake threats is part of what led AVPA to launch Project DefAI, a joint UK/Swiss project funded by InnovateUK and Innosuisse that also involves IDIAP Research Lab, the Age Check Certification Scheme and the Swiss firm Privately. As an industry body representing the tech for proving a person’s age, Corby says, “it was existential to us that the industry as a whole was able to defend against attacks enabled by this new technology.”
Emerging laws on AI patchy on deepfakes
Legislation and standards development always lag behind innovation, but there are ongoing efforts to draw legal barriers around the deepfake economy. Panelist Felipe Romero-Moreno, a researcher at the University of Hertfordshire, summarizes a few of the major initiatives and what they require.
The UK Online Safety Act of 2023, the EU AI Act, the U.S. deepfake task force and select state-level projects each set a different threshold of responsibility for platforms, with the AI Act being the most demanding in its requirements to identify and remove deepfake content.
In certain respects, the GDPR offers protection against deepfakes through its rules on personal data, particularly as applied to training data. But it is not comprehensive or specific enough to serve as a key legal tool, and data protection legislation generally is still not properly aligned to meet the deepfake threat.
Global nature of threat calls for cross-border efforts
A common theme among panelists is the need for collaboration. The problem of deepfakes is broader than we realize, says Chopard; “it’s going to take a lot of cross-border work, everyone thinking more creatively” to solve it. Researchers, policymakers, educators and private enterprise all have a role to play. While individual interventions can be vulnerable, a combination of prevention, embedding, detection and enforcement can be potent.
Yet there are massive issues at play, each of which brings a new ethical wrinkle – and the defense gets no time-outs. Experts have suggestions on how to stay in the race, from simplifying processes to upping investment to offering incentives for different actors in the space. Innovation is happening. But there is no silver bullet.
Kumar argues that since the system that traditionally sorted accepted fact from nefarious fiction (i.e. traditional media) is broken, and deepfakes keep flooding the online space fast and furious, the situation requires immediate and close attention. And while Jodi Leedham raises a profound philosophical question – “if we think back to a decade ago, did we envision that this is where technology would lead us to?” – the fact is, we are here. And we need help.
“There are things that can be done,” says Kay Chopard. “It’s people willing to take on that challenge that are going to make a difference.”