Microsoft calls for AI deepfake fraud to be made illegal as fake Harris ad circulates
It took less than ten days after Joe Biden stepped away from the U.S. presidential election for a deepfake of his presumed replacement to surface on social media. The video, in which deepfake audio of Kamala Harris speaks lines denigrating her own experience and qualifications, might not have caused the stir it did had it not been shared as a “parody” by the user who owns the platform.
Elon Musk’s retweet of a political deepfake – in violation of Twitter’s policies on falsified content, which Musk may have tossed out along with the name – is merely the latest sign of what is poised to be a skyrocketing prevalence of deepfakes in our online lives. Along with increases in financial fraud and the sexual exploitation of children, the incident is part of a wave that has prompted Microsoft to call on the U.S. Congress to impose strong regulations on AI-generated deepfakes through a new legal framework.
Kids, seniors suffer harms of AI deepfake fraud
In language credited to Microsoft Vice Chair and President Brad Smith, the company asserts that “AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse and manipulation – especially to target kids and seniors.”
“One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans. In short, we need new laws to help stop bad actors from using deepfakes to defraud seniors or abuse children.”
As a contribution to the effort, Microsoft has published a 42-page report entitled “Protecting the Public from Abusive AI-Generated Content.” The white paper is intended to “encourage faster action against abusive AI-generated content by policymakers, civil society leaders and the technology industry.” It says that both public and private sectors must play a role, and that “technology companies must prioritize ethical considerations in their AI research and development processes.”
The report, Smith says, does three specific things. It illustrates and analyzes the harms arising from abusive AI-generated content. It explains Microsoft’s approach to the problem of AI-generated deepfakes. And it offers policy recommendations for lawmakers working to regulate AI.
Policy measures Microsoft is pushing for stem in part from the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, which several major tech firms launched at the Munich Security Conference in February. Its priorities are addressing deepfake creation, detecting and responding to deepfakes, and promoting transparency and resilience.
Deepfake fraud statute, labeling of fake content among policy solutions
The report outlines three ideas that “may have an outsized impact in the fight against deceptive and abusive AI-generated content.”
A federal “deepfake fraud statute” would give law enforcement officials, including state attorneys general, “a standalone legal framework to prosecute AI-generated fraud and scams as they proliferate in speed and complexity.”
“Although there are current existing federal fraud statutes that could be revised and enhanced to address synthetic content,” says the report, “the most comprehensive way to approach this issue would be to enact a new federal synthetic content fraud statute to encompass both civil and criminal provisions. The statute could also provide for criminal penalties, civil seizure and forfeiture, as well as injunctive and other equitable relief.”
Microsoft notes “a useful, albeit imperfect template” for Congress to consider in drafting a law: the 2009 Truth in Caller ID Act, which makes it illegal “to cause any caller identification service to knowingly transmit misleading or inaccurate caller identification information with the intent to defraud, cause harm, or wrongfully obtain anything of value.”
Beyond a statute, labeling of synthetic content using “state-of-the-art provenance tooling” should be a requirement for AI system developers. “This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.”
Finally, “we should ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.”
Microsoft also has a message for its fellow tech titans who have perhaps been too eager to push AI innovation to the detriment of public safety. “Enacting any of these proposals will fundamentally require a whole-of-society approach,” it says. “While it’s imperative that the technology industry have a seat at the table, it must do so with humility and a bias towards action.”
The term deepfake was coined in 2017, the year a fake lip-sync video of former President Obama was released. So while the technology is often framed as an emerging threat, it has already had seven years to develop into its current sophisticated form. It is a mainstream concern, with recent research from Jumio showing that three quarters of U.S. adults worry about the effect of political deepfakes on the 2024 election.
Microsoft’s Brad Smith may be right when he says “the danger is not that we will move too fast, but that we will move too slowly or not at all.”
White House Executive Order on AI ripens with new commitment
Yet there is movement at the government level. The FCC has already banned robocalls using voice deepfakes. And the Biden-Harris administration recently announced new actions on AI following the President’s Executive Order – along with the promise of a very big name to help make sure AI development does not overshoot civic responsibility.
Issued last year, the Executive Order on AI included commitments from 15 leading U.S. AI companies. A release from the White House says that now, “Apple has signed onto the voluntary commitments, further cementing these commitments as cornerstones of responsible AI innovation.”
According to the release, action stemming from Biden’s order is on schedule. The announcement includes a comprehensive table listing the various government actions on AI, which range from hiring initiatives to evaluations and published reports.
CDT wants GenAI devs to build in safeguards against deepfake fraud
Nonprofits are also pitching in. The Center for Democracy and Technology (CDT) has published a report on “Election Integrity Recommendations for Generative AI Developers.” It calls for usage policies that “prohibit the generation of realistic images, videos and audio depicting political figures or political and electoral events,” as well as related political uses. It also recommends active product interventions that flag falsified content – for instance, interface pop-ups – and the promotion of “authoritative sources of election-related information.”
Enforcement is key, as is transparency. Developers must “proactively enforce usage policies on elections at all times, not just during active election periods” and “adequately resource and staff policy and enforcement teams.”
Writing in the introduction, report author Tim Harper, CDT’s senior policy analyst on democracy and elections, notes that 2024 is the first year in which AI has been a significant factor in democratic elections. It will not be the last; the problem of AI deepfakes impacting elections is likely to grow as the technology continues to develop, enabling ever more realistic fake content. The current mobilization against deepfakes could be the beginning of a virtual forever war.
“Although we are halfway through this election year,” Harper writes, “it remains imperative for AI developers to quickly develop election integrity programs employing a variety of levers including policy, product, and enforcement to protect democratic elections this year and beyond.”