Senate deepfake bill introduced after House committee passes companion legislation

The bipartisan Senate version of legislation that passed out of the House Committee on Science, Space, and Technology in late September, which would authorize the immediate funding of research into ways to detect deepfakes, was introduced Wednesday by Sen. Jerry Moran (R-KS) and Sen. Catherine Cortez Masto (D-NV), both members of the Senate Committee on Commerce, Science, and Transportation.
The House version of the bill, HR 4355, the Identifying Outputs of Generative Adversarial Networks Act (IOGAN Act), was introduced by Rep. Haley Stevens (D-MI) and Rep. Anthony Gonzalez (R-OH), along with co-sponsors Reps. Jim Baird (R-IN) and Katie Hill (D-CA).
The two bills are nearly identical; both would require critical research to accelerate the development of technology to identify deepfakes and then, presumably, to prevent them as well as possible.
The House committee report explained that “the intent of this legislation is to accelerate the progress of research and the development of measurements, standards, and tools to combat manipulated media content, including the outputs of generative adversarial networks,” and “recognizes that [the] National Science Foundation [NSF] is already making investments in the area of manipulated or synthesized content through its Secure and Trustworthy Cyberspace and Robust Intelligence programs.” The committee encouraged NSF “to continue to fund cross-directorate research through these programs, and others, to achieve the purposes” of the legislation, “including social and behavioral research on the ethics of these technologies and human interaction with the content generated by these technologies.”
In passing the bill out of committee to the full House, the committee said it fully “intends for NSF and the National Institute of Standards and Technology [NIST]” to implement the legislation and “to work with other agencies conducting work on detecting manipulated and synthesized content, including DARPA, IARPA, and the agencies that participate in the Networking and Information Technology Research and Development (NITRD) program to ensure coordination and avoid duplication of effort.”
In its January 15 Update to the 2016 Federal Cybersecurity Research and Development Strategic Plan RFI Responses paper, NITRD stated “research work that leverage advances in … generative adversarial networks and simulation to generate cyberattack scenarios with limited real world data will be critical to the continuous advancement in cyber defense against evolving adversarial activities.”
Similarly, the Executive Office of the President’s National Science and Technology Council Artificial Intelligence Research and Development Interagency Working Group stated in its November 2019 report, 2016–2019 Progress Report: Advancing Artificial Intelligence R&D, that NIST’s “new FARSAIT program is already supporting several research projects related to standards and benchmarks, including projects to assess the performance of generative adversarial networks and AI-based robot systems to improve detection and correction of accidental bias in AI systems and to measure the vulnerability of AI image-recognition tools to adversary attacks.”
FARSAIT is the Fundamental and Applied Research and Standards for AI Technologies initiative that NIST stood up in 2018, an effort the agency said is “designed to advance the fundamental and applied AI research at NIST.”
Under FARSAIT, the National Cybersecurity Center of Excellence (NCCoE) in October published draft NISTIR 8269, A Taxonomy and Terminology of Adversarial Machine Learning, which it described as “a step toward securing applications of AI, specifically Adversarial Machine Learning (AML),” and which “features a taxonomy of concepts and terminologies.”
The agency said the document “can inform future standards and best practices for assessing and managing ML security by establishing a common language and understanding of the rapidly developing AML landscape.”
Public comments on NISTIR 8269 are due December 16, 2019.
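For readers unfamiliar with adversarial machine learning, a minimal sketch of one attack class such a taxonomy covers may help. The hypothetical PyTorch example below implements the well-known fast gradient sign method (FGSM), an evasion attack that nudges an input along the loss gradient until a trained classifier mislabels it. The `model`, `image`, and `label` objects are placeholders for illustration only, not anything drawn from NIST’s draft.

```python
# Minimal sketch of an "evasion" attack, one category of adversarial ML:
# FGSM perturbs an input in the direction of the loss gradient so that a
# trained classifier mislabels it. Model and data are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of its gradient,
    # then clamp back into the valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Defending against exactly this sort of gradient-guided perturbation is among the problems the AML literature catalogued in the NISTIR aims to address.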
One month ago, the Senate passed the Deepfake Report Act, another piece of bipartisan legislation, introduced by Sen. Brian Schatz (D-HI) and Sen. Rob Portman (R-OH), that would direct the Department of Homeland Security (DHS) to conduct an annual study of deepfakes and other types of similar content.
“Fake content can damage our national security and undermine our democracy,” Schatz said, adding that the bill would “direct the federal government to learn more about the scope of deepfake technology.”
Portman declared that, “Addressing the challenges posed by deepfakes will require policymakers to grapple with important questions related to civil liberties and privacy. This bill prepares our country to answer those questions and address concerns by ensuring we have a sound understanding of this issue.”
Schatz and Portman’s legislation would mandate that DHS assess the technologies that are being used and developed to spawn deepfakes, the exploitation of deepfakes by foreign and domestic entities, and available countermeasures in order to help policymakers and the public better comprehend the threats that deepfakes pose to national security and election security.
The House companion bill, HR 3600, though, was introduced in June by Rep. Derek Kilmer (D-WA) and only received its fourth co-sponsor, Rep. Ann Kuster (D-NH), a week ago. It has yet to move out of committee.
As is the case with all the biometrics legislation introduced in the 116th Congress this year, none of which has been passed by Congress, as Biometric Update has reported, neither have any of the ten bills introduced this year that address the problem of deepfakes in one way or another, three by Republicans and seven by Democrats. One was passed by the Senate, and three were passed by the House. The lone exception, an amendment to a version of the National Defense Authorization Act for Fiscal Year 2020 (which, not surprisingly, was passed by both chambers of Congress), has been in conference committee since June; it contains a single mention of deepfakes, as an amendment to rules committee print 116-19.
Submitted by Rep. Norma Torres (D-CA), it would require that the “Department of Defense provide a briefing on its efforts to address manipulated media content, specifically deepfakes, from adversarial sources, and provides a $5 million increase for the Department of Defense’s Media Forensics Program.”
That’s it. And it’s unclear whether even this admittedly paltry effort will make it into the final defense package, given the Senate version seems to be holding sway. As congressional staffers and defense and intelligence officials explained to Biometric Update, the real efforts – “and money” – going toward combating deepfakes “are going on” over at DARPA and at a “few other” classified defense department and Intelligence Community components.
When Gonzalez announced his bill in the House, which quickly passed out of committee, he declared, “deepfakes are not a new phenomenon … you will remember the famous scene where Forrest Gump was filmed shaking hands with historic presidents. At that time, the technique was revolutionary and very expensive and difficult to reproduce—only big Hollywood studios could afford to reproduce deepfakes with such images. Fast forward a few decades, and we now live in a world where advancements in technology and computing power has increased exponentially.”
Gonzalez warned that, “Given the national security and societal implications that undistinguishable deepfakes can pose for our country, my legislation directs the National Science Foundation [NSF], in consultation with other federal agencies, to conduct research on the science and ethics of deepfakes.”
The specter of deepfakes has grown rapidly in just the last several years, with no clear method of identifying them or stopping them from posing an ever-increasing threat not only to national security but to political elections and society in general, as Biometric Update has been reporting. The IOGAN Act would direct NSF and NIST to support research to accelerate the development of technologies that could facilitate the detection of deepfakes.
As Gonzalez’s office noted in a statement at the time his and his colleagues’ bill passed out of committee, “Advancements in computing power and the widespread use of technologies like artificial intelligence over the past several years have made it easier and cheaper than ever before to manipulate and reproduce photographs, video, and audio clips potentially harmful or deceptive to the American public. The ability to identify and label this content is critical to preventing foreign actors from using manipulated images and videos to shift US public opinion.”
Similarly, Moran and Cortez Masto’s legislation would require funding of research into ways of detecting deepfakes; the senators noted that their “legislation would help raise awareness of deepfakes and determine ways to combat the rising threat of this technology.” Their companion bill, introduced this week, would also direct NSF and NIST “to support research to accelerate the development of technologies that could help improve the detection of deepfakes.”
“As technology continues to evolve, so do the complexity and frequency of digital threats to Americans,” Moran said, adding, “Deepfakes can be means for a variety of ill-intentioned uses, but the technology poses a specific threat to US voters and consumers by way of misinformation that is increasingly difficult to identify.”
He said his and Cortez Masto’s legislation would “assist the federal government to effectively coordinate its efforts to address this threat by accelerating research and development of deepfake technology detection.”
“In the last decade, technology has completely revolutionized Americans’ lives. Yet that innovation also requires Congress to ensure that we have guardrails in place to protect our country from the malicious use of technology,” Cortez Masto added, emphasizing that, “Recently, deepfake technologies have been used to spoof the voices of leaders in other countries, to spread misinformation during democratic elections, and to confuse and defraud consumers.”
She said the legislation must be passed promptly “so that we can understand how to better identify deepfake technology, devise comprehensive strategies to stop it, and to ensure we’re educating … all Americans on ways they can protect themselves.”
The IOGAN Act would further direct NIST to establish “measurements and standards relating to this technology, as well as develop a report on the feasibility of public-private partnerships to detect deepfakes.”
“Recent technological advances have reshaped the world we live in, but with that come new threats to our national security that must be addressed,” Gonzalez reiterated, asserting that the time has come to address the threat. “It is critical that we learn to identify and combat deepfake technology now to stop scammers and foreign entities who would seek to do harm to the American public.”
Stevens added that “the development of deepfake technology has made it easier to create convincing fake videos, which have already been used for malicious purposes,” noting it’s imperative that we “better understand deepfakes and learn how to prevent the proliferation of fake news, hoaxes, and other harmful applications of video manipulation technology.”
Baird, the ranking member on the House Subcommittee on Research and Technology, said the “legislation will play a critical role in funding more government research to detect this rapidly developing technology,” while Hill sounded a cautionary warning that if nothing is done, and soon, “As technology advances, the proliferation of fake videos and misinformation will only get worse … and stop[ping] these damaging manipulations from spreading” will only become more difficult.
“The IOGAN Act is an important step towards better understanding and responding to the risks from deepfakes,” which “are a serious and growing problem that can be used to spread disinformation, harm reputations, and commit fraud,” said Center for Data Innovation Director Daniel Castro. He praised the legislation for “wisely avoid[ing] prescriptive rules for the underlying technology which has legitimate uses and will likely be integrated into many commercial video editing tools, and instead calls for research on better detection tools, collaboration between the public and private sectors, and the development of voluntary standards.”
Under the Identifying Outputs of Generative Adversarial Networks Act, the NSF director, “in consultation with other relevant federal agencies, shall support merit-reviewed and competitively awarded research on manipulated or synthesized content and information authenticity, which may include:
• Fundamental research on digital forensic tools or other technologies for verifying the authenticity of information and detection of manipulated or synthesized content, including content generated by generative adversarial networks;
• Fundamental research on technical tools for identifying manipulated or synthesized content, such as watermarking systems for generated media (a simple sketch of this idea follows this list);
• Social and behavioral research related to manipulated or synthesized content, including the ethics of the technology and human engagement with the content;
• Research on public understanding and awareness of manipulated and synthesized content, including research on best practices for educating the public to discern authenticity of digital content; and
• Research awards coordinated with other federal agencies and programs including the Networking and Information Technology Research and Development Program, Defense Advanced Research Projects Agency, and the Intelligence Advanced Research Projects Agency.
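To make the watermarking idea in the second bullet concrete, here is a minimal, purely illustrative sketch: a generator stamps a known bit pattern into the least significant bits of each image it emits, and a verifier later checks for that pattern. Real proposals use far more robust, tamper-resistant schemes; the bit pattern, function names, and the assumption of 8-bit image arrays are all invented for illustration.

```python
# Minimal sketch of watermarking generated media: embed a known bit
# pattern in the least-significant bits of an image, then verify it.
# Assumes uint8 image arrays; not a production-grade scheme.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Write MARK into the LSBs of the first len(MARK) pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[: len(MARK)] = (flat[: len(MARK)] & 0xFE) | MARK
    return marked

def has_watermark(image: np.ndarray) -> bool:
    """Check whether the LSB pattern matches MARK."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: len(MARK)] & 1, MARK))
```

A scheme like this is trivially destroyed by re-encoding or cropping, which is precisely why the bill calls for fundamental research rather than prescribing a particular technique.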
In addition, NIST’s director “shall support research for the development of measurements and standards necessary to accelerate the development of the technological tools to examine the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content” (a minimal illustration of the GAN mechanism follows the list below), while also conducting outreach to:
• Receive input from private, public, and academic stakeholders on fundamental measurements and standards research necessary to examine the function and outputs of generative adversarial networks; and
• Consider the feasibility of an ongoing public and private sector engagement to develop voluntary standards for the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content.
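As context for the bill’s repeated references to the “function and outputs” of generative adversarial networks, the following hypothetical PyTorch sketch shows the core mechanism: a generator maps random noise to synthetic samples while a discriminator is trained to distinguish them from real ones, and the two improve against each other. Layer sizes, learning rates, and the 784-dimensional sample shape are arbitrary placeholders, not anything specified in the legislation.

```python
# Minimal GAN training step: the generator G maps noise to synthetic
# samples; the discriminator D learns to separate real from generated.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor):
    batch = real.size(0)
    fake = G(torch.randn(batch, 64))

    # Discriminator: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

The “outputs” the bill asks NIST to help measure are the synthetic samples the generator produces once this adversarial game has been trained, which is what makes detection a moving target.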
On September 26, less than two weeks after Gonzalez and Stevens introduced the IOGAN Act, the House Committee on Science, Space, and Technology’s Subcommittee on Investigations and Oversight convened a hearing “to explore the enabling technologies for disinformation online, including deepfakes, explore trends and emerging technology in the field, and to consider research strategies that can help stem the tide of malicious inauthentic behavior.”
The hearing was punctuated by gasps from committee members when shown a deepfake of committee members Rep. Michael Waltz (R-FL) and Rep. Donald S. Beyer (D-VA) speaking in artificial voices. The video was created for the committee by the Computer Vision and Machine Learning Lab at SUNY-Albany in partnership with the Cybersecurity Policy Initiative at the University of Chicago.
It was compelling – and disturbing – to say the least.
“The threat represented by the proliferation of information operations designed to deceive and manipulate users on social media demands a unified, forceful response by the whole of society,” Graphika Chief Innovation Officer Camille Francois told the committee as one of three witnesses who testified.
The day before the hearing, the committee had already assembled to consider the legislation, at which time Gonzalez offered an amendment in the nature of a substitute to make technical corrections and conforming changes. The amendment was agreed to by voice vote.
Beyer also introduced an amendment to the amendment to provide for “fundamental research on technical tools for identifying manipulated or synthesized content, such as watermarking systems for generated media,” which was also agreed to by voice vote. Rep. Jennifer Wexton (D-VA) followed by introducing an amendment to the amendment to “include research on public understanding and awareness of manipulated or synthesized content, including research on best practices.” It, too, was accepted by voice vote, at which time committee Chairwoman Eddie Bernice Johnson (D-TX) moved that the committee report the bill to the full House with the recommendation that it be approved. Her motion was accepted.
The bill now waits to be sent to the full House for a vote.
In her opening statement at the hearing, Committee Chairwoman Johnson explained that while “not every member of this committee … is well-versed in what a ‘deep neural network’ is or how a ‘GAN’ works, we have a sense already that the federal government is likely to need to create new tools that address this issue.”
And although House Speaker Nancy Pelosi (D-CA) was herself the victim of a deepfake, and it would seem she’d want to put the bill to a House vote, with the House so embroiled in the question of impeaching the President, it’s anyone’s guess whether she’ll take the time to move this simple piece of legislation across the finish line.