House unanimously passes deepfake bill; Senate version still in committee
The bipartisan House Identifying Outputs of Generative Adversarial Networks Act (IOGAN Act), which supports critical research to accelerate the development of technology to identify deepfakes that could sow public discord, scam the American public, and endanger national security, was passed by the House Monday with unanimous support. The bill had been passed out of the House Committee on Science, Space, and Technology in late September with six Democratic and three Republican co-sponsors.
As Biometric Update previously reported, it was unclear whether Speaker of the House Nancy Pelosi – embroiled in impeachment proceedings against President Donald Trump – would bring the bill to the full House for a vote.
But, according to House staff members familiar with the legislation who spoke to Biometric Update on background Tuesday morning, a significant number of Democratic House legislators wanted the bill put up for a vote, a move that now gives impetus for the Senate to do the same with its bipartisan version of the legislation, which was introduced on November 20 by Sen. Jerry Moran (R-KS) and Sen. Catherine Cortez Masto (D-NV), both members of the Senate Committee on Commerce, Science, and Transportation. The bill has yet to make it out of that committee. However, congressional sources told Biometric Update there’s “likely to be movement now” that the House version has passed, as one of them explained Tuesday.
The House committee report that accompanied the bill when it was passed out of committee explained that “the intent of this legislation is to accelerate the progress of research and the development of measurements, standards, and tools to combat manipulated media content, including the outputs of generative adversarial networks,” and “recognizes that [the] National Science Foundation [NSF] is already making investments in the area of manipulated or synthesized content through its Secure and Trustworthy Cyberspace and Robust Intelligence programs.”
The committee encouraged NSF “to continue to fund cross-directorate research through these programs, and others, to achieve the purposes” of the legislation, “including social and behavioral research on the ethics of these technologies and human interaction with the content generated by these technologies.”
In passing the bill out of committee to the full House, the committee made clear that it fully “intends for NSF and the National Institute of Standards and Technology [NIST]” to implement the legislation and “to work with other agencies conducting work on detecting manipulated and synthesized content, including DARPA, IARPA, and the agencies that participate in the Networking and Information Technology Research and Development (NITRD) program to ensure coordination and avoid duplication of effort.”
The House version, HR 4355, was introduced by Reps. Haley Stevens (D-MI) and Anthony Gonzalez (R-OH), along with co-sponsors Reps. Jim Baird (R-IN) and Katie Hill (D-CA).
The House and Senate versions are nearly identical; both would support critical research to accelerate the development of technology to identify deepfakes and, presumably, to counter them as effectively as possible.
“Recent technological advances have reshaped the world we live in, but with that come new threats to our national security that must be addressed,” Gonzalez said Monday following House passage of the bill. “It is critical that we learn to identify and combat deepfake technology now to stop scammers and foreign entities who would seek to do harm to the American public.”
Gonzalez’s office said in a statement Monday that, “(d)eepfake technology has developed rapidly over the past several years with no clear method of identifying and stopping it from becoming a major national security threat. The IOGAN Act directs the National Science Foundation and the National Institute of Standards and Technology to support research to accelerate the development of technologies that could help improve the detection of such content,” explaining that, “(a)dvancements in computing power and the widespread use of technologies like artificial intelligence over the past several years have made it easier and cheaper than ever before to manipulate and reproduce photographs, video and audio clips potentially harmful or deceptive to the American public. The ability to identify and label this content is critical to preventing foreign actors from using manipulated images and videos to shift U.S. public opinion.”
On September 26, less than two weeks after Gonzalez and Stevens introduced the IOGAN Act, the House Committee on Science, Space, and Technology’s Subcommittee on Investigations and Oversight convened a hearing “to explore the enabling technologies for disinformation online, including deepfakes, explore trends and emerging technology in the field, and to consider research strategies that can help stem the tide of malicious inauthentic behavior.”
The hearing was punctuated by gasps from committee members when they were shown a deepfake of committee members Rep. Michael Waltz (R-FL) and Rep. Donald S. Beyer (D-VA) speaking in artificial voices. The video was created for the committee by the Computer Vision and Machine Learning Lab at SUNY-Albany in partnership with the Cybersecurity Policy Initiative at the University of Chicago.
“The threat represented by the proliferation of information operations designed to deceive and manipulate users on social media demands a unified, forceful response by the whole of society,” Graphika Chief Innovation Officer Camille Francois told the committee as one of three witnesses who testified.
The day before the hearing, the committee had assembled to consider the legislation, at which time Gonzalez offered an amendment in the nature of a substitute to make technical corrections and conforming changes. The amendment was agreed to by voice vote.
Beyer also introduced an amendment to the amendment to provide for “fundamental research on technical tools for identifying manipulated or synthesized content, such as watermarking systems for generated media,” which was also agreed to by a voice vote. Rep. Jennifer Wexton (D-VA) followed by introducing an amendment to the amendment to “include research on public understanding and awareness of manipulated or synthesized content, including research on best practices.” It was also accepted by voice vote, at which time committee Chairwoman Eddie Bernice Johnson (D-TX) moved that the committee report the bill to the full House with the recommendation that it be approved. Her motion was accepted.
In her opening statement at the hearing, Committee Chairwoman Johnson explained that while “not every member of this committee … is well-versed in what a ‘deep neural network’ is or how a ‘GAN’ works, we have a sense already that the federal government is likely to need to create new tools that address this issue.”
The Senate version – the text of which was not available when Biometric Update first reported on the bill’s introduction – states:
• Research gaps currently exist on the underlying technology needed to develop tools to identify authentic videos, voice reproduction, or photos from manipulated or synthesized content, including those generated by generative adversarial networks;
• The National Science Foundation’s focus to support research in artificial intelligence through computer and information science and engineering, cognitive science and psychology, economics and game theory, control theory, linguistics, mathematics, and philosophy, is building a better understanding of how new technologies are shaping the society and economy of the United States;
• The National Science Foundation has identified the ‘10 Big Ideas for NSF Future Investment’ including ‘Harnessing the Data Revolution,’ and the ‘Future of Work at the Human-Technology Frontier,’ in which artificial intelligence is a critical component;
• The outputs generated by generative adversarial networks should be included under the umbrella of research described in paragraph (3) [of the bill] given the grave national security and societal impact potential of such networks; and
• Generative adversarial networks are not likely to be utilized as the sole technique of artificial intelligence or machine learning capable of creating credible deepfakes. Other comparable techniques may be developed in the future to produce similar outputs.
The Senate version would also require that “the Director of the National Science Foundation, in consultation with other relevant federal agencies, shall support merit-reviewed and competitively awarded research on manipulated or synthesized content and information authenticity, which may include:
• Fundamental research on digital forensic tools or other technologies for verifying the authenticity of information and detection of manipulated or synthesized content, including content generated by generative adversarial networks;
• Fundamental research on technical tools for identifying manipulated or synthesized content, such as watermarking systems for generated media;
• Social and behavioral research related to manipulated or synthesized content, including the ethics of the technology and human engagement with the content;
• Research on public understanding and awareness of manipulated and synthesized content, including research on best practices for educating the public to discern authenticity of digital content; and
• Research awards coordinated with other Federal agencies and programs, including the Networking and Information Technology Research and Development Program, the Defense Advanced Research Projects Agency, and the Intelligence Advanced Research Projects Agency.
The Senate bill would further require that “the Director of the National Institute of Standards and Technology shall support research for the development of measurements and standards necessary to accelerate the development of the technological tools to examine the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content,” and that the NIST director “shall conduct outreach to:
• Receive input from private, public, and academic stakeholders on fundamental measurements and standards research necessary to examine the function and outputs of generative adversarial networks; and
• Consider the feasibility of an ongoing public and private sector engagement to develop voluntary standards for the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content.
Meanwhile, Hill sources say “we may begin seeing movement on more of these bills,” referring to the ten bills that address the problem of deepfakes introduced in Congress this year, three by Republicans and seven by Democrats.
Meanwhile, there’s also the “slew” of biometrics legislation introduced in the 116th Congress this year, none of which has been passed, as Biometric Update has reported.
However, given the seemingly more worrisome concerns on the Hill over deepfakes, there could be movement in the House on its version of the Deepfake Report Act, the Senate committee-reported version of which, S 2065, was passed on October 24 with an amendment by unanimous consent, and referred to the House. The bipartisan legislation was introduced by Sen. Brian Schatz (D-HI) and Sen. Rob Portman (R-OH), and it would direct the Department of Homeland Security (DHS) to conduct an annual study of deepfakes and other types of similar content.
The importance of the bill was perhaps best expressed by the committee report’s citation of a statement by Clint Watts, Distinguished Research Fellow and Senior Fellow, Foreign Policy Research Institute and Alliance for Securing Democracy, German Marshall Fund, during his July 13 testimony before the House Permanent Select Committee on Intelligence hearing, The National Security Challenges of Artificial Intelligence, Manipulated Media, and Deepfakes.
“Over the long term, deliberate development of false synthetic media will target US officials, institutions, and democratic processes with an enduring goal of subverting democracy and demoralizing the American constituency,” Watts warned, adding: “In the near and short term, circulation of deepfakes may [even] incite physical mobilizations under false pretenses, initiate public safety crises, and spark the outbreak of violence. The recent spate of false conspiracies proliferating via WhatsApp in India offer a relevant example of how bogus messages and media can fuel violence. The spread of deepfake capabilities will likely only increase the frequency and intensity of such violent outbreaks.”
Watts went on to warn that “China and Russia will continue to use deepfake technologies to discredit domestic dissidents and foreign detractors, incite fear, and promote conflict inside Western-style democracies, and distort the reality of American audiences and audiences of American allies.”
The committee report itself warned that, “(a)s cyber-enabled warfare increasingly becomes the norm, national security experts warn that if the federal government does not take swift action to address persistent purveyors of information warfare, deepfake technologies will only continue to become more sophisticated and widely used in disinformation campaigns launched by our nation’s foreign adversaries, most notably China and Russia.”
“Fake content can damage our national security and undermine our democracy,” Schatz said, adding that the bill would “direct the federal government to learn more about the scope of deepfake technology.”
“This bill prepares our country to answer those questions and address concerns by ensuring we have a sound understanding of this issue,” Portman added.
Schatz and Portman’s legislation would mandate that DHS assess the technologies that are being used and developed to spawn deepfakes, the exploitation of deepfakes by foreign and domestic entities, and available countermeasures in order to help policymakers and the public better comprehend the threats that deepfakes pose to national security and election security.