Deepfake threats reveal contradictions in Trump administration’s AI governance plan

In March, the White House Office of Science and Technology Policy (OSTP) announced it had received more than 10,000 public comments on the Trump administration’s AI Action Plan, a response hailed by administration officials as a symbol of democratic engagement and public investment in the future of AI. However, the AI Action Plan as it currently stands celebrates technological ambition while sabotaging its foundations.
The thousands of comments submitted to OSTP reveal a consistent and urgent set of concerns across industry, academia, civil society, and the financial and technological sectors. These concerns collectively reflect a deep unease about fragmented governance, the erosion of national competitiveness, and the lack of a coherent, forward-looking AI strategy, especially one that could position the United States for long-term technological leadership amid rising geopolitical threats.
They reveal not just interest, but alarm, painting a portrait of an AI landscape that is increasingly vulnerable to fraud, espionage, and deception, not because of insufficient innovation, but because of insufficient safeguards. Until the administration acknowledges that deepfake detection is not a threat to free speech but rather a defense of truth, its vision of American AI leadership will remain ideologically bold but strategically hollow.
OSTP, the Networking and Information Technology Research and Development (NITRD) National Coordination Office, and the National Science Foundation (NSF) published the Request for Information (RFI) in February to obtain public input on the development of an AI Action Plan, as directed by Trump’s January executive order, Removing Barriers to American Leadership in Artificial Intelligence.
While the plan claims to reassert U.S. dominance in AI innovation by eliminating regulatory hurdles and expanding technological freedom, beneath the celebratory rhetoric lies a sharp ideological fracture, one that pits economic ambition against the administration’s own suppression of scientific research, and national security against absolutist interpretations of free speech. At the center of this contradiction is the ever-growing threat of deepfakes.
Across the thousands of comments received, few issues drew more urgent and consistent attention than the growing threat of synthetic media used to deceive, defraud, and destabilize. Global biometric firms such as Pindrop and iProov, along with cybersecurity consortia like the Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG) and the Financial Services Sector Coordinating Council (FSSCC), outlined the scale of this threat and the structural vulnerabilities it exposes in both public and private infrastructure.
Their recommendations were clear: without serious federal investment in deepfake detection, liveness verification, and authenticity standards, the very foundation of economic, governmental, and social trust will erode. Yet the Trump administration has simultaneously worked to dismantle the very systems capable of responding to such threats. Under the pretext of “restoring freedom of speech,” federal support for deepfake research has been slashed.
Some $328 million in NSF grants was eliminated, including projects focused on disinformation, biometric spoofing, election security, and AI risk modeling. The administration’s justification was that these initiatives amounted to censorship by the federal government, a view also embedded in the language of Trump’s executive order, which claims, inaccurately, that the administration of former President Joe Biden used federal influence to “suppress speech the government did not approve.”
The implications of this policy shift are stark. The same government that touts AI as a cornerstone of national competitiveness is now defunding the very tools demonstrably needed to protect its digital ecosystem. Deepfakes are not a theoretical issue.
According to a 2025 Ponemon Institute study, 42 percent of U.S. security professionals reported that senior executives or board members at their organizations had been impersonated by AI-generated media, often to induce fraudulent financial transfers. Pindrop, whose audio authentication tool Pindrop Pulse has outperformed global competitors and won multiple federal cybersecurity competitions, reported a 683 percent surge in deepfake voice attacks in 2024. In some cases, major financial institutions now experience up to seven synthetic voice fraud attempts per day.
The problem is no longer verifying identity; it is verifying reality. As Pindrop argued, the fundamental question about security systems has shifted from “Are you the right person?” to “Are you even a real person?” Deepfake impersonations are no longer confined to satire or social media; they have been used in corporate sabotage, job applicant fraud, and even international deception.
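To make that reframing concrete, the hypothetical sketch below shows how an admission decision for a sensitive phone or video interaction might gate on liveness and synthesis checks in addition to a voiceprint match. The scores, thresholds, and function names are illustrative assumptions only; they are not Pindrop Pulse’s actual interface or method.

```python
# Illustrative sketch: identity matching alone no longer answers the question
# "are you even a real person?" A cloned voice can match the enrolled
# voiceprint, so liveness and synthesis scores gate the decision.
# All names, scores, and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class CallerCheck:
    voiceprint_match: float   # 0-1: does the voice match the enrolled user?
    liveness_score: float     # 0-1: does the audio behave like a live human?
    synthetic_score: float    # 0-1: likelihood the audio is machine-generated

def admit(check: CallerCheck,
          match_threshold: float = 0.90,
          liveness_threshold: float = 0.85,
          synthetic_ceiling: float = 0.10) -> str:
    # Reject likely synthetic audio outright, even if the voiceprint matches.
    if check.synthetic_score > synthetic_ceiling:
        return "reject: likely synthetic audio"
    # Low liveness triggers step-up verification rather than silent acceptance.
    if check.liveness_score < liveness_threshold:
        return "step-up: request additional live verification"
    # Only then does the classic identity question apply.
    if check.voiceprint_match < match_threshold:
        return "reject: voice does not match enrolled user"
    return "admit"

# Example: a deepfake that matches the voiceprint but fails the synthesis check.
print(admit(CallerCheck(voiceprint_match=0.97, liveness_score=0.40, synthetic_score=0.92)))
```

The point of the sketch is the ordering: checks for realness run before, not instead of, checks for identity.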
In one notorious 2024 incident, a U.S. senator was tricked into a Zoom meeting with a fake Ukrainian official, manipulated by foreign adversaries using advanced video generation tools. In another case, North Korean operatives used synthetic facial and vocal software to pass themselves off as American software developers and secure remote contracts with U.S. firms, thereby gaining privileged access to corporate systems.
Despite these warnings, the Trump administration’s AI Action Plan appears uninterested in addressing synthetic threats head-on. In fact, federal implementation language equates the detection of disinformation with ideological repression. Programs designed to help distinguish fact from fabrication, programs critical for election integrity, consumer protection, and national security, have been recast as tools of coercion. Deepfake detection, by this logic, becomes a form of censorship rather than an essential defensive measure.
Framing the problem this way is fundamentally at odds with the consensus emerging across industries. iProov, another global leader in biometric authentication, argued that trust, not deregulation, is the key to widespread AI adoption. It drew on data showing that American consumers and businesses are increasingly wary of AI not because of privacy concerns, but because of doubts about system integrity. The U.S. now trails nations like China, Singapore, and the U.K. in AI uptake not for lack of ambition, but because Americans no longer know what, or whom, to trust.
iProov highlighted the rise of synthetic identity fraud as one of the greatest financial and national security risks of the AI era. In 2024, identity fraud rose 45 percent, with nearly $3 billion in losses attributed to deepfake-powered scams. More than 115,000 known attack vectors now exist for facial spoofing and synthetic biometric impersonation. In one case, fake identities enabled a network of hackers to penetrate U.S. supply chain management software and extract sensitive proprietary data.
Both iProov and Pindrop propose a suite of reasonable, actionable responses. Pindrop recommends a federal Deepfake Summit, the integration of real-time detection tools into anti-money laundering protocols, and mandatory liveness verification for any AI system interacting with sensitive information. iProov calls for standardized biometric evaluation metrics from NIST and national training programs to improve media literacy and synthetic content detection across sectors.
These and other proposals do not advocate censorship; they call for security and trustworthiness. Their absence from the Trump administration’s AI roadmap is not a policy oversight, but an ideological omission. The administration’s elevation of unfiltered digital speech over empirical cybersecurity needs reveals a deep rift between political messaging and technical necessity. The belief that AI can be made “safe” simply by deregulating its development and removing guardrails around its outputs ignores an entire category of threats that depend not on censorship, but on deception.
This dissonance is echoed in dozens of other public comments submitted in response to the RFI. Organizations like the AI Applied Consortium, NetChoice, FSSCC, and M3AAWG all independently flagged deepfakes as a top-tier AI risk. They argue that public-private collaboration, open-source detection models, watermarking standards, and secure-by-design architectures must be part of any legitimate national AI strategy. They do not treat synthetic media threats as hypothetical because they are not.
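As a rough illustration of what the watermarking standards mentioned above could enable, the sketch below signs a content hash at publication and verifies it downstream, so any alteration to the media invalidates the tag. It uses a plain HMAC for brevity; the key, tag format, and helper names are assumptions for illustration, not any organization’s actual standard, and real provenance efforts (such as C2PA-style manifests) are considerably more elaborate.

```python
# Minimal sketch of a provenance check: the publisher signs a hash of the
# media, and downstream systems verify the tag before trusting the content.
# The key and helpers are placeholders, not a real standard.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-key-not-for-production"  # placeholder shared secret

def sign_content(content: bytes) -> str:
    """Produce a provenance tag over a hash of the media."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check the tag; any edit to the content invalidates it."""
    return hmac.compare_digest(sign_content(content), tag)

video = b"frame-bytes-of-a-press-briefing"
tag = sign_content(video)
print(verify_content(video, tag))                        # True: untouched original
print(verify_content(video + b"altered-frames", tag))    # False: content was modified
```

The design choice worth noting is that verification requires no judgment about the content’s message, only about its integrity, which is why commenters treat such standards as security measures rather than speech controls.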
The dominant concern that emerged across all stakeholder feedback is consistent: the U.S. must develop a coherent, risk-sensitive, innovation-friendly AI regulatory framework. This means regulation that distinguishes between low- and high-risk applications, avoids stifling small businesses, and empowers developers while safeguarding public trust. And nowhere is this balance more essential than in addressing the rapid proliferation of synthetic media.
What the White House has failed to grasp is that trust is not a byproduct of deregulation but a prerequisite for adoption. Without it, transactions gain friction, institutions lose legitimacy, and innovation grinds to a halt. AI, more than any previous technological shift, depends on the credibility of its outputs. If Americans cannot believe what they see or hear, they will not trust the systems producing those outputs, no matter how efficient, advanced, or deregulated they may be.