US Senators move to ban AI-driven impersonation scams as fraud losses surge

United States Senators Shelley Moore Capito and Amy Klobuchar have moved to confront one of the fastest growing consumer threats of the generative AI era by introducing the bipartisan Artificial Intelligence Scam Prevention Act.

The bill is aimed squarely at AI-driven impersonation scams that use cloned voices, synthetic images, and fabricated video calls to trick victims into sending money or divulging sensitive personal information.

If enacted, the bill would mark one of the most direct federal responses yet to consumer harm caused by generative AI.

Rather than regulating how AI systems are built, the bill focuses on how they are misused, treating synthetic impersonation as an evolution of traditional fraud rather than an entirely new category of crime.

Lawmakers backing the measure say that distinction is intentional, and necessary, as AI tools rapidly become embedded in everyday communications.

The legislation comes amid mounting evidence that AI-assisted fraud is accelerating faster than existing consumer protection laws can respond.

While precise figures isolating AI-driven impersonation scams are still emerging, multiple credible estimates indicate that the monetary damage from AI-assisted fraud overall is already substantial and growing rapidly, with impersonation tactics a significant contributor.

The Federal Bureau of Investigation (FBI) reports millions of fraud complaints resulting in more than $50 billion in total losses since 2020, with a growing share attributed to deepfake and synthetic identity schemes.

According to recent data from the Federal Trade Commission (FTC), Americans lost close to $2 billion last year to scams initiated through calls, texts, and emails, with phone-based fraud producing the highest per-victim losses.

Recent research and industry projections suggest that fraud losses enabled or amplified by generative AI could reach roughly $40 billion in the U.S. by 2027, up from about $12 billion in 2023, reflecting a compound annual growth rate of more than 30 percent as criminals deploy AI to craft more convincing scams and bypass traditional defenses.

Surveys indicate that when individuals do fall victim to AI voice-cloning scams, a large proportion report financial loss, with many victims losing between hundreds and thousands of dollars, and a smaller share losing five-figure amounts.

Regulators and consumer advocates say generative AI has supercharged those schemes, allowing criminals to convincingly mimic family members, bank representatives, government officials, and even corporate executives at scale.

The Artificial Intelligence Scam Prevention Act seeks to close what lawmakers describe as a widening legal gap. At its core, the bill would make it explicitly illegal to use AI to replicate any person’s voice or image with the intent to defraud.

“Artificial intelligence has allowed scams to become more sophisticated, making it easier for fraudsters to deceive people – especially seniors and children – into giving up their personal information or hard-earned money,” Klobuchar said. “Our bipartisan legislation will help take on scammers who use AI to copy someone’s voice or image.”

“While there is incredible potential with artificial intelligence, we must also be vigilant in protecting against harmful uses of the technology, especially when it comes to fraud and scams,” Capito added.

While impersonation fraud is already unlawful, Klobuchar and Capito argue that many statutes still hinge on outdated definitions written decades before synthetic media existed.

By clearly covering AI-generated voices, images, prerecorded messages, text messages, and video conference calls, their bill is designed to ensure that prosecutors and regulators can act without having to stretch analog-era laws to fit digital deception.

A central feature of the bill is the formal creation of an interagency advisory committee on AI-enabled fraud.

The committee would be charged with coordinating enforcement and intelligence sharing among agencies including the FTC, Federal Communications Commission, and Department of the Treasury, which oversees financial crime and sanctions enforcement.

Klobuchar and Capito say coordination is essential given that AI scams often straddle telecommunications networks, online platforms, and the financial system simultaneously.

The bill also would codify the FTC’s existing ban on impersonating government agencies and legitimate businesses, elevating agency rules into statutory law.

Supporters argue that this change would give the FTC stronger authority to impose civil penalties and seek restitution for victims, rather than relying primarily on injunctive relief.

The legislation also would update the Telemarketing and Consumer Fraud and Abuse Prevention Act and the Communications Act of 1934, neither of which has been meaningfully revised since the 1990s to reflect modern communications technologies.

Consumer protection officials have been warning for months that AI-driven scams are becoming both more convincing and more difficult to detect. The FTC and FBI have reported a surge in so-called family emergency scams in which criminals use short audio clips scraped from social media to generate near-perfect voice clones.

Victims are often pressured into acting quickly, believing they are helping a child or relative in immediate danger. Similar techniques have been used to impersonate corporate executives in wire fraud schemes targeting finance departments.

Reaction to the bill has been largely positive among consumer advocates and financial institutions, which have borne the brunt of AI-enabled fraud losses.

Banking groups have repeatedly urged Congress to establish a clear federal standard rather than leaving institutions to navigate a patchwork of state laws and voluntary guidelines.

By focusing on intent to defraud, rather than the mere creation of synthetic media, supporters say the legislation avoids sweeping in legitimate uses of AI for satire, accessibility, entertainment, or artistic expression.

Not surprisingly, technology companies are watching closely. Major platforms have rolled out their own defensive measures in recent months, including call-screening tools, scam detection algorithms, and provenance signals for AI-generated content.

Still, industry groups have cautioned that enforcement alone will not stop overseas actors operating beyond U.S. jurisdiction.

Some have called for the advisory committee created by the bill to prioritize international cooperation and information sharing, particularly as AI models capable of producing realistic voice and video clones become smaller and easier to run locally.

Privacy advocates, meanwhile, are urging lawmakers to ensure that anti-fraud efforts do not quietly expand surveillance of private communications. They warn that pressure to detect AI scams could collide with encryption and user privacy protections if not carefully constrained.

The bill itself does not mandate new monitoring requirements, but critics note that the practical impact will depend heavily on how regulators implement and enforce its provisions.

As Congress heads into 2026 with multiple AI-related bills under consideration, the scam prevention proposal underscores a broader shift in Washington’s approach to AI.

After years of abstract debates about future risks, lawmakers are increasingly responding to concrete, measurable harm already hitting consumers’ phones, inboxes, and bank accounts.

Whether the new framework can keep pace with the speed and adaptability of AI-driven fraud remains an open question, but supporters argue that failing to modernize the law would leave Americans even more exposed in a world where hearing a familiar voice is no longer proof of who is really on the line.
