AI voice fraud draws new congressional scrutiny

Lawmakers are zeroing in on how synthetic voices are used in scams as multiple bills target deepfakes, impersonation, and fraud
U.S. Sen. Maggie Hassan is escalating congressional scrutiny of the fast-growing AI voice-cloning industry, pressing four major companies to explain what they are doing to stop scammers from turning synthetic speech tools into engines of fraud.

In letters dated April 16 to ElevenLabs, LOVO, Speechify, and VEED, the New Hampshire Democrat and ranking member of Congress's Joint Economic Committee demanded detailed answers about what they are doing to prevent bad actors from using their services.

Hassan wants to know whether the companies monitor for scam-related uses, verify that a person has consented before their voice is cloned, detect attempts to imitate public figures and minors, watermark AI-generated audio, preserve provenance information, and report bad actors to law enforcement.

The letters amount to more than another general warning about the harms of AI. They reflect a more specific congressional concern that voice models have become highly usable, widely accessible, and increasingly difficult for ordinary people to detect.

“In recent years, global criminal networks have used deepfake voice programs, along with other new AI tools, to target more people with increasingly personalized and believable digital scams, fueling a booming scam industry that surpasses the global drug trade as an illicit industry,” Hassan told the companies.

“Protecting Americans from these financial losses will require collaboration between the public and private sectors, and AI companies [including yours] are on the frontlines of this effort,” Hassan added.

Hassan repeatedly frames the problem in operational terms. She is not only asking whether companies prohibit fraud in their terms of service, but whether they enforce those policies, how often they update scam phrase lists, how many violators they have caught, when they ban users, whether those users can return under new accounts, and whether law enforcement receives information that the public does not.

That focus matters because the threat is no longer hypothetical. Hassan pointed out that the Federal Bureau of Investigation’s (FBI) 2025 Internet Crime Report, released this month, shows victims lost $893 million to AI-related scams in 2025, a figure that underscores how quickly synthetic media is being absorbed into familiar fraud schemes.

Cryptocurrency and AI-related scams were among the costliest, the FBI said.

The FBI also said the Internet Crime Complaint Center received 1,008,597 total complaints, an increase from 859,532 in 2024. Phishing/spoofing, extortion, and investment schemes were the most frequently reported complaints. Americans over 60 reported approximately $7.7 billion in losses, up 37 percent from 2024.

Industry and consumer advocates have been warning about the same trend. Consumer Reports said in its March 2025 assessment of AI voice-cloning products from Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify that it “found a majority of the products assessed did not have meaningful safeguards to stop fraud or misuse of their product.”

Consumer Reports said the platforms should automatically flag and prohibit audio containing phrases commonly used in scams and other fraud, a recommendation that closely tracks the questions Hassan posed in her letters to some of the same companies.

Those letters lay out why lawmakers are alarmed. Hassan cited research finding that people are poorly equipped to identify AI-generated voice clones and noted that these systems can create convincing synthetic voices from only a brief audio sample.

Hassan highlighted how easy it has become to pick from prebuilt voice libraries or generate synthetic voices in many languages.

ElevenLabs, for example, is described as offering thousands of voices in dozens of languages; LOVO more than 500 voices in 100 languages; Speechify more than 1,000 voices in over 60 languages; and VEED more than 35 voices capable of speaking dozens of languages.

Hassan said romance scams, impersonation scams, and so-called grandparent scams have manipulated victims into believing a loved one is in danger.

She noted a 2025 case involving New Hampshire families who were allegedly tricked by an AI-generated imitation of a relative’s voice, as well as 2024 reports from Merrimack County, New Hampshire, where residents received scam calls from voices made to sound like family members or law enforcement.

Voice cloning has also been used against businesses to bypass voice-based authentication or to impersonate executives and authorize transfers of large sums of money.

Another striking element of Hassan’s inquiry is how directly it targets platform design choices. Several of her questions ask whether the companies require real-time audio for verification, whether they demand authentic non-public recordings before allowing a clone, and what mechanisms they use to determine whether submitted audio is genuine.

She also wants to know whether the companies detect when users try to create “no-go” voices, such as politicians and celebrities, and whether they can tell when a user succeeds in bypassing those safeguards anyway.

Her letters also probe whether the companies permit the cloning of minors’ voices or the creation of synthetic child-like voices, and if so, what protections they have in place against exploitative misuse.

Hassan’s line of questioning dovetails with one of the central critiques of the sector: many voice-cloning products historically relied more on user promises than on meaningful technical guardrails.

Consumer Reports said most leading products it examined lacked strong technical mechanisms to stop nonconsensual voice cloning and recommended both identity-focused controls and automatic scam phrase detection.

Hassan is asking the companies whether they have gone beyond self-attestation and basic policy language to adopt the kind of systems critics say are necessary.

Hassan’s oversight push comes as Congress considers a more formal legislative answer. Senate bill S.3982, the AI Fraud Accountability Act of 2026, would establish a federal framework aimed squarely at digital impersonation fraud.

Introduced last month by Republican Sen. Tim Sheehy and Democratic Sen. Lisa Blunt Rochester and referred to the Senate Committee on Commerce, Science, and Transportation, the bill would amend the Communications Act of 1934 to create a criminal prohibition on using a “digital impersonation” in interstate or foreign communications with intent to defraud someone of money, documents, or anything of value.

A companion bill was introduced in the House by Republican Rep. Vern Buchanan, vice chairman of the House Committee on Ways and Means and chairman of the House Democracy Partnership, and Democratic Rep. Darren Soto.

Both bills define digital impersonation broadly to cover convincingly fabricated or altered audio or visual depictions of either an identifiable real person or even an imaginary person presented as genuine.

The bill would authorize penalties of up to three years in prison, include forfeiture provisions, and establish extraterritorial federal jurisdiction, a notable provision given that many scam operations originate abroad.

“We are seeing a disturbing rise in AI-generated voice clones and deepfake videos that convincingly impersonate loved ones, business executives, government officials, and trusted institutions to steal money,” Buchanan said.

“Congress must act to stay ahead of these threats by modernizing federal law to keep up with emerging technology. The AI Fraud Accountability Act makes clear that if you use AI to defraud Americans, you will be prosecuted,” Buchanan added.

The AI Fraud Accountability Act would create a civil and regulatory enforcement route through the Federal Trade Commission (FTC). A violation would be treated as an unfair or deceptive act or practice enforceable by the FTC. The bill is structured not only to punish fraudsters after the fact, but also to make digital impersonation fraud a matter of consumer protection enforcement.

The bill also contains a standards and governance component. It would require the Secretary of Commerce, acting through the National Institute of Standards and Technology (NIST), to convene a working group within 30 days of enactment to develop best practices for recognition, detection, prevention, and tracing of digital impersonations used in fraud.

The working group would include representatives from the Department of Justice, FTC, federal, state, and local law enforcement, private sector industries such as financial services, telecommunications, health care, retail, and digital platforms, as well as scientists and engineers with expertise in digital forensics and AI.

NIST would then be required to publish best practices and update them annually.

That structure is revealing. Hassan’s letters seek data from the companies about what safeguards exist, how effective they are, and where the gaps remain. The bill, by contrast, tries to build the enforcement and technical architecture that would follow from such findings.

In that sense, Hassan’s letters and the AI Fraud Accountability Act are complementary. Hassan is gathering the information Congress would need to judge whether voluntary industry practices are working; the bill, if enacted, would supply the beginnings of a statutory answer if lawmakers conclude they are not.
