AI impersonation attacks against US officials growing more sophisticated, FBI warns

Federal authorities are warning that a years-long impersonation campaign targeting senior U.S. officials has not only continued but has also grown more sophisticated, increasingly blending AI-generated voice and text with classic social engineering techniques to deceive victims into revealing sensitive information, contacts, and, in some cases, money.
The message from federal authorities is blunt: in an era of AI-enabled deception, authenticity can no longer be assumed, even when the voice on the other end sounds exactly right.
The latest alert, issued by the Federal Bureau of Investigation (FBI) on December 19, updates a public service announcement first released in May and reflects activity dating back to at least 2023, when malicious actors began posing as high-ranking government figures to exploit trust and familiarity.
According to the updated alert, attackers have impersonated senior state government officials, White House and Cabinet-level leaders, and members of Congress. Targets have included not only policy professionals and business figures, but also family members and personal acquaintances of officials.
The common thread across cases is an attempt to leverage the authority and credibility of well-known names to lower skepticism at the outset of a conversation.
The campaign typically begins with a text message or phone call that appears to come from a current or former senior official. In many cases, the outreach is tailored, referencing issues the recipient is known to work on, such as trade, security policy, or diplomatic affairs.
After a brief exchange, the attacker urges the victim to continue the conversation on an encrypted messaging platform such as Signal, Telegram, or WhatsApp.
Once the conversation moves off traditional SMS or email and into a private messaging app, the interaction often becomes more elaborate and more dangerous.
Within these encrypted channels, impersonators may continue discussing policy topics to maintain credibility, propose meetings with senior U.S. leaders, or suggest that the target is under consideration for a corporate board appointment or other prestigious role.
In parallel, some actors escalate to direct requests.
Victims have been asked to provide one-time authentication codes that would allow the attacker to synchronize a device with the victim’s contact list, supply personally identifiable information and scans of sensitive documents such as passports, wire funds overseas under fabricated justifications, or introduce the impersonator to other trusted associates.
The FBI stressed that these requests are not random. Access to a victim’s contact list can allow attackers to rapidly expand the operation by mapping personal and professional networks and launching secondary impersonations that appear even more convincing because they reference real relationships.
In several cases, the alert noted, attackers were able to exploit permissions granted to messaging applications, which often request access to contact data during setup or updates.
The advisory underscores that the use of AI has sharply reduced traditional warning signs. Attackers are now employing AI-generated voice messages that closely match the cadence, tone, and accent of real officials, a technique commonly referred to as voice cloning.
Text messages may incorporate publicly available photographs, realistic signatures, and writing styles drawn from speeches, interviews, or prior correspondence. Visual cues are no longer reliable indicators, and even experienced professionals can struggle to distinguish authentic communications from synthetic ones.
This warning builds on a string of high-profile incidents that have made clear how disruptive AI-enabled impersonation can be.
This past summer, U.S. officials disclosed an operation in which attackers used AI voice and text tools to impersonate Secretary of State Marco Rubio, reaching out to foreign ministers and senior domestic officials in an apparent attempt to gain access to sensitive information or accounts.
Earlier in the year, an AI impersonator posed as a senior White House official and contacted governors, senators, and business leaders after exploiting access to a compromised contact list. In each case, the realism of the communications delayed detection and increased the potential damage.
Federal investigators, including the FBI, have emphasized that these campaigns are not limited to financial fraud. While some victims have been pressured to transfer funds, others were targeted for intelligence gathering, influence operations, or access to protected systems and networks.
The blending of generative AI with traditional social engineering marks a shift toward what security officials describe as intelligence-grade deception, capable of undermining trust in official communications themselves.
The FBI’s latest alert urges recipients to treat any unexpected message purporting to come from a senior official with caution, regardless of how convincing it appears. Verification should be conducted through independently obtained contact information rather than replying directly to the message.
Officials also warn against clicking links, opening attachments, or downloading applications sent by unverified contacts, and stress that no legitimate government official will ask for authentication codes, sensitive documents, or emergency financial transfers through messaging apps.
More broadly, the alert reflects growing concern inside the national security community that AI-driven impersonation threatens the basic assumptions underpinning diplomatic, political, and corporate communications.
As voice synthesis and text generation continue to improve, attackers no longer need to breach secure networks to cause harm. Instead, they can exploit human trust, publicly available data, and the informal communication habits that have become commonplace at senior levels of government.
The updated warning concludes with a call for heightened vigilance and cultural change. Two-factor authentication should be enabled and never bypassed, contact permissions tightly controlled, and verification norms reinforced even among familiar colleagues.
Some officials now recommend establishing shared secret phrases within families and teams to confirm identity in sensitive situations.
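The shared-phrase recommendation is simple enough to automate for teams that verify callers routinely. As a minimal illustrative sketch (not from the FBI guidance; the helper name and normalization rules are assumptions), a pre-agreed phrase can be checked with a constant-time comparison so the check itself leaks nothing through timing:

```python
import hmac


def phrase_matches(agreed: str, offered: str) -> bool:
    """Hypothetical helper: check a spoken or typed verification phrase
    against the one agreed on in advance.

    Normalizing case and whitespace reduces false mismatches when the
    phrase is relayed verbally; hmac.compare_digest performs a
    constant-time comparison.
    """
    def norm(s: str) -> bytes:
        return " ".join(s.lower().split()).encode("utf-8")

    return hmac.compare_digest(norm(agreed), norm(offered))


if __name__ == "__main__":
    secret = "blue heron at dawn"          # agreed offline, never sent in-channel
    print(phrase_matches(secret, "Blue  Heron at dawn"))  # True
    print(phrase_matches(secret, "gray heron at dawn"))   # False
```

The key property is that the phrase is exchanged out of band ahead of time and never transmitted over the channel being verified, so a cloned voice that lacks the phrase fails the check regardless of how convincing it sounds.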