Agentic AI is supercharging the deepfake crisis: How companies can take action

By: Alix Melchy, Vice President of AI at Jumio
Fraud is no longer limited to stolen passwords. It now includes synthetic identities, digital forgeries, and AI-powered deception that is easy for bad actors to create. This growing sophistication is already having a measurable impact: the FBI reports that complaints about deepfake AI videos have more than doubled, and the associated financial losses have nearly tripled this year.
It’s evident that deepfakes are becoming increasingly difficult to detect, especially without advanced technological assistance. With the current scale and sophistication of modern attacks, AI is the only viable path forward for organizations to secure operations, as it improves the accessibility of advanced defense systems, lowers friction for users, and is more affordable, enabling greater scalability.
The steep price of deception
The financial cost of deepfakes to businesses is becoming more severe as AI-driven fraud grows more sophisticated. Deloitte’s Center for Financial Services predicts that generative AI could drive fraud losses in the United States to $40 billion by 2027, up from $12.3 billion in 2023, a compound annual growth rate of 32%.
To put that into perspective, consider a finance worker who is invited to a team call with their chief financial officer and several other staff members to discuss a high-value transaction. Everyone looks and sounds exactly as they should. Believing the call is legitimate, the worker approves a request to transfer $25 million of company funds. Only later is it discovered that every other person on the call was a deepfake, part of a sophisticated scam orchestrated by a fraud ring. This really happened, costing a multinational firm millions in 2024, and the scenario is becoming more common as AI grows more sophisticated and more accessible.
The scaling of autonomous fraud with AI agents
The accessibility of generative AI continues to lower the barrier to sophisticated scams. It’s easier than ever for bad actors to manipulate video and voice recordings for fraudulent purposes. According to a recent identity survey, 74% of respondents said deepfake videos and voice recordings can look and sound convincingly real, and an alarming 41% were not confident in their ability to spot a deepfake.
Generative AI accelerated these fraud attempts, but agentic AI will make them even worse. Agentic AI lowers the cost of fraud by reducing the human effort required. This is evident in recent research from IBM’s X-Force, which found that AI could draft a phishing email in five minutes, a task that once took human experts 16 hours. The same automation can carry out multi-step fraudulent operations and complicated attack chains without large teams of skilled coders. By lowering the technical skill required, AI agents allow even low-skilled actors to execute sophisticated fraud schemes. It also means major organizations are no longer the sole target: because committing fraud has become cheap and easy, fraudsters now profit from targeting smaller companies, making smaller payouts worth the effort.
Not only can agentic AI carry out these attacks quickly with little technical knowledge, but a network of agents can also be deployed at scale. In just minutes, they can run millions of simultaneous fraud attempts across different platforms without constant oversight or a whole team behind them. This is fueling a rise in ghost operators within fraud rings, turning agentic AI into a plug-and-play service for cybercrime. It is shattering the traditional identity verification approach, forcing companies to confront the incoming massive scale of autonomous fraud with a broken security paradigm.
Fighting fire with fire: Layered AI-powered defenses
As agentic AI propels fraud to a whole new level, the best way to keep your company secure is by fighting fire with fire, or in this case, AI with AI. To do so, companies need to implement multi-layered AI defense strategies that make it exponentially harder for bad actors to succeed. Enterprises can’t rely on traditional verification methods that add more layers of friction or collect more personal data, as that would deter customers. Instead, businesses need to rethink digital identity protection not only to reduce fraud and fraud-related losses, but also to preserve customer trust and digital engagement.
To achieve this, organizations’ defense systems should contextualize individual actions, granularly isolate scopes of impact, and rely on ongoing reassessments of authorization. In other words, a highly secure system doesn’t just check a user’s identity once but continuously evaluates what the user is doing, where they are doing it, and why they are doing it.
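The idea of continuously reassessing authorization based on what a user is doing, where, and why can be sketched in code. The sketch below is purely illustrative: the `Action` fields, risk weights, and the 0.5 threshold are assumptions, not any vendor's actual scoring model.

```python
from dataclasses import dataclass

# Hypothetical sketch: re-score every in-session action and apply
# progressive friction when contextual risk crosses a threshold.

@dataclass
class Action:
    kind: str         # what the user is doing (e.g. "view", "transfer")
    amount: float     # monetary value of the action, if any
    new_device: bool  # where: is this an unrecognized device?
    new_payee: bool   # why: does the action deviate from prior behavior?

def risk_score(action: Action) -> float:
    """Combine simple contextual signals into a 0..1 risk score."""
    score = 0.0
    if action.kind == "transfer":
        score += min(action.amount / 100_000, 0.5)  # larger transfers carry more risk
    if action.new_device:
        score += 0.3
    if action.new_payee:
        score += 0.2
    return min(score, 1.0)

def required_step(action: Action, threshold: float = 0.5) -> str:
    """Progressive friction: only high-risk actions trigger re-verification."""
    return "liveness_check" if risk_score(action) >= threshold else "allow"

# A routine action passes without friction; a large transfer from a new
# device to a new payee triggers a step-up liveness check.
routine = Action("view", 0, False, False)
risky = Action("transfer", 250_000, True, True)
```

In a real system the score would come from behavioral analytics rather than hand-tuned weights, but the control flow is the point: authorization is never assumed to persist within a session.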
The pillars of defense
To stay secure against agentic AI-driven scams, companies should approach this risk through identity intelligence, going beyond just verifying who a person is to build a more accurate, long-term view of an individual user’s risk profile. This allows organizations to separate genuine users from deepfakes and mitigate any risks before they cause significant harm. When building this layered defense, companies should implement the following key strategies:
- Continuous authorization: To verify identity, organizations must go beyond a simple login, requiring robust validation to confirm the human user and differentiate them from automated bots. Secure defense systems should apply progressive friction based on risk, with no assumption of ongoing authorization within a live session, but a continuous reassessment based on the user’s actions. An advanced liveness detection solution designed to defend against deepfakes can help companies confirm human presence in real time, stopping spoofing attacks before they happen.
- Monitoring across transactions: Instead of analyzing individual interactions and transactions in isolation, it’s crucial for businesses to analyze transactions across their entire network. This comprehensive monitoring enables behavioral analytics to identify patterns of fraudulent behavior and anomalies, detecting repeat attempts and emerging threats, including covert bots, before any damage occurs.
- Layered risk signals: Using layered risk signals throughout the user lifecycle—not just during onboarding—can provide companies with detailed information on potential risks, especially from internal sources like employees who can be fooled or whose access can be hijacked to compromise a company’s key assets. Companies can continuously check the reputation of users’ email addresses, phone numbers, and IP addresses to see if any of those channels have previously been used for fraudulent activity, identifying fraud rings that are deploying AI agents at scale. This process must be governed by strict rules for data handling, collecting only what’s necessary and ensuring personal data is neither stored insecurely nor misused, to meet the privacy expectations of customers.
- Limit the scope of impact: Adopting a zero-trust architecture can help prevent widespread damage from a breach. By segmenting access to critical controls and requiring multiple approvals for high-stakes actions, organizations can limit the effects of an attack.
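The layered-risk-signals idea above can be illustrated with a minimal sketch. The lookup tables stand in for real reputation services, and the one-flag/two-flag decision rule is an assumption for illustration, not a described product behavior.

```python
# Hypothetical sketch: layer independent channel reputation signals
# (email, phone, IP) into one decision. The KNOWN_BAD sets stand in
# for external reputation lookups a real system would query.

KNOWN_BAD = {
    "email": {"fraud-ring@example.com"},
    "phone": {"+1-555-0100"},
    "ip": {"203.0.113.7"},
}

def channel_flags(email: str, phone: str, ip: str) -> list[str]:
    """Return the channels previously associated with fraudulent activity."""
    checks = {"email": email, "phone": phone, "ip": ip}
    return [ch for ch, value in checks.items() if value in KNOWN_BAD[ch]]

def decide(email: str, phone: str, ip: str) -> str:
    """Layering rule (illustrative): one flagged channel warrants manual
    review; two or more independent flags warrant a block."""
    flags = channel_flags(email, phone, ip)
    if len(flags) >= 2:
        return "block"
    if flags:
        return "review"
    return "allow"
```

Note that the sketch only ever touches the identifiers needed for the check, in keeping with the data-minimization point above: collect only what’s necessary, and discard it once the decision is made.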
The future of the AI arms race
As agentic AI evolves and becomes the driving force behind rising fraud rings and deepfakes, companies need to implement a more comprehensive defense strategy that integrates identity intelligence in all their security operations. By doing so, businesses can neutralize AI-driven threats that can bypass traditional security checks.
Those who fail to build a secure defense system now will only grow more vulnerable as fraudsters weaponize new technology and platforms, risking reputational damage and substantial financial losses. Companies that consistently update their defense systems will be better positioned to stay ahead in this technological arms race.
About the author
Alix Melchy is the VP of AI at Jumio, where he leads teams of machine learning engineers across the globe with a focus on computer vision, natural language processing and statistical modeling.
Article Topics
AI agents | AI fraud | deepfakes | digital identity | generative AI | identity verification | Jumio | synthetic identity fraud