Generative AI has ushered in a new era of fraud, say reports from Plaid, SEON

The overall transformational impact of the tech we have come to call AI is still up in the air. The tech’s value is predicated on future dominance; at present, its biggest name, OpenAI, is financed by debt. Moreover, when it comes to generative AI, there is significant pushback from those with a stake in creating culture, and a growing sense that many people just don’t like what AI does.
Meanwhile, the tech has undoubtedly arrived; unfortunately, the most convincing proof we have is the proliferation of generative AI-enabled biometric identity fraud. “Generative AI has lowered the barrier to creating fake personas, falsifying documents, and impersonating real people at scale,” says a new report from Plaid, “Rethinking fraud in the AI era.”
“As a result, fraud losses are projected to reach $40 billion globally within the next few years, driven in large part by AI-enabled attacks.”
The warning is familiar. What’s different about Plaid’s approach to the problem is “network insights” – “each person’s unique behavioral footprint across the broader financial and app ecosystem,” understood as a system of relationships and long-standing patterns. In these combined signals, the company says, can be found “a resilient, high-signal lens into intent, risk and legitimacy.”
The paper offers an interesting timeline of recent technological shifts and the fraud prevention tools and techniques that have been developed in response. The dot-com boom relied on identity verification grounded in credit bureau data. As the internet grew, that was replaced by knowledge-based authentication. The gig economy facilitated widespread government ID checks. The digitization of financial services and the emergence of cryptocurrencies necessitated liveness detection. And social media brought behavioral analytics to the fore.
“The industry is overdue for its next wave of fraud-fighting innovation,” the report says. “The question is not whether change is needed, but what unique combination of data, insights, and analytics can meet this moment.”
The AI era needs its weapon of choice, and it needs to work continuously. “AI-driven fraud is exposing the limits of identity controls that were designed for point-in-time verification rather than continuous assurance,” says Sam Abadir, research director for risk, financial crime and compliance at IDC, as quoted in the Plaid report. “As digital interactions become more autonomous and interconnected, institutions need signals that persist across the lifecycle and hold up at scale.”
The report sketches out the kinds of signals Plaid is pointing to: “in practice, a typical sequence might have a certain number of credit transactions, a certain number of merchants paid in a given time period, and a certain number of transfers out of a savings account.” Device attributes and the velocity of data across accounts also offer useful metrics. A sudden departure from these established patterns can trigger a fraud alert.
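To make the idea concrete, the kind of pattern-deviation check described above can be sketched in a few lines. This is a minimal, hypothetical illustration of comparing a user's recent activity against their established baseline; the signal names, thresholds, and z-score approach are assumptions for illustration, not Plaid's actual model.

```python
# Hypothetical sketch: flag sudden deviations from a user's established
# behavioral pattern (e.g. monthly transfers out of a savings account).
# The z-score threshold and signal are illustrative assumptions only.
from statistics import mean, stdev

def flag_anomaly(history, current, z_threshold=3.0):
    """Return True if `current` deviates sharply from the user's history.

    history: per-period counts of one behavioral signal.
    current: the latest period's count.
    """
    if len(history) < 2:
        return False  # not enough data to establish a pattern
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # any change from a flat baseline stands out
    return abs(current - mu) / sigma > z_threshold

# A user who normally makes 2-4 transfers a month suddenly makes 30:
baseline = [3, 2, 4, 3, 2, 3, 4, 2]
print(flag_anomaly(baseline, 30))  # True: sudden deviation from pattern
print(flag_anomaly(baseline, 3))   # False: consistent with pattern
```

In a production system each signal would feed a richer model alongside device attributes and cross-account velocity, but the core intuition is the same: long-standing patterns are hard for a fraudster to fake, so sharp breaks from them are high-signal.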
“Crucially, financial data becomes exponentially more powerful when viewed across networks, not just within a single organization,” Plaid says. Behavior and patterns across the whole ecosystem offer a more complete picture.
Plaid says adapting with improved fraud protection measures is now “mission-critical.” Its Plaid Protect product applies the network insights model to defend against deepfakes, synthetic media and other emerging fraud techniques.
Automation won’t solve everything: SEON
A report from SEON likewise highlights the current fraud conundrum: “fraud and financial crime were supposed to become more manageable as AI matured and automation moved into production,” says a note from CEO Tamás Kádár introducing the “SEON AI Reality Check: 2026 Fraud & AML Leaders Report.”
“Instead, 2026 is the year many leaders are confronting a more complicated reality: AI is working in production, yet the biggest constraints are increasingly the systems, data and operating model around it. In high-growth, high-risk environments, headcount and budgets are not shrinking in step with automation, and operational pressure continues to rise faster than efficiency gains.”
Kádár is not passing judgment on AI, but rather on how teams use it. More does not automatically mean better. “Teams we see are not just deploying more models. They are building a foundational layer, a common protocol, that allows different AI tools and data sources to speak the same language. For agentic workflows to be truly successful, they cannot operate in a vacuum. They require a standardized, unified view of the risk context to make intelligent decisions.”
The overarching message is that “AI is real, embedded and widely trusted, but it has not materially reduced the scope of fraud and AML operations.” Fraud continues to scale, enabled by the same AI boom. “Last year, 56 percent of leaders disagreed or strongly disagreed with the statement, ‘fraud losses are growing faster than our revenue’,” says SEON’s data. “This year, that figure has dropped to 35 percent, a shift that suggests many organizations now feel losses pressing closer to, or even outpacing, growth. As AI becomes a tool for both defenders and attackers, simply keeping up has become a larger, not smaller, task.”
The proposed solution is in keeping with a more holistic conception of identity and fraud prevention, one that does not treat automation as a catch-all. “Winners in the next era of fraud and risk prevention will be the organizations that pair explainable automation with better data foundations, truly unified fraud/AML workflows and teams empowered to act as designers of intelligence, not just operators of tools.”
Article Topics
AI fraud | continuous authentication | financial crime | financial services | fraud prevention | generative AI | identity verification | Plaid | SEON