Scale of AI fraud makes legacy identity verification inadequate

Sometimes, you just have to tell yourself, “I’m good enough.” Then again, if you’re a digital identity security system, you’d be wrong. A new report from PYMNTS Intelligence and Trulioo, “When ‘Good Enough’ Isn’t Enough,” examines the risks of deploying merely adequate digital identity verification in the age of bots and agents.
The report says the bare-minimum approach costs financial institutions up to $34 billion a year, as they expand into digital channels without accounting for the amplified risk of fraud at scale.
“Just over three in four institutions say identity processes prevent them from expanding customers, markets or geographies, while revenue losses from KYC/KYB failures average 3 percent,” says the report. “Synthetic identity fraud, regulatory violations and account takeover represent the costliest threats.”
Perhaps even worse than the measurable losses are the missed opportunities. “‘Good enough’ identity systems keep financial services firms confident, but not competitive,” the report says.
The overarching message is, don’t get too comfortable with your legacy KYC systems, even if they seem fine to you.
Wise to cool market jets on agentic AI
An article in Payments Journal looks at the “considerable fanfare surrounding the emergence of agentic commerce, followed by a race to build the supporting infrastructure.” Specifically, it seeks insights from the evolution of biometric authentication and digital ID cards.
“None of these technologies,” it says, “are close to achieving ubiquity. That said, this year will bring more frequent and tangible interactions with each of them for consumers, merchants, and financial institutions alike.” 2026 could be the year that biometric payments finally take off, as the technological capability aligns with customer experience, demand and comfort.
The expectation, however, should not be for them to become commonplace. “The point is that people will start to see this in the wild and not just in very controlled environments,” such as pilots and trials.
The same goes for digital identity, according to Christopher Miller, lead emerging payments analyst at Javelin Strategy & Research. Miller says the rollout of digital ID has been “a classic case of uneven awareness and uneven availability. Some states have had this for almost 10 years, and then other states still can’t get out of their own way.”
“You had a chicken and the egg problem,” he said. “Why should merchants go to the trouble of building the infrastructure to accept digital IDs if nobody had digital IDs? Well, why should I get a digital ID if nobody’s going to accept it? It’s a classic problem, but the availability problem is mostly over. We can say with reasonable confidence that within a decade or so, every state is going to issue something.”
Lights brighter than expected for much-hyped agentic commerce
The lessons for agentic commerce point to choice as a necessity, and tempered expectations for adoption – or, in other words, “a more methodical rollout of agentic commerce than some reporting has suggested.”
“Almost nothing was even remotely in production in 2025, which is an important call-out because people talked like it was happening and there were these huge growth waves,” Miller says. “No, false. That sets up 2026 to be the first encounter that many people across the entirety of agentic tech will have with the products, ranging from the consumers using them to the merchants accepting them to the payment processes interacting with them.”
There will be hiccups and corrections. That, he says, “is a natural part of the development of emerging tools – but this is happening under a pretty bright glare.”
Are AI agents the next big insider threat?
Geoff Schomburgk, Yubico’s VP for Asia Pacific and Japan, published a piece late last year citing Office of the Australian Information Commissioner (OAIC) statistics for July to December 2024, which show that human error accounted for nearly 30 per cent of data breaches – underscoring the severity of insider risk.
As cyber tactics become more sophisticated, AI agents could easily become the next big insider threat. “The intersection of human error and increasingly advanced cyber tactics emphasises the fundamental importance of strong identity verification in effective cyber resilience, and especially within the financial services sector,” Schomburgk says.
“Failing to adopt stronger identity verification exposes institutions to regulatory scrutiny, financial losses and reputational damage. In the banking sector, where customer trust is paramount, the impacts of a breach are especially severe.”
Jimmy Astle, director of machine learning at Red Canary, has the following to say regarding agentic AI, in comments sent to Biometric Update:
“Agentic AI is moving out of the lab and into real-world corporate systems – used for scanning documents, augmenting workflows, and taking actions once reserved for humans. That shift has significant ramifications for data privacy, especially if AI tools are deployed without clear governance, strong access controls, and careful oversight.
“Data privacy in the agentic era starts with treating AI like any other user that accesses corporate systems – it must be secured at the identity layer. Organizations should keep their access privileges tight, maintain clear visibility into which data AI agents can retrieve and act on, and control which users are able to prompt them. From there, employees need clear usage policies and security teams should regularly review how their AI systems behave in practice. Privacy checks should also be built directly into user workflows from day one to ensure consistent and widespread compliance.”
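Astle’s advice – secure agents at the identity layer, keep privileges tight, and control which users can prompt them – can be sketched as a deny-by-default authorization check. The agent names, scopes, and function below are purely illustrative assumptions, not any real product’s API:

```python
# Hypothetical sketch: treating each AI agent as a first-class identity
# with its own scoped permissions, and gating which humans may prompt it.
# All names here (agents, scopes, users) are illustrative only.

# Which data each agent identity is allowed to retrieve or act on.
AGENT_PERMISSIONS = {
    "doc-scanner-agent": {"read:invoices", "read:contracts"},
    "workflow-agent": {"read:tickets", "write:tickets"},
}

# Which human users are allowed to prompt which agent.
PROMPT_ALLOWLIST = {
    "doc-scanner-agent": {"alice", "bob"},
    "workflow-agent": {"alice"},
}

def authorize(agent_id: str, user: str, action: str) -> bool:
    """Deny by default: the agent must hold the scope AND the
    prompting user must be on that agent's allowlist."""
    return (
        action in AGENT_PERMISSIONS.get(agent_id, set())
        and user in PROMPT_ALLOWLIST.get(agent_id, set())
    )

# Each decision is a discrete, auditable check - the kind of visibility
# security teams can review when monitoring agent behavior in practice.
print(authorize("doc-scanner-agent", "alice", "read:invoices"))  # True
print(authorize("doc-scanner-agent", "carol", "read:invoices"))  # False
print(authorize("workflow-agent", "alice", "delete:tickets"))    # False
```

Real deployments would back this with an identity provider and scoped credentials rather than in-code dictionaries, but the principle is the same: an agent is a user, and its access is explicit and reviewable.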
Article Topics
agentic commerce | AI agents | cybersecurity | digital identity | financial services | Javelin | Red Canary | Trulioo | Yubico