AI agents alone can’t be trusted in verification

By Eugeny Malyutin, Head of LLM at Sumsub

Agentic AI, autonomous systems that can complete entire tasks without human help, is being promoted as the next frontier in artificial intelligence, with over $2.5 billion funneled into startups building AI agents so far this year.

But there’s a growing gap between the promise and the reality. Too often, these systems are being sold as replacements for human oversight, rather than tools that support it. That’s not just misleading – in many high-risk and highly regulated sectors, such as financial and banking technologies, it’s dangerous.

AI is still incredibly useful. It’s great at spotting unusual patterns in massive datasets, flagging recurring suspicious behaviors, and picking up subtle details that humans miss, such as the telltale signs that an image is a deepfake. But these benefits only hold if AI is used responsibly, with human oversight at key points to catch blind spots, double-check results, and avoid cascading failures.

The main issues with AI agents

One of the biggest risks comes from what’s known as compounding errors. Even a system that is 95% accurate at each step becomes far less reliable once its outputs feed a chain of dependent decisions: after five such steps, end-to-end accuracy falls to roughly 77% (0.95 to the fifth power), and it keeps falling from there. Unlike human teams, these systems don’t raise flags or signal uncertainty. That’s what makes them so risky: when they fail, they tend to fail silently, and the errors multiply.
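The arithmetic behind that claim can be sketched in a few lines. This is an illustrative model only: it assumes each step in the chain succeeds independently with the same per-step accuracy, which is a simplification of how real agent pipelines fail.

```python
# Illustrative sketch: end-to-end accuracy of a chain of decisions,
# assuming each step is independent and equally accurate.
def chain_accuracy(per_step: float, steps: int) -> float:
    """Probability that every step in the chain succeeds."""
    return per_step ** steps

for n in range(1, 6):
    print(f"{n} steps: {chain_accuracy(0.95, n):.0%}")
# 1 steps: 95%
# 2 steps: 90%
# 3 steps: 86%
# 4 steps: 81%
# 5 steps: 77%
```

The point of the sketch is the shape of the curve, not the exact numbers: per-step accuracy compounds multiplicatively, so even a highly accurate model degrades quickly when it is chained without checkpoints.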

Take the case of Replit’s AI coding assistant, which was found deleting codebases, fabricating reports, and covering up its own mistakes. That wasn’t a glitch: it was a sign of what can happen when autonomous systems are allowed to operate without supervision. In sectors where accuracy and accountability are non-negotiable, such as finance or identity verification, these failures can have enormous consequences: regulatory fines, reputational damage, or permanent loss of clients.

Another major issue is visibility. The more complex or proprietary an AI system is, the harder it becomes to understand how it reached a decision. Relying fully on AI agents for one process – especially when trained on private datasets or wrapped inside third-party software – risks turning critical workflows into black boxes. When something goes wrong, it may be impossible to know when, how or why.

AI alone can’t outpace 2025’s fraud landscape

This opacity is particularly dangerous in the fight against fraud, which is only getting more advanced. In 2025, fraudsters aren’t using fake passports and bad Photoshop. They’re using AI-generated identities, videos, and documents that are nearly impossible to distinguish from the real thing. Tools like Google’s Veo 3 or open-source image generators allow anyone to produce high-quality synthetic content at scale.

These new tools are already changing the global fraud landscape. According to recent data, traditional document forgeries have fallen 46% globally, yet in Europe they actually rose 33%. It’s a sign of how varied and fast-changing fraud threats have become.

Fully autonomous AI systems struggle to keep up with this pace. They’re often trained on past data and lack the flexibility to deal with new, adaptive fraud tactics or zero-day vulnerabilities. The more businesses rely on third-party agents without close monitoring, the more likely these gaps will be exploited.

Evolving regulatory risk

That’s more important than ever as regulation tightens. Under the UK’s new Online Safety Act, businesses face fines of up to £18 million – or 10% of global revenue – for failing to meet verification duties, such as age checks or risk assessments. The penalties grow if regulators find systemic non-compliance or signs that checks were conducted using flawed or fabricated processes. AI agents operating without oversight could make these problems worse, not better.

This isn’t just about misinformation or potentially harmful content online. In 2025, several new regulations will raise the bar for verification across multiple sectors. The UK’s Economic Crime and Corporate Transparency Act (ECCTA) will require mandatory ID checks for company directors starting in autumn. Later in the year, the Data (Use and Access) Act will introduce new standards for digital identity providers. In this environment, the only realistic, scalable, and safe approach is hybrid: a layered system where AI and humans work together.

Where AI really adds value

Responsible and effective use of AI means using multiple models to cross-check results to avoid the domino effect of one error feeding into the next. It means assigning human reviewers to the most sensitive or high-risk cases – especially when fraud tactics evolve faster than models can be retrained. And it means having clear escalation procedures and full audit trails that can stand up to regulatory scrutiny.
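One way to picture that layered setup is as a simple decision policy: several models score a case, disagreement between them or a high-risk flag escalates to a human reviewer, and every step is recorded for audit. The sketch below is hypothetical; the function names, thresholds, and score format are illustrative assumptions, not a description of any vendor’s actual system.

```python
# Hypothetical sketch of a hybrid review policy: cross-checking models,
# human escalation for risky or ambiguous cases, and a full audit trail.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Decision:
    verdict: str                      # "approve", "reject", or "escalate"
    audit_trail: list = field(default_factory=list)

def review_case(model_scores: list[float], high_risk: bool,
                approve_at: float = 0.9, reject_at: float = 0.1,
                agreement_margin: float = 0.2) -> Decision:
    # Log every model's score so the decision can be audited later.
    trail = [f"model_{i}: score={s:.2f}" for i, s in enumerate(model_scores)]

    # Cross-check: if the models disagree, or the case is flagged as
    # high risk, route it to a human reviewer instead of auto-deciding.
    models_agree = max(model_scores) - min(model_scores) < agreement_margin
    if high_risk or not models_agree:
        trail.append("escalated to human reviewer")
        return Decision("escalate", trail)

    # Models agree and risk is low: decide automatically, but only at
    # confident extremes; anything in between still goes to a human.
    avg = sum(model_scores) / len(model_scores)
    if avg >= approve_at:
        verdict = "approve"
    elif avg <= reject_at:
        verdict = "reject"
    else:
        verdict = "escalate"
    trail.append(f"auto verdict: {verdict} (avg={avg:.2f})")
    return Decision(verdict, trail)
```

The design choice worth noting is that automation is the narrow path, not the default: the system only acts alone when independent models agree and confidence is extreme, which is one way to contain the compounding-error problem described earlier.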

This hybrid model offers the best of both worlds: the speed and scale of AI, combined with the judgment and flexibility of human experts. As fraud becomes more sophisticated, this balance will be essential.

Companies operating across multiple markets, languages, and document types will also have a clear advantage. Their systems will be trained on more diverse data, making their AI tools more capable and more accurate.

About the author

Eugeny Malyutin is Head of LLM at Sumsub, where he develops scalable AI solutions to enhance user verification and combat digital fraud. With over 8 years of experience in machine learning and data engineering, Malyutin has a proven track record in building high-load systems, recommendation engines, and social network analytics for leading tech companies.
