AI agents raise awkward authorization questions

Weak authentication has long contributed to the volume of account takeover attacks, but Gartner warns that AI agents will halve the time from account exposure to exploitation.
The new capability AI agents bring to the fraud table, Gartner explains, is the automation of more steps, such as social engineering with deepfaked voices.
Tech vendors will respond by introducing products to detect AI agents, the firm forecasts in its report “Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity.” Businesses can boost their protection against AI agent-assisted exploits by moving to phishing-resistant, passwordless multi-factor authentication (MFA) tools such as passkeys.
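The phishing resistance of passkeys comes from origin binding: the authenticator signs the server’s challenge together with the site’s origin, so a signature harvested by a look-alike domain never verifies at the real relying party. A minimal sketch of that logic, with hypothetical names and an HMAC standing in for the asymmetric WebAuthn signature real passkeys use:

```python
import hmac
import hashlib
import secrets

def sign_assertion(credential_key: bytes, challenge: bytes, origin: str) -> bytes:
    """Authenticator signs the challenge bound to the origin it actually sees."""
    return hmac.new(credential_key, challenge + origin.encode(), hashlib.sha256).digest()

def verify_assertion(credential_key: bytes, challenge: bytes,
                     expected_origin: str, assertion: bytes) -> bool:
    """Relying party recomputes over its own origin; a mismatch fails."""
    expected = hmac.new(credential_key, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

key = secrets.token_bytes(32)       # per-credential secret (stand-in for a key pair)
challenge = secrets.token_bytes(16)  # fresh server challenge

# An assertion captured by a phishing domain is useless at the real site:
phished = sign_assertion(key, challenge, "https://examp1e.com")
assert not verify_assertion(key, challenge, "https://example.com", phished)
assert verify_assertion(key, challenge, "https://example.com",
                        sign_assertion(key, challenge, "https://example.com"))
```

Because the origin is part of the signed data rather than something the user types, there is no credential for an AI-driven social-engineering attack to intercept and replay.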
Trulioo CTO Hal Lonas makes a similar point in an opinion piece for CIO, suggesting that the same technology can be used for finding vulnerabilities by both malicious actors and proactive organizations carrying out red-teaming exercises.
Business verification seems like an ideal use case for AI agents, Lonas writes, but because regulatory requirements must be met in different ways depending on the circumstances, addressing this challenge with AI agents could give rise to another challenge: explainability.
Agentic AI is already being applied to payments, for example by Klarna, which, as Anonybit Co-founder and CEO Frances Zelazny points out, ties its actions into identity management.
It also raises the question of how to prove that a request being handled by an AI agent, such as for a financial transaction, is legitimate. Who is responsible if an AI agent is compromised by an injection attack, and the consumer reports that they did not authorize the payment?
Machine-to-machine authentication, Zelazny writes in a LinkedIn post, is a critical emerging challenge. The concept is familiar in IoT networks, enterprise IT and automated business processes, but building authentication into processes that are more dynamic and less structured is a new challenge. Cryptographic mechanisms must be introduced to confirm that a request has not been altered and can be trusted.
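One way such a mechanism can work is for the agent to sign each request over a canonical encoding, so the receiving service can detect any in-flight alteration. A minimal sketch under assumed field names (a production system would typically use asymmetric signatures plus replay protection such as nonces or timestamps, omitted here):

```python
import hmac
import hashlib
import json

def canonical(request: dict) -> bytes:
    """Deterministic encoding so signer and verifier hash identical bytes."""
    return json.dumps(request, sort_keys=True, separators=(",", ":")).encode()

def sign_request(key: bytes, request: dict) -> str:
    return hmac.new(key, canonical(request), hashlib.sha256).hexdigest()

def verify_request(key: bytes, request: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_request(key, request), signature)

key = b"shared-agent-key"  # provisioned out of band between agent and service
payment = {"agent_id": "agent-42", "action": "pay", "amount": 120, "payee": "acme"}
sig = sign_request(key, payment)

assert verify_request(key, payment, sig)
tampered = dict(payment, amount=12000)  # altered in flight
assert not verify_request(key, tampered, sig)
```

The signature answers “was this request altered?”; it does not by itself answer “did the human behind this agent authorize it?”, which is where the identity binding discussed below comes in.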
Verifiable credentials can be part of this answer, Zelazny argues, but the agent must be bound to a user through biometrics to close the “Circle of Identity” – her term for a framework that provides continued trust through the identity lifecycle.
Fortunately, agentic AI can also improve the security, scalability and privacy protection of biometric authentication, according to Chetu Director of Operations Anshu Raj. AI agents can analyze biometric data more effectively for improved recognition rates, for instance, Raj argues in a recent Biometric Update guest post.
User privacy can be protected through technologies like multi-party computation (MPC) and zero-knowledge proofs (ZKPs), which Anonybit’s technology enables.