Speed check: KYA sets guardrails for agentic AI’s rapid evolution

By Hal Lonas, Trulioo Chief Technology Officer
There’s no doubt agentic AI will soon deliver thousands of agents serving their human masters across the digital landscape, but the pressing question is: How long will it take those agents to begin replicating complex human decision-making?
It’s one thing to tell an agent to reorder milk once a week. It’s quite another to ask it to book a complete trip for your family.
We juggle so many variables when we book travel. We manage expenses, timing, travel flexibility, destination, activities, weather concerns, number of travelers, accommodations and local transportation.
It’s a complicated process and different from one person to the next. But our brains are really good at breaking down those factors and setting priorities.
Is agentic AI close to that stage in its development? Could an agent manage that level of complexity?
The technology is almost there, but it would need guidance. We would either have to tell it how we behave or let it watch us book travel and ask us questions about the choices we make.
But no matter how the technology evolves, it needs guardrails.
Building trust in AI agents
AI agents can misbehave in several distinct ways.
There is always the concern over whether an agent is working in your best interest. If you turn travel booking over to an AI agent, how do you know you have the best fare possible? How does the agent interact with the airline technology?
Even without an AI agent, people planning a trip often use a VPN and turn off their cookies because they don’t want the fact that they’re shopping for travel to affect pricing. For instance, a recent study found that when some airlines see a single person booking travel, they increase the price because they assume it’s business travel, which is reimbursed.
Trust is the currency of the digital economy, and fairness to consumers will need to be a top priority for AI agents.
Fighting the many faces of fraud
Agentic fraud can take many forms and affect every side of a digital transaction.
Merchants, for instance, have built great defenses against bots, which are basically smart browsers that scrape websites for products and prices to add to a database. Now merchants are suspicious about lowering their defenses when an agent could fraudulently claim to be shopping but really just performing the same bot scraping. It’s a simple but prevalent fraud threat.
Bad actors can elevate the fraud threat by using an agent that falsely claims to be operating on behalf of a legitimate person, or that lacks permission to do what it's doing. Those transactions can hide money laundering, payments fraud or merchandise theft.
On the flip side, a person’s agent might go to what looks like a legitimate merchant, buy something and never receive it. There needs to be a reputational check on the merchant’s agent to prove it can be trusted.
The rise of those fraud threats has sparked a debate around who holds liability in agent interactions.
Today, if someone buys something online with a credit card, the person and card company share some liability for the transaction. If something goes wrong, the card company might agree it was a bad merchant and, as a result, not make the person pay for the transaction. Other times, the cardholder is responsible.
Now, merchants and card companies are worried about shifting liability in an agentic world where people can say the agent made the transaction. So who covers the liability in a transaction under dispute? It’s a big question without an answer right now.
Know Your Agent: The solution starts with identity
There could easily be billions of agents running around in a few years. How are we going to track that many agents and know they’re verified? The only way to do that is through AI and Know Your Agent processes.
Agentic identity verification – whether the agent developer is a merchant, a payments company or a third party – will be the hinge on which a new era of digital transactions turns. That verification then must extend to whoever is using the agent.
I see people eventually going to agent stores, much like app stores, to find the best one to do a job, such as booking travel. An agent store serves as one mechanism to ensure we know the developer of the agent, its intent and that a fraudster hasn’t tampered with it.
The second mechanism is an agent passport, which brings together the developer’s identity, the user’s identity, the intent of the application, guardrails for the agent and financial details, such as a credit card. All of that has to come together in a cryptographic, verifiable way that shows the developer, intent and code have stayed pristine over time.
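To make the agent passport idea concrete, here is a minimal sketch of how such a record might be issued and checked. Everything in it is an assumption for illustration: the field names, the `issue_passport`/`verify_passport` helpers and the shared-key signing are all hypothetical. A production scheme would use asymmetric signatures (such as Ed25519) issued by a trusted verifier; an HMAC stands in here only to keep the sketch self-contained.

```python
import hashlib
import hmac
import json

# Hypothetical "agent passport": a signed record binding the developer,
# the user, the agent's declared intent, its guardrails and a payment
# reference, plus a hash of the agent's code so tampering is detectable.
ISSUER_KEY = b"demo-issuer-secret"  # placeholder for the verifier's signing key


def issue_passport(developer_id, user_id, intent, guardrails, payment_ref, agent_code):
    passport = {
        "developer_id": developer_id,
        "user_id": user_id,
        "intent": intent,
        "guardrails": guardrails,
        "payment_ref": payment_ref,
        # Fingerprint of the agent's code: changes if the code is altered.
        "code_hash": hashlib.sha256(agent_code).hexdigest(),
    }
    payload = json.dumps(passport, sort_keys=True).encode()
    passport["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return passport


def verify_passport(passport, agent_code):
    claims = {k: v for k, v in passport.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    # The signature must match, and the code must still hash to the signed value.
    return (
        hmac.compare_digest(passport["signature"], expected)
        and claims["code_hash"] == hashlib.sha256(agent_code).hexdigest()
    )


code = b"def book_travel(): ..."
p = issue_passport(
    "dev-42", "user-7", "book family travel",
    {"max_spend_usd": 5000}, "card-token-abc", code,
)
assert verify_passport(p, code)  # pristine passport verifies
assert not verify_passport(p, b"tampered code")  # altered code fails
```

The point of the sketch is the binding: because developer, user, intent, guardrails and code hash are all covered by one signature, changing any of them after issuance invalidates the passport.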
The timing of agentic AI’s evolution to complex decision-making remains a critical question. But there’s another, even more important, one: How do we maintain secure, trusted digital transactions during that evolution? The answer always starts with identity.
About the author
Hal Lonas is Chief Technology Officer at Trulioo.