
AI agents and verifiable credentials: A match made in heaven?

With the arrival of AI agents, businesses are eyeing growing opportunities. One of them is granting AI agents access to our verifiable credentials (VCs) so they can act on our behalf – whether that’s booking a holiday trip to Mexico, submitting a tax report through Google, or setting up a stock trade on Robinhood.

But before AI agents can take over managing our daily tasks through online services, technologists will have to answer some difficult questions. Among them: how can we trust AI agents to accurately represent information about us while interacting with systems, partners and other agents?

“The interactions of verifiable credentials with these agents will be really valuable,” says Diana Jouard, product manager at Ping Identity. Instead of costly integrations and vetting processes, service providers will be able to rely on these cryptographically secure and verifiable claims, she adds.
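As a rough illustration of why such claims are attractive, here is a minimal sketch of issuing and verifying a signed credential, assuming a W3C-style Verifiable Credential and an Ed25519 signature. The DIDs and the “over18” claim are hypothetical, and real deployments use a standardized canonicalization and proof format rather than this simplified one:

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical issuer key; a real issuer would publish the matching
    # public key, e.g. in a DID document, so any verifier can check claims.
    issuer_key = Ed25519PrivateKey.generate()

    # A minimal W3C-style credential about a holder.
    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "issuer": "did:example:issuer-123",  # hypothetical DID
        "credentialSubject": {"id": "did:example:holder-456", "over18": True},
    }

    # Naive canonicalization for this sketch only.
    payload = json.dumps(credential, sort_keys=True).encode()
    signature = issuer_key.sign(payload)

    # A service provider (or another agent) checks the claim directly;
    # verify() raises InvalidSignature if the credential was tampered with.
    issuer_key.public_key().verify(signature, payload)
    print("credential verified")

The point of the design is that the verifier never needs a bespoke integration with the issuer: it only needs the issuer’s public key and the signed claim.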

Jouard spoke during a webinar exploring the intersection of AI agents and digital identity, organized this week by the U.S.-based identity management company.

The topic of AI agents is closely watched in the identity industry. These autonomous systems are usually defined as software tools designed to automate tasks and achieve complex objectives. Experts believe they could become a future digital workforce and even help bolster biometric authentication.

“There’s a lot of disruption possible from this,” says Peter Clay, senior enterprise security architect at business consultancy PA Consulting.

But as with any new technology, companies will have to decide how to approach it in relation to verifiable credentials and identity. Businesses must keep in mind the “good old-fashioned” rules of keeping data secure, verifying identities and applying zero-trust principles, adds Clay.

“You can’t stick your head in the sand and ignore this stuff,” he says. “It’s coming, it’s going to change the way organizations work – hopefully for the better.”

AI agents may also bring unexpected risks. Giving AI agents more power than they should have or allowing them to make decisions based on the wrong data could derail their intentions, explains Fred Kwong, chief information security officer (CISO) of DeVry University in the U.S. state of Illinois.

Using AI agents will require different approaches to access control, including defining what tasks an agent will be allowed to perform and what data it can access. Organizations will need to establish proper controls around who can provide the agent with data and add to its knowledge base, and who can modify, update or delete information when it gets stale.
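As a rough sketch of what such scoping might look like, assuming a simple deny-by-default allowlist model (the task names, data sources and role below are hypothetical):

    from dataclasses import dataclass, field

    # Hypothetical policy record: every task and data source the agent may
    # touch is enumerated up front; anything else is denied by default.
    @dataclass
    class AgentPolicy:
        allowed_tasks: set = field(default_factory=set)
        allowed_sources: set = field(default_factory=set)
        knowledge_editors: set = field(default_factory=set)  # who may add, update or delete knowledge

    policy = AgentPolicy(
        allowed_tasks={"book_travel", "file_expense"},
        allowed_sources={"hr_directory"},
        knowledge_editors={"identity-team"},
    )

    def authorize(policy: AgentPolicy, task: str, source: str) -> bool:
        # Deny by default: the agent gets only what was explicitly granted.
        return task in policy.allowed_tasks and source in policy.allowed_sources

    print(authorize(policy, "book_travel", "hr_directory"))   # True
    print(authorize(policy, "trade_stocks", "hr_directory"))  # False: never granted

Deny-by-default keeps the questions raised here tractable: the organization can audit exactly what an agent was allowed to do, and who was allowed to change what it knows.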

Privacy regulation will also play a role: if an AI model has learned everything there is to know about a user, what will removing that information look like?

“I’m not sure our controls are moving at the same pace that technology is growing,” he says.

The disadvantage of technologies such as Large Language Models (LLMs), which sit at the core of AI agents, is that they can be susceptible to social engineering attacks and attempts by bad actors to manipulate them into sharing information they shouldn’t. AI agents may also be “self-interested,” or structured to advertise certain companies or influence their users’ behavior because of corporate interests, according to Jouard.

“It’s way more complicated,” says PA Consulting’s Clay. “The testing becomes much, much harder, and you’ve got to start thinking about it at a much earlier stage of development than a lot of people realize.”

Although AI agents are still in their infancy, many organizations are considering rolling them out. Businesses, however, will have to consider the justification and use cases for agents, understand whether there is truly a return on investment (ROI), and weigh how much friction an agent could cause against how much it helps the organization, says Kwong.

The rise of quantum computing, predicted to arrive over the next 10 years, may render many of the controls introduced by businesses and organizations antiquated. Combining the power of quantum computing, which has the potential to break some types of encryption, with LLMs could also present challenges in the future.

“That’s a bit of a scary world for me,” says Kwong.
