
AI agents and verifiable credentials: A match made in heaven?

With the arrival of AI agents, businesses see new opportunities taking shape. One of them is granting AI agents access to our verifiable credentials (VCs) so they can act on our behalf – whether that means booking a holiday trip to Mexico, submitting a tax report through Google, or setting up a stock trade on Robinhood.

But before AI agents can take over managing our daily tasks through online services, technologists will have to answer some difficult questions. Among them: how can we trust AI agents to accurately represent information about us while interacting with systems, partners and other agents?

“The interactions of verifiable credentials with these agents will be really valuable,” says Diana Jouard, product manager at Ping Identity. Instead of costly integrations and vetting processes, service providers will be able to rely on these cryptographically secure and verifiable claims, she adds.
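The "cryptographically secure and verifiable claims" Jouard describes follow the general shape of W3C verifiable credentials: an issuer signs a set of claims, and any verifier can later check the signature without contacting the issuer. The sketch below is purely illustrative – real VCs use public-key signatures (e.g. Ed25519) so verifiers never hold the signing secret, whereas this stand-in uses a symmetric HMAC for self-containedness, and the field names only loosely follow the W3C data model:

```python
import hashlib
import hmac
import json

# Stand-in for the issuer's signing key. Real verifiable credentials use
# asymmetric keys (e.g. Ed25519), so verifiers never hold a signing secret.
ISSUER_KEY = b"issuer-demo-key"

def sign_credential(claims: dict) -> dict:
    """Issue a credential: serialize the claims and attach a proof."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"credentialSubject": claims, "proof": proof}

def verify_credential(credential: dict) -> bool:
    """Check the proof against the claims, with no call back to the issuer."""
    payload = json.dumps(credential["credentialSubject"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

vc = sign_credential({"id": "did:example:alice", "over18": True})
assert verify_credential(vc)             # untampered credential checks out
vc["credentialSubject"]["over18"] = False
assert not verify_credential(vc)         # any tampering invalidates the proof
```

This is what replaces the "costly integrations and vetting processes": the relying party needs only the verification key and the credential itself, not a bilateral agreement with every issuer.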

Jouard spoke during a webinar exploring the intersection of AI agents and digital identity, organized this week by the U.S.-based identity management company.

The topic of AI agents has become closely watched in the identity industry. These autonomous systems are usually defined as software tools designed to automate tasks and achieve complex objectives. Experts believe they could become a future digital workforce and even help bolster biometric authentication.

“There’s a lot of disruption possible from this,” says Peter Clay, senior enterprise security architect at business consultancy PA Consulting.

But as with any new technology, companies will have to decide how to approach it in relation to verifiable credentials and identity. Businesses must keep in mind the “good old fashioned” rules of keeping data secure, verifying identities and applying zero-trust principles, adds Clay.

“You can’t stick your head in the sand and ignore this stuff,” he says. “It’s coming, it’s going to change the way organizations work – hopefully for the better.”

AI agents may also bring unexpected risks. Giving AI agents more power than they should have or allowing them to make decisions based on the wrong data could derail their intentions, explains Fred Kwong, chief information security officer (CISO) of DeVry University in the U.S. state of Illinois.

Using AI agents will require new approaches to access control, including defining which tasks an agent is allowed to perform and which data it can access. Organizations will need to establish proper controls around who can feed the agent data and add to its knowledge base, and who can modify, update or delete information when it gets stale.
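The controls Kwong describes – which tasks the agent may perform and which data it may touch – can be expressed as an explicit, deny-by-default allow-list checked before every action. A minimal sketch, with hypothetical task and scope names:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Explicit allow-lists for an AI agent's tasks and data scopes."""
    allowed_tasks: set = field(default_factory=set)
    allowed_data_scopes: set = field(default_factory=set)

    def authorize(self, task: str, data_scopes: set) -> bool:
        # Deny by default: the task and every requested scope must be listed.
        return (task in self.allowed_tasks
                and data_scopes <= self.allowed_data_scopes)

policy = AgentPolicy(
    allowed_tasks={"book_travel", "file_expense"},
    allowed_data_scopes={"calendar:read", "payment:charge"},
)

assert policy.authorize("book_travel", {"calendar:read"})
# An unlisted task or scope is refused, however the request is phrased.
assert not policy.authorize("trade_stocks", {"payment:charge"})
assert not policy.authorize("book_travel", {"email:read"})
```

The point of the deny-by-default check is that a manipulated or "self-interested" agent cannot talk its way into an action the policy never listed – the enforcement sits outside the model.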

Privacy regulation will also play a role: If an AI model has learned everything there is to know about a user, what will removing that information look like?

“I’m not sure our controls are moving at the same pace that technology is growing,” he says.

The disadvantage of technologies such as Large Language Models (LLMs), which sit at the core of AI agents, is that they can be susceptible to social engineering attacks and to attempts by bad actors to manipulate them into sharing information they shouldn’t. AI agents may also be “self-interested,” structured to advertise certain companies or influence their users’ behavior because of corporate interests, according to Jouard.

“It’s way more complicated,” says PA Consulting’s Clay. “The testing becomes much, much harder, and you’ve got to start thinking about it at a much earlier stage of development than a lot of people realize.”

Although AI agents are still in their infancy, many organizations are considering rolling them out. Businesses, however, will have to weigh the justification and use cases for agents, and understand whether there is truly a return on investment (ROI) and how much friction an agent could cause compared to the help it offers, says Kwong.

The rise of quantum computing, predicted to arrive over the next 10 years, may render many of the controls businesses and organizations introduce today antiquated. Combining the power of quantum computing, which has the potential to break some types of encryption, with LLMs could also present challenges in the future.

“That’s a bit of a scary world for me,” says Kwong.
