
Generative AI raises questions about biometric security

Experts and policymakers tackle a future in which new conveniences equal new risks
Yet another sector is raising red flags about the potential harms of artificial intelligence, this time with regard to biometric security. Academics, cybersecurity experts and governments are asking whether generative AI can compromise biometric authentication systems, and what the consequences might be.

A recent podcast from The Economist brought together several prominent voices to discuss the issue and what it means amid the increasing prevalence of biometric authentication in daily life.

“It’s feasible that as generative artificial intelligence comes of age, spoofs of my face or voice could leave the door wide open to hackers,” says Kenneth Cukier, a senior editor of The Economist, on its weekly tech podcast, Babbage. “We have already seen the power of deepfake audio and videos.”

“As biometrics are being used more and more widely, and generative AI improves, what can be done to reduce the risks? What if AI becomes powerful enough to render biometrics nearly obsolete?”

Biometrics are considered secure because they are unique: historically, a faceprint has been far harder to replicate than a password. But that same strength becomes a liability once the data is compromised. Speaking on the podcast, Bruce Schneier, a security technologist at Harvard University and author of A Hacker’s Mind, points out the irony: the relative immutability of fingerprints, retinal scans and face biometrics keeps them secure only as long as they remain protected.

“One of the biggest risks we don’t talk about a lot,” Schneier says, “is that you can’t recover from a failure. If I have a password and my password gets stolen, I can create a new password. That’s easy. If I’m using my thumbprint, and it gets stolen… I kind of can’t get another thumb. Biometrics are not something you can create on the fly. They’re singular, they’re public, and they’re all you’ve got.”

Deepfakes pose another major challenge. Katina Michael, a professor at Arizona State University who studies augmented intelligence and the future of society, says the barrier to entry is falling for the technology needed to create accurate and effective spoofs. “Duping a system, masquerading as someone, getting through defenses, is increasingly becoming possible – especially if we don’t have live detection in the biometrics.”

APAC nations push to regulate controversial uses of AI

The Economist’s conversation is a thought experiment. But governments continue to wrestle with the policy implications of large language models (LLMs) and generative AI.

In South Korea, the Personal Information Protection Commission announced the assembly of a research group to audit and update laws so that they provide adequate protection of biometric data. As reported by the National Law Review, a statement from the commission included the acknowledgment that, “biometric information by its nature is both unique to an individual and immutable” and that “the impact from its misuse or leakage was recognized to be greater.”

South Korea joins China and Singapore among Asia-Pacific nations introducing regulatory guidelines for generative AI systems. Australia is also considering a ban on what it deems to be high-risk instances of AI, including deepfakes and algorithmic bias.

The EU, meanwhile, continues to play a lead role in adopting regulatory frameworks for AI, this time as part of the U.S.-EU Joint Statement of the Trade and Technology Council, published on May 31 following a ministerial meeting in Luleå, Sweden. The rapid growth of AI and other developing technologies took center stage, with the council pledging, as a key outcome, “robust transatlantic cooperation on emerging technologies for joint U.S.-EU leadership.”

“The United States and the European Union are committed to deepening our cooperation on technology issues, including on artificial intelligence (AI), 6G, online platforms and quantum,” reads the statement. “We are committed to make the most of the potential of emerging technologies, while at the same time limiting the challenges they pose to universal human rights and shared democratic values.”

Pointing to the existing Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management as a good start, the new joint statement pledges to create expert groups in three areas of concern. The first will define and standardize the lexicon and taxonomy of AI. The second will work toward cooperation on standards and tools to manage risk, and the third is devoted to monitoring existing and emerging concerns.

In addition to addressing generative AI, the statement says that the U.S. and the EU “are advancing collaboration in the promising area of digital identity and have held a series of U.S.-EU technical exchanges and an event to engage subject matter experts from government, industry, civil society, and academia.”

“We intend to develop a transatlantic mapping of digital identity resources, initiatives, and use cases,” it says, “with the aim of advancing transatlantic pre-standardization research efforts, facilitating interoperability, and streamlining implementation guidance while respecting human rights.”
