Professor asks ‘who thought face biometrics were a good idea?’

Academics work to catch research and policy up with technology
The second wave of algorithmic accountability, as identified by legal scholar Frank Pasquale, entails asking whether facial recognition and other biometric technologies are safe and beneficial enough to be developed at all, a new article from Brookings explains.

Mark MacCarthy, an adjunct professor with Georgetown University’s Communication, Culture & Technology program, writes that after the first wave of algorithmic accountability, in which computer scientists looked into accuracy and bias issues, “advocates and critics are asking developers and users” what benefits certain technologies are delivering, and whether those benefits are worth the attendant risks and tradeoffs. MacCarthy suggests that asking how facial recognition will be used in commercial applications seems like the right approach.

Consent rules and private rights of action act as barriers to values-based technology management, however. The combination of the two has led to numerous costly legal disputes, as in the case of Illinois’ Biometric Information Privacy Act (BIPA).

Those disputes have resulted in debates about what constitutes legal standing that MacCarthy says are bewildering to the average person, are troublesome for companies, and do not reflect rational U.S. policy.

A national strategy, including a pre-emptive federal law and regulation, would be good steps to take, he argues. That regulation would be better served by a federal agency responsible for its implementation and enforcement than by a private right of action. Consent should also be re-examined as a touchstone criterion, as one of its main effects seems to be preventing the public good of improving algorithm performance.

Unfortunately, MacCarthy writes, the legislation recently proposed by Senators Sanders and Merkley would carry many of the problems with BIPA to the national level, but without the consistency of pre-empting state laws.

A better approach, according to MacCarthy, is one that adopts the values-based assessment he favors, which he suggests will find that “central uses of the technology will be questionable because of their underlying purposes.” MacCarthy does not identify what those “central uses of the technology” are. If that proves to be the case, however, he says people may not mind a legal approach that hinders adoption with expensive lawsuits.

There is a better way to manage technology, according to the article, which is to have people sincerely answer the question “who thought this was a good idea?”

Australian professor receives support for AI trustworthiness research

University of New South Wales (UNSW) School of Computer Science and Engineering Scientia Professor of AI Toby Walsh has been named one of 14 Australian Laureate Fellows and received $3.1 million from the Australian Research Council to further pursue his research into how AI systems can be developed that are worthy of human trust.

AI touches people’s lives in ways they may not even be aware of, Walsh tells the UNSW Newsroom, and he is concerned about automation driving increasing inequality. Adverse effects of AI could also include filter bubbles resulting in the proliferation of fake news, or election tampering.

Algorithms can be flawed, but are not held accountable the same way people are, according to Walsh. His research looks into how AI systems can be built and verified as fair, explainable, auditable, and respectful of privacy.

Action is needed at many levels, Walsh says, citing the reticence of some companies to sell facial recognition to police without national laws in place.
