Academics work to catch research and policy up with technology
Professor asks ‘who thought face biometrics were a good idea?’

The second wave of algorithmic accountability, as identified by legal scholar Frank Pasquale, entails asking whether facial recognition and other biometric technologies are safe and beneficial enough to be developed at all, a new article from Brookings explains.

Mark MacCarthy, an adjunct professor in Georgetown University’s Communication, Culture & Technology program, writes that after the first wave of algorithmic accountability, in which computer scientists examined accuracy and bias issues, “advocates and critics are asking developers and users” what benefits certain technologies are delivering, and whether those benefits are worth the attendant risks and tradeoffs. MacCarthy suggests that asking how facial recognition will be used in commercial applications seems like the right approach.

Consent rules and private rights of action act as barriers to values-based technology management, however. The combination of the two has led to numerous costly legal disputes, as in the case of Illinois’ Biometric Information Privacy Act (BIPA).

Those disputes have produced debates over what constitutes legal standing, which MacCarthy says are bewildering to the average person, troublesome for companies, and not reflective of rational U.S. policy.

A national strategy built on a pre-emptive federal law and regulation would be a good step, he argues. That regulation would be better served by a federal agency responsible for its implementation and enforcement than by a private right of action. Consent should also be re-examined as a touchstone criterion, since one of its main effects appears to be blocking the public good of improved algorithm performance.

Unfortunately, MacCarthy writes, the legislation recently proposed by Senators Sanders and Merkley would carry many of the problems with BIPA to the national level, but without the consistency of pre-empting state laws.

A better approach, according to MacCarthy, is the values-based one he favors, which he suggests will find that “central uses of the technology will be questionable because of their underlying purposes.” MacCarthy does not identify what those central uses are. If that assessment holds, however, he says people may not mind a legal approach that hinders adoption with expensive lawsuits.

The better way to manage technology, according to the article, is to have people sincerely answer the question “who thought this was a good idea?”

Australian professor receives support for AI trustworthiness research

University of New South Wales (UNSW) School of Computer Science and Engineering Scientia Professor of AI Toby Walsh has been named one of 14 Australian Laureate Fellows and awarded $3.1 million by the Australian Research Council to pursue his research into how to develop AI systems that are worthy of human trust.

AI touches people’s lives in ways they may not even be aware of, Walsh tells the UNSW Newsroom, and he is concerned about automation driving increasing inequality. Adverse effects of AI could also include filter bubbles resulting in the proliferation of fake news, or election tampering.

Algorithms can be flawed, but are not held accountable the same way people are, according to Walsh. His research looks into how AI systems can be built and verified as fair, explainable, auditable, and respectful of privacy.

Action is needed at many levels, Walsh says, citing the reticence of some companies to sell facial recognition to police without national laws in place.
