Affective biometrics: Too new, too inaccurate for police work, says think tank

Most people knowledgeable about face biometrics have at least some misgivings about how it is used. It is safe to say that most people who know about affective biometrics have misgivings about it being used at all.

Affective biometric systems try to discern moods, future actions and deception by analyzing expressions and body language with algorithms. The market for such systems is growing.

Many people feel they might eventually trust facial recognition with their liberty and life because they believe it will be a simple yes or no transaction: I was not at the bank when it was robbed, and computer vision will prove that.

But who trusts their face to be blank, particularly during unguarded moments, of which everyday life is full?

Could my sideways glance become the deciding evidence of larcenous greed or malicious lust in the eyes of software and, eventually, a hiring manager or a judge?

The center-left think tank the Brookings Institution sees affective biometrics as far too experimental for law enforcement to use in ascertaining guilt, much less in predicting criminal actions, as some boosters claim it can when promoting the technique.

Brookings, in fact, has called for a ban on federal use of affective biometrics. This is hardly new turf for the think tank. It continues to offer guidance on biometric regulations in the United States and beyond.

Genuflecting at the altar of possible virtuous outcomes, researchers in a post note that audio and video systems can pick out some military veterans who might try to kill themselves. Video systems can diagnose fatigue in drivers, too.

But tasks like spotting lies or criminal intent, in the words of Brookings, “are clearly beyond the capacity of affective computing.”

Canada tried and abandoned Avatar, its experimental algorithmic interviewing project, according to the post, which lists other black eyes for the technique.

The AI Now Institute at New York University two years ago urged a ban on government and private sector use. Institute leaders found that the science lags far behind the marketing of, for example, apps sold by HireVue.


