The possibilities of biometric surveillance are growing, and so are unsettling questions, critics allege

Three recent articles, two of them journalistic and one from a research institute, have looked at questionable biometric surveillance tools and found reason to worry about how their use can be reconciled with a reasonable person's expectation of privacy. Emotion recognition and demographic classification are identified as particularly threatening.
One of the three pieces, in fact, harshly judges the first significant stab at comprehensive regulation of AI in high-risk applications.
That article, published by AI research house the Ada Lovelace Institute, points to the progress of government efforts to control face biometrics as an indicator of how AI as a whole might be reined in.
Draft EU legislation made public in late April suffers from “an outdated distinction between identification and classification” and depends too much on industry self-regulation and certification, according to the institute.
Indiscriminate identification and categorization are treated as separate activities, though both violate privacy and can harm individuals when they mischaracterize them.
For instance, institute researchers say that the draft would allow algorithms to categorize people remotely and in real time as a particular ethnicity or sexual orientation based on their appearance alone.
Likewise, they found that even as framers of the document tried to anticipate near-future technological change and use, they "fail[ed] to adequately grapple with the risks of emotion recognition."
The article warns (as others have) of the danger of trusting machine vision code to discern a person’s mood. Some have likened the effort to phrenology on a scale that 19th century probers of head bumps could never have imagined.
A new piece in The Atlantic about the controversial case for emotion recognition draws another analogy.
In 1967, a U.S. psychologist traveled to a remote tribe in Papua New Guinea to see if he could prove that humans have a few universal emotions. The article recounts the frustration everyone in the experiment experienced.
The psychologist was ill-prepared to work productively with the tribe, and how the extremely isolated people actually reacted to the effort is anyone's guess.
In other words, it is easy to underestimate the complexity of something as simple as a smile. That is not dissuading billions of dollars from being invested in emotion recognition systems — offshoots of facial recognition software.
The technology is “based on questionable methodologies” that pose a threat because they already are being deployed.
As noted in the article, job applicants today are being rejected for positions because "their facial expressions or vocal tones don't match those of other employees." Without real skepticism on the part of business owners and public officials, perceived emotions could become one more piece of digital evidence that can forever alter lives.
The third piece, in The Conversation, adopts a similar tone regarding voice recognition. Its author proposes banning virtually all uses of voice biometrics outside of identity authentication.
In particular, voice profiling should never be used in marketing interactions with individuals, in political campaigns, or for government tasks conducted without warrants.
That might be an overreaction, but, as the article explains, some scientists say they can use algorithms to infer a person's health, gender, race, weight and height from their voice.