A market for emotion recognition grows without tackling deep public concerns
Private and government researchers see few unassailable obstacles to — and plenty of money in — making emotion recognition a commonplace AI analysis tool. But one essential building block continues to be studiously overlooked by insiders.
There is little new in the concerns raised in recent news coverage about algorithms that go beyond biometrically identifying people to try to discern their emotions.
Indeed, the fact that the emotion recognition industry pushes ahead with invasive and covert-friendly monitoring products without addressing a growing public trust deficit is becoming the news.
Plain vanilla facial recognition technology is raising hackles in the United States and the European Union. Increasingly uneasy citizens fear real, personal harm from abuse in government and commercial deployments.
This is unlikely to be another anti-vaccination movement, in which some people get worked up over fact-free alarmist messaging casting doubt on decades-old laboratory science. Simply walking out one's front door can make someone an unwitting participant in the surveillance infrastructure. The pool of outrage over privacy intrusions could dwarf today's loud minority of anti-vaxxers.
OneZero, an online publication examining how technology is impacting and will impact people, published an article this month finding that little more than junk science and overheated marketing supports the growing accuracy claims of startups.
Another publication, Global Government Forum, analyzed a paper by UK-based human rights organization Article 19 arguing that AI emotion readers are illegal under existing international law.
Development and doubts go back years. In the 1970s, psychologist Paul Ekman began research on spotting what he called micro expressions.
In 2017, an American Civil Liberties Union report written up in The Guardian claimed that "dubious behavioral science" underpinned a U.S. Transportation Security Administration emotion recognition system and could "easily give way to implicit or explicit bias."
Manual efforts to read emotions in a person's face and mannerisms were rolled out in U.S. airports in 2007. Psychologists have been poking holes in the notion of digital or human face-reading ever since.
Government officials in the West keep the faith, if for no other reason than that they do not want to be explaining, after a terror attack, why a potentially effective program, however unrealistic, had been zero-funded.
China's authoritarian regime is deploying such applications with abandon, because they might work and because there is no meaningful public resistance to government surveillance.
And while businesses are far less aggressive in investing in emotion recognition today, consumer-facing firms never stop doing the math that leads to a sale. Cautious advertising and marketing executives are biding their time until the risk-reward equation shifts.
It would not be the first dodgy concept that they have embraced. See subliminal messaging.
In fact, technology analyst firm Market Research Engine published an extensive paper on emotion recognition in January, predicting that the market will grow from $5 billion in 2015 to $85 billion in 2025.
The companies lured into the market include Hong Kong-based Find Solution AI.
Noldus, in the Netherlands, sells algorithms that calculate “action units” that executives say reveal shoppers’ inner feelings. And another Northern European company, Sweden-based Visage Technologies, offers an app capable of reading emotions, estimating age and calculating gender.