Another wet blanket on heated claims about emotion recognition
Claims that facial recognition algorithms can discern emotions are harder to believe in the wake of new research.
In fact, not even the human mind can read emotions from facial expressions alone unless it knows the context in which an expression was made, according to a paper written by a pair of researchers from the Massachusetts Institute of Technology and Katholieke Universiteit Leuven.
The scientists began their emotion recognition research with a common industry assumption, for example that a frown can only mean sadness. At least some previous studies of how people recognize emotions photographed models, not professional actors, who were asked simply to pose a generic emotion with their faces.
AI has been trained this way since the 1990s, when an MIT Media Lab professor published a report called Affective Computing. Retail analysts have also called the category sentiment analysis.
The software looks for knit brows, grimaces and wide eyes, for example, and assigns an emotion to each.
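In rough terms, that approach amounts to a fixed lookup from facial cues to emotion labels. The sketch below is purely illustrative, with made-up cue names rather than any vendor's actual detector, but it captures the context-free assumption the researchers are questioning.

```python
# Illustrative sketch only: a naive, context-free mapping from detected facial
# cues to emotion labels, the one-expression-one-emotion assumption the
# researchers criticize. The cue names and inputs are hypothetical.

NAIVE_RULES = {
    "knit_brows": "anger",
    "grimace": "disgust",
    "wide_eyes": "fear",
    "downturned_mouth": "sadness",
    "raised_lip_corners": "happiness",
}

def classify_expression(detected_cues):
    """Return the first emotion whose cue is present, ignoring all context."""
    for cue, emotion in NAIVE_RULES.items():
        if cue in detected_cues:
            return emotion
    return "neutral"

# Wide eyes read as "fear" here, even if the person is actually reacting to a
# near-miss, a joke, or a bout of food poisoning.
print(classify_expression({"wide_eyes", "raised_eyebrow"}))  # -> "fear"
```

The point of the toy rules is not how any given product is built, but that a lookup of this kind has no way to represent what the subject is actually thinking.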
But as any couple that has survived many years together can attest, a raised eyebrow is not just a raised eyebrow. The viewer cannot accurately infer an expression’s meaning without knowing what the subject is thinking.
That is context, and the best way to know another’s thoughts for sure is to ask. Even then, the information volunteered is filtered at best.
Someone laughing in surprise after seeing an auto accident at a street corner might somehow be responsible for causing the mishap. Or that person could be realizing that a coincidence had prevented her from being in the street at the wrong time.
Cultural context also gets mixed in with physical responses to emotion. Some cultures bear sorrow with stoicism, while in others grief is released in frenetic wailing that is infectious.
It is hard, maybe impossible, to code for these variables.
And then there are biases. A 2019 article in the Harvard Business Review cited a study in which algorithms assigned more negative emotions to people of some ethnicities than to others.
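Findings like that typically rest on audits that compare how often a model assigns negative labels across demographic groups. The sketch below is an illustrative version of such a tally only, with invented group names and data, not the method of the study the HBR article cited.

```python
# Illustrative bias audit sketch: compare the rate of negative emotion labels
# a model assigns across demographic groups. Groups and data are made up.
from collections import defaultdict

NEGATIVE = {"anger", "contempt", "disgust"}

def negative_label_rate(predictions):
    """predictions: iterable of (group, predicted_emotion) pairs."""
    totals, negatives = defaultdict(int), defaultdict(int)
    for group, emotion in predictions:
        totals[group] += 1
        if emotion in NEGATIVE:
            negatives[group] += 1
    return {group: negatives[group] / totals[group] for group in totals}

sample = [("group_a", "happiness"), ("group_a", "anger"),
          ("group_b", "anger"), ("group_b", "contempt")]
print(negative_label_rate(sample))  # -> {'group_a': 0.5, 'group_b': 1.0}
```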
The researchers suggest that future developers abandon still images in favor of dynamic stimuli and seek out a greater spectrum of cultural contexts.
As it is, governments and businesses could be depending on emotion recognition to make faulty or even disastrous decisions, such as whether to search the man in line whose gritted teeth raise algorithmic suspicion but who actually is fighting a losing battle with food poisoning.
A company called ParallelDots makes a consumer-analysis system called ShelfWatch to note expressions in stores.
Another firm, Oxagile, says its algorithms boost buyers’ “operational efficiency.”
Lists of emotion recognition firms around the globe can be found here and here.
In 2018, industry analyst firm Gartner predicted that 10 percent of personal devices would be capable of recognizing emotions. The algorithms would be used to help diagnose mental and emotional ailments and to customize schooling for children.
Today, one of the selling points automakers cite when discussing facial recognition aimed at drivers is spotting expressions of rage, which could be triggered by compelling world news or a missed goal in the playoffs.
Even some industry insiders see such claims as optimistic, both in terms of timing and technical capability.
Microsoft researcher Kate Crawford, promoting her book Atlas of AI, said flawed thinking goes into marketing claims that machine learning soon will be able to suss out intentions, urges, plans and the like from facial recognition. It is too complex, she said. Microsoft is involved in the field, too.