Can AI spot this emotion? It’s tagged deep concern

Maybe the best news to come out of the interviews Microsoft researcher Kate Crawford has been giving about her new book, Atlas of AI, is that she has kept her job after sharing her thoughts on the topic.
Crawford is part of a dismally small (and all-woman) set of industry insiders from high-profile companies calling out the fallacies and shortcomings of AI development and marketing.
Two Google executives tasked with making sure ethics was woven into their employer's AI development were not fortunate enough to keep their jobs after doing precisely what they were charged with doing.
Crawford is a senior principal researcher at Microsoft Research as well as a research professor of communication and science and technology studies at the University of Southern California. Atlas of AI examines the costs and payoffs of algorithm-augmented life.
The term AI is too abstract for most people (including a lot of CEOs), and that is a problem given how it increasingly affects lives, from content recommendation to biometric surveillance.
It is being marketed, by Microsoft and others, as green, unbiased, democratic and largely ready to take on life-and-death matters, but the industry is wrong about that, according to Crawford.
Interviewed in The Guardian, Crawford said that in writing her book she went to extremes (visiting, for instance, a mine that produces raw materials bound for the tech industry) to understand for herself the costs, impacts and intellectual effort behind algorithmic decision-making.
It is her position that most people think of AI as machines neutrally chewing through innocent bits of data, when in fact a great deal of underappreciated and unseen human labor goes into the end product.
AI is really a human endeavor making digital systems seem autonomous, according to Crawford.
And as with everything created by people, it is biased.
Industry marketing tries to convince prospective buyers and regulators that any bias in coding is insignificant, easily eliminated by tapping ever-bigger databases to train algorithms.
The message seems to be that accidental and malicious bias was a problem until, say, 2020; new, perfectly balanced databases will dilute the substandard material. End of problem.
Crawford seems to be leaning toward some kind of mass filtering of public and private databases, rigid standards to minimize bias and a recognition by everyone involved that bias cannot be eliminated.
She is far less optimistic about emotion recognition, or affective computing, an area in which Microsoft is active. Crawford has company in her misgivings.
In the Guardian interview, she says claims that thoughts, intentions, urges and plans can be read from facial expressions are deeply flawed. Unreliable as the technology might be, the market for it is predicted to reach $37.1 billion.
A Vice article found four software companies (Cerence, Eyeris, Affectiva and Xperi) selling or preparing to sell emotion-detecting algorithms to carmakers.
Those are relatively small fry, though. Microsoft has been researching and developing emotion recognition since at least 2015, and it is a feature of the company's Face API, part of Azure Cognitive Services.
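For a sense of how casually accessible this capability is, here is a minimal sketch of querying emotion scores through the Face API's REST detect endpoint; the endpoint, subscription key and image URL are placeholders, and the response shape assumes the v1.0 API surface as documented.

```python
# Minimal sketch: requesting emotion scores from Azure Face API v1.0.
# ENDPOINT, KEY and IMAGE_URL are placeholders for a provisioned Face
# resource and a publicly reachable image; error handling is minimal.
import requests

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder
KEY = "YOUR-SUBSCRIPTION-KEY"                                   # placeholder
IMAGE_URL = "https://example.com/face.jpg"                      # placeholder

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={"returnFaceAttributes": "emotion"},
    headers={"Ocp-Apim-Subscription-Key": KEY,
             "Content-Type": "application/json"},
    json={"url": IMAGE_URL},
)
response.raise_for_status()

# Each detected face comes back with a confidence score per emotion
# label; the service reduces a face to a handful of scored categories,
# which is exactly the framing Crawford and others question.
for face in response.json():
    emotions = face["faceAttributes"]["emotion"]
    top = max(emotions, key=emotions.get)
    print(f"Top emotion: {top} ({emotions[top]:.2f})")
```

A few lines of code, in other words, return a labeled verdict on a person's inner state, with all the caveats Crawford raises hidden behind a tidy score.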
So, it is ominous that developers of AI, particularly of emotion recognition, continue to steamroll over concerns about bias, performance and maybe even legitimacy. But Microsoft, at least, allows an employee to sound the alarm.
That’s progress, right?
Article Topics
accuracy | affective biometrics | AI | biometrics | biometrics research | emotion detection | emotion recognition | Microsoft | standards