Skepticism and worry about emotion recognition are no turnoff for entrepreneurs
As happens with all promising new tech, initial coverage of emotion recognition tended to be rosy until a justifiably alarmist wave of opinions arrived.
Some believe emotion recognition could play a role in ethical AI, but even industry insiders — developers, executives, policy wonks — are not optimistic.
A June Pew Research Center study found that 70 percent of the experts surveyed said ethical code written for the public good will not be a part of most AI by 2030.
If emotion recognition, or affective computing, survives, a third wave might rise with convincingly practical roles for the idea despite misgivings.
Has it reached that stage? Can developers create AI that can ethically, productively and accurately tease out a person’s feelings?
That is impossible to say right now, but some recognizable names in ethical AI are associating themselves with early product development.
The Washington Post, which is owned by AI proponent and Amazon founder Jeff Bezos, this week ran a largely optimistic piece spotlighting former Google data researcher Alan Cowen, who has formed Hume AI.
Cowen has some fans with money. Venture capital firm Aegis Ventures invested $5 million in Hume AI.
He also has created the Hume Initiative, a non-profit charged with developing methods to regulate what he calls empathic AI.
A set of initiative guidelines reportedly was written by Ben Bland, chair of an IEEE committee working on emotion AI standards; Danielle Krettek Cobb, who founded Google’s Empathy Lab; and Dacher Keltner, a psychology professor at the University of California, Berkeley.
Also involved with the Hume Initiative is Massachusetts Institute of Technology researcher Karthik Dinakar.
According to the Post story, Hume AI is training its AI platform on “hundreds of thousands of facial and vocal expressions from around the world” to make algorithms “more empathetic and human.”
It is unclear how deeply Hume AI will go in its search for an ethical product. Most if not all of the massive photographic training datasets in use today have dubious sourcing histories.
Other efforts have found favor with the U.S. National Institutes of Health and the National Science Foundation, both hoping their grants can get emotion recognition products to consumers.
One such effort is research by Adela Timmons, director of the Technological Interventions for Ecological Systems Lab at Florida International University.
Timmons’ team has developed models that predict when two people are likely to fight, with an 86 percent accuracy rate. Future work is expected to look at larger family dynamics.
And there is no shortage of people posing critical questions to would-be innovators, including Hume AI and its non-profit sibling. A VentureBeat article this week made it clear that some informed observers feel this is not the beginning of a redemptive wave for emotion recognition.
There is skepticism also in an affective computing article published last month in Wired. The piece looked at some roles AI is being pressed into.
One focus is a tool, OurFamilyWizard, created to help divorced couples communicate without overtones of loathing. As Wired pointed out, there are a number of similar algorithm-based tools: TalkingParents, Amicable, coParenter and Cozi.
So, whether or not the products take off — or, indeed, whether emotion recognition ever earns public trust — there appears to be a supply of entrepreneurs ready to capitalize on the AI capabilities. (And it must be noted that it takes a particular confidence to put one’s affective computing innovation in a room with divorced parents.)