AI researchers won’t be playing ‘trust fall’ with Facebook. Or China
There are organizations that AI researchers trust less than Facebook, according to a new paper, and they are all Chinese.
The only organizations that researchers trust less than Facebook to handle AI in the public interest are the Chinese government and military, along with Chinese tech companies Baidu and Alibaba.
U.S. institutions fared only marginally better, and Facebook's recent efforts do not appear to be doing much to build trust in the company.
Presumably, survey respondents have a poorer view of the person dating their daughter, but that was not a choice in the global survey behind Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers.
The work was completed by a British-American team, led by a Cornell University researcher, that examined the attitudes of machine learning and AI researchers.
Global though it is, the paper cannot help but pay considerable attention to sentiment about U.S. government organizations, including the military.
Lead researcher Baobao Zhang put a fine point on the matter in a Cornell marketing post. She said that tech executives and government officials talk a good game about wanting trustworthy AI but seem unable to make a coherent effort to achieve that goal.
Zhang did not specify which nations she was referring to. However, the comparatively poor showing of the United States when it comes to earning trust from AI researchers would seem to point a finger at U.S. lawmakers, regulators and businesses.
Chinese players are held in even lower regard, according to the survey. China's government and military are at the bottom. Tencent instills the same level of trust as the U.S. military; Baidu and Alibaba instill less.
Respondents placed their greatest trust in international research and non-governmental science organizations.
The European Union and, to a lesser extent, the United Nations have both won the confidence of a significant segment of researchers, according to the paper.
The only other actor, as described in the report, that engendered meaningful levels of trust was the government of the nation where a researcher works. It is common enough for people to feel a problem exists, but mostly in someone else's backyard.
(Perhaps the exception that proves the rule is the trepidation many respondents expressed about military governance of AI. Fewer survey takers said they trust the military of the nation where they do their research.)
Companies that appear to be doing something right include Elon Musk's OpenAI, Microsoft, DeepMind (a British subsidiary of Alphabet) and Google, another unit of Alphabet. All of them fell between "not too much trust" and "a fair amount of trust."
There are indications that a dark-horse candidate for finally making headway on this puzzle in the United States is doctors, who deal daily with privacy regulations and advanced algorithms.