AI industry deprioritizes ethics for profits through 2030, says large panel of experts
Talk of giving the AI industry an ethical core is pretty much just that, according to a new Pew Research canvass of industry insiders.
Sixty-eight percent of the 602 survey respondents said that “ethical principles focused primarily on the public good will not be employed in most AI systems by 2030” (Pew’s emphasis).
A growing number of experts in and outside the industry feel that, globally, people will reject AI unless they trust it. Building fundamental ethical principles into AI is the only way, short of forcing the technology on populations, to make the algorithms commonplace.
It is important to note that Pew, which conducted the survey with Elon University, described the result as a non-scientific canvassing of selected people. It reflects only the views of the respondents, who were known to the two organizations.
Participants were businesspeople, policymakers, software developers, activists and researchers.
The stakes go beyond biases, although biases are top-tier concerns. Pew researchers said there is a growing impression that “AI will affect what it means to be human, to be productive and to exercise free will.”
Respondents found room for optimism in AI’s further development.
For one thing, according to the report, “No technology endures if it broadly delivers unfair or unwanted outcomes.”
Advances will benefit humanity, according to some survey takers. The Pew report focuses largely on beneficial roles AI might play in diagnosis, treatment and health care generally.
And the industry is not exactly running from ethical practices in development and operations. The survey picked up the sentiment that “market and legal systems will drive out the worst AI systems,” although it would appear that this largely will not happen during the next eight years.