Carnegie Mellon professor touts increasing emphasis on ethics and philosophy in AI community
Efforts by computer scientists to understand the impact of artificial intelligence on people and society and to build ethical AI have increased significantly, undermining the widely held impression that AI specialists care more about algorithms and robots than about people, according to an opinion piece by Carnegie Mellon University associate professor of computer science Ariel Procaccia published by Bloomberg.
Procaccia points to the launch of two new interdisciplinary conferences, the Conference on Fairness, Accountability and Transparency and the Conference on AI, Ethics, and Society (AIES), which together received more than 400 research paper submissions for peer review. He notes that such papers are typically the product of months or years of work. According to the editorial, the practical suggestions they make for designing ethical AI systems involve two related challenges: defining what “fairness” means in mathematical terms, and applying that abstract concept to modify algorithms or their results.
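To make those two challenges concrete, the sketch below is purely illustrative and is not drawn from any of the submitted papers; the synthetic data, group labels, and per-group threshold adjustment are assumptions invented for the example. It uses one common mathematical reading of fairness, demographic parity, which asks that a classifier's positive-decision rate be roughly equal across groups, and then post-processes the model's outputs to narrow the gap.

```python
# Illustrative sketch only: demographic parity as one mathematical reading of "fairness".
# The data and the per-group threshold adjustment are invented for this example.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores from some classifier, plus a binary group label per person.
scores = rng.uniform(0, 1, size=1000)
group = rng.integers(0, 2, size=1000)          # 0 or 1, e.g. two demographic groups
# Pretend group 1 systematically receives lower scores (a biased model).
scores[group == 1] *= 0.8

def positive_rate(preds, group, g):
    """Fraction of group g that receives a positive decision."""
    return preds[group == g].mean()

# Challenge 1: give "fairness" a mathematical meaning.
# Demographic parity: positive-decision rates should match across groups.
preds = scores > 0.5
gap = abs(positive_rate(preds, group, 0) - positive_rate(preds, group, 1))
print(f"parity gap before adjustment: {gap:.3f}")

# Challenge 2: apply the concept to modify the algorithm's results.
# One crude post-processing fix: pick per-group thresholds that equalize rates.
target = preds.mean()                           # overall positive rate to preserve
adj_preds = np.zeros_like(preds)
for g in (0, 1):
    thresh = np.quantile(scores[group == g], 1 - target)
    adj_preds[group == g] = scores[group == g] > thresh

gap = abs(positive_rate(adj_preds, group, 0) - positive_rate(adj_preds, group, 1))
print(f"parity gap after adjustment:  {gap:.3f}")
```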
At the 2018 International Conference on Machine Learning, two of the five papers chosen for awards from 2,473 submissions dealt with fairness. One of them cites the moral and political philosopher John Rawls, who advanced a theory of “justice as fairness.”
“This is representative of a much larger trend. Philosophy has long played a key role in the investigation of intelligence, and the discipline is back in vogue among AI researchers grappling with ethical questions,” Procaccia writes. “Philosophy and ethics have also become indispensable components of AI education. Computer scientists are now teaching courses on ethics and AI at leading academic research centers like Carnegie Mellon, Cornell and Stanford that complement those offered by philosophers. The curriculum of Carnegie Mellon’s new bachelor’s degree in AI, which I helped design, includes a mandatory course in ethics.”
Procaccia notes that these developments “set the stage for ethical AI,” but cautions that if corporate tech giants wanted to preserve the status quo, only massive regulation could stop them from derailing such efforts. He considers that scenario unlikely, because publicity concerns, conscientious employees, and the leadership of academics and prominent ethical-AI advocates at Microsoft, IBM, and Google will continue to encourage ethical AI research. While he warns that repressive regimes could use AI to entrench their authority, and that super-intelligent systems could be developed without humanity's best interests as a limiting factor, he says the AI community is eager to take on these challenges.
Article Topics
artificial intelligence | Carnegie Mellon University | ethics | research and development