Tech giants and civil society seek to institutionalize AI ethics
As artificial intelligence becomes more controversial, tech giants are putting together councils and review processes to ensure ethical uses of the technology, or at least reassure the public.
Google has appointed a team of philosophers, engineers, and policy experts to an advisory council to guide it through the moral hazards associated with AI technologies, MIT Technology Review reports. The Advanced Technology External Advisory Council (ATEAC) was announced at MIT Technology Review’s EmTech Digital conference in San Francisco by Google Senior Vice President for Global Affairs and Chief Legal Officer Kent Walker.
Appointees include Dyan Gibbens, the founder and former CEO of a drone company, and Heritage Foundation President Kay Coles James. Google announced a set of AI principles last June, after its involvement in U.S. Air Force drone project Maven drew an employee backlash and criticism in the press, according to the Technology Review. The Heritage Foundation, meanwhile, has been accused of spreading misinformation about climate change.
Walker said the council will consider emerging AI risks, such as that posed by deep fakes.
Microsoft to add ethics review
Harry Shum, Microsoft’s executive vice president for AI and research, told an EmTech audience that the company will “very soon” integrate an ethics review into its standard audit checklist for products being prepared for launch, GeekWire reports. GeekWire also reports that the company plans to include altered versions of photos in its training datasets to improve model performance across different skin colors, lighting conditions, and certain physical traits.
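GeekWire does not describe how the photos will be altered, but a minimal sketch of one common approach to this kind of augmentation, shifting brightness and contrast with the Pillow library, could look like the following. The function name and factor values here are illustrative assumptions, not details of Microsoft’s actual pipeline:

```python
# Illustrative sketch only: Microsoft has not published its augmentation
# pipeline. This shows one generic way to create altered copies of a
# training photo (brightness and contrast shifts) using Pillow.
from PIL import Image, ImageEnhance

def augment_photo(path: str) -> list[Image.Image]:
    """Return lighting-shifted variants of one training photo."""
    img = Image.open(path).convert("RGB")
    variants = []
    for brightness in (0.6, 1.0, 1.4):   # darker, original, brighter
        for contrast in (0.8, 1.2):      # flatter, punchier lighting
            v = ImageEnhance.Brightness(img).enhance(brightness)
            v = ImageEnhance.Contrast(v).enhance(contrast)
            variants.append(v)
    return variants
```

Each original photo yields several variants that simulate different lighting conditions, giving a model more diverse examples to learn from.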
The pre-release checks will include consideration of AI ethics, alongside privacy, security, and accessibility. Shum noted that AI could amplify the risks of misinformation, propaganda, and invasions of privacy.
Microsoft has been relatively aggressive in calling for facial recognition to be regulated, and recommendations by the AI and Ethics in Engineering and Research (Aether) council the company formed last year have reportedly resulted in significant sales being blocked. SensibleVision CEO George Brostoff pointed out to Biometric Update last year, however, that the group had not published any guidance on the technology. GeekWire notes that some researchers have called for a specialized government agency, perhaps modeled on the National Transportation Safety Board, to help regulate AI risks.
Amazon co-sponsors AI fairness research
The National Science Foundation and Amazon have committed to a three-year, $20 million program to research fairness in AI systems, according to the Seattle Times. As many as nine projects will be selected for grants of up to $7.6 million.
Each organization will contribute $10 million, according to an Amazon blog post, which says the topics explored will include transparency, explainability, accountability, potential bias, mitigation strategies, fairness validation, and inclusivity.
The NSF is the federal government’s largest funder of university research in computer science, but the proposed 2020 budget would cut the agency’s funding of more than $8 billion by $1 billion.
Some have expressed concern with the program, however. University of Washington Information School Assistant Professor Nicholas Weber said it puts researchers in a predicament. “These corporate co-sponsors are more or less piggybacking on the thorough and unique system of peer review that NSF organizes,” he told the Times.
AI Ethics Institute launching fellowship program
The Montreal AI Ethics Institute, which was formed last year, is developing a fellowship program as part of its project to build a global network for AI ethics, the Montreal Gazette reports.
The Institute has formed partnerships with several of the largest companies in the sector, as well as educational institutions, according to the report. Topics explored at the Institute include what jobs AI systems can perform, and whether having automated systems do those jobs instead of people is desirable, as well as issues of bias and “mathwashing.”
“People assign a higher degree of trust to numerical systems, to machine learning systems, or to any of these systems that are digital or numerical, compared to human-driven systems,” Institute Co-founder Abhishek Gupta told the Gazette. “That notion itself is flawed because where is the data generated from and who’s capturing the data and who’s deciding what to capture in those datasets? It’s humans — and there are inherent biases that get captured in those datasets, and they get propagated into the system.”
The Gazette article reviews several other controversies and potential ethical issues related to AI, and possible solutions, such as “data trusts.”