
3 case studies making the case that AI can be developed ethically


By now, virtually all organizations capable of making or managing artificial intelligence products, including biometrics, know the importance of creating ethical principles to corral the powerful technology.

Of course, acknowledging the need for principles and developing them are two different things. The same is true for adopting ethical principles and actually implementing them.

Researchers at the University of California, Berkeley have written a report highlighting noteworthy efforts to implement such principles, intended as a guide for other organizations seeking to put their own into practice.

The report was produced by the university’s Center for Long-Term Cybersecurity. It looks for lessons in three implementation case studies involving Microsoft Corp.’s AI, Ethics and Effects in Engineering and Research (AETHER) committee; the non-profit research laboratory OpenAI Inc.; and the intergovernmental Organisation for Economic Co-operation and Development, or OECD.

The report also outlines 35 other ongoing efforts to implement AI principles, spanning tools, frameworks, initiatives and standards, all aimed at developing “trustworthy” AI.

In the first case study, Microsoft executives created the AETHER committee expressly to keep AI work aligned with the firm’s core values. The committee actually comprises seven working groups, staffed by employees from across the company, that seek answers to consequential questions arising from AI development and use by Microsoft and its customers.

Questions raised have focused on a mix of sensitive and tactical topics, including “bias and fairness, reliability and safety, and potential threats to human rights and other harms,” according to the report. All of these threats have been cited by skeptics of facial recognition, a technology for which Microsoft operationalized its principles last year.

The OpenAI case study focuses on its staged release of GPT-2, an unsupervised AI language model, over nine months last year. Ordinarily, a model would be released in its entirety, but the organization instead chose “an experiment in responsible disclosure.”

OpenAI leaders were concerned that if they were not deliberate, their model might be deployed to synthetically generate, among other things, misleading news items, abusive content or phishing attacks. So they released successively larger models, as measured in parameter count.

Between each release, OpenAI issued documentation “exploring the societal and policy implications of the technology” as well as conventional technical papers.

In the third case study, 42 nations a year ago agreed to five principles and five recommendations for artificial intelligence development and use drafted by the 36-member OECD. The organization followed up in February 2020 by launching the AI Policy Observatory to guide signatories with implementation.

Beyond publishing implementation guidelines, the observatory also keeps a live database of all AI policies and initiatives as well as global AI development metrics.

One of the primary aims of the OECD’s efforts is to present an alternative to “AI nationalism,” the secretive style of development that pits nation against nation. In the OECD’s view, that development model, which resembles the race for more effective nuclear arms, will lead to dangers and damage that could be avoided through cooperative development.
