Pushing ethics up the AI-development chain: Concerns about research funding and training get louder

A trio of recent articles on AI, each written from a different angle, poses ethics questions more fundamental than how industry can demonstrate greater responsibility in bringing algorithms to market.

One piece examined how researchers are being prompted to contemplate possible negative as well as positive impacts their work could have on society.

A second asked if researchers should police themselves when it comes to accepting funding.

The third went a step further up the technology-development chain, recommending that computer science education itself be reshaped.

It started with a December report in Nature on the Neural Information Processing Systems (NeurIPS) conference, where face biometrics are a prominent topic. Organizers for the first time required speakers to prepare statements specifying the impacts their work could have on society.

They also created a review panel with the power to reject work that “raised ethical concerns.”

The Nature article noted that sentiment about AI-supported technologies is shifting dramatically. It quoted Jason Gabriel, an ethicist at DeepMind, as saying there had been a general “techno optimism. Clearly that has changed.”

Policymakers and the public are souring on AI as they learn about the threats of deepfakes, racially biased algorithms and wholesale face image scraping by secretive startups.

Forethought and transparency are critical if advanced democracies are to adopt such systems, many of which involve surveillance, at a rate remotely approaching that of authoritarian regimes.

This month, an Australia-based consulting firm asked whether the dynamics of AI research funding are recreating the late-20th-century scandals in which otherwise reputable research organizations were co-opted to serve industry marketing aims.

Specifically, the authors point to the fortunes tobacco companies poured into research institutions to produce work that at best obscured smoking’s harms and at worst disputed science that eventually won out on the matter.

The article, published by Australasian Human Research Ethics Consultancy Services, focused on facial recognition in the service of surveillance, and it ultimately followed the funding question down the same paths traveled by the earlier Nature article.

EdSurge, a research firm focused on the equitable use of technology in education, takes the point further. Its article says society is “unified” in “concerns that we are inching closer to a dystopian future.”

The need to train people in technical proficiency is obvious, according to EdSurge.

But so is the need to instill the skills and “moral courage” to create AI technology that “dismantles existing power dynamics” in a way that nurtures society even as it innovates and creates capital.
