
We might be able to make AI ethical, but maybe we shouldn’t

What if questions about whether AI can be moral are missing the point?

A pair of recent thought experiments about the near future of state-deployed AI seem to point to a new answer.

You could code the attributes of a sainted mother into an algorithm, but it will almost certainly exist in a world of similar intelligences with no programmed qualms about ruthlessness. If (probably when) they become adversaries, it will look like a 19th century pugilist sparring with a modern mixed martial arts fighter.

An article in The Conversation reports on a novel experiment in which an AI was invited to participate in a debate about AI ethics at the Oxford Union, site of some of the most brilliant spoken arguments in Western thought.

Oxford University’s postgraduate Artificial Intelligence for Business course, part of the Saïd Business School, hosted an Nvidia-developed language model called the Megatron. The debate, at the recent culmination of coursework, invited the Megatron to argue alternating, opposing positions on questions of AI ethics.

It argued both sides of the question of whether AI will, indeed, eventually be ethical. And Megatron suggested that “the best AI” will be AI embedded into “our brains” (it is not clear who is included among “our”).

It argued that there is less to worry about from AI itself than from “leaders without technical expertise,” who will be “a danger to their organization.” Software developers, after all, are everywhere.

And, as requested, Megatron took the other side, arguing that companies that leave AI development to outsiders will be at a disadvantage against competitors who do invest in continuous, internal strategic development.

The kicker came when it was asked to make the case that “data will become the most fought-over resource.” On one hand, it logically agreed.

But arguing the opposite position, that data will not be the primary global good, produced a verbal analog of what happens when software is asked to draw something common to people (a dog, for example) from all the training data it has tagged “dog.”

The result is highly impressionistic, to say the least. The answer is a bit tortured.

Megatron stated, “We will (be) able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine.” Hardly a convincing argument against the future importance of data as a commodity.

A sobering response and one that leads to the second recent thought experiment.

Researchers from the Massachusetts Institute of Technology, Harvard University and the London School of Economics looked at political and economic patterns in the development and use of AI among regional governments in China.

Specifically, the team looked at government investment in AI, including facial recognition systems, before, during and after civil unrest.

This, too, is sobering.

Spending on public security AI rose in the calendar quarter leading up to unrest and spiked in the first quarter after civil conflict. Investment fell during the second and third quarters after unrest but did not return to pre-conflict levels.

Spending on non-security facial recognition systems and other AI products did not rise with civil unrest, ruling out the possibility that the increase was a coincidence of wealthier regions simply buying more AI of all kinds.

And state use of AI for security has provided benefits likely to encourage more use in autocratic environments.

In fact, past AI investment puts a lid on subsequent protests even during fair weather, when such gatherings are more likely to occur.

AI use makes economic sense for dictators, too. It stimulates key areas of the economy.

Both government contractors and commercial software vendors were spending markedly more money on AI products at least eight quarters after civil unrest.

Ethics in state-created AI might be possible, it seems, but it would be a strategic disaster. The ethics might always have to reside in the humans wielding the AI.
