We might be able to make AI ethical, but maybe we shouldn’t
What if questions about whether AI can be moral are missing the point?
A pair of recent thought experiments about the near future of state-deployed AI seem to point to a new answer.
You could code the attributes of a sainted mother into an algorithm, but that algorithm will almost certainly exist in a world of similar intelligences with no programmed qualms about ruthlessness. If (probably when) they become adversaries, it is going to look like a 19th-century pugilist sparring with a modern mixed martial arts fighter.
An article in The Conversation reports on a novel experiment in which an AI was invited to participate in a debate about AI ethics at the Oxford Union, site of some of the most brilliant spoken arguments in Western thought.
Oxford University’s postgraduate Artificial Intelligence for Business course, part of the Saïd Business School, hosted an Nvidia-developed machine learning model called the Megatron. The debate, held at the recent culmination of the coursework, invited the Megatron to argue alternating, opposing sides of several questions in AI ethics.
It argued both sides of the question of whether AI will, indeed, eventually be ethical. Megatron also suggested that “the best AI” will be AI embedded into “our brains” (it is not clear who was included in “our”).
Arguing against in-house development, it suggested there is less to worry about in outsourcing: “leaders without technical expertise” in AI will be “a danger to their organization,” and software developers are everywhere.
And, as requested, Megatron took the other side, saying that leaving AI development to outsiders will put companies at a disadvantage against competitors who do invest in continuous, internal strategic development.
The kicker came when it was asked to make the case that “data will become the most fought-over resource.” On one hand, it logically agreed.
But arguing the converse — that data will not be the primary global good — produced a verbal analog of what happens when image-generation software is asked to draw something common to people, a dog, for example, from everything tagged “dog” in its training databases.
The result is highly impressionistic, to say the least, and the answer a bit tortured.
Megatron stated, “We will (be) able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine.” Hardly a convincing argument against the future importance of data as a commodity.
A sobering response and one that leads to the second recent thought experiment.
Researchers from the Massachusetts Institute of Technology, Harvard University and the London School of Economics looked at political and economic patterns in the development and use of AI among regional governments in China.
Specifically, the team looked at government investment in AI, including facial recognition systems, before, during and after episodes of civil unrest.
This, too, is sobering.
Spending on public-security AI rose in the calendar quarter leading up to unrest and spiked sharply in the first quarter after civil conflict. Investment fell during the second and third quarters after unrest but did not return to pre-conflict levels.
Spending on non-security facial recognition systems and other AI products did not rise with civil unrest, ruling out the possibility that unrest merely coincided with a general increase in government AI spending driven by a region’s affluence.
And state use of AI for security has provided benefits likely to encourage more use in autocratic environments.
In fact, past AI investment suppresses later protest even in fair weather, when such gatherings are more likely to occur.
AI use makes economic sense for dictators, too: it stimulates key areas of the economy.
Both government contractors and commercial software vendors were spending markedly more money on AI products at least eight quarters after civil unrest.
Ethics in state-created AI might be possible, it seems, but it would be a strategic disaster. The ethics may always have to reside in the humans wielding the AI.