
In a military context, bias in AI reflects the decisions of its makers

ICRC blog post argues that robot intelligence is a human problem

According to Ingvild Bode, associate professor at the Center for War Studies at the University of Southern Denmark, the artificial intelligence in lethal autonomous weapons systems (LAWS) and other military applications “will likely contain algorithmic biases” that could have “serious consequences.”

“Biases can lead to legal and moral harms as people of a certain age group, gender, or skin tone may be wrongfully assessed to be combatants,” writes Bode in the piece, which is based on her presentation to the Group of Governmental Experts (GGE) on lethal autonomous weapon systems. In other words, an error in biometric identification due to algorithmic bias could lead to a civilian being executed by a lethal drone.

It is a chilling thought, made more troubling by Bode’s observation that the issue remains under-examined: “Beyond some noteworthy exceptions,” she says, “chiefly UNIDIR’s 2021 report ‘Does Military AI Have Gender?’ as well as policy briefs published by the Observer Research Foundation and the Campaign to Stop Killer Robots, issues of bias have not been covered at length.”

For a couple of reasons, however, Bode believes we can make some assumptions about bias in AI for military use cases. “Much of the innovative potential on AI technologies comes from civilian tech companies who are increasingly collaborating with military actors,” she writes. “More fundamentally, the types of techniques used in civilian and military applications of AI, such as machine learning, are the same and will therefore be subject to similar concerns regarding bias.”

Bias across the spectrum of military AI

One issue is that bias is not, so to speak, one issue at all – but rather embedded at various stages across the life cycle of an algorithmic model. Bode identifies three key potential bias points: in the data sets used for training; in the human choices made during design and development, which can then be baked into systems; and in how AI systems are used.
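As a toy illustration of the first of these entry points (a sketch of our own, not drawn from Bode’s presentation; the groups, data, and model are hypothetical), consider a classifier trained on data in which one group vastly outnumbers another. The learned decision boundary fits the majority group and transfers poorly to the minority one:

```python
# Hypothetical sketch: how an under-represented group in training data
# yields skewed error rates, before any deployment decision is made.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, means):
    """Draw n points per class, classes centred on the two given means."""
    X = np.vstack([rng.normal(m, 1.0, size=(n, 2)) for m in means])
    y = np.repeat([0, 1], n)
    return X, y

# The two groups' classes separate along different directions, so a
# boundary fitted to one group transfers poorly to the other.
means_a = [(0, 0), (3, 3)]   # group A: hypothetical majority group
means_b = [(3, 0), (0, 3)]   # group B: hypothetical minority group

# Training set: group A outnumbers group B 19 to 1.
Xa, ya = sample(475, means_a)
Xb, yb = sample(25, means_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Fresh, equal-sized test sets expose the skew the training mix created.
for name, means in [("group A", means_a), ("group B", means_b)]:
    Xt, yt = sample(500, means)
    print(f"{name} accuracy: {model.score(Xt, yt):.0%}")
```

Run as written, the over-represented group scores far higher than the under-represented one; no malice is required, only a skewed data set.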

“AI technologies gain new meanings and functions – as well as potentially biases – through being used repeatedly and in an increasingly widespread way,” writes Bode. “This can happen in two ways: first, simply through employing systems featuring AI technologies, any biases that these contain will be amplified. Second, people will act on the outputs that AI systems produce.” She points to automation bias, “a tendency for humans to depend excessively on automated systems and to defer to outputs produced by such technologies,” as a particular problem in the military domain.

Bode cites research by Joy Buolamwini and Timnit Gebru, who examined three commercial facial analysis programs and found that all three recognized men’s faces more accurately than women’s, and performed better overall on subjects with lighter skin tones.
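The study’s central move was disaggregated evaluation: reporting accuracy per demographic subgroup instead of a single aggregate figure. A minimal sketch of that idea follows; the numbers are hypothetical, chosen only to mirror the pattern the researchers documented, not their actual results:

```python
# Per-group accuracy breakdown: the kind of audit that exposes
# disparities a single aggregate accuracy number would hide.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) triples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical outputs from a binary gender classifier, constructed so
# the aggregate accuracy (~87%) hides a large per-group disparity.
records = (
      [("lighter-skinned men",   "m", "m")] * 99
    + [("lighter-skinned men",   "f", "m")] * 1
    + [("lighter-skinned women", "f", "f")] * 93
    + [("lighter-skinned women", "m", "f")] * 7
    + [("darker-skinned men",    "m", "m")] * 88
    + [("darker-skinned men",    "f", "m")] * 12
    + [("darker-skinned women",  "f", "f")] * 67
    + [("darker-skinned women",  "m", "f")] * 33
)

for group, acc in sorted(accuracy_by_group(records).items()):
    print(f"{group}: {acc:.0%}")
```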

“Thinking around bias underlines, once again, that technology is not neutral,” says Bode. “Technologies are ‘products of their time’, they mirror our societies.” Since bias is inherent in society, it is inherent in AI. Bode calls out in particular the lack of diversity in STEM professions and work cultures in which AI systems germinate. “Even at the design stage,” she writes, “mitigation strategies would also have to be mainstreamed into how AI programmers think about (initial) modeling parameters. Here, it matters to have a closer look at the tech companies who are dominating investment in and development of AI technologies and their particular interests because these interests are likely to have a direct impact on choices made at the design stage.”

In short, “technical solutions will not be sufficient to resolve bias.”

Bode concludes her piece by re-emphasizing that “the problem of algorithmic bias demonstrates we should think about AI technologies not as something separate from human judgment but as deeply enmeshed in forms of human judgment throughout the entire life cycle of AI technologies.” As ever, we are our technology – and our technology is us.
