In a military context, bias in AI reflects the decisions of its makers

ICRC blog post argues that robot intelligence is a human problem

According to Ingvild Bode, associate professor at the Center for War Studies at the University of Southern Denmark, the artificial intelligence in lethal autonomous weapons systems (LAWS) and other military applications “will likely contain algorithmic biases” that could have “serious consequences.”

“Biases can lead to legal and moral harms as people of a certain age group, gender, or skin tone may be wrongfully assessed to be combatants,” writes Bode in the piece, which is based on her presentation to the Group of Governmental Experts (GGE) on lethal autonomous weapon systems. In other words, an error in biometric identification due to algorithmic bias could lead to a civilian being executed by a lethal drone.

It is a chilling thought, made more unsettling by Bode’s observation that the subject remains under-examined: “Beyond some noteworthy exceptions,” she says, “chiefly UNIDIR’s 2021 report ‘Does Military AI Have Gender?’ as well as policy briefs published by the Observer Research Foundation and the Campaign to Stop Killer Robots, issues of bias have not been covered at length.”

For a couple of reasons, however, Bode believes we can make some assumptions about bias in AI for military use cases. “Much of the innovative potential on AI technologies comes from civilian tech companies who are increasingly collaborating with military actors,” she writes. “More fundamentally, the types of techniques used in civilian and military applications of AI, such as machine learning, are the same and will therefore be subject to similar concerns regarding bias.”

Bias across the spectrum of military AI

One issue is that bias is not, so to speak, one issue at all, but rather something embedded at various stages across the life cycle of an algorithmic model. Bode identifies three key potential bias points: in the data sets used for training; in the human choices made during design and development, which can then be baked into systems; and in how AI systems are used.
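To make the first of these points concrete, here is a minimal Python sketch of the kind of pre-training audit that can surface representation imbalance in a data set. It is illustrative only and not drawn from Bode’s piece; the group labels and counts are hypothetical.

```python
# Illustrative audit of a training set for representation imbalance,
# performed before any model is trained. All data here is hypothetical.
from collections import Counter

def representation_report(samples, group_key):
    """Return each group's share of the data set so skews are visible."""
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training data, heavily skewed toward one skin-tone group.
training_samples = (
    [{"skin_tone": "lighter"}] * 900 + [{"skin_tone": "darker"}] * 100
)

for group, share in representation_report(training_samples, "skin_tone").items():
    print(f"{group}: {share:.0%} of training data")
# lighter: 90% of training data
# darker: 10% of training data
```

A model trained on such a skewed set will tend to perform worse on the under-represented group, which is the first of the bias points Bode identifies.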

“AI technologies gain new meanings and functions – as well as potentially biases – through being used repeatedly and in an increasingly widespread way,” writes Bode. “This can happen in two ways: first, simply through employing systems featuring AI technologies, any biases that these contain will be amplified. Second, people will act on the outputs that AI systems produce.” She points to automation bias, “a tendency for humans to depend excessively on automated systems and to defer to outputs produced by such technologies,” as a particular problem in the military domain.

Bode cites research by Joy Buolamwini and Timnit Gebru, who audited three commercial facial analysis systems and found that all three classified men’s faces more accurately than women’s, and performed better overall on people with lighter skin tones.
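The sketch below illustrates, in hedged form, the kind of disaggregated evaluation behind that finding: error rates are computed per demographic subgroup rather than as a single aggregate accuracy figure. The subgroup names, predictions, and labels are invented stand-ins, not data from the Buolamwini and Gebru study.

```python
# Illustrative disaggregated evaluation: per-group error rates instead
# of one aggregate accuracy number. All records are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # misclassification
    ("darker-skinned female", "female", "female"),
]

for group, rate in error_rates_by_group(records).items():
    print(f"{group}: {rate:.0%} error rate")
```

A single aggregate accuracy figure would hide exactly the gap these per-group numbers expose.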

“Thinking around bias underlines, once again, that technology is not neutral,” says Bode. “Technologies are ‘products of their time’, they mirror our societies.” Since bias is inherent in society, it is inherent in AI. Bode calls out in particular the lack of diversity in STEM professions and work cultures in which AI systems germinate. “Even at the design stage,” she writes, “mitigation strategies would also have to be mainstreamed into how AI programmers think about (initial) modeling parameters. Here, it matters to have a closer look at the tech companies who are dominating investment in and development of AI technologies and their particular interests because these interests are likely to have a direct impact on choices made at the design stage.”

In short, “technical solutions will not be sufficient to resolve bias.”

Bode concludes her piece by re-emphasizing that “the problem of algorithmic bias demonstrates we should think about AI technologies not as something separate from human judgment but as deeply enmeshed in forms of human judgment throughout the entire life cycle of AI technologies.” As ever, we are our technology, and our technology is us.
