Transparent warfare: navigating privacy and ethics in the military use of AI

Military AI systems rely on vast amounts of data, much of which involves potentially sensitive or personally identifiable information (PII) that includes data from surveillance, biometrics, communications, and other intelligence sources. However, the collection and use of such data raise concerns about how privacy is protected, especially when it involves civilian populations or non-combatants. That’s according to a new report from the Canada-based Centre for International Governance Innovation (CIGI), an independent, non-partisan think tank.

In the report, Bytes and Battles: Inclusion of Data Governance in Responsible Military AI, the authors say privacy issues are a significant concern in the context of military AI systems, and highlight what they see as the key privacy challenges.

There are significant challenges in managing military data, including issues of data bias, access control, security, and privacy. Ensuring that data is ethically sourced, properly curated, and aligned with legal frameworks is critical to avoid misuse or unintended consequences, the report says.

The report provides what its two authors believe is “a comprehensive overview on [the] data issues surrounding the development, deployment, and use of AI” by militaries as well as “an overview of possible policy and governance approaches to data practices surrounding military AI to foster the responsible development, testing, deployment and use of AI in the military domain.”

The report’s authors, Yasmin Afina, a researcher for the Security and Technology Programme at the United Nations Institute for Disarmament Research (UNIDIR), and Sarah Grand-Clément, a researcher in both UNIDIR’s Conventional Arms and Ammunition Programme and its Security and Technology Programme, note that “as AI governance frameworks mushroom across the globe, it is clear that data plays an important role in forming this dynamic and ever-evolving policy landscape.”

“Data lies, for instance, at the heart of” President Joe Biden’s October 2023 Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Afina and Grand-Clément note. The EO “highlights the need to safeguard the right to privacy against the mass use of data to train AI systems, while prioritizing the development and use of privacy-preserving techniques, including those that are AI enabled.”

“Similarly,” the report’s two authors said, “data also features prominently in the newly adopted EU AI Act,” which “recalls, in fact, the 2019 Ethics Guidelines for Trustworthy AI developed by the High-Level Expert Group on AI, which was appointed by the European Commission and identified privacy and data governance as one of the seven key principles underpinning trustworthy and ethically sound AI.”

Afina and Grand-Clément also note that the EU AI Act “asserts the vital role high-quality data and access to high-quality data play in providing structure, and in ensuring that high-risk AI systems perform as intended and safely and that they do not become a source of discrimination.”

The report articulates the importance of integrating data governance into the development and deployment of military AI systems, and stresses that as military AI becomes increasingly central to national defense, so too does the need for clear, ethical, and transparent practices surrounding the data used to train these systems.

“Data plays a critical role in the training, testing, and use of artificial intelligence, including in the military domain,” the report says, emphasizing that “research and development for AI-enabled military solutions is proceeding at breakneck speed” and therefore “the important role data plays in shaping these technologies has implications and, at times, raises concerns.”

The report says “these issues are increasingly subject to scrutiny and range from difficulty in finding or creating training and testing data relevant to the military domain, to (harmful) biases in training data sets, as well as their susceptibility to cyberattacks and interference (for example, data poisoning),” and points out that “pathways and governance solutions to address these issues remain scarce and very much underexplored.”

Afina and Grand-Clément said the risk of data breaches or unauthorized access to military data is also a critical concern. Given the strategic and potentially life-threatening nature of military AI systems, they said, robust data security measures to prevent misuse or exposure of sensitive information are essential.

The report addresses the potential privacy violations that could arise when AI systems, such as facial recognition or surveillance technologies, are deployed in military operations. These technologies could infringe on the rights of individuals, particularly in conflict zones or in countries under surveillance, where civilians might unknowingly have their personal information captured and stored.

The use of military AI must also adhere to international privacy standards and laws, including human rights protections. The report stresses that military AI systems must be designed to respect privacy rights while balancing security and operational needs, and underscores the importance of responsible data governance in mitigating risks such as discrimination, errors in autonomous systems, and accountability gaps.

Another privacy issue discussed in the report is the challenge of obtaining informed consent in military operations, especially when data is collected from civilians or other non-military entities. The report notes that transparency regarding how data is collected, used, and shared is vital for upholding privacy rights.

The report’s authors say the use of civilians’ personal data also raises issues of compliance with international human rights law; including such data in military data sets without appropriate safeguarding measures and mechanisms in place puts civilians at increased risk of undue harm.

The report also highlights the importance of data minimization practices – collecting only the data necessary for the task at hand – and of not retaining data longer than needed, since prolonged retention increases the risk of misuse and undermines privacy protections.

Establishing transparent data management practices is essential for accountability, especially when AI systems make life-and-death decisions, the report emphasizes, adding that clear audit trails and explainable AI systems can help prevent undesirable outcomes and ensure decision-making aligns with ethical standards.

In summary, the report stresses the need for careful data governance to address privacy concerns in military AI systems. Ensuring the protection of sensitive data, complying with legal and ethical standards, and maintaining transparency in how data is collected and used are critical to safeguarding privacy in military AI operations.

Responsible data governance is crucial for the safe and ethical integration of AI into military operations. Proper oversight, along with clear guidelines and international cooperation, will help mitigate risks and enhance the effectiveness of military AI systems while safeguarding global security and human rights.
