Federal agencies move to explore AI ethics and technical standards

The U.S. Department of Defense recently held a public comment meeting at Stanford University to discuss artificial intelligence ethics, VentureBeat reports. The Defense Innovation Board is planning to provide DoD with ethical guidelines and recommendations in a report this summer for the development or acquisition of autonomous systems.

DoD Deputy General Counsel Charles Allen said at the start of the meeting that the military’s AI policy will be created to adhere to international humanitarian law, limits on AI in weaponry placed by a 2012 DoD directive, and the military’s 1,200-page manual on the law of war. Allen also defended drone object recognition initiative Project Maven, saying it could “help cut through the fog of war.”

“This could mean better identification of civilians and objects on the battlefield, which allows our commanders to take steps to reduce harm to them,” he said, according to VentureBeat.

Later, former U.S. Marine Peter Dixon, founder and CEO of Second Front Systems, said AI used to identify people in drone footage can save lives.

“If we have an ethical military, which we do, are there more civilian casualties that are going to result from a lack of information or from information?” he asked.

Multiple speakers expressed concern about the potential use of biometric technologies for weapons targeting by autonomous systems. Other recommendations include a more unified national strategy to compete internationally, the development of self-confidence reporting by AI systems, and closer cooperation between academia and the AI industry.

Tech companies and civil society organizations have also launched AI ethics initiatives, to varying degrees of early success.

NIST to plan AI standards development

The National Institute of Standards and Technology has issued a request for information (RFI) to help improve the agency’s understanding of AI technical standards and related tools. NIST has been tasked with creating a plan for the development of standards and tools to support reliable, robust, and trustworthy AI systems.

The agency says it will consult with other agencies, the private sector, academia, NGOs, and other stakeholders in developing the plan.

Comments are due by March 31, 2019.
