The Vatican and Big Tech, Pentagon release overlapping AI commandments
It is hard to know what it means when a global religious figure, two iconic technology giants and the Pentagon all find themselves on the same side of an argument.
The U.S. Department of Defense issued five principles Feb. 24 for its own use of artificial intelligence, including biometric systems like facial recognition. Systems need to be responsible, equitable, traceable, governable and reliable.
Four days later, at the end of a Vatican workshop examining artificial intelligence ethics and law, Pope Francis, Microsoft Corp., IBM Corp. and other invited organizations called for “new forms of regulation” and six principles that overlap with the Defense Department’s list.
The document, titled Rome Call for AI Ethics and backed by the Pope, says every stage and aspect of artificial intelligence must adhere to ideals of transparency, inclusion, responsibility, impartiality, reliability, security and privacy.
Both efforts worry about the effect artificial intelligence could have on humanity.
The Pentagon predominantly worries about a future with remorseless, unaccountable machine-warriors. A Defense Department document announcing the military’s position noted, “(t)hese principles build upon the department’s long history of ethical adoption of new technologies.”
The principles are the result of 15 months of discussions led by the Defense Innovation Board covering combat and non-combat uses of artificial intelligence. The board recommended the department’s efforts be:
* Responsible: Personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI
* Equitable: The department will take deliberate steps to minimize unintended bias in AI capabilities
* Traceable: Capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources and design procedures and documentation
* Reliable: The department’s AI capabilities will have explicit, well-defined uses, and the safety and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycle
* Governable: Personnel will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior
The focus of the Vatican group, which also includes UN Food and Agriculture Organization Director-General Qu Dongyu, is human dignity. The group already sees how artificial intelligence, particularly facial recognition, is beginning to diminish dignity and betray principles of justice.
The group’s document defines its principles more in line with the Catholic Church’s stated social values:
* Transparency: AI systems must be explainable
* Inclusion: everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
* Responsibility: those who design and deploy AI must proceed with responsibility and transparency
* Impartiality: do not create or act according to bias
* Reliability: systems must work reliably
* Security and privacy: systems must work securely and in a way that respects privacy
Both high-minded and practical, the two lists are goal posts toward which industries and governments will need to work together in a sustained effort.
There already is some worry in that regard.
Francesca Rossi, global artificial intelligence lead for IBM, speaking to the Financial Times, said, “AI has a lot of nuances, some are very high risk and some are not. It wouldn’t be reasonable to put the burden on applications that can be very beneficial but it is very important to make sure regulation is done the right way on high-risk applications.”