White House issues policy directive on defense, intelligence use of AI
The White House’s new National Security Memorandum (NSM), which implements President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, directs actions that the Department of Defense (DOD) and Intelligence Community (IC) must take to protect privacy and civil rights as they pursue and use AI technologies.
The NSM was required by Biden’s 2023 AI Executive Order. A related document, the Framework to Advance AI Governance and Risk Management in National Security, was published at the same time as the NSM.
In particular, the NSM requires DOD and the IC to consult with the Department of Justice (DOJ) to “review their respective legal, policy, civil liberties, privacy, and compliance frameworks, including international legal obligations … consistent with applicable law.”
The policy also requires DOD and the IC to “develop or revise policies and procedures to enable the effective and responsible use of AI” that specifically address any issues “raised by the acquisition, use, retention, dissemination, and disposal of models trained on datasets that include personal information traceable to specific United States persons, publicly available information, commercially available information, and intellectual property.”
Further, the Justice Department is required to develop guidance in consultation with DOD and the Office of the Director of National Intelligence (ODNI) “regarding constitutional considerations raised by the IC’s acquisition and use of AI,” as well as “challenges associated with classification and compartmentalization; algorithmic bias, inconsistent performance, inaccurate outputs, and other known AI failure modes; threats to analytic integrity when employing AI tools; and risks posed by a lack of safeguards that protect human rights, civil rights, civil liberties, privacy, and other democratic values” as defined by the new policy.
The Defense Department, ODNI, the departments of Commerce, Energy, and Homeland Security, the National Security Agency, the National Geospatial-Intelligence Agency, and the National Science Foundation are all tasked by the NSM with “prioritiz[ing] research on AI safety and trustworthiness,” and with “pursu[ing] partnerships … with leading public sector, industry, civil society, academic, and other institutions with expertise in these domains, with the objective of accelerating technical and socio-technical progress in AI safety and trustworthiness.”
The NSM says “this work may include research on interpretability, formal methods, privacy enhancing technologies, techniques to address risks to civil liberties and human rights, human-AI interaction, and/or the socio-technical effects of detecting and labeling synthetic and authentic content; for example, to address the malicious use of AI to generate misleading videos or images, including those of a strategically damaging or non-consensual intimate nature, of political or public figures.”
The 40-page NSM covers a disparate set of issues and “is by far the most comprehensive articulation yet of United States national security strategy and policy toward artificial intelligence,” wrote Gregory Allen, director of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS) and former director of strategy and policy at DOD’s Joint Artificial Intelligence Center, and Isaac Goldston, a research assistant at the Wadhwani AI Center.
They said the NSM, “unlike the AI executive order, mostly ignores the AI technologies developed and deployed in the 2012–2022 timeframe,” and “is squarely concerned with frontier AI models.”
The NSM officially defines frontier models as “a general-purpose AI system near the cutting-edge of performance, as measured by widely accepted publicly available benchmarks, or similar assessments of reasoning, science, and overall capabilities.”
The NSM articulates the Biden administration’s concerns about frontier AI technology, which it sees as a pressing national security priority. The NSM states that “recent innovations have spurred not only an increase in AI use throughout society, but also a paradigm shift within the AI field … This trend is most evident with the rise of large language models, but it extends to a broader class of increasingly general-purpose and computationally intensive systems. The United States government must urgently consider how this current AI paradigm specifically could transform the national security mission.”
The NSM says “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security,” and that the “United States must lead the world in the responsible application of AI to appropriate national security functions. AI, if used appropriately and for its intended purpose, can offer great benefits.” But if misused it “could threaten United States national security, bolster authoritarianism worldwide, undermine democratic institutions and processes, facilitate human rights abuses, and weaken the rules-based international order.”
“Harmful outcomes could occur even without malicious intent if AI systems and processes lack sufficient protections,” the NSM warns.
The new memorandum “provides further direction on appropriately harnessing artificial intelligence models and AI-enabled technologies in the United States government, especially in the context of national security systems, while protecting human rights, civil rights, civil liberties, privacy, and safety in AI-enabled national security activities.”
During a “fireside chat” on October 24 at the National Defense University in Washington, DC, Jake Sullivan, the assistant to the president for national security affairs, unambiguously likened the AI revolution to earlier transformative national security technologies like nuclear and space.
CSIS’s Allen and Goldston said “government officials told CSIS that some of the critical early US national security strategy documents for those technologies served as a direct inspiration for the creation of the AI NSM. For example, NSC-68, published in 1950 at a critical moment in the early Cold War, recommended a massive buildup of nuclear and conventional arms in response to the Soviet Union’s nuclear program. This analogy is imperfect since the AI NSM is not advocating a massive arms buildup, but the comparison does helpfully illustrate that the Biden administration views the AI NSM as a landmark document articulating a comprehensive strategy towards a transformative technology.”
Indeed. A Fact Sheet on the NSM states the policy “is designed to galvanize federal government adoption of AI to advance the national security mission, including by ensuring that such adoption reflects democratic values and protects human rights, civil rights, civil liberties, and privacy. In addition, the NSM seeks to shape international norms around AI use to reflect those same democratic values and directs actions to track and counter adversary development and use of AI for national security purposes.”
The NSM says the “United States must understand AI’s limitations as it harnesses the technology’s benefits, and any use of AI must respect democratic values with regard to transparency, human rights, civil rights, civil liberties, privacy, and safety.”
In addition, the NSM says the “government must continue cultivating a stable and responsible framework to advance international AI governance that fosters safe, secure, and trustworthy AI development and use; manages AI risks; realizes democratic values; respects human rights, civil rights, civil liberties, and privacy; and promotes worldwide benefits from AI.”
The Department of Commerce, acting through the AI Safety Institute within the National Institute of Standards and Technology, is designated as the primary point of contact with private sector AI developers “to facilitate voluntary pre- and post-public deployment testing for safety, security, and trustworthiness of frontier AI models.”
The Commerce Department is also tasked with establishing “an enduring capability to lead voluntary unclassified pre-deployment safety testing of frontier AI models on behalf of the government, including assessments of risks relating to cybersecurity, biosecurity, chemical weapons, system autonomy, and other risks as appropriate.”
A classified annex to the NSM addresses sensitive national security issues, including countering adversary use of AI that poses risks to US national security.