Federal departments, agencies issue plans to implement OMB AI policy

US executive departments and agencies have released their plans to comply with an Office of Management and Budget (OMB) policy directive issued in March that laid out requirements and responsibilities for the use of AI throughout the federal government.
The OMB policy memorandum directed all federal departments and agencies to advance AI governance and innovation while managing risks from the use of AI.
The directive spelled out the new requirements and guidance for AI governance, innovation, and risk management that federal departments and agencies must implement, including specific minimum risk management practices for uses of AI that impact the rights and safety of the public.
The initial compliance plans were required to be made public by Tuesday and are required to be updated every two years until 2036. Each agency is required to submit to OMB and to post publicly on the agency’s website either a plan to achieve consistency with the directive or a written determination that the agency does not use and does not anticipate using covered AI. Agencies must also include plans to update any existing internal AI principles and guidelines to ensure consistency with the directive.
By October 15, agencies must submit extension requests for rights- and safety-impacting uses as defined by the directive if they are unable to meet the minimum risk management practices by the December 1 deadline, and by December 16 they must make public their updated annual AI use case inventories.
An examination of some of the plans found that not all contained references to how AI-enhanced biometrics will be treated pursuant to the OMB directive, largely because “some AI use cases are not required to be individually inventoried, such as those in the Department of Defense or those whose sharing would be inconsistent with applicable law and governmentwide policy.”
Nevertheless, “on an annual basis, agencies must still report and release aggregate metrics about such use cases that are otherwise within the scope” of the OMB directive, “the number of such cases that impact rights and safety, and their compliance with the practices of Section 5(c)” of the directive.
OMB will issue detailed instructions for this reporting through its Integrated Data Collection process or an OMB-designated successor process.
The compliance plan released by the Department of Defense (DOD), for example, contains no references to biometrics, although biometrics are used throughout the department for many purposes.
The DOD plan states that “while many of the DOD’s ongoing activities meet the requirements established” by OMB’s directive, the directive “prompted DOD to adapt certain existing processes, such as budget processes, to enable compliance.” DOD’s Chief Digital and Artificial Intelligence Officer “will develop DOD-wide guidance for complying with minimum risk management practices and requesting waivers for covered AI use cases,” including “ensur[ing] consistency with minimum risk management practices for safety-impacting and rights-impacting AI activities.”
The Department of State is another executive cabinet-level department that did not specifically address its biometric-related activities in its OMB compliance plan.
Under the heading, “Removing Barriers to the Responsible Use of AI,” the department said it “has identified potential barriers to responsible use of AI, including barriers to responsible GenAI listed in NIST AI 600-1, such as data privacy, confabulations, harmful bias and homogenization, human-AI configuration, and others,” and that as “part of broader efforts to address the barrier of data privacy the department has onboarded, tested, and is maintaining a vetted GenAI model that is safe to process Sensitive but Unclassified information for department use. In addition, the department leverages its AI governance bodies to establish policies and procedures that manage the risks of confabulation, harmful bias and homogenization, and human-AI configuration.”
Continuing, the State Department said it “has created and maintains guidance to have a human-in-the-loop in AI implementation and oversight” and “stood up an independent testing, evaluation, validation, and verification team that focuses on measuring baselines and monitoring performance by operationalizing the risk management practices established in [the] NIST AI Risk Management Framework.”
In contrast, the compliance plan for the Department of Homeland Security (DHS) – which manages the federal government’s largest repository of biometric data on millions of individuals, including individuals of interest to law enforcement and known and suspected terrorists – contains significant language regarding its biometric-related activities that are subject to OMB’s directive.
Under the section titled, “Responsible Procurement of AI for Biometric Identification,” DHS said that when procuring systems that use AI to identify individuals using biometric identifiers – e.g., faces, irises, fingerprints, or gait – agencies are encouraged to:
- Assess and address the risks that the data used to train or operate the AI may not be lawfully collected or used, or else may not be sufficiently accurate to support reliable biometric identification. This includes the risks that the biometric information was collected without appropriate consent, was originally collected for another purpose, embeds unwanted bias, or was collected without validation of the included identities; and
- Request supporting documentation or test results to validate the accuracy, reliability, and validity of the AI’s ability to match identities.
DHS’s compliance document states that “a use of AI is presumed to be rights-impacting if it is used or expected to be used in real-world conditions to control or significantly influence the outcomes of any of the following agency activities or decisions:”
- Blocking, removing, hiding, or limiting the reach of protected speech;
- In law enforcement contexts, producing risk assessments about individuals; predicting criminal recidivism; predicting criminal offenders; identifying criminal suspects or predicting perpetrators’ identities; predicting victims of crime; forecasting crime; detecting gunshots; tracking personal vehicles over time in public spaces, including license plate readers; conducting biometric identification (e.g., iris, facial, fingerprint, or gait matching); sketching faces; reconstructing faces based on genetic information; monitoring social media; monitoring prisons; forensically analyzing criminal evidence; conducting forensic genetics; conducting cyber intrusions in the course of an investigation; conducting physical location-monitoring or tracking of individuals; or making determinations related to sentencing, parole, supervised release, probation, bail, pretrial release, or pretrial detention;
- Deciding or providing risk assessments related to immigration, asylum, or detention status; providing immigration-related risk assessments about individuals who intend to travel to, or have already entered, the US or its territories; determining individuals’ border access or access to federal immigration related services through biometrics or through monitoring social media and other online activity; monitoring individuals’ physical location for immigration and detention-related purposes; or forecasting the migration activity of individuals;
- Conducting biometric identification for one-to-many identification in publicly accessible spaces;
- Detecting or measuring emotions, thought, impairment, or deception in humans;
- Replicating a person’s likeness or voice without express consent;
- In education contexts, detecting student cheating or plagiarism; influencing admissions processes; monitoring students online or in virtual-reality; projecting student progress or outcomes; recommending disciplinary interventions; determining access to educational resources or programs; determining eligibility for student aid or federal education; or facilitating surveillance (whether online or in-person);
- Screening tenants; monitoring tenants in the context of public housing; providing valuations for homes; underwriting mortgages; or determining access to or terms of home insurance;
- Determining the terms or conditions of employment, including pre-employment screening, reasonable accommodation, pay or promotion, performance management, hiring or termination, or recommending disciplinary action; performing time-on-task tracking; or conducting workplace surveillance or automated personnel management;
- Carrying out the medically relevant functions of medical devices; providing medical diagnoses; determining medical treatments; providing medical or insurance health-risk assessments; providing drug-addiction risk assessments or determining access to medication; conducting risk assessments for suicide or other violence; detecting or preventing mental-health issues; flagging patients for interventions; allocating care in the context of public insurance; or controlling health-insurance costs and underwriting;
- Allocating loans; determining financial-system access; credit scoring; determining who is subject to a financial audit; making insurance determinations and risk assessments; determining interest rates; or determining financial penalties (e.g., garnishing wages or withholding tax returns);
- Making decisions regarding access to, eligibility for, or revocation of critical government resources or services; allowing or denying access – through biometrics or other means (e.g., signature matching) – to IT systems for accessing services for benefits; detecting fraudulent use or attempted use of government services; assigning penalties in the context of government benefits;
- Translating between languages for the purpose of official communication to an individual where the responses are legally binding; providing live language interpretation or translation, without a competent interpreter or translator present, for an interaction that directly informs an agency decision or action; or
- Providing recommendations, decisions, or risk assessments about adoption matching, child protective actions, recommending child custody, whether a parent or guardian is suitable to gain or retain custody of a child, or protective actions for senior citizens or disabled persons.
DHS said that as it develops its AI compliance strategy, it intends to build on the White House’s AI Roadmap, as well as the roadmap’s initiatives and lessons learned from related activities, “to create a strategy for identifying and removing barriers to the responsible use of AI and achieving enterprise-wide improvements in AI maturity.”
DHS also said it is removing barriers to the responsible use of AI, noting that it “has undertaken a coordinated, ongoing approach to addressing barriers to responsibly leveraging AI to advance the homeland security mission [which] includes establishment of the AI Task Force, policy for the use of facial recognition and facial capture, and policy on the use of commercial generative AI.”