US Justice developing AI use guidelines for law enforcement, civil rights

The US Department of Justice (DOJ) continues to advance draft guidelines for the use of AI and biometric tools such as facial recognition by federal, state, and local law enforcement agencies. The guidelines aim to protect privacy and civil rights while also putting in place policies for combating crime facilitated by AI.
Speaking at a Department of Agriculture cybersecurity summit last week, Michelle Ramsden, senior counsel in DOJ’s Office of Privacy and Civil Liberties, said DOJ intends to soon make public its proposals for law enforcement’s use of AI-enhanced technologies.
Ramsden said DOJ has completed a draft of AI conformity guidelines and “has initiated consultations with external experts on AI governance” to ensure that the use of AI is responsible and adequately addresses the risks to privacy, ethics, and security. She said the draft will be published “as soon as possible.”
Jonathan Mayer was appointed in February as DOJ’s first-ever chief AI officer to shepherd what Attorney General Merrick Garland said at the time was DOJ’s duty to “keep pace with rapidly evolving scientific and technological developments in order to fulfill our mission to uphold the rule of law, keep our country safe, and protect civil rights.”
Ramsden said DOJ’s emerging technologies board – created last year – is shepherding the department’s new guidelines for law enforcement.
Also in February, Deputy Attorney General Lisa Monaco directed federal prosecutors to begin seeking harsher penalties for criminals who use AI in their crimes.
“Going forward, where Department of Justice prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI, they will,” Monaco said.
In September, DOJ released revised guidance on corporate compliance programs that use AI, intended to help federal prosecutors better assess these systems.
The updated guidance was issued in response to Monaco’s directive and “is meant to assist prosecutors in making informed decisions as to whether, and to what extent, the corporation’s compliance program was effective at the time of the offense, and is effective at the time of a charging decision or resolution, for purposes of determining the appropriate form of any resolution or prosecution; monetary penalty, if any; and compliance obligations contained in any corporate criminal resolution (e.g., monitorship or reporting obligations).”
Monaco said DOJ has “just scratched the surface of how AI can strengthen the Justice Department’s work.”
Monaco’s directive is part of DOJ’s broader initiative to clamp down on the misuse of AI. It sets much higher accountability standards for companies regarding how to properly integrate AI into their compliance programs, essentially providing a framework for businesses to ensure their AI systems are ethically and effectively designed to mitigate potential legal risks.
On Wednesday, DOJ’s Civil Rights Division held the fourth in a series of meetings with the heads of federal agencies’ civil rights offices and senior government officials to foster AI and civil rights coordination in response to President Joe Biden’s October 2023 Executive Order on the safe, secure, and trustworthy development and use of AI.
The Executive Order tasked DOJ’s Civil Rights Division with coordinating federal agencies to use their authorities to prevent and address unlawful discrimination and other harms that may result from the use of AI in programs and benefits, while preserving the potential social, medical, and other advances AI may spur.
The meeting highlighted a Justice Department symposium on AI on October 2 sponsored by the Center for Strategic and International Studies that focused on combating technology-enabled crime – including crime facilitated by AI.
The Civil Rights Division’s recently appointed Chief Technologist, Dr. Laura Edelson, led a discussion of the department’s role in negotiating the first international agreement providing a shared baseline for using AI in a way that is consistent with respect for human rights, democracy, and the rule of law.
To strengthen the Civil Rights Division’s efforts to ensure equity in AI, Edelson said she is helping to systematically expand the division’s AI enforcement capacity and to increase the efficiency of its operations by harnessing technological modernization.
Edelson, along with DOJ technologists and researchers, discussed the role of auditing in preventing, investigating, monitoring, and remedying algorithmic bias. DOJ said, “auditing is used to verify that algorithms generate accurate results, as opposed to reflecting historical bias against protected classes.”
DOJ said Friday that “all participants pledged to continue collaboration to protect the American public against any harm that might result from the increased use and reliance on AI, algorithms, and other advanced technologies. The agencies also agreed to partner on external stakeholder engagement around their collective efforts to advance equity and civil rights in AI.”
The heads of the agencies represented at the meeting discussed their efforts to safeguard civil rights through robust enforcement, policy initiatives, rulemaking, and ongoing education and outreach. These accomplishments include:
- A Federal Trade Commission report finding that large social media and video streaming companies engaged in vast surveillance of their users, including kids and teens, with insufficient privacy controls;
- An Equal Employment Opportunity Commission report highlighting barriers to equal opportunity in the high tech workforce and sector and calling for concerted efforts to address discriminatory barriers;
- A Department of Labor-sponsored resource to help employers consider disability inclusion and accessibility in AI hiring technologies; and
- A Department of Education guide that reminds developers who design for education with AI that they share responsibility with educators for advancing equity and protecting students’ civil rights.
For more information, see the Civil Rights Division’s webpage, which centralizes content related to the division’s work on AI and civil rights.
It was in April that DOJ announced that five new cabinet-level federal agencies had “joined a pledge to uphold America’s commitment to core principles of fairness, equality and justice as new technologies like artificial intelligence become more common in daily life.”
“Federal agencies are sending a clear message: we will use our collective authority and power to protect individual rights in the wake of increased reliance on artificial intelligence in various aspects of American life,” Kristen Clarke, Assistant Attorney General for Civil Rights, said at the time. “As social media platforms, banks, landlords, employers and other businesses choose to rely on artificial intelligence, algorithms and automated systems to conduct business, we stand ready to hold accountable those entities that fail to address the unfair and discriminatory outcomes that may result. We are mounting a whole-of-government approach to enforcing civil rights and related laws when it comes to automated systems, including AI.”
The Justice Department published its first department-wide Artificial Intelligence Strategy in December 2020.
This post was updated at 9:16am Eastern on October 16, 2024 to clarify the roles of Jonathan Mayer and Michelle Ramsden.