AI regulation is evolving differently on each side of the North Atlantic
Both houses of the U.S. Congress now have significant legislation designed specifically to rein in AI created by contractors for use by the federal government.
The news arrives as an influential non-governmental research institute has issued a major critique of the European Commission’s proposed AI Act, citing problems and suggesting solutions.
A House of Representatives bill would create new rules for how the federal government buys artificial intelligence, presumably including biometric applications. The Senate is already deliberating its version of the legislation.
Both chambers must agree on a single bill before it can be sent to the president for signature.
The so-called Government Ownership and Oversight of Data (or GOOD) AI Act is seen by backers as an update of the two-year-old AI in Government Act.
The White House’s Office of Management and Budget would create an “AI hygiene” working group to recommend rules, based on the GOOD Act, for buying trustworthy AI. The target code would be standalone, not integrated into larger systems such as word-processing applications.
Members of the working group, all of whom would be leaders of related intragovernmental functions, would be charged with getting feedback from federal and non-governmental experts in civil rights and liberties, and privacy.
Group members also would have a year to recommend rules governing the acquisition of training data and algorithms.
The rules would address methods for protecting AI tools against misuse, degradation, and unauthorized alteration, as well as against shutdown of the software by an unauthorized entity.
The same week House members introduced their bill, the Ada Lovelace Institute, in London, published a lengthy report suggesting changes to the European Commission’s AI Act, introduced last April. The EU AI Act places special status on biometrics as the only application with its own specific rules.
A shorter policy briefing ticks off 18 recommendations that the researchers feel would strengthen the AI Act.
It is unfair to compare the U.S. and EC efforts. Washington is focused on only one isolated, albeit important, sector of AI regulation in the United States. The legislation makes no mention of creating an environment that balances civil rights with the fostering of a massive, new industry.
The European Commission is addressing a far wider spectrum and specifically addresses the need to balance concerns with commerce.
The AI Act would impact every organization doing business in or selling to European entities. It is considered the most comprehensive AI regulation in the world and is being discussed as a global template.
Some of the Lovelace recommendations are small in scale but practical: Stop referring to the people working with AI as product users; call them deployers instead.
Another advises creating “clear, judicially reviewable criteria” for risk categories and placing each AI system in one.
Developers and deployers alike should fall under the umbrella of fundamental rights protection obligations, even in the case of retrospective identification of a person.