Debate over AI risks heats up on both sides of the Atlantic
As the European Union edges towards its spring deadline for finalizing the Artificial Intelligence Act and the U.S. Congress examines potential risks stemming from AI, debates on AI risks are heating up on both sides of the Atlantic.
While some highlight incompatible regulations between the two trade partners, others are debating the impact of AI on civil rights. The past week has seen warnings from civil society groups hoping to influence European AI legislation, including its provisions on the use and export of biometric identification.
Joining them from across the Atlantic, a number of U.S. federal agencies issued a reminder that they have the authority to tackle harms caused by AI bias and that they plan to use it.
Brookings warns about EU-US regulatory misalignment on AI risks
Last December, Brussels and Washington agreed to set up a roadmap for defining common technical standards, terminology and evaluation of AI as part of the EU-U.S. Trade and Technology Council (TTC). Ultimately, companies on both sides of the Atlantic should be able to comply with both regulatory regimes using a single set of tools.
But the two are still far apart on how to regulate the risks of AI technology, Brookings warned in a report issued this week.
“Regarding many specific AI applications, especially those related to socioeconomic processes and online platforms, the EU and U.S. are on a path to significant misalignment,” the report says.
The report details differences between EU and U.S. approaches to AI risk management and lays out policy recommendations. While the EU places its faith in AI legislation, the U.S. relies more on federal agencies and non-regulatory infrastructure, such as NIST's new AI Risk Management Framework. And while both strategies are aligned on concepts of risk, trustworthy AI and endorsement of international standards, the specifics of these AI risk management regimes have more differences than similarities, the report said.
Brookings recommends a number of measures, including deepening knowledge sharing on a number of levels and joint work on recommender systems, network algorithms and online platforms. The U.S. should execute federal agency AI regulatory plans while the EU should create more flexibility in the sectoral implementation of the EU AI Act, the report said.
NGOs reiterate call for bans on remote biometric identification
Groups gathered under human rights organization ARTICLE 19, including European Digital Rights, Access Now, Algorithm Watch, Amnesty International and others, called on Members of the European Parliament (MEPs) to prohibit AI systems that pose an unacceptable risk to fundamental rights.
The group proposed a full ban on all types of remote biometric identification, predictive policing systems in law enforcement, emotion recognition systems, and biometric categorization systems that use sensitive attributes or are deployed in public spaces, as well as other AI applications.
ARTICLE 19 also called on MEPs to prioritize human rights while working on the EU AI Act. This includes ensuring a right to lodge complaints when people's rights are violated by an AI system, obliging creators of high-risk AI systems to conduct and publish a fundamental rights impact assessment before deployment, and more.
In a separate letter, Amnesty International called on EU agencies working on the AI Act to address exports of European-made AI technologies that are banned in the EU.
“Firstly, AI systems that are prohibited in Europe should not be allowed to be exported abroad,” said Amnesty International Secretary General Agnès Callamard. “Secondly, permitted high-risk technologies that are exported must meet the same regulatory requirements as high-risk technologies sold in the EU.”
US federal agencies say they already have laws to counter AI harms
Although the U.S. lacks comprehensive legislation similar to the EU AI Act, four U.S. federal agencies issued a warning this week that existing laws can and will be used to take action against companies abusing or misusing AI.
In a joint announcement, the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), the Equal Employment Opportunity Commission (EEOC) and the Department of Justice said that they aim to tackle the harms posed by AI, including bias.
The FTC said it aims to hold companies accountable for their claims about what their AI technology can do, as deceptive marketing has long been part of the agency's domain. The Department of Justice's Civil Rights Division noted that it aims to hold companies accountable when they use artificial intelligence in ways that prove discriminatory, while the CFPB is already looking into housing discrimination resulting from bias in lending or home-valuation algorithms. Finally, the EEOC flagged the use of AI in hiring and recruitment.
The four U.S. agencies are not the only ones looking into AI risks. The U.S. Department of Homeland Security (DHS) has also promised to address the effects of AI and established the Artificial Intelligence Task Force last week.
The AI Task Force will look at different applications of AI that can be used in DHS' work, as well as how the technology will influence the threat landscape and augment the tools used to counter threats.