Liability of AI applications under scrutiny in EU, Canada

Brookings research suggests better bias analysis methods needed

Artificial intelligence (AI) applications, particularly those focused on biometric data gathering, have recently come under another round of scrutiny both in Europe and Canada.

The European Commission proposed the AI Liability Directive last week, a set of rules designed to aid redress for people whose privacy was harmed by AI-powered and digital devices like self-driving cars, voice assistants and drones.

According to BBC reporting, the Directive may operate alongside the EU’s proposed AI Act if successfully turned into law, introducing a “presumption of causality” for those claiming injuries caused by AI-enabled products.

In other words, individuals harmed by these systems would not have to explain technically how the AI works, but merely show, in practical terms, that it harmed them.

“The objective of this proposal is to promote the rollout of trustworthy AI to harvest its full benefits for the internal market. It does so by ensuring victims of damage caused by AI obtain equivalent protection to victims of damage caused by products in general,” reads the text of the Directive.

“It also reduces legal uncertainty of businesses developing or using AI regarding their possible exposure to liability and prevents the emergence of fragmented AI-specific adaptations of national civil liability rules.”

Canada is also following suit on AI regulation, with the federal government introducing Bill C-27 in June, which includes the Artificial Intelligence and Data Act (AIDA). However, not everyone agrees the bill would represent a substantial step forward for privacy.

Case in point: a recent analysis by legal expert Richard Stobbe dissects the act, arguing that while the “regulated activity” mentioned in the bill would undoubtedly apply to banks and airlines, its practical scope would be broader than that.

“That is a purposely broad definition which is designed to catch both the companies that use these systems and providers of such systems, as well as data processors who deploy AI systems in the course of data processing, where such systems are used in the course of international or interprovincial trade and commerce,” Stobbe writes.

The terms “artificial intelligence system,” “high-impact system,” and “harm” are also not sufficiently defined, according to the legal expert.

“I can assure you that lawyers will be arguing for years over the nuances of these various terms,” Stobbe says in an editorial published by Mondaq.

“All of this could be clarified in future drafts of Bill C-27, which would make it easier for lawyers to advise their clients when navigating the complex legal obligations in AIDA. Stay tuned. This law has some maturing to do, and much detail is left to the regulations (which are not yet drafted).”

Dispelling the vagueness in the legislation and determining harm for liability purposes, however, are tasks that come with their own challenges.

University of Washington Assistant Professor and Brookings Nonresident Fellow Aylin Caliskan and Carnegie Mellon University Ph.D. Candidate Ryan Steed are concerned that risks like bias in audio and visual AI systems are often discovered only after the systems have been widely deployed.

“Our research shows that bias is not only reflected in the patterns of language but also in the image datasets used to train computer vision models,” the duo write in a Brookings article.

“As a result, widely used computer vision models such as iGPT and DALL-E 2 generate new explicit and implicit characterizations and stereotypes that perpetuate existing biases about social groups, which further shape human cognition.”

In a study published last year, Caliskan and Steed prompted iGPT to complete an image given a woman’s face.

“Fifty-two percent of the autocompleted images had bikinis or low-cut tops. In comparison, faces of men were autocompleted with suits or career-related attire 42 percent of the time.”

Nor can such results necessarily be anticipated from previously documented bias tendencies, they say.

“The biases at the intersection of race and gender are aligned with theories on intersectionality, reflecting emergent biases not explained by the sum of biases towards either race or gender identity alone.”

Better methods for measuring and analyzing AI bias are needed, the pair writes.
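One family of measurement methods the researchers have worked with adapts word embedding association tests to image representations. As a rough illustration only, the sketch below computes a WEAT-style effect size over embedding vectors; the arrays here are random placeholders standing in for model embeddings, not the researchers’ actual data or code.

```python
import numpy as np

def association(w, A, B):
    """s(w, A, B): mean cosine similarity of w to attribute set A minus to attribute set B."""
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size d for target sets X, Y and attribute sets A, B."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled

# Placeholder embeddings; in a real audit these would come from the model under test,
# e.g. image embeddings of faces of women (X) and men (Y), and exemplars of
# "career" (A) versus "appearance" (B) attributes.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 512))
Y = rng.normal(size=(8, 512))
A = rng.normal(size=(8, 512))
B = rng.normal(size=(8, 512))

print(f"effect size d = {weat_effect_size(X, Y, A, B):.3f}")
```

With random placeholder vectors the effect size hovers near zero; a large positive or negative value on real embeddings would indicate a measurable association between the target groups and the attribute categories.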

Further efforts to investigate AI bias are also underway at SMU (Southern Methodist University), which recently announced a new lab to study how facial recognition and other AI systems perform across diverse user populations.
