Facial biometrics training dataset leads to BIPA lawsuits against Amazon, Alphabet and Microsoft
Despite being conceived as a way to make biometric facial recognition more equitable, IBM’s Diversity in Faces dataset is continuing to cause headaches within the industry.
Three new proposed class-action lawsuits filed in federal court accuse Amazon, Alphabet and Microsoft of violating Illinois’ Biometric Information Privacy Act (BIPA) by analyzing biometric data from images of people without obtaining their permission, CNET reports, even as the companies attempted to train demographic disparity out of their systems.
When the dataset was launched with a million publicly available images, IBM Fellow and Manager of AI Tech Dr. John Smith told Biometric Update that the move represented IBM putting its prioritization of fairness in AI into action. Some of those publicly available images are now alleged to have come from Illinois residents, whose likeness cannot be processed by companies using biometrics without informed consent.
A pair of Illinois residents claim they clearly identified themselves as residents of Illinois, but their images were included in the dataset anyway. The lawsuits have been filed in federal courts in California and Washington, where the companies are based.
IBM pulled its general-purpose facial recognition service from the market last month, and questioned the appropriateness of the technology’s use by law enforcement. IBM is not known to have been a major provider of facial recognition to law enforcement agencies in the U.S. or elsewhere. Amazon joined the self-imposed moratorium days later.
Clearview accused of GDPR violation
Clearview AI has also been accused by privacy app Jumbo’s Chief Privacy and Strategy Officer Zoé Vilain of failing to meet its obligations under Europe’s General Data Protection Regulation (GDPR), according to Business Insider.
Vilain claims Clearview did not cooperate with a formal subject access request she made after the company told her it holds images of her in its database. Vilain says she provided her first and last name, email and postal address, and IP address to the company, and Jumbo followed with a formal notice to Clearview that it is legally obliged to allow Vilain to access the data, and delete it if she chooses. Clearview returned a document with three images, one of which Jumbo claims does not depict Vilain.
If found in violation of GDPR, Clearview could be fined up to 20 million euros (US$23 million) or 4 percent of its global annual revenue, whichever is higher.
In another BIPA case, this one involving Clearview AI, Law360 writes that Wynndalco Enterprises’ insurer Citizens says its client has failed to show a covered loss, and that its business liability policy excludes coverage for statutory violations and for distribution of materials prior to the policy period.
Proposed class representatives accuse Wynndalco of intentionally selling access to consumers’ biometric data after Clearview authorized it to license or sell access to its app and database to customers based in Illinois. Wynndalco is seeking defense and indemnification from Citizens, which responded that Wynndalco failed to show “bodily harm” or “property damage,” and that its policy in any case excludes coverage of the civil suit allegations.
Wynndalco is accused of selling access to Clearview’s technology from January 1, 2019 to January 17, 2020. Its insurance policy became effective October 2, 2019, according to Citizens.
The facial recognition app-maker is also facing a new legal challenge in Canada and scrutiny in Australia.