EU restricts law enforcement use of public biometrics with new AI rules
The European Commission has released new rules for biometrics and other artificial intelligence (AI) applications deemed ‘high-risk,’ setting fines of up to 6 percent of global turnover for companies found in violation, Reuters reports.
The increased fine replaces a previously proposed 4 percent maximum penalty, and the document notes that fines should be particularly strict to deter technologies that could enable authoritarian-style government control. On the eve of the announcement, 40 members of the European Parliament called for a ban on facial recognition and other biometrics in public spaces.
The use of real-time remote biometric identification systems by law enforcement will instead be limited to preventing terrorist attacks, finding missing children, and public security emergencies, Bloomberg reports. High-risk AI systems will also be required to use high-quality datasets, ensure traceability and human oversight, and must be assessed for conformity to the rules prior to deployment.
The new rules were announced by European technology chief Margrethe Vestager, and also set fines of up to 2 percent of turnover for providing incorrect or misleading information to authorities.
The document, which was first reported by French online news publication Contexte, acknowledged the benefits of facial recognition technologies in helping to find missing children and terrorists. It also clarified the EU’s previous position on mass surveillance tools and public biometrics, stating that these technologies should remain under close government scrutiny.
US Federal Trade Commission willing to regulate AI fairness
In an eventful week for the regulation of AI applications around the world, the FTC has echoed some of the EU’s concerns and published a blog post on the fair development and deployment of the technology.
The FTC signaled earlier this year that it considers face biometrics dangerous when it issued a proposed order to Paravision relating to the company’s legacy practices.
According to the post by FTC attorney Elisa Jillson, while new AI technologies can improve medicine, finance, business operations, media, and more, they may also break existing consumer protection laws.
“For example, COVID-19 prediction models can help health systems combat the virus through efficient allocation of ICU beds, ventilators, and other resources,” the post reads. “But as a recent study in the Journal of the American Medical Informatics Association suggests, if those models use data that reflect existing racial bias in healthcare delivery, AI that was meant to benefit all patients may worsen healthcare disparities for people of color.”
To tackle these biases, Jillson urged technology companies to provide a solid foundation for their AI models in the form of appropriate datasets. Discriminatory outcomes should also be considered when developing AI, and companies should be honest and transparent in claims about what their algorithms can deliver.
Finally, companies should hold themselves accountable for their algorithms’ performance. If they do not, Jillson warned, the FTC may have to intervene.
“For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA,” she explained. “Whether caused by a biased algorithm or by human misconduct of the more prosaic variety, the FTC takes allegations of credit discrimination very seriously, as its recent action against Bronx Honda demonstrates,” Jillson concluded.
Biometrics expert and Maryland Test Facility Principal Data Scientist John Howard called the memo a “significant moment for the regulation of AI in the U.S.” in a LinkedIn post, citing its emphasis on testing and outcomes over intent, and the FTC’s willingness to regulate fairness in algorithms.