
EU restricts law enforcement use of public biometrics with new AI rules

 


The European Commission has released new rules for biometrics and other artificial intelligence (AI) applications deemed ‘high-risk,’ setting fines of up to 6 percent of global turnover for companies found in violation, Reuters reports.

The increased fine replaces a previously proposed 4 percent maximum penalty, and the proposal states that fines should be particularly strict to deter technologies that could enable authoritarian government control. On the eve of the announcement, 40 members of the European Parliament called for a ban on facial recognition and other biometrics in public spaces.

The use of real-time remote biometric identification systems by law enforcement will instead be limited to preventing terrorist attacks, finding missing children, and public security emergencies, Bloomberg reports. High-risk AI systems will also be required to use high-quality datasets, ensure traceability and human oversight, and must be assessed for conformity to the rules prior to deployment.

The new rules were announced by European technology chief Margrethe Vestager, and also include fines of up to 2 percent of turnover for providing incorrect or misleading information to authorities.

The document, which was first reported by French online news publication Contexte, acknowledges the benefits of facial recognition technologies for finding missing children and terrorists. It also clarifies the EU’s earlier position on mass surveillance tools and public biometrics, stating that these technologies should remain under close scrutiny by governments.

US Federal Trade Commission willing to regulate AI fairness

In an eventful week for the regulation of AI applications around the world, the FTC has echoed some of the EU’s concerns and published a blog post on the fair development and deployment of the technology.

The FTC declared face biometrics dangerous earlier this year while issuing a proposed order against Paravision relating to legacy practices.

According to the post by FTC attorney Elisa Jillson, while new AI technologies can improve medicine, finance, business operations, media, and more, they may also break existing consumer protection laws.

“For example, COVID-19 prediction models can help health systems combat the virus through efficient allocation of ICU beds, ventilators, and other resources,” the post reads. “But as a recent study in the Journal of the American Medical Informatics Association suggests, if those models use data that reflect existing racial bias in healthcare delivery, AI that was meant to benefit all patients may worsen healthcare disparities for people of color.”

To tackle these biases, Jillson urged technology companies to provide a solid foundation for their AI models in the form of appropriate datasets. Discriminatory outcomes should also be considered when developing AI, and companies should be honest and transparent in claims about what their algorithms can deliver.

Finally, companies should hold themselves accountable for their algorithms’ performance. If they do not, Jillson said, the FTC may have to intervene.

“For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA,” she explained. “Whether caused by a biased algorithm or by human misconduct of the more prosaic variety, the FTC takes allegations of credit discrimination very seriously, as its recent action against Bronx Honda demonstrates,” Jillson concluded.

Biometrics expert and Maryland Test Facility Principal Data Scientist John Howard called the memo a “significant moment for the regulation of AI in the U.S.” in a LinkedIn post, citing its emphasis on testing and outcomes over intent, and the FTC’s willingness to regulate fairness in algorithms.

