
World Economic Forum, EU push for human-centered AI regulatory framework to build trust

 

Given current concerns about the ethics, privacy and accessibility of facial biometrics, the World Economic Forum believes trust in the technology could be built by defining its responsible use, establishing systems to support product teams, and ensuring compliance through self-assessments and third-party audits.

While privacy and human rights activists, warning of misuse, are calling for a complete ban, some tech companies such as IBM, Microsoft and Amazon have already suspended development and stopped selling biometric facial recognition technology to law enforcement.

WEF writes that “human-centered” ethical and responsible design standards could build trust by preventing bias and misuse. The organization’s AI team has developed a responsible development framework for engineers and policymakers.

The growing interest in touchless and passwordless technology, especially due to COVID-19, could encourage companies to develop safe and secure ways to verify identities by using “Principles for Action” focused on privacy, risk assessment, proportional use of technology, accountability, end-user consent, accessibility for impaired people, and adaptability.

To make systems responsible by design, product teams not only have to work with the platform provider and technology user, but also consider the justification for using facial recognition, a data plan suited to users' characteristics, bias risk mitigation and ways to keep users informed.

To ensure the Principles for Action are followed, self-assessment can help identify weak spots and verify that systems meet standards by comparing results. Companies can build transparency and prevent mistrust by assigning an independent third party to conduct a compliance audit.
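As a rough illustration of the kind of automated self-assessment described above, the sketch below compares per-group face-matching accuracy against a compliance threshold and flags weak spots for review. The group names, scores, and the 0.95 threshold are hypothetical assumptions for illustration, not figures from the WEF framework.

```python
# Hypothetical self-assessment sketch: flag demographic groups whose
# evaluation accuracy falls below an assumed compliance threshold.
# All names and numbers here are illustrative, not WEF-specified.

COMPLIANCE_THRESHOLD = 0.95  # assumed minimum true-match rate


def find_weak_spots(results, threshold=COMPLIANCE_THRESHOLD):
    """Return the sorted list of groups scoring below the standard."""
    return sorted(group for group, score in results.items() if score < threshold)


if __name__ == "__main__":
    # Illustrative per-group evaluation results from a test dataset
    results = {"group_a": 0.99, "group_b": 0.97, "group_c": 0.91}
    print(find_weak_spots(results))  # groups needing bias mitigation
```

A real assessment would draw these scores from evaluations on representative test data, and an independent auditor could rerun the same comparison to confirm the reported results.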

The World Economic Forum points out that lessons learned in the accounting industry can readily be applied to building trust in facial recognition. Governments will have to play their part and adopt bills that encourage "sustainable regulation" aligned with international standards.

EU wants regulatory framework on “high-risk” AI

According to Science Business, 42 percent of respondents in a recent European Commission consultation believe a regulatory framework on "high-risk" AI should be introduced in Europe. High-risk AI applications range from autonomous vehicles to healthcare applications and biometric facial recognition.

The overview covers a total of 1,215 responses reflecting mixed views on how the technology should be approached, specifically whether new legislation is needed or a revision of existing rules would suffice. Only 3 percent feel existing legislation is sufficient.

“It is interesting to note that respondents from industry and business were more likely to agree with limiting new compulsory requirements to high-risk applications [by] a percentage of 54.6 per cent,” the commission noted.

Earlier this year, the EU published a white paper on Artificial Intelligence to establish global standards for technological advancement and AI technology. During the consultation that followed, a high number of companies acknowledged that AI may be "breaching fundamental rights," which could lead to discrimination.

“Ninety percent and 87 percent of respondents [respectively] find these concerns important or very important,” the commission said.

The companies that participated are from the EU, India, China, Japan, Syria, Iraq, Brazil, Mexico, Canada, the US and the UK. A more detailed report will be released.
