A 4-step plan for building trust in AI. The fourth will be a hard pill to swallow

Having analyzed a European Commission white paper about regulating AI and biometrics, the World Economic Forum says businesses should act now to vet algorithms before governments start making rules.

Resistance to AI-related technology, Forum leaders say, “is fueled by the feeling that various AI systems have been deployed without having been properly vetted.”

The Forum is a non-governmental organization famous for its annual gabfest in Davos, Switzerland. Its proposed four-step vetting process came about after the EC distributed its February white paper recommending “a European approach to excellence and trust” for AI development and operation.

(See our analysis of the document here.)

The first step is to document the lineage of products and behaviors. It should read almost like a history of each AI product or service, and it should start from the beginning, explaining the organization’s aim in writing the algorithm.

Training data sets need to be included, according to the Forum, along with intended use scenarios, the results of safety and fairness tests, and performance characteristics. The same goes for records of system behavior in operation, which should be used to understand the predictions the machine learning models make.

The second step suggests organizations create a formal interpretation of the EC white paper’s specific requirements for their particular strategic development and operational needs. The interpretation should be created by an in-house cross-functional group.

The third step recommends that all products and services be assessed for compliance, using the organization’s interpretation of the EC white paper as the benchmark.

“An independent cross-functional team, consisting of risk and compliance officers, product managers, and data scientists, should perform an internal audit,” according to the Forum.

Perhaps the most controversial suggestion is the fourth step: “Report findings to relevant stakeholders.”

In other words, distribute the audit to regulators, product buyers, providers, consumer groups and civil society organizations. The goal is to be transparent in order to increase trust, without which AI will remain hobbled by fears and suspicions.
