A 4-step plan for building trust in AI. The fourth step will be a hard pill to swallow
Having analyzed a European Commission white paper about regulating AI and biometrics, the World Economic Forum says businesses should act now to vet algorithms before governments start making rules.
Resistance to AI-related technology, Forum leaders say, “is fueled by the feeling that various AI systems have been deployed without having been properly vetted.”
The Forum is a non-governmental organization famous for its annual gabfest in Davos, Switzerland. Its proposed four-step vetting process came about after the EC distributed its February white paper recommending “a European approach to excellence and trust” for AI development and operation.
(See our analysis of the document here.)
The first step is to document the lineage of products and behaviors. It should read almost like a history of each AI product or service, and it should start from the beginning, explaining the organization’s aim in writing the algorithm.
Training data sets need to be included, according to the Forum, as do intended use scenarios, the results of safety and fairness tests, and performance characteristics. Records of system behavior in operation should also be kept and used to understand the predictions that machine learning models make.
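The lineage documentation the Forum describes could be kept as a structured record rather than free-form prose. The sketch below is one illustrative way to do that; the field names and the `LineageRecord` class are assumptions for illustration, not a schema defined by the Forum or the EC white paper.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a product-lineage record covering the items the
# Forum lists: aim, training data, intended uses, test results, performance,
# and operational behavior. Field names are assumptions, not a standard.
@dataclass
class LineageRecord:
    product_name: str
    stated_aim: str                                # why the algorithm was written
    training_datasets: list[str]
    intended_use_scenarios: list[str]
    safety_test_results: dict[str, str]
    fairness_test_results: dict[str, str]
    performance_characteristics: dict[str, float]
    operational_behavior_log: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """True only if every required documentation section is filled in."""
        return all([
            self.stated_aim,
            self.training_datasets,
            self.intended_use_scenarios,
            self.safety_test_results,
            self.fairness_test_results,
            self.performance_characteristics,
        ])

record = LineageRecord(
    product_name="loan-scoring-v2",
    stated_aim="Rank loan applications by predicted repayment likelihood",
    training_datasets=["applications_2015_2019"],
    intended_use_scenarios=["pre-screening with human review"],
    safety_test_results={"adversarial_inputs": "passed"},
    fairness_test_results={"demographic_parity": "within threshold"},
    performance_characteristics={"auc": 0.87},
)
print(record.is_complete())  # True
```

A record like this starts "from the beginning" as the Forum suggests, and the operational log can be appended to over the product's life.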
The second step suggests organizations create a formal interpretation of the EC white paper’s specific requirements for their particular strategic development and operational needs. The interpretation should be created by an in-house cross-functional group.
The third step is to assess all products and services for compliance, using the organization’s interpretation of the EC white paper as the yardstick.
“An independent cross-functional team, consisting of risk and compliance officers, product managers, and data scientists, should perform an internal audit,” according to the Forum.
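Such an internal audit amounts to checking each product against the requirements the cross-functional group extracted from the white paper. The sketch below shows one minimal way to score a product; the requirement names and the `audit` function are illustrative assumptions, not language from the Forum or the EC.

```python
# Illustrative audit sketch: requirement names below are assumptions standing
# in for whatever the organization's interpretation of the EC white paper lists.
REQUIREMENTS = [
    "training_data_documented",
    "intended_use_documented",
    "fairness_tested",
    "human_oversight_defined",
]

def audit(product_evidence: dict[str, bool]) -> dict:
    """Score a product's evidence against each requirement.

    Returns per-requirement findings plus an overall verdict; anything
    missing from the evidence counts as a failure.
    """
    findings = {req: product_evidence.get(req, False) for req in REQUIREMENTS}
    return {"findings": findings, "compliant": all(findings.values())}

report = audit({
    "training_data_documented": True,
    "intended_use_documented": True,
    "fairness_tested": False,
    "human_oversight_defined": True,
})
print(report["compliant"])  # False
```

Defaulting missing evidence to a failure mirrors the audit mindset: a claim that cannot be documented should not pass.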
Perhaps the most controversial suggestion is the fourth: “Report findings to relevant stakeholders.”
In other words, distribute the audit to regulators, product buyers, providers, consumer groups and civil society organizations. The goal is to be transparent in order to increase trust, without which AI will remain hobbled by fears and suspicions.