WPF calls for NIST and OECD to take over standard-setting for AI governance
In recent years, governments across the globe have begun building AI governance tools to measure and improve artificial intelligence systems. Organizations such as the World Privacy Forum (WPF) say that now is the time to establish evaluation systems for these tools, and are calling on bodies such as the National Institute of Standards and Technology (NIST) and the OECD to take the lead.
In a report published on Friday, the non-profit public interest research group argues that establishing international standards, such as quality assurance processes, can make the evaluation of AI governance tools more transparent. The group urged NIST to develop recommendations for creating, evaluating, and using AI governance tools.
The OECD, on the other hand, could play a role by creating a definitive best-practice framework for these tools. WPF researchers participate in the AI Expert Groups at the OECD AI Policy Observatory.
The organization highlights "the crucial role of NIST and the OECD in convening stakeholders and developing an evaluative environment and multistakeholder consensus procedures for high-quality AI governance tools and catalogs" in its report, titled Risky Analysis: Assessing and Improving AI Governance Tools.
The report analyzed AI governance tools, including guidance documents, self-assessment questionnaires, process frameworks, technical frameworks, technical code, and software created in Africa, Asia, North America, Europe, South America, Australia and New Zealand. Organizations implementing digital identity and biometrics use these tools to detect and mitigate bias and to assess other aspects of their systems and components.
The organization says that it is difficult to give an exact figure for how many of these tools exist. The research analyzed more than 30 AI governance tools across 13 jurisdictions. The majority of the tools reviewed in the report focus on two goals: fairness (avoiding bias and discrimination in AI-based decision-making) and the explainability of those systems.
Many of them, however, were found to use measurement methods for AI systems that are unsuitable, applied out of context, or used "off-label."
“More than 38 percent of AI governance tools reviewed in this report either mention, recommend, or incorporate at least one of three measures shown in scholarly literature to be problematic,” the report states.
WPF says that establishing evaluation systems will be crucial to creating a healthier AI ecosystem. Another approach that could help achieve this is measurement modeling, a structured method for illuminating gaps between the actual results of measurement systems and policy goals, the report concludes.
The White House has already signaled its desire to give NIST more responsibility for AI governance, and the OECD issued a set of recommendations for digital identity governance earlier this year.