US organizations say they’ll build a firmament for AI trust
The United States’ technology standards body has published the first version of its AI risk management framework playbook. It is part of the agency’s effort to present a coherent message about creating trustworthy AI.
The National Institute of Standards and Technology has introduced what it calls the Trustworthy and Responsible AI Resource Center.
Mozilla, the nonprofit creator of the Firefox browser, is similarly inspired. Executives say they have budgeted $30 million for a venture called Mozilla.ai to build trustworthy, open-source AI.
According to a NIST statement, officials intend to populate the center with “foundational content, technical documents and toolkits.” It will point AI builders, regulators and researchers to resources including metrics, datasets and standards.
One of the dangers of the AI community today is that it comprises everything from comparatively long-established corporate teams to individuals experimenting with algorithms. The resource center by itself will not create a community with shared aims in terms of safety, but it presents a new, single point of reference and communication.
The framework itself is voluntary. It is designed as a guide for implementing “key building blocks” for trust in AI.
The resource center will be a living collection of documents, including future updates to the framework.
AI | Mozilla | NIST | responsible AI | standards