NIST listening to AI community on a proposed risk management framework
The U.S. government wants everyone to do better at incorporating trust into their AI products’ design, development, use and evaluation. Companies that have ignored trustworthiness have run into major problems.
NIST wants to create a voluntary, collaborative document that the AI community accepts as realistic but still aspirational. In no small measure, NIST realizes that AI will remain an esoteric or even alien concept to most people unless it is perceived as transparent and standards-based.
Comments on the framework can be sent to AIframework@nist.gov by September 29. A workshop referencing the feedback is scheduled October 18-19.
There is a framework playbook, too, and it also remains a draft.
The organization is also looking for candidates to write The History of AI in the United States, Part I.
Officials at NIST want to see a show of hands of research contractors capable of painting a detailed picture of domestic development of artificial intelligence.
No contracts will necessarily result from this search for sources; NIST is acting here as a legislatively mandated intermediary for the National Artificial Intelligence Advisory Committee.
If a contract is ordered, NIST would be looking for reports on “topical areas of concern,” and technical and analytical support for the committee.
The 13 areas the contractor would delve into include: How competitive is the country in AI? What is the state of AI science, and how close is artificial general intelligence? Is the committee helping to maintain the U.S. position? Are federal laws adequately addressing ethical and safety issues?
No deadline is mentioned.