AI risk management evolves. Now it has to interoperate
Consensus about AI risk management is growing among the governments of developed economies.
That does not mean officials anywhere feel AI can ever be operated without risks. Far from it.
But managing its risks is fundamental to freeing AI to meet its potential, according to a large panel of speakers tasked by government organizations and foundations with considering ways to make the algorithms trustworthy.
All were speaking the same language, figuratively and literally. Only a few years ago, there was disagreement about whether managing the risks was necessary or even desirable. One of the confab's most interesting (and, unfortunately, rushed) topics was the push for global interoperability across all aspects of managing AI risk.
Sponsored by the Council of Europe, a group that has studied ways to support democracy and the rule of law since the end of World War II, the webinar is dense with information and worth investing an hour’s attention.
Panel members came from the Treasury Board of Canada, the University of Tokyo, the European Commission, the Alan Turing Institute, the U.S. National Institute of Standards and Technology, the United Nations and the Organization for Economic Co-operation and Development, or OECD.
“We’re very much going in the same direction,” said Karine Perset, a leader of the OECD’s AI policy efforts. But she noted that there are many regional, national and international risk-management initiatives.
Interoperability, from terms and definitions to policy approaches, is critical, Perset said. What is more, the many frameworks governments sign on to will have to be implemented by businesses, many of which know little about risk management in general, much less as it applies to AI.
The task now, she said, is to find the commonalities that let all the relevant players, indeed, minimize risk.
AI | interoperability | legislation | regulation | responsible AI