UK advisory firm launches AI and biometrics risk assessment tool
Wondering how to assess the risks of your artificial intelligence or biometrics system? The Centre for Data Ethics and Innovation (CDEI), a UK government expert body working on making AI and data use safe, has presented a case study of a new assessment service for developers.
The Anekanta AI Risk Intelligence System is based on a questionnaire that can be used to assess AI risks, including for systems that feature biometric data.
The questionnaire contains more than 150 questions on transparency, explainability and the level of autonomy of the AI system. It also examines the origin of the inputs, expected outputs and the impacts of the system.
The questionnaire was produced by AI consultancy company Anekanta AI, which offers assessments and recommendations for mitigating risks. The company is planning to launch the questionnaire as an online service.
“The system and its outputs may readily be aligned with the UK’s AI Regulation white paper, GDPR and the pending EU AI Act,” Anekanta AI explains.
The UK has opted for a light-touch approach to regulating AI and biometrics, prioritizing a framework for addressing risks over new legislation. Part of that framework is “AI assurance techniques,” and the government has published case studies of 28 such options from the market.
The case study covers areas such as cybersecurity practices and the reliability of AI technology. The service is intended to ensure that the assessed AI system is transparent and understandable to the user. It also examines fairness, including the potential for discrimination and bias, and checks whether impacted parties can contest the use of the AI system and seek redress.
The questionnaire has been added to the Portfolio of AI Assurance Techniques, a repository of such examples developed by the CDEI.
Anekanta AI provides research and risk management services for high-risk AI, including advice on navigating UK, European Union and United States AI regulations.