Enable firms to understand and tackle their AI biases, NIST to launch ‘socio-technical approach’


Enterprises will need the tools, skills and human oversight to detect and remove bias in their artificial intelligence applications in order to maintain a safe online world, writes Steve Durbin, Chief Executive of security risk firm Information Security Forum, in a think piece for the World Economic Forum. Meanwhile, NIST wants entities to try a “socio-technical” approach to AI to tackle bias.

“AI-led discrimination can be abstract, un-intuitive, subtle, intangible and difficult to detect. The source code may likely be restricted from the public or auditors may not know how an algorithm is deployed,” writes Durbin, setting out the issue.

“The complexity of getting inside an AI algorithm to see how it’s been written and responding cannot be underestimated.”

Durbin uses privacy laws as a comparison and warning. Privacy laws rely on giving notice and choice, such as when disclaimers pop up on websites. “If such notices were applied to AI, it would have serious consequences for the security and privacy of consumers and society,” he notes.

AI could accelerate malware attacks by detecting vulnerabilities, or could be used to poison security AI systems by feeding them incorrect information. Durbin offers some solutions.

Measuring ethics, using AI to tackle discrimination

Durbin recommends five general ways to approach issues with discrimination in AI.

“Because AI decisions increasingly influence and impact people’s lives at scale, enterprises have a moral, social and fiduciary responsibility to manage AI adoption ethically,” he notes, urging ethics to be treated as metrics for firms and organizations.

Firms must adopt tools and methods to help them understand and find the biases in any systems they use. The autonomy of algorithms must be balanced with the creation of an ethics committee. Employees must be empowered to promote responsible AI.

Finally, running AI algorithms alongside human decision processes, then comparing the outcomes and examining the reasons for the AI decisions, can complement traditional methods of assessing human fairness, writes Durbin.
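For illustration only, here is a minimal sketch of what such a side-by-side comparison could look like, assuming a hypothetical dataset of paired human and model decisions labeled by demographic group (the data and group names are invented, not taken from Durbin's piece):

```python
# Illustrative sketch only: compares approval rates from a model and from human
# reviewers across demographic groups. The records below are hypothetical.
from collections import defaultdict

# Hypothetical paired decisions: (group, human_approved, model_approved)
decisions = [
    ("group_a", True, True),
    ("group_a", False, True),
    ("group_b", True, False),
    ("group_b", False, False),
]

def approval_rates(records, index):
    """Approval rate per group for the decision stored at the given tuple position."""
    totals, approved = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[0]
        totals[group] += 1
        approved[group] += int(record[index])
    return {group: approved[group] / totals[group] for group in totals}

human_rates = approval_rates(decisions, 1)
model_rates = approval_rates(decisions, 2)

for group in human_rates:
    gap = model_rates[group] - human_rates[group]
    print(f"{group}: human {human_rates[group]:.2f}, model {model_rates[group]:.2f}, gap {gap:+.2f}")
```

In practice, an audit of this kind would also examine why the model's decisions diverge from the human baseline, not just by how much.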

“AI models must be trustworthy, fair and explainable by design,” he concludes. “As AI becomes more democratized and new governance models take shape, it follows that more AI-enabled innovations are on the horizon.”

‘Socio-technical’ NIST playbook to tackle bias

Experts at the U.S. National Institute of Standards and Technology (NIST) are set to launch a new playbook for approaching AI biases and other risks, reports Nextgov.

Written for both public and private entities, the recommendations, expected in the next few days, will be adaptable and flexible and will cover areas such as human management of AI systems. ‘Socio-technical,’ according to Nextgov, means having an awareness of the human impact on technology, so as to prevent it from being used in ways its designers had not intended.

The playbook is intended to help entities prevent human biases from entering their AI technologies.

Like Durbin, the playbook's authors are expected to encourage governance of the technology and clearly defined roles of responsibility.

NIST has also been working, through its ongoing Face Recognition Vendor Test series, on assessing the extent of bias in face biometrics and on how to better measure disparities in performance between subjects from different demographics.
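As a rough illustration of that kind of disparity measurement, the sketch below computes a false non-match rate per demographic group at a fixed decision threshold; the scores, group labels and threshold are invented for the example, and this is not NIST's FRVT methodology:

```python
# Illustrative sketch only: per-group false non-match rate (FNMR) at a fixed
# threshold. The genuine comparison scores and the threshold are hypothetical.
genuine_scores = {
    "group_a": [0.91, 0.88, 0.42, 0.95],
    "group_b": [0.70, 0.55, 0.93, 0.61],
}
THRESHOLD = 0.80  # assumed operating point

def fnmr_by_group(scores_by_group, threshold):
    """Share of genuine (same-person) comparisons falling below the match threshold, per group."""
    return {
        group: sum(score < threshold for score in scores) / len(scores)
        for group, scores in scores_by_group.items()
    }

for group, fnmr in fnmr_by_group(genuine_scores, THRESHOLD).items():
    print(f"{group}: FNMR = {fnmr:.2f}")
```

Comparing such per-group error rates at the same operating point is one common way to quantify demographic differentials in biometric performance.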
