
Enable firms to understand and tackle their AI biases, NIST to launch ‘socio-technical approach’


Enterprises will need the tools, skills and human oversight to detect and remove bias from their artificial intelligence applications in order to maintain a safe online world, writes Steve Durbin, Chief Executive of security risk firm Information Security Forum, in a think piece for the World Economic Forum. Meanwhile, NIST wants entities to try a “socio-technical” approach to tackling bias in AI.

“AI-led discrimination can be abstract, un-intuitive, subtle, intangible and difficult to detect. The source code may likely be restricted from the public or auditors may not know how an algorithm is deployed,” writes Durbin, setting out the issue.

“The complexity of getting inside an AI algorithm to see how it’s been written and responding cannot be underestimated.”

Durbin uses privacy laws as a comparison and warning. Privacy laws rely on giving notice and choice such as when disclaimers pop up on websites. “If such notices were applied to AI, it would have serious consequences for the security and privacy of consumers and society,” he notes.

AI could accelerate malware attacks by detecting vulnerabilities, while attackers could poison security AI systems by feeding them incorrect information. Durbin offers some solutions.

Measuring ethics, using AI to tackle discrimination

Durbin recommends five general ways to approach issues with discrimination in AI.

“Because AI decisions increasingly influence and impact people’s lives at scale, enterprises have a moral, social and fiduciary responsibility to manage AI adoption ethically,” he notes, urging ethics to be treated as metrics for firms and organizations.

Firms must adopt tools and methods that help them understand and detect the biases in any systems they use. The autonomy of algorithms must be balanced by the creation of an ethics committee, and employees must be empowered to promote responsible AI.

Finally, running AI algorithms alongside human decision processes, then comparing the outcomes and examining the reasons behind the AI's decisions, can improve traditional methods of assessing human fairness, writes Durbin.

“AI models must be trustworthy, fair and explainable by design,” he concludes. “As AI becomes more democratized and new governance models take shape, it follows that more AI-enabled innovations are on the horizon.”

‘Socio-technical’ NIST playbook to tackle bias

Experts at the U.S. National Institute of Standards and Technology (NIST) are set to launch a new playbook for approaching AI biases and other risks, reports Nextgov.

Written for both public and private entities, the recommendations, expected in the next few days, will be adaptable and flexible, covering areas such as human management of AI systems. ‘Socio-technical’ means having an awareness of the human impact on technology, according to Nextgov, so as to prevent it from being used in ways its designers had not intended.

The playbook is intended to help entities prevent human biases from entering their AI technologies.

Like Durbin's recommendations, the playbook is expected to encourage governance of the technology and clear roles of responsibility.

NIST has also been working on both assessing the extent of bias in face biometrics, and how to better measure disparities in performance between subjects from different demographics, with its ongoing Face Recognition Vendor Test series.

