See no evil: Bad AI strategy, especially when alternative plans exist

A U.S. federal watchdog agency has defined four principles that its researchers say can build accountability into the creation and use of AI systems.

The work is aimed primarily at government AI development and products, but if it proves useful, it could also benefit private industry, where attitudes toward bias can be as opaque as black-box AI itself.

A disturbing environment there is described in a recent New York Times article that peeked behind the curtain of the obligatory and facile assurances that many in private-sector AI recite. At least some CEOs feel, incorrectly, that data is like sand: free of even original sin.

The four principles distilled by the U.S. Government Accountability Office (GAO) in fact start with data. (NIST has some related thoughts on the matter, focused on biometrics.)

Data is changed and skewed from the moment it is collected. Strong standards for representativeness, quality and reliability provide a more solid foundation for every other step in a product’s development, deployment and usefulness.

Real thought has to go into creating datasets. Is the objective to sell ties to white cisgender males? Or is it to sell products to anyone with income and a desire? Not all data is the same.
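As a rough illustration (the GAO framework does not prescribe code, and the column name, groups and tolerance below are assumptions), a first-pass representativeness check can be as simple as comparing a dataset’s demographic mix against a reference population:

```python
from collections import Counter

def representativeness_gaps(samples, reference_shares, key="group", tolerance=0.05):
    """Flag groups whose share of the dataset deviates from a reference
    population share by more than `tolerance` (absolute difference)."""
    counts = Counter(s[key] for s in samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Toy example: a dataset skewed heavily toward one group.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(representativeness_gaps(data, {"A": 0.5, "B": 0.5}))
# → {'A': (0.8, 0.5), 'B': (0.2, 0.5)}
```

A real audit would look at far more than headcounts, but even a check this crude would have surfaced the skew before training began.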

The second principle is monitoring. Proving repeatedly that an algorithm is reliable and relevant is required, according to the GAO.

“One and done” is good fortune, not a realistic expectation, even most of the time. Some algorithmic operations will need the same kind of ongoing oversight and management that is necessary for many critical human tasks, such as running a power plant.
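A minimal sketch of what that ongoing oversight can look like in practice: track accuracy over a rolling window of labeled outcomes and flag the system for review when it drifts below a baseline. The window size, baseline and slack values here are illustrative assumptions, not anything the GAO specifies:

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy check for a deployed model."""

    def __init__(self, window=100, baseline=0.90, slack=0.05):
        self.outcomes = deque(maxlen=window)  # True/False per prediction
        self.floor = baseline - slack         # alert below this accuracy

    def record(self, prediction, truth):
        self.outcomes.append(prediction == truth)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def needs_review(self):
        acc = self.accuracy()
        return acc is not None and acc < self.floor

mon = AccuracyMonitor(window=10, baseline=0.9, slack=0.05)
for pred, truth in [(1, 1)] * 7 + [(1, 0)] * 3:  # 70% correct
    mon.record(pred, truth)
print(mon.accuracy(), mon.needs_review())  # → 0.7 True
```

The point is the habit, not the code: reliability is re-proven continuously against fresh outcomes, not certified once at launch.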

Governance is closely related. The GAO recommends governance at both the system and organizational levels as the best way of demonstrating accountability. The researchers define it as creating the processes to manage, operate and oversee implementation.

The fourth principle involves performance: does the output match the objectives?

It is standard B-school stuff, but some agency heads and CEOs are buying into the Silicon Valley hype. Thinking a content management system will revolutionize a company is one thing; if the CEO is wrong, money has been wasted.

An algorithm that routinely prioritizes a health care company’s Black patients below its white patients is something else entirely.

The New York Times story looking at the ongoing campaign to tease bias out of AI brought up that example. About 18 months ago, New York State regulators investigated UnitedHealth Group for allegedly using AI that pushed aside Black patients, “even when the white patients were healthier.”

It is not known, according to the Times, what if any conclusion has been reached in the UnitedHealth investigation.

The article opens with a focus on a comparatively young AI innovator named Liz O’Sullivan, who worked at a startup trying to automatically eliminate explicit images on the Internet. The firm contracted the tagging out to workers in India, and what came back included images of same-sex couples, not same-sex pornography, tagged as offensive.

She’s since been named CEO of Parity, which is searching for tools to find and remove bias in AI. (Parity is not alone.)

Meeting executives and engineers over the last few years, O’Sullivan told the Times, she kept hearing about “fairness through unawareness”: the belief that the underlying data is neither a tool for good nor for bad, so looking closely at it would only introduce bias.

That is like saying all ice ponds are the same; if one looks solid, get out there and skate until it proves otherwise.
