Stanford study shows AI benchmarks aging poorly, need work

An AI research team led by Stanford University has found that algorithms are, indeed, besting humans on some benchmark tests. But AI benchmarks as a group are also aging and becoming less effective.

New and redesigned benchmarks are needed, the team found.

Those are three insights from a new report that is relevant to biometric systems as businesses, governments and universities expand the number and capabilities of algorithms for tasks such as facial recognition.

The 2023 AI Index Report is the product of Stanford’s Human-Centered Artificial Intelligence, or HAI, program working with researchers from SRI International, Hebrew University, Google and others. Developments from 127 nations were analyzed.

In one example of AI growth measured against a benchmark, code was able to accurately answer visual questions 84.3 percent of the time, compared to the 80.78 percent human baseline.

In other cases, benchmarks are plateauing when growth is both possible and needed.

The Celeb-DF deepfake-detection benchmark showed promise in 2019, but it has since stalled.

In all, 50 vision, language, speech and other benchmarks were prodded, and it was found that many algorithms “score extremely high,” limiting their usefulness.

“Many facial recognition systems are able to successfully identify close to 100% of faces, even on challenging datasets,” the researchers wrote.

There appears to be room to grow for both facial recognition performance and its benchmarks. HAI research found that private investment in facial recognition in 2021 and 2022 ranked second to last among 25 focus areas, at well below $1 billion. It was the second consecutive annual decline.

Investment in AI for data management, processing and cloud operations last year was $10 billion, making it the most attractive target sector for deep pockets. (The researchers noted that overall, AI investment is down.)

That said, benchmark saturation is becoming more pervasive, according to the researchers.

By saturation, they mean that year-over-year improvement, as measured by benchmarks, is flattening. The top algorithm in one area examined in this year’s report was 0.1 percent better than in 2022, a rate considered negligible.
