AI believers: Give AI a chance. Everyone else: Give us a break

Some AI researchers are saying worriers need to relax. Algorithms are nowhere near powerful enough to endanger humanity.

Superintelligent machines may well arrive someday, just not today. Besides, say other researchers, the benchmark datasets used in one of the premier roles for AI, facial recognition, are seriously compromised by erroneous labeling.

It would seem our digital overlord’s arrival remains a mystery, and the superintelligence may not be thinking clearly when it gets here.

A pair of recent articles, one about the limited nature of AI today and the other about erroneous training data, are sending mixed messages.

Cable news and opinion channel CNBC talked to a number of computer scientists from the University of Cambridge and other organizations who want to throw cold water on what they feel is an AI freak-out.

Their concern is that governments around the world will over-regulate their research precisely when research could make real progress in creating artificial general intelligence, or AGI.

The article quotes Joshua Feast, CEO of Cogito, maker of AI-infused call center products, saying nothing occurring today “implies we will ever get to AGI with it.” Feast would not be the first entrepreneur to play down his own technology in order to minimize oversight of its development.

The “give AI a chance” movement is a reaction to doomsaying by notable technologists, including SpaceX’s Elon Musk. Public and policy debates about AI will continue sloshing back and forth like a rocked tub of water, probably right up to the instant Terminator-building Skynet takes control of things. Whenever that is.

Even AI loyalists, whose work today depends on all the government research funding they can get, will not say the rise of super-intelligent machines is impossible.

At least the AI would be able to see the world clearly, right? Surely there is at least a chance it could make good decisions regarding humanity without self-defeating filters?

Well, about that.

A VentureBeat article about a new Massachusetts Institute of Technology study does not engender optimism.

MIT researchers say they have found “systematic patterns” of labeling errors in widely used datasets. They found “numerous and widespread” mislabeling in the test sets of 10 commonly used computer vision, natural language and audio datasets.

The authors report in a new paper an average error rate of 3.4 percent across the 10 test sets. Mistakes happen because labels are applied by other algorithms or via crowdsourcing.
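The underlying idea of hunting for bad labels can be sketched simply: train a model, then flag examples where it confidently disagrees with the given label so a human can re-check them. The snippet below is a minimal, illustrative sketch of that intuition, not the MIT team's actual pipeline; the data, variable names and 0.9 confidence threshold are all assumptions made up for the example.

```python
import numpy as np

# Minimal sketch: flag test examples whose given label disagrees with a
# model's confident prediction. Everything here (data, threshold) is an
# illustrative assumption, not the MIT study's method.

rng = np.random.default_rng(0)
n_examples, n_classes = 1000, 3

# Stand-in for a trained model's predicted class probabilities.
pred_probs = rng.dirichlet(alpha=[1.0] * n_classes, size=n_examples)

# Stand-in for labels collected via crowdsourcing or another algorithm.
given_labels = rng.integers(0, n_classes, size=n_examples)

confidence_threshold = 0.9  # assumed cutoff for a "confident" prediction

predicted = pred_probs.argmax(axis=1)
is_confident = pred_probs.max(axis=1) >= confidence_threshold

# Suspect: the model is confident AND its prediction contradicts the label.
suspects = is_confident & (predicted != given_labels)

print(f"Flagged {suspects.sum()} of {n_examples} labels for human review "
      f"({100 * suspects.mean():.1f}%)")
```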

Researchers tapping into the data could be misled about how a biometric model will perform outside the lab. After all, there are already examples of algorithms making horrific judgments about people of color and other persistent errors.
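Some quick arithmetic shows why the mislabeling matters. If a model is truly right on, say, 95 percent of examples, a test set where 3.4 percent of labels are wrong can report a score anywhere in a band nearly seven points wide, which is more than enough to reorder a close leaderboard. The sketch below works through those bounds; the 95 percent figure is an assumption for illustration.

```python
# Back-of-the-envelope bounds on a benchmark score when test labels are noisy.
# The 3.4% error rate comes from the MIT study; 95% "true" accuracy is assumed.

true_accuracy = 0.95
label_error_rate = 0.034

# Worst case: every mislabeled example is one the model truly gets right,
# so each bad label deducts an earned point. Best case: every mislabeled
# example awards an unearned point. Reported accuracy can therefore drift
# by up to +/- the label error rate.
reported_low = true_accuracy - label_error_rate
reported_high = min(1.0, true_accuracy + label_error_rate)

print(f"True accuracy:     {true_accuracy:.1%}")
print(f"Reported accuracy: {reported_low:.1%} to {reported_high:.1%}")
```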

If only there were some way to coordinate how, nationwide, datasets are compiled and judged for accuracy.
