Paravision advisor urges new perspective for responsible AI

Grants awarded for discrimination reduction
Communities have reached the stage where they are standing up to say that the current situation around ethics and bias in artificial intelligence is not good enough, and everyone can start to learn about and promote responsible artificial intelligence, says Elizabeth M. Adams, chief AI ethics advisor for Paravision and CEO of consultancy EMA Advisory Services. Meanwhile, Amazon and the National Science Foundation have awarded a team of AI researchers at the University of Iowa and Louisiana State University US$800,000 to reduce discrimination in AI algorithms.

Responsible AI should be taken as seriously as cybersecurity

“We need to think about responsible AI like we think about cybersecurity,” said Elizabeth M. Adams in a presentation on tackling perpetual historical bias in AI for the Opus College of Business at the University of St. Thomas, Minnesota.

After years of rapid development and real-world application, AI technologies are facing something of a backlash: “what happens when our tech worlds and social worlds collide,” said Adams, who describes herself as a keen technology adopter and futurist.

Adams, who is herself pursuing a PhD at Pepperdine on bias in AI, described how many people beyond the technical staff come into contact with technologies as they are developed. From approving budgets and assigning teams to producing marketing, training and customer service materials, multiple departments are involved.

This involvement increases the number of eyes on any development, the diversity of those involved, and the number of opportunities for someone to push back on or question a technology during the product development lifecycle.

“It’s very important to have the right people, the right voices at the table when you’re designing your technology,” said Adams. Everyone belongs to a community and has a narrative, all of which can be brought together to improve responsibilities around artificial intelligence.

Responsible AI is not easy to define, and ultimately comes down to a kind of awareness: awareness of bias perpetuated by artificial intelligence, of the potential dangers of approaches such as scraping the internet and social media for training databases, and of the harms that can result.

Adams and her advisory firm consult with organizations on leadership around responsible AI. But she believes it is something for everyone, at every level, to learn more about and help others learn.

A few years ago, Adams co-founded the Public Oversight of Surveillance Technology and Military Equipment coalition in the City of Minneapolis. It was not about banning things, said Adams, but a reaction to an uptick in the city's use of AI in the area of civic tech, which covers facial recognition, drones and gait biometrics.

When a software vendor that did not care about AI ethics was used in the city, Adams realized “that’s why we need regulation, because if we leave it up to industry experts or technology providers, we may not get safe technology or technology that’s unbiased.”

Adams hopes to instill a sense of responsibility in AI beyond the corporate and tech realms. "If you like art, follow your curiosity around AI in art," she said, to help people find their inroads into the technologies that are already changing our lives forever.

Amazon, National Science Foundation award $800K grant to decrease AI bias

A three-year, $800,000 grant is being split between researchers at the University of Iowa and Louisiana State University to develop ways to decrease the possibility of discrimination through machine learning algorithms, reports the Daily Iowan.

The funding comes from the National Science Foundation and Amazon, which is currently facing multiple trials over data privacy issues and was recently criticized for using imbalanced error rates for testing its Rekognition biometric software.

A researcher at the University of Iowa says the team intends to make machine learning models fairer without sacrificing the algorithms' accuracy. The researchers will examine software used by courts to predict recidivism and how medical resources are allocated to neighborhoods, and will work with Netflix on fairness in content recommendation.
