Paravision advisor urges new perspective for responsible AI

Communities have reached the stage where they are standing up to say that the current situation around ethics and bias in artificial intelligence is not good enough, and everyone can start to learn about and promote responsible artificial intelligence, says Elizabeth M. Adams, chief AI ethics advisor for Paravision and CEO of consultancy EMA Advisory Services. Meanwhile, Amazon and the National Science Foundation award a team of AI researchers at the University of Iowa and Louisiana State University US$800,000 to decrease discrimination in AI algorithms.
Responsible AI should be taken as seriously as cybersecurity
“We need to think about responsible AI like we think about cybersecurity,” said Elizabeth M. Adams in a presentation on tackling perpetual historical bias in AI for the Opus College of Business at the University of St. Thomas, Minnesota.
After years of rapid development and real-world application, AI technologies are facing something of a backlash: “what happens when our tech worlds and social worlds collide,” said Adams, who describes herself as a keen technology adopter and futurist.
Adams, who is herself pursuing a PhD at Pepperdine on bias in AI, described how many people beyond the technical staff come into contact with technologies as they are developed. From approving budgets and assigning teams to producing marketing, training and customer service, multiple departments are involved.
This is a good thing: it increases the number of eyes on any development, the diversity of those involved, and the number of opportunities for someone to push back on or question a technology during the product development lifecycle.
“It’s very important to have the right people, the right voices at the table when you’re designing your technology,” said Adams. Everyone belongs to a community and has a narrative, all of which can be brought together to improve responsibility around artificial intelligence.
Responsible AI is not easy to define; it comes down to a kind of awareness: an awareness of bias perpetuated in artificial intelligence, of the potential dangers of approaches such as scraping the internet and social media for training datasets, and an awareness of harms.
Adams and her advisory firm advise organizations on leadership around responsible AI. But she believes it is something for everyone at every level to learn more about and help others learn.
A few years ago, Adams co-founded the Public Oversight of Surveillance Technology and Military Equipment coalition in the City of Minneapolis. It was not about banning things, said Adams, but a reaction to an uptick in the city’s use of AI in the area of civic tech, which covers facial recognition, drones and gait biometrics.
When the city used a software vendor that did not care about AI ethics, Adams realized “that’s why we need regulation, because if we leave it up to industry experts or technology providers, we may not get safe technology or technology that’s unbiased.”
Adams hopes to instill a sense of responsibility in AI beyond the corporate and tech realms. “If you like art, follow your curiosity around AI in art,” she said, to help people find their inroads into the technologies that are already changing our lives forever.
Amazon, National Science Foundation award $800K grant to decrease AI bias
A three-year, $800,000 grant is being split between researchers at the University of Iowa and Louisiana State University to develop ways to decrease the possibility of discrimination through machine learning algorithms, reports the Daily Iowan.
The funding comes from the National Science Foundation and Amazon, which is currently facing multiple trials over data privacy issues and was recently criticized over imbalanced error rates in testing of its Rekognition biometric software.
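"Imbalanced error rates" refers to a system making mistakes at different rates for different demographic groups. As a purely illustrative sketch, not a description of Amazon's actual testing, such an imbalance can be surfaced by computing a matcher's false match rate per group; all data and group labels below are fabricated:

# Hypothetical illustration: comparing false match rates across
# demographic groups for a face matcher's accept/reject decisions.
from collections import defaultdict

# Each record: (group_label, is_same_person, matcher_said_match)
comparisons = [
    ("group_a", False, True),   # impostor wrongly accepted
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_b", False, True),   # impostor wrongly accepted
    ("group_b", False, True),   # impostor wrongly accepted
    ("group_b", False, False),
]

false_matches = defaultdict(int)
impostor_trials = defaultdict(int)

for group, same_person, predicted_match in comparisons:
    if not same_person:             # only impostor comparisons count here
        impostor_trials[group] += 1
        if predicted_match:
            false_matches[group] += 1

for group in sorted(impostor_trials):
    fmr = false_matches[group] / impostor_trials[group]
    print(f"{group}: false match rate = {fmr:.2f}")  # unequal rates indicate imbalance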
A researcher at the University of Iowa says the team intends to make machine learning models fairer without sacrificing the algorithms’ accuracy. The researchers will examine software used by courts to predict recidivism and how medical resources are allocated to neighborhoods, and will work with Netflix on fairness in content recommendation.
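The article does not describe the team's methods, but the tradeoff they mention is commonly tracked by reporting a group fairness metric alongside overall accuracy. A minimal sketch using demographic parity difference, with fabricated predictions:

# Hypothetical sketch: accuracy vs. demographic parity difference
# for a binary classifier. All rows are made up for illustration.

# Each row: (group, true_label, predicted_label)
preds = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

accuracy = sum(t == p for _, t, p in preds) / len(preds)

def positive_rate(group):
    # Share of a group's members receiving the positive prediction.
    outcomes = [p for g, _, p in preds if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: the gap in positive prediction rates
# between groups. A fairness intervention tries to shrink this gap
# while keeping overall accuracy from degrading.
dpd = abs(positive_rate("group_a") - positive_rate("group_b"))

print(f"accuracy = {accuracy:.2f}")
print(f"demographic parity difference = {dpd:.2f}")

Other group metrics, such as equalized odds, compare error rates rather than prediction rates; which is appropriate depends on the application, for instance recidivism prediction versus content recommendation.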