Meet 4 common requests to increase facial recognition trust, financial inclusion group says
A U.S.-based network of people advocating for an inclusive economy through technology and market innovation says the success of facial recognition depends on both the public and private sectors working hard to engender trust in AI tools.
Facial recognition systems should be strictly regulated, and there needs to be transparency on the technology’s development, testing, operation and management.
Government and industry should be able to demonstrate that the technology can be created and used ethically.
And, last, compensation should be provided to anyone who is harmed by the biometric tools, according to the coalition, which is made up of industry players, lawmakers, academics and community groups.
The report does not say stakeholders should view this as a list of demands, per se. Its authors are pointing out that the industry might avoid further city and regional bans by addressing these justifiable areas.
What follows are discussions of numerous pressure points that flow from the list. Most of them could easily have been cadged from business school or public policy texts.
Vendors and other AI developers need diverse teams to make products that address the many market needs and concerns.
Debates on public policy are critical to adopting the right technology and getting citizens’ buy-in, according to the report. A good place to start would be differentiating between police surveillance and public safety monitoring.
The suggestions are numerous, and worth considering, but one of the more unusual notes is this: Mandate the use of the best technology to get the best results. Naturally, that will be the pitch to governments and other would-be clients, but this time, at least, buyers might want to consider it.
Making price the primary consideration likely will mean getting AI systems that were poorly written and trained on problematic datasets. At best, the result would be biased decisions.