Privacy as a human right gets lip service. A report shows a different path

A report out of Harvard University warns that global society naively expects artificial intelligence to develop benignly, on its own, in ways that do not sacrifice privacy. It uses remote biometric identity verification as an example of how privacy becomes a casualty of development.
The report's author says that the ways artificial intelligence technologies are built and used are incompatible with the notion of personal privacy. "Personal data has become the bricks and mortar" used to create AI technologies, according to the report.
Assuming privacy is a cherished right, people face a choice. They can decide that artificial intelligence as it is imagined today should not be created, or they can demand that research and industry hard-wire privacy protection into the technologies, something that is not happening now.
The author is Neal Cohen, privacy director at Onfido Ltd., which makes software used by businesses to verify a person’s identity. Cohen also is the technology and human rights fellow at Harvard’s Carr Center for Human Rights Policy.
Artificial intelligence-related laws exist that try to protect individuals from being harmed by commercial digital innovations, and the pipeline for legislation continues to grow, Cohen writes. But too little debate focuses on how to prevent people from being harmed by the way the technology uses private data in building algorithms.
For example, facial recognition algorithms need large and diverse data sets to be accurate in the real world. On the surface, that requirement alone makes individuals' privacy difficult to manage. A level down, researchers often obtain adequate data sets from images collected by third-party organizations, not all of which put a premium on privacy.
This exact issue was recently identified in a World Privacy Forum report on U.S. schools.
Realistically, Cohen writes, the private- or public-sector software engineers using the images for a product or service have no practical way to interact with the photographed subjects, and vice versa. Subjects are left with no way to control the use of their biometric data.
Cohen notes that it is common for data harvesters and intermediary developers to pass responsibility for honoring individuals' privacy preferences to the organizations that ultimately put the product in the field. Privacy clauses in contracts are rarely policed.
He outlines three principles that everyone in the AI supply chain should follow.
First, if opting in is not possible, personal data should be used only when the technology being developed "satisfie[d] a legitimate societal need."
Second, people have to be given the information and methods necessary to make wise decisions about how their data is used at each link of the supply chain.
Last, AI technologies have to be created in secure ways that empower individuals to manage their data.