
Privacy as a human right gets lip service. A report shows a different path

Categories Biometric R&D  |  Biometrics News

A report out of Harvard University warns that global society naively expects artificial intelligence to grow benignly, on its own, in ways that do not sacrifice privacy. It uses remote biometric identity verification as an example of how privacy becomes a casualty of development.

The report’s author says that the ways artificial intelligence technologies are built and used are incompatible with the notion of personal privacy. “Personal data has become the bricks and mortar” used to create AI technologies, according to the report.

Assuming privacy is a cherished right, people have two choices to make. One, they can decide that artificial intelligence as it is imagined today should not be created. Or two, they can demand that research and industry hard-wire privacy protection into the technologies — something that is not happening now.

The author is Neal Cohen, privacy director at Onfido Ltd., which makes software used by businesses to verify a person’s identity. Cohen also is the technology and human rights fellow at Harvard’s Carr Center for Human Rights Policy.

Artificial intelligence-related laws exist that try to protect individuals from being harmed by commercial digital innovations, and the pipeline for legislation continues to grow, Cohen writes. But too little debate focuses on how to prevent people from being harmed by the way the technology uses private data in building algorithms.

For example, facial recognition algorithms need large and diverse data sets to be accurate in the real world. On its face, that requirement alone is difficult to reconcile with individuals’ privacy. A level down, researchers can assemble adequate data sets using images collected by third-party organizations, not all of which put a premium on privacy.

This exact issue was recently identified in a World Privacy Forum report on U.S. schools.

Realistically, Cohen writes, the private- or public-sector software engineers using the images for a product or service rarely, if ever, interact with the photographed subjects, and vice versa. Subjects have no way to control the use of their biometric data.

Cohen notes that it is common for data harvesters and intermediary developers to pass the responsibility for managing privacy desires to the organizations that ultimately put the product in the field. Privacy clauses in contracts are rarely policed.

He outlines three principles that everyone in the AI supply chain should follow.

First, if opting in is not possible, personal data should only be used when the technology being developed “satisfie[s] a legitimate societal need.”

Second, people have to be given the information and methods necessary to make wise decisions about how their data is used at each link of the supply chain.

Last, AI technologies have to be created in secure ways that empower individuals to manage their data.
