How biometric transparency and regulation could secure privacy, prevent exclusion in digital identity
Do we need more standards in biometrics? Is IBM making the right move by withdrawing its facial recognition technology from use by law enforcement in the U.S.? During the Women in Identity webinar on biometrics, privacy and exclusion, a majority of attendees polled said IBM is setting the right example, while 35 percent disagreed.
While one attendee called it "a statement against intense mass surveillance by the police," others suggested IBM pulled out because the technology did not generate the expected revenue.
Debbie Reynolds, data privacy officer for Women in Identity and founder and Chief Privacy Officer of Debbie Reynolds Consulting, is in favor of technology as long as it is ethically applied. However, she believes computers should never be allowed to make final decisions on their own; humans should remain the ultimate decision makers.
Reynolds believes facial recognition will not disappear any time soon. Given the known bias in these systems, she argues, companies should focus on improving the technology rather than abandoning it. “So, what I’d really like to see, instead of people saying they’re going to stop using or stop developing [facial recognition], is what are you going to do to improve it?” Reynolds asked the panel.
Dr. Keren Weitzberg, a researcher and Teaching Fellow at University College London whose work is centered on East Africa, is worried how biometric facial recognition can be used, after witnessing previous deployments in Israel and Palestine. Weitzberg argues there should be more transparency on the actual use of the technology and increased regulation to prevent abuse. “And surely there can be some positive uses of them in law enforcement, but, in reality, I haven’t seen a lot of examples of those,” she said.
Most issues, the panel agreed, are related to the inbuilt biases found in some facial recognition technologies, as pointed out by civil society and privacy rights groups such as the Algorithmic Justice League. Weitzberg calls IBM’s position “somewhat disingenuous,” as the company is “not one of the main providers of these technologies to U.S. police departments, and also they do provide other kinds of smart technologies to the police.” How widely does this bias affect populations, Weitzberg asked.
Although facial recognition is popular in only a small number of African countries, Weitzberg agrees with Reynolds that facial recognition technologies are “rushed out far ahead of policy and regulation.” The problem when these technologies are integrated with identity systems, Reynolds believes, is that they are trained and developed for each company’s own customer base. Designed “for a very narrow spectrum of people,” the technology is hard to implement across the world. “It just doesn’t work,” Reynolds points out.
Although the terms biometrics and facial recognition are often used interchangeably, there are significant differences between them, explains Emma Lindley, Women in Identity co-founder and Managing Director of AiiD Global. Users often associate facial recognition with surveillance cameras and a lack of consent, and biometrics with authenticating access to a bank account. When asked about the differences, Reynolds does not separate the two, because “they are very personal to ourselves” and “they’re being entered into different types of identity systems.”
Weitzberg believes the term biometrics actually covers “a whole spectrum of technologies” used in different scenarios, arguing there is a major difference in the use of facial recognition to unlock a phone from “the police relying on a large centralized database of facial images and engaging in a kind of mass surveillance without people’s consent.” When operating with sensitive data, there is always a question of how privacy and security are affected.
Turning back to identity systems, Lindley feels the current landscape is not representative of the people who are supposed to use them. A large number of people are inadequately represented, whether because of their gender, race, ethnicity, sexual identity, age, religion or social status, yet they are expected to use services that do not fully address their needs.
The National Institute of Standards and Technology (NIST) is asking for feedback on its digital identity guidelines to check whether they meet consumer requirements. While 95 percent of the webinar audience believes more industry standards are needed, Reynolds points out that the focus is usually on technical standards, while standards on ethics have not yet been addressed. This is a problem because not all people are digitally literate, yet they still have to understand the systems and technology they are using.
There’s been a lot of noise recently around providing digital identity and financial services to unbanked and undocumented communities in African countries, Weitzberg says, but she has seen firsthand that “intergovernmental organizations, government companies do often have a very flattened view of the African user and often don’t recognize that there’s huge variations across African countries. I see a lot of cut and paste models being applied across different African countries and all actually across the global south.”
Weitzberg worries that the diversity of users is ignored when identity systems are developed. Consent also appears to be neglected in some cases, as happened with Kenya’s biometric registration program, whose constitutionality was challenged in court after many Kenyans felt pressured into enrolling.
Civil society groups and active community involvement in the process could help, but Weitzberg doubts there will ever be a flawless system, as problems of exclusion and consent persist amid “massive power imbalances.” Refugees, for instance, may feel they have no choice but to hand over their information because of their vulnerable situation. The registration of minors is another issue she raises, and one of the situations where civil society organizations could really make a difference.
Nearly half of the world’s population does not own a smartphone, and 57 percent lacks internet access, Reynolds points out. Since contact tracing apps are developed for smartphones, there is a strong need for education in the identity ecosystem to ensure that even people who are not digitally literate can access and understand these systems. To make this happen, the companies and governments building them need to work together and help educate people going forward.
Article Topics
algorithmic transparency | biometric enrollment | biometrics | digital identity | ethics | facial recognition | identity management | privacy | regulation | standards | Women in Identity