Ban public real-time face biometrics, 4 in 10 researchers say in ethics survey
Some face biometrics researchers are deeply concerned about the ethics of activity in the field, while others do not see any problem with academic studies, according to a survey and an accompanying article from Nature on the ethics of facial recognition.
Nature surveyed 480 researchers from around the world working in facial recognition, computer vision and artificial intelligence earlier this year. Two-thirds were from Europe and North America.
Biometrics research on vulnerable populations is acceptable so long as it is conducted with consent, according to roughly one in four researchers. Such studies are or may be unethical, however, according to 71 percent.
For studies with questionable ethics, a slight majority consider peer review based on human-subjects ethics a good way to prevent breaches, while over 40 percent support peer review of the research's value, along with greater diligence and retractions from journals, followed by refusing funding to, or involvement with, the entities responsible.
The survey also asked about the ethics of research into sensitive characteristics like sexual identity or ethnicity, with around 15 percent saying it should not be conducted at all, and over 40 percent saying data subjects' informed consent should be required.
On datasets, 193 of 480 respondents said specific informed consent should be obtained from the people in images used for facial recognition research; others felt complying with image license conditions is sufficient. One researcher noted the need for better public understanding of licensing agreements for images and data.
Facial recognition research should depend on approval from institutional review boards or other ethics bodies, according to just under half of respondents, while roughly a quarter say it depends on the research being done. Far fewer respondents are confident these bodies are up to the job, however.
More than half would like more training on the ethics of facial recognition, though only around a quarter of those say they have received any so far.
When asked about applications of facial recognition, researchers are most concerned about use of the technology by private firms, followed by government agencies, and then police; more than half are extremely or somewhat uncomfortable with its use by the first two groups. Researchers are relatively comfortable with airport identity checks, police forensics and smartphone unlocking, while individuals looking up others' identities, workplace and hiring applications, tracking in public spaces by private companies, and school applications are considered more troubling by respondents.
More than 40 percent of researchers want real-time mass surveillance with facial recognition to be banned, and additional regulations, such as requiring notification, individual data requests, and probable cause for law enforcement, are broadly supported. The smallest portion of researchers suggested that no further regulation is needed.
Actions suggested in the accompanying article include a greater engagement with broader ethical issues by research journals that have so far mostly focused narrowly on consent, and conferences avoiding sponsorship from companies accused of enabling human rights abuses.
Famed biometrics researcher Anil Jain, however, cautioned against allowing bad actions to “curtail scientific exchange.”
Cornell University sociologist and technology ethicist Karen Levy told Nature that there is a persistent mentality among some researchers that concerns about applications do not apply to them, but also that there seems to be a “real awakening in the science community” about the implications of technologies like facial recognition.