US govt facial recognition stakeholders must address public knowledge gap, case study shows
The U.S. federal government’s use of facial recognition does not involve the vendors associated with low accuracy in consumer media reports, and is implemented with legal backing and privacy in mind, Identity Strategy Partners CEO and biometrics expert Janice Kephart states in a case study published to LinkedIn.
Growing misinformation about, and in some cases reported misuse of, face biometrics prompted officials from 30 U.S. government agencies to question their policies and whether they may be introducing harms such as civil liberties violations. Each agency represented uses facial recognition, and some use other biometric modalities as well, Kephart notes.
The consultation included defining key terms and surveying current implementations of facial recognition: whether they deliver the intended benefit, what authority they were set up under, and whether privacy impact assessments had been conducted. A ‘Face Recognition Solution Best Practices Self-Assessment Criteria’ tool was developed.
Kephart found the business use cases behind face biometrics use to be well-defined and specific, and all agencies stated that the technology benefits their mission performance. Most also suggested that if facial recognition were banned, their programs would be hindered, or even prevented from functioning entirely.
All implementations were found to operate under overt or tacit legal permission, and testing is performed, at many departments, through dedicated biometrics labs.
At least one Privacy Impact Assessment, Privacy Threshold Analysis, System of Records Notice or other review had been conducted for every program operating under the Privacy Act of 1974, according to the article.
All the biometric algorithms in use were found to have been evaluated by NIST, or to have demonstrated value in operational performance, such as returning a lead for investigation in a cold case.
Kephart reviews the different processes for biometric verification (1:1), small-group matching (1:Few) and large-group matching (1:N), and notes that in no case are results considered adequate grounds for any action on their own, beyond further investigation.
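For readers unfamiliar with the distinction between these matching modes, a rough sketch follows. It is not drawn from the case study; the similarity function, threshold, and template representation are placeholder assumptions used only to show that 1:1 verification checks a probe against a single claimed identity, while 1:N identification ranks a probe against an entire gallery and returns candidates strictly as investigative leads.

```python
# Illustrative sketch only: 1:1 verification vs. 1:N identification.
# similarity() and the 0.8 threshold are placeholder assumptions,
# not details taken from Kephart's case study.

def similarity(template_a, template_b):
    # Placeholder comparison; a real system would compare biometric templates.
    return sum(a * b for a, b in zip(template_a, template_b))

def verify_1_to_1(probe, enrolled_template, threshold=0.8):
    """Verification: does the probe match one claimed identity?"""
    return similarity(probe, enrolled_template) >= threshold

def identify_1_to_n(probe, gallery, threshold=0.8, top_k=5):
    """Identification: rank gallery candidates above a threshold.
    Results are leads for further investigation, not conclusions."""
    scored = [(name, similarity(probe, tmpl)) for name, tmpl in gallery.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [(name, score) for name, score in scored[:top_k] if score >= threshold]
```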
The Self-Assessment tool is also described, with ten questions that interrogate the business use case and the technology criteria. If the answer to any question is no, the implementation should be reassessed; if most are no, the program should be ceased or altered immediately. If all questions are answered affirmatively, the organization can have high confidence that its facial recognition implementation meets the safeguard criteria, as sketched below.
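The article does not reproduce the ten questions themselves, but the pass/fail logic it describes can be sketched roughly as follows; the function name and the example answers are hypothetical.

```python
# Illustrative sketch of the Self-Assessment decision logic described in the article.
# The ten questions are not listed in the article, so the answers here are simply
# ten hypothetical yes/no values.

def assess(answers):
    """All yes = high confidence; any no = reassess; mostly no = cease or alter."""
    assert len(answers) == 10, "The tool poses ten yes/no questions."
    no_count = sum(1 for answer in answers if not answer)
    if no_count == 0:
        return "High confidence the implementation meets the safeguard criteria."
    if no_count > len(answers) // 2:
        return "Most answers are no: cease or alter the program immediately."
    return "At least one answer is no: reassess the implementation."

# Example: a single unmet criterion out of ten triggers a reassessment.
print(assess([True] * 9 + [False]))
```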
None of the programs were found to be “actively using the vendors whose poor performance has been noted by the media,” Kephart writes, adding that each program had carried out extensive accuracy testing.
Definitions are provided for “face recognition,” “face recognition technology,” and “biometrics,” as well as “surveillance technology” and “video surveillance technology,” which Kephart writes demonstrate that there is no inherent relationship between the identification technologies and surveillance.
Kephart, who spoke to Biometric Update about the industry’s growing interest in supporting realistic regulation in 2020, concludes that a knowledge gap between the technology’s practitioners and the public must be addressed, as the scrutiny will only increase.
Article Topics
accuracy | algorithms | biometric identification | biometrics | facial recognition | Janice Kephart | NIST | privacy | U.S. Government