Questions about AI police surveillance stretch down the supply chain
As government use of facial recognition grows deeper and broader, so do questions about the supply chains delivering new systems.
An essay in the journal Science points out that even as citizens and privacy advocates grow more insistent on details about how biometric surveillance is used by law enforcement, less is known about how systems are designed and made to begin with.
Democratic policing, meaning law enforcement loyal to American constitutional principles, requires more surveillance transparency than is possible when tools as invasive and powerful as facial recognition arrive seemingly out of nowhere, with little accountability for their manufacturers, writes Elizabeth Joh, a law professor at the University of California, Davis.
(Questions about Clearview AI and its sales to law enforcement are atypical. Most vendors stay in the shadows.)
Unlike most other tools and devices that police officers use on the job, facial recognition and other AI software is shaped first by manufacturers' design and ethics policies and only then deployed according to law enforcement operational policies.
Transparency at one level of policy means significantly less without transparency at the other level.
And product design influences how policing is done, according to Joh. Manufacturers decide what triggers their body cameras and what causes them to shut off, she said.
They also decide what kinds of software are used in their products. Buyers can request features and capabilities, but the companies respond more to their competitors and revenue targets than to those requests.
Vendors also impose secrecy clauses that shape department policies. Nondisclosure agreements and shields for proprietary intellectual property can prevent public disclosure of the tools and techniques used to enforce laws and capture criminals.
Joh’s conclusion is that a lack of transparency that might be acceptable in consumer electronics or defense work can make democratic policing difficult or impossible.