Global spike in predictive policing draws on AI and biometrics
In China, authorities are making notable use of biometrics and AI in policing and security, with potential implications for human rights. Radio Free Asia reports that visitors to the 19th Asian Games, taking place in September and October in the eastern Chinese city of Hangzhou, can expect spyholes in their hotel room doors to be equipped with facial recognition. Networked cat’s eye cameras will limit entry to those who have registered with their ID cards.
Unregistered guests, attempts to interfere with the spyhole’s function, or “any abnormal activity” will result in “immediate measures,” according to a notice posted in a hotel lobby, as shared on Twitter.
Chinese authorities have justified the wide deployment of facial recognition as a necessary risk prevention and control measure. However, comments from Chinese officials, which mentioned extreme scenarios and “key groups,” raise questions in the context of China’s larger mass surveillance culture.
AI can detect banned banners
Protests often come with banners, especially in China, where a newly available AI technology from Dahua will be able to detect them as they are unfurled. The U.S.-based surveillance research firm IPVM says the tool is explicitly designed to quell protests and clamp down on political dissidents.
Charles Rollet, a researcher for IPVM, said there is “no reason to track banners automatically unless you want to track protests.”
Dahua previously supplied authorities with facial recognition tools for tracking the Uyghur population, who have faced mass displacement and internment at the hands of Xi Jinping’s government. Western organizations sanctioned the company in response.
But condemnation from the west has not stopped police from pursuing more surveillance tools, and exacting more digital control over citizens. Radio Free Asia reports on a proposal in the works in Shanghai, seeking AI tools to monitor neighbourhoods favoured by students and academics. Police want a tool that will send them alerts about a slew of potential offenders, including journalists, university faculty, illegal foreigners, sex workers, “families with abnormal energy consumption” (for presumed cryptocurrency mining) — and Uyghurs.
Indeed, the explosion of China’s mass surveillance complex that began to gain momentum around 2016 has often targeted Uyghurs.
“Wherever they go in China, Uyghurs are essentially being singled out for discriminatory and targeted policing,” Maya Wang, a representative of Human Rights Watch, told RFA. “And that means that they often suffer – they often are unable to find a place to stay, a hotel. Typically, when they take the train, they are subjected to investigation and interrogation and so on.”
Japan to use AI to defend against assassins
The growing adoption of AI and biometrics tools in predictive policing goes far beyond China. In Japan, police are set to test AI-equipped cameras as a defense against violent attacks on high-profile individuals.
The system will feature “behaviour detection,” which tracks movements or patterns it has deemed suspicious.
The move comes as the country marks the first anniversary of the assassination of former Prime Minister Shinzo Abe in July 2022.
Speaking to Nikkei Asia, Isao Itabashi, chief analyst for the Tokyo-based Council for Public Policy, said that “AI cameras are already being used widely in Europe, the U.S. and Asia,” and that behavior detection technology “will also help to deploy police officers more efficiently, as they will have more means for vigilance.”
Uptick of RTCCs in US shows how threat events drive surveillance
Surveillance used to generate a “live picture of crime in the city” in the name of predictive policing has evolved alongside the threats it aims to mitigate, says an article in Wired covering the proliferation of real-time crime centres, or RTCCs, in U.S. cities. The 9/11 attacks led to the establishment of the first RTCC in 2005, a vast network of CCTV cameras and licence plate readers monitoring New York City.
These days, the 123 RTCCs at work across U.S. municipalities have a wider array of tools in their kit, including gunshot sensors, social media trawlers and body cameras equipped with facial recognition. Comparing an RTCC to “a scalpel,” the communications director for the National RTCC Association, Erik Lavigne, told Wired that police “aren’t catching the wrong people anymore.”
However, with more RTCCs on the way and heavy questions looming over the mass collection of private data globally, there are concerns about how these systems handle storage, biometric security, overreach, retention, and a host of other issues. Wherever you are now, the world is watching, and much remains to be seen.