Biometric surveillance normalization in China could spread, Brookings warns
A new investigation by the New York Times has revealed how China is conducting biometric mass surveillance on a scale even broader than previously understood.
Personal data, including DNA samples, facial scans, and voice biometrics, is being collected in a push to “maximize what the state can find out about a person’s identity, activities and social connections, which could ultimately help the government maintain its authoritarian rule,” according to the Times’ analysis.
Analysts examined more than a hundred thousand government bidding documents, and found that Chinese authorities had requested access to cameras in public and private spaces, including lobbies of the Days Inn and Marriott brand hotels. A police estimate in the bidding documents put the number of facial images stored at any given time at 2.5 billion.
Documents from the city of Zhongshan, in southern China, show the police force requesting technology that would allow facial recognition cameras to also record voiceprints within a 300-foot radius. There are records of large purchases of DNA- and iris-scanning technology, the latter of which has already been used to create an iris biometrics database in the Xinjiang region, where large-scale human rights violations against the Uyghur ethnic minority have attracted international condemnation.
In a recent policy brief, the Brookings Institution identified the risk of human rights abuses associated with AI and surveillance technology, in China and beyond. The brief flags the potential for seemingly mundane uses of facial recognition to be exploited for harmful ends—and, specifically, efforts by Chinese companies to fast-track facial recognition standards through the United Nations’ International Telecommunication Union (ITU).
Brookings concludes its brief with a series of recommendations to address the challenge. These include subsidizing companies that can assist in creating international standards, strengthening public discourse about AI and surveillance—and urging that the U.S. and its allies “demonstrate that they can produce a viable alternative model by proving that they can use facial recognition, predictive policing, and other AI surveillance tools responsibly at home.”