Microsoft further limits law enforcement use of its facial recognition
Microsoft has updated its terms of service to clarify that U.S. police departments are barred from using Azure OpenAI Service for facial recognition.
The update, published on Wednesday, states that integrations with Azure OpenAI Service must not be used for facial recognition purposes by or for a police department in the United States. An earlier version of the terms prohibited any user, including U.S. police, from using the service for identification or verification based on media containing people’s faces.
While U.S. police face a complete ban, law enforcement agencies elsewhere may have more options. Under Microsoft’s rules, no law enforcement agency anywhere in the world may use the service for real-time facial recognition on mobile cameras to identify people in uncontrolled, “in the wild” environments. This includes body-worn or dash-mounted cameras that use the technology to attempt to match individuals against suspect or criminal databases.
Emotion recognition and other kinds of facial analysis are also banned.
In its report last year on governing AI, Microsoft said it has declined to build and deploy AI applications that are not aligned with its AI standards and principles, including vetoing a local California police department’s request for real-time facial recognition via body-worn cameras and dash cams.
Azure OpenAI Service offers access to OpenAI’s language models, including GPT-4, GPT-4 Turbo with Vision, and others.
The latest changes come after Bloomberg reported in January that OpenAI is working with the U.S. military on several cybersecurity-related projects.
Article Topics
Azure OpenAI | biometrics | facial recognition | Microsoft | police | real-time biometrics