
Microsoft outlines approach to regulating risks of its biometric tech


Microsoft has highlighted some of the risks of unpoliced biometric adoption, discussing some of the strategies it uses to manage the risks within its own facial and voice recognition software.

Its latest Governing AI report outlines what the company calls “sensitive uses” of AI — instances that executives feel require closer vetting. The authors also spell out the additional oversight the company claims to apply to such technologies.

Microsoft says a review program provides additional oversight for teams working on higher-risk use cases of its AI systems, including hands-on, responsible AI project review and consulting through its Office of Responsible AI’s Sensitive Uses process.

The company claims to have declined to build and deploy specific AI applications, after concluding “that the projects were not sufficiently aligned with our Responsible AI Standard and principles.”

This included vetoing a local California police department’s request for real-time facial recognition via body-worn cameras and dash cams in patrol scenarios, calling this use “premature.”

The Sensitive Uses review process helped form the internal view that there “needed to be a societal conversation around the use of facial recognition and that laws needed to be established.”

The report also outlines the company’s limited access policy, under which Microsoft sometimes requires prospective customers to apply for access and “disclose their intended use” so the company can verify that “it meets one of our predefined acceptable use cases.”

Microsoft also describes how it polices Azure AI’s Custom Neural Voice service, which is used by AT&T.

The company says in this case it “limited customer access to the service, ensured acceptable use cases were defined and communicated through an application form, implemented speaker consent mechanisms, created specific terms of use, published transparency documentation detailing risks and limitations, and established technical guardrails to help ensure the speaker’s active participation when creating a synthetic voice.”

The White House this week revealed its own plans to protect people from the dangers of AI.

The plans name Microsoft, along with Anthropic, Google, Hugging Face, Nvidia, OpenAI and Stability AI, as companies opening their AI systems to government scrutiny.



