
Microsoft outlines approach to regulating risks of its biometric tech


Microsoft has highlighted some of the risks of unpoliced biometric adoption, discussing the strategies it uses to manage those risks within its own facial and voice recognition software.

Its latest Governing AI report outlines what the company calls “sensitive uses” of AI, instances that executives feel require closer vetting. The authors also spell out the additional oversight that the company claims to give such technologies.

Microsoft says a review program provides additional oversight for teams working on higher-risk use cases of its AI systems, including hands-on, responsible AI project review and consulting through the Sensitive Uses process in its Office of Responsible AI.

The company claims to have declined to build and deploy specific AI applications, after concluding “that the projects were not sufficiently aligned with our Responsible AI Standard and principles.”

This included vetoing a local California police department’s request for real-time facial recognition via body-worn cameras and dash cams in patrol scenarios, calling this use “premature.”

The Sensitive Uses review process helped form the internal view that there “needed to be a societal conversation around the use of facial recognition and that laws needed to be established.”

The report also outlined the company’s limited access policy, under which Microsoft sometimes requires potential customers to apply for access and “disclose their intended use” so it can confirm “it meets one of our predefined acceptable use cases.”

Microsoft also described how it polices Azure AI’s Custom Neural Voice service, which is used by AT&T.

The company says in this case it “limited customer access to the service, ensured acceptable use cases were defined and communicated through an application form, implemented speaker consent mechanisms, created specific terms of use, published transparency documentation detailing risks and limitations, and established technical guardrails to help ensure the speaker’s active participation when creating a synthetic voice.”

The White House this week revealed its own plans to protect people from the dangers of AI.

The plans name Microsoft, along with Anthropic, Google, Hugging Face, Nvidia, OpenAI and Stability AI, as companies opening their AI systems to government scrutiny.
