Microsoft outlines approach to regulating risks of its biometric tech

Microsoft has highlighted some of the risks of unpoliced biometric adoption, discussing some of the strategies it uses to manage the risks within its own facial and voice recognition software.

Its latest governing AI report outlines what the company calls “sensitive uses” of AI: instances that executives feel require closer vetting. The authors also spell out the additional oversight the company says it applies to such technologies.

Microsoft says a review program provides additional oversight for teams working on higher-risk use cases of its AI systems, including hands-on responsible AI project review and consulting through its Office of Responsible AI’s Sensitive Uses process.

The company claims to have declined to build and deploy specific AI applications, after concluding “that the projects were not sufficiently aligned with our Responsible AI Standard and principles.”

This included vetoing a local California police department’s request for real-time facial recognition via body-worn cameras and dash cams in patrol scenarios, calling this use “premature.”

The Sensitive Uses review process helped form the internal view that there “needed to be a societal conversation around the use of facial recognition and that laws needed to be established.”

The report also outlined its limited access policy, under which Microsoft sometimes requires potential customers to apply for access and “disclose their intended use” so the company can verify that “it meets one of our predefined acceptable use cases.”
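To make the gating concrete, here is a minimal hypothetical sketch of that kind of limited-access check: an applicant discloses an intended use, which is matched against a predefined allowlist. The use-case names and the function are invented for illustration; Microsoft's actual review is a human-led process, not an automated lookup.

```python
# Hypothetical illustration of a "limited access" gate. The use-case names
# below are invented; they do not reflect Microsoft's actual categories.
ACCEPTABLE_USE_CASES = {
    "accessibility_voice_assistance",
    "media_localization",
    "customer_service_agent",
}

def review_application(applicant: str, intended_use: str) -> bool:
    """Approve access only if the disclosed use matches a predefined case."""
    approved = intended_use in ACCEPTABLE_USE_CASES
    status = "approved" if approved else "declined"
    print(f"{applicant}: {intended_use!r} -> {status}")
    return approved

review_application("ExampleCorp", "media_localization")    # approved
review_application("OtherCorp", "real_time_surveillance")  # declined
```

The key design point the policy implies is a default-deny posture: any use not explicitly on the predefined list is refused rather than allowed.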

Microsoft also described how it polices Custom Neural Voice, an Azure AI service used by AT&T.

The company says in this case it “limited customer access to the service, ensured acceptable use cases were defined and communicated through an application form, implemented speaker consent mechanisms, created specific terms of use, published transparency documentation detailing risks and limitations, and established technical guardrails to help ensure the speaker’s active participation when creating a synthetic voice.”

The White House this week revealed its own plans to protect people from the dangers of AI.

The plans mention Microsoft, as well as Anthropic, Google, Hugging Face, Nvidia, OpenAI and Stability AI, as companies opening their AI systems to government scrutiny.
