
Microsoft restricts facial recognition services, sunsets facial analysis

Microsoft has released its Responsible AI Standard, a framework to guide the company’s work in artificial intelligence. Calling it “an important step in our journey to develop better, more trustworthy AI,” Natasha Crampton, Microsoft’s chief responsible AI officer, said the framework puts people at the center of system design decisions and aims to steer those decisions toward better, more equitable outcomes. In a post on Microsoft’s website, Crampton said AI development needs to respect values like privacy, inclusiveness and accountability.

To that end, Microsoft said it will retire facial analysis capabilities from its Azure Face service, software designed to infer age, gender, emotional state and other attributes, citing concerns about bias and inaccuracy.

“Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” Crampton said.
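For context on what is being sunset: the Azure Face detection endpoint let callers request these inferred attributes through a query parameter. The following is a minimal illustrative sketch, not Microsoft’s own tooling; the endpoint URL, key and image URL are placeholders, and the attribute list shown is the kind of inference now being retired.

```python
import requests

# Illustrative sketch only -- ENDPOINT and KEY are placeholders, not real credentials.
# "age", "gender" and "emotion" are among the inferred attributes Microsoft
# says it is retiring from the Azure Face API.
ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={"returnFaceAttributes": "age,gender,emotion"},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/photo.jpg"},  # image to analyze
)
response.raise_for_status()

# Each detected face carries the requested (now-deprecated) attribute block.
for face in response.json():
    print(face["faceAttributes"])
```

Core detection and verification remain in the service; it is this attribute block, the per-face age, gender and emotion inferences, that new customers can no longer request.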

Additional changes apply to the company’s more traditional biometric systems, including facial recognition, which will now be limited to managed services customers and partners and restricted to use cases pre-defined as acceptable. Users will be required to adhere to a code of conduct and follow guardrails designed to prevent misuse.

In its release, Microsoft emphasized a growing belief among AI observers that laws regulating the use of AI worldwide need to keep pace with technological development. Crampton said the company recognizes its responsibility to act: “We believe that we need to work towards ensuring AI systems are responsible by design.”

The U.S.-based tech giant joined other companies in halting sales of facial recognition technology to law enforcement agencies in 2020, citing the absence of federal regulation.

The potential risks associated with AI have been in headlines recently, after Blake Lemoine, an AI engineer at Google, was put on leave for claiming that the company’s AI language modeling system, LaMDA, had become sentient.

Microsoft has also published a document detailing the Responsible AI Standard’s General Guidelines.
