
Microsoft restricts facial recognition services, sunsets facial analysis

Microsoft has released its Responsible AI Standard, a framework to guide the company's work in artificial intelligence. Calling it "an important step in our journey to develop better, more trustworthy AI," Natasha Crampton, Microsoft's chief responsible AI officer, said the framework puts people at the center of system design decisions and aims to steer those systems toward better and more equitable outcomes. In a post on Microsoft's website, Crampton said AI development needs to respect values like privacy, inclusiveness and accountability.

To that end, Microsoft said it will retire the facial analysis capabilities of its Azure Face recognition service, software designed to identify age, gender, emotional states and other attributes, citing concerns about bias and inaccuracy.

"Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of 'emotions,' the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability," Crampton said.

Additional changes will be made to the company's more traditional biometric systems, including facial recognition, which will now be limited to managed services customers and partners and restricted to use cases that have been predefined as acceptable. Users will be required to adhere to a code of conduct and follow guardrails designed to prevent misuse.

In its release, Microsoft emphasized the growing belief among AI observers that laws regulating the use of AI worldwide need to keep pace with technological development. Crampton said the company recognizes its responsibility to act: "We believe that we need to work towards ensuring AI systems are responsible by design."

The U.S.-based tech giant also joined other companies in halting sales of facial recognition technology to law enforcement agencies in 2020, in the absence of federal regulation.

The potential risks associated with AI have been in the headlines recently, after Blake Lemoine, an AI engineer at Google, was put on leave for claiming that the company's AI language modeling system, LaMDA, had become sentient.

A document with the Responsible AI Standard's General Guidelines is available on Microsoft's website.
