Microsoft restricts facial recognition services, sunsets facial analysis

Microsoft has released its Responsible AI Standard, a framework to guide the company’s work in artificial intelligence. Calling it “an important step in our journey to develop better, more trustworthy AI,” Natasha Crampton, Microsoft’s chief responsible AI officer, said the framework puts people at the center of system design decisions and aims to steer them toward better and more equitable outcomes. In a post on Microsoft’s website, Crampton said AI development needs to respect values like privacy, inclusiveness and accountability.
To that end, Microsoft said it will retire facial analysis capabilities from its Azure Face recognition service, software designed to identify age, gender, emotional states and other qualities, citing concerns about bias and inaccuracy.
Crampton wrote that “experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability.”
Additional changes will be made to the company’s more traditional biometric systems, including facial recognition, which will now be limited to managed services customers and partners and restricted to use cases that have been predefined as acceptable. Users will be required to adhere to a code of conduct and follow guardrails to prevent misuse.
In its release, Microsoft emphasized a growing belief among AI observers that laws regulating the use of AI around the world need to keep pace with technological development. Crampton said the company recognizes its responsibility to act: “We believe that we need to work towards ensuring AI systems are responsible by design.”
In 2020, the U.S.-based tech giant also joined other companies in halting sales of facial recognition technology to law enforcement agencies in the absence of federal regulation.
The potential risks associated with AI have been in the headlines recently, after Blake Lemoine, an AI engineer at Google, was put on leave for claiming that the company’s AI language modeling system, LaMDA, had become sentient.
A document containing the Responsible AI Standard’s General Guidelines is available from Microsoft.