Microsoft prepares to operationalize principles for responsible use of facial recognition
Microsoft expects to “operationalize” its ethical principles for facial recognition by the end of March, company President and Chief Legal Officer Brad Smith told Bloomberg.
Amidst growing controversy over what constitutes appropriate use of the technology, Microsoft announced six new ethical principles in December to address the potential problems of bias, privacy intrusions, and reduction of democratic freedoms associated with facial recognition. Smith wrote at the time that the company planned to implement them with policies, governance systems and engineering tools by the end of Q1 2019.
The principles do not specifically bar providing biometric technology to governments, but Smith said in an interview with Bloomberg that they “would certainly restrict certain scenarios or uses,” such as ongoing surveillance of a specific individual by law enforcement without adequate safeguards.
Other scenarios in which Microsoft would decline to provide facial recognition include public surveillance in a country where the company could not be confident that human rights would be observed, and U.S. law enforcement use that could unduly risk discrimination. Smith also acknowledged that any potential sale to Chinese law enforcement would raise questions, but he noted that such a sale is unlikely, as Chinese government agencies prefer to work with local firms.
Self-imposed guidelines put in place by Microsoft and other companies are not a replacement for industry-wide regulation, according to Smith.
“You never want to create a market that forces companies to choose between being successful and being responsible, and unless we have a regulatory floor there is a danger of that happening,” Smith said.