Google removes gender labels from facial analysis to avoid bias
Google has removed gender labels from its Cloud Vision API to avoid bias and misclassification of people by its facial analysis service, VentureBeat reports.
Labels are used to classify images for algorithm training purposes, but the gender labels were found to be incompatible with Google’s AI Principles.
“Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the artificial intelligence principles at Google, specifically Principle #2: avoid creating or reinforcing unfair bias. After today, a non-gendered label such as ‘person’ will be returned by Cloud Vision API,” a Google spokesperson told VentureBeat.
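For developers, the change shows up in the labels the API returns for images of people. The following is a minimal sketch using the google-cloud-vision Python client (v2+); the image filename is a hypothetical example, and the exact labels and scores returned for any given photo will vary.

# Minimal sketch using the google-cloud-vision Python client.
# Assumes credentials are configured via GOOGLE_APPLICATION_CREDENTIALS;
# "photo.jpg" is a hypothetical example image.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Per Google's statement, images of people should now yield a
# non-gendered label such as "Person" rather than "Man" or "Woman".
for label in response.label_annotations:
    print(label.description, round(label.score, 2))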
Facial analysis software wrongly categorized the gender of trans men around 30 percent of the time, and failed in every instance to classify the gender of non-binary or genderqueer people, according to a study previously reported by Quartz.
Researchers from the University of Colorado Boulder published an article in the Proceedings of the ACM on Human-Computer Interaction in November 2019, titled “How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis and Image Labeling Services.” Their analysis of more than 2,000 images taken from Instagram, collected via the hashtags #woman, #man, #transwoman, #transman, #agenderqueer, and #nonbinary, showed cisgender men and women were accurately classified 98 percent of the time.
The research is intended to provide insight into how gender is operationalized by computer vision services, and the report includes recommendations for designing infrastructure, datasets, and policies to improve gender inclusivity and reduce potential harms.
Software from Amazon, Clarifai, IBM, Face++, Google, Kairos, Microsoft and several other companies was tested, and the researchers believe these services consistently rely on outdated gender stereotypes, for instance often misclassifying a male researcher with long hair as a woman.
Amazon notes on its website that Rekognition is not meant to be used to classify the gender of individuals, but University of Colorado Boulder doctoral researcher Morgan Klaus Scheuerman notes that the company cannot ensure the software is used as suggested, and that aggregate gender counts would fail to accurately represent trans people and would exclude other non-binary genders.
Google already did not provide gender classification through the API, but it did use gendered labels, and Scheuerman told VentureBeat that the company is attempting to set itself apart from the competition.
A study last year revealed that facial recognition research has historically been conducted almost exclusively based on a strict binary understanding of gender.