ChatGPT facial recognition potential makes OpenAI nervous
ChatGPT, the artificial intelligence-powered large language model, can analyze images, including recognizing and describing people’s faces. But OpenAI, the company behind the chatbot, says it is not ready to roll out facial recognition or analysis features for public use, as they could invite legal trouble in jurisdictions that require consent to use biometric data.
The image analysis feature is part of the advanced version of the software, GPT-4, which was announced back in March. The company’s AI policy researcher Sandhini Agarwal told the New York Times that the technology can identify public figures, such as people with a Wikipedia page, but it does not match faces to images found on the internet the way tools from Clearview AI and PimEyes do. Those companies have taken heat for their data collection practices, as has ChatGPT, albeit so far over non-biometric data.
While OpenAI has had its fair share of regulatory scrutiny, the company has other reasons to hold off on making image analysis available.
Its creators are still unsure whether the chatbot might say inappropriate things about people’s faces, such as assessing their gender or emotional state. The company is also concerned that ChatGPT’s visual analysis could invent a name for a person by producing so-called “hallucinations,” the misleading or inaccurate results that have already been observed in the chatbot.
In August last year, OpenAI found gender and age bias in its computer vision model CLIP (Contrastive Language-Image Pre-training), suggesting that it is not appropriate for facial recognition and similar tasks.
Some users, however, have been lucky enough to try GPT-4’s image analysis feature.
New Zealand-based podcaster Jonathan Mosen, who is blind, tried out the advanced version of the chatbot through a collaboration with Be My Eyes, a Danish mobile platform that connects blind and visually impaired people with sighted volunteers and companies to help them recognize objects and navigate everyday situations. Mosen described his experience on his podcast “Living Blindfully.”
Microsoft’s AI-powered Bing chatbot has also provided a limited rollout of the visual analysis feature to certain users, but pictures of faces were automatically blurred, according to the Times. OpenAI believes that in the future, the technology could help users identify and solve problems just by uploading images, such as fixing a car engine or identifying a skin rash.
Agarwal did not mention the potential for spoof attacks using material generated by ChatGPT, but that possibility is raised in a recent Biometric Update guest post by Jumio Chief of Digital Identity Philipp Pointner.
Meanwhile, some companies such as Sensory are looking towards integrating voice-enabled consumer electronics with text-based ChatGPT.