Unforeseen algorithm problems give Google a Gemini migraine

If this is what AI is doing in its pre-school years, its teen years should be a scream.
Google executives are trying to appear forthcoming and responsive about troubling responses from the company's Gemini conversational app. Recent responses – at least those getting the most attention – seemed specifically designed to enrage U.S. nativists.
Until further notice, Gemini will not generate images of people.
The problem stems from a new, and apparently faulty, image generation feature that was added to Gemini three weeks ago.
In one case, the AI responded to an illustration prompt by replacing the white male Founding Fathers in an image with people of other races and American Indians, according to tech-investment publication TechCrunch.
A Google blog post apologizes for Gemini's image output and says the feature has been taken offline while a fix is developed. No timeline has been disclosed, but the “process will include extensive testing,” according to Google.
The blog post does not mention the potential for cybercriminals to use Gemini-generated images in presentation and injection attacks to commit online fraud.
But executives say two things went wrong with the tool.
Gemini was tuned to depict a range of people, but that tuning failed to account for cases where showing a range was an objective mistake.
And the model grew “way more” cautious than its developers intended, treating some entirely innocuous prompts as sensitive and refusing to answer them rather than risk causing offense.
Together, Google says, these conditions led the model to generate erroneous answers that, in some cases, appeared to show a cultural or political bias.