Google explains what went wrong with Gemini AI image generation

As world leaders and industry stalwarts slammed Google over inaccuracies in its AI-generated historical images, the tech giant has tried to explain what exactly went wrong with its AI.

The company has made the decision to pause Gemini’s image generation of people while it works on “improving the accuracy of its responses”.

While Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, expressed concern over the potential violation of Indian IT laws by Google’s Gemini AI chatbot, Tesla and SpaceX CEO Elon Musk accused Google of running “racist, anti-civilisational programming” with its AI models.

Prabhakar Raghavan, Senior Vice President at Google, admitted in a recent blog post that it is clear “this feature missed the mark”.

“Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well,” Raghavan said.

So what went wrong?

“In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” explained Raghavan.

And second, over time, the model became “way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive”.

The company said it did not want Gemini to create inaccurate historical images, or any other inaccurate images.

“So we turned the image generation of people off and will work to improve it significantly before turning it back on. This process will include extensive testing,” said Raghavan.

However, he said he could not “promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results”.

“But I can promise that we will continue to take action whenever we identify an issue,” Raghavan added.
