Google has temporarily stopped its latest artificial intelligence model, Gemini, from generating images of people, as a backlash erupted over its depiction of different ethnicities and genders.
Gemini creates realistic images based on users’ descriptions in a similar manner to OpenAI’s ChatGPT. Like other models, it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs.
However, some users have complained that it has overcorrected towards generating images of women and people of colour, such that they are featured in historically inaccurate contexts, for instance in depictions of Viking kings or German soldiers from the second world war.
Google said: “We’re working to improve these kinds of depictions immediately. Gemini’s image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
It added that it would “pause the image generation of people and will re-release an improved version soon”.
The search giant has described Gemini as its “largest, most capable and most general” AI system, available as a free app. The company has said the model can analyse information from images and audio, and has sophisticated reasoning and coding capabilities.
It follows the release of other sophisticated models by rivals including OpenAI, Meta and start-ups Anthropic and Mistral.