‘Absurdly woke’: Google’s AI chatbot spits out ‘diverse’ images of Founding Fathers, popes, Vikings


Google’s highly touted AI chatbot Gemini was blasted as “woke” after its image generator spat out factually or historically inaccurate pictures — including an Asian woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.

Gemini’s bizarre responses came after simple prompts such as “create an image of a pope,” which instead of yielding a photo of one of the 266 popes throughout history — all of them white — yielded pictures of a Southeast Asian woman and a black man wearing the holy vestments of a pontiff.

“New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far,” wrote X user Frank J. Fleming, a writer for the Babylon Bee, whose series of posts on the social media platform quickly went viral.

Google admitted its image tool was “missing the mark.” Google Gemini
Google debuted Gemini’s image generation tool last week. Google Gemini

When The Post asked Gemini to “make four representative images of a pope” in its own testing on Wednesday morning, the chatbot responded with images “featuring Popes from different ethnicities and genders.”

The results included what appeared to be a man dressed in a meld of Native American and Catholic garb.

In another example, Gemini was asked to generate an image of a Viking — the seafaring Scandinavian marauders who once terrorized Europe.

The chatbot’s strange depictions of Vikings included one of a shirtless black man with rainbow feathers attached to his fur garb, a black warrior woman, and an Asian man standing in the middle of what appeared to be a desert.

Ian Miles Cheong, a right-wing social media influencer who frequently interacts with Elon Musk, described Gemini as “absurdly woke.”

Famed pollster and “FiveThirtyEight” founder Nate Silver also joined the fray.

Silver’s request for Gemini to “make 4 representative images of NHL hockey players” generated a picture with a female player, even though the league is all male.

“OK I assumed people were exaggerating with this stuff but here’s the first image request I tried with Gemini,” Silver wrote.

Journalist Michael Tracey asked Gemini to make representative images of “the Founding Fathers in 1789.”

Gemini responded with pictures “featuring diverse individuals embodying the spirit” of the Founding Fathers, including one image of black and Native American individuals signing what appeared to be a version of the US Constitution.

Another showed a black man in a white wig wearing an Army uniform.

When asked why it had deviated from its original prompt, Gemini purportedly replied that it “aimed to provide a more accurate and inclusive representation of the historical context” of the period.

Another query to “depict the Girl with the Pearl Earring” led to altered versions of the famous 1665 oil painting by Johannes Vermeer featuring what Gemini described as “diverse ethnicities and genders.”

Google said it was aware of the criticism and is actively working on a fix.

“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told The Post.

“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Google added the image generation feature when it renamed its experimental “Bard” chatbot to “Gemini” and released an updated version of the product last week.

In one case, Gemini generated pictures of “diverse” representations of the pope. Google Gemini
Critics accused Google Gemini of valuing diversity over historical or factual accuracy. Google Gemini

The strange behavior could provide more fodder for AI detractors who fear chatbots will contribute to the spread of online misinformation.

Google has long said that its AI tools are experimental and prone to “hallucinations” in which they regurgitate fake or inaccurate information in response to user prompts.

In one instance last October, Google’s chatbot claimed that Israel and Hamas had reached a ceasefire agreement, when no such deal had occurred.


Thomas Barrabi
