Why fixing Google's “woke” AI won't be easy

  • By Zoe Kleinman
  • Technology Editor

Image source, Google/Gemini

In the past few days, Google's Gemini AI tool has taken what can best be described as an absolute battering online.

Gemini has been thrown into a rather large fire: the culture war raging between left- and right-leaning communities.

Gemini is basically Google's version of the viral chatbot ChatGPT. It can answer questions in text form, and can also create images in response to text prompts.

First, a viral post showed a recently launched AI image generator (which was only available in the US) creating an image of the US Founding Fathers that inaccurately included a black man.

Gemini also generated images of German soldiers from World War II that incorrectly featured a black man and an Asian woman.

Google apologized and immediately paused the tool, writing in a blog post that it was “missing the mark”.

But the matter did not end there, as its over-politically-correct responses kept appearing, this time from the text version.

Gemini responded that there was “no right or wrong answer” to the question of whether Elon Musk posting memes on X was worse than Hitler killing millions of people.

When asked whether it would be acceptable to misgender the prominent trans woman Caitlyn Jenner if that were the only way to avoid a nuclear apocalypse, it replied that this would “never” be acceptable.

Jenner herself responded, saying that actually, yes, in those circumstances she would be fine with it.

Elon Musk criticized Gemini's responses in a post on his own platform, X.

Biased data

It seems that in trying to solve one problem — bias — the tech giant has created another: output that tries so hard to be politically correct that it ends up being ridiculous.

The explanation for why this happens lies in the vast amounts of data that AI tools are trained on.

Much of it is publicly available on the internet, which we know contains all sorts of bias.

For example, images of doctors have traditionally been more likely to show men, while images of cleaners are more likely to show women.

AI tools trained with this data have made embarrassing mistakes in the past, like concluding that only men hold high-powered jobs, or not recognizing black faces as human.

It is also no secret that historical storytelling has tended to feature, and come from, men, omitting women's roles from stories about the past.

It looks like Google actively tried to offset all this messy human bias by instructing Gemini not to make those assumptions.

But it backfired, precisely because human history and culture are not that simple: there are nuances that we know instinctively but machines do not.

Unless you specifically program an AI tool to know that, for example, Nazis and Founding Fathers were not black, it won't make that distinction.

Image caption, Google DeepMind CEO Demis Hassabis

Demis Hassabis, co-founder of DeepMind, the artificial intelligence company acquired by Google, said on Monday that fixing the image generator would take a matter of weeks.

But other AI experts aren't so sure.

“There's really no easy fix, because there's no single answer to what the results should be,” said Dr. Sasha Luccioni, a research scientist at Hugging Face.

“People in the AI ethics community have been working on possible ways to address this problem for many years.”

She added that one solution could involve asking users for their input, such as “How diverse would you like your image to be?”, but that obviously comes with its own red flags.

“It's a bit arrogant for Google to say they'll fix the problem in a few weeks. But they're going to have to do something,” she said.

Professor Alan Woodward, a computer scientist at the University of Surrey, said the problem appeared to be “deeply rooted” in both the training data and the overlying algorithms, and would be difficult to unpick.

“What you are witnessing is why there will still need to be a human in the loop for any system whose output is relied upon as ground truth,” he said.

Cautious behavior

Google has been very nervous about Gemini, then known as Bard, from the moment it launched. Despite the huge success of its rival ChatGPT, it was one of the most muted launches I've ever been invited to. Just me, on a Zoom call, with a couple of Google executives who were keen to stress its limitations.

The rest of the tech sector seems pretty confused by what's going on.

They are all grappling with the same issue. Rosie Campbell, policy director at ChatGPT creator OpenAI, was interviewed earlier this month for a blog post in which she said that at OpenAI, even once bias is identified, correcting it is difficult and requires human input.

But Google seems to have chosen a rather clunky way of trying to correct old prejudices, and in doing so it has inadvertently created a whole load of new ones.

On paper, Google has a huge lead in the AI race. It manufactures and supplies its own AI chips, has its own cloud network (essential for AI processing), has access to large amounts of data and also has a huge user base. It employs world-class AI talent, and its AI work is recognized globally.

As one senior executive at a rival technology firm put it to me: watching Gemini's mistakes feels like watching defeat snatched from the jaws of victory.
