Google AI makes string of errors after relying on joke websites

Google AI repeated false claim that former US president Barack Obama was a Muslim - Evelyn Hockstein/REUTERS

Google’s artificial intelligence-powered search results have claimed that Barack Obama is a Muslim and told people to eat rocks, in the latest high-profile case of the company’s AI systems misfiring.

Users of the search engine’s new “AI overviews” have shared multiple instances of the feature displaying incorrect or potentially dangerous answers, days into its launch.

In one case, the AI claimed that Mr Obama was America’s only Muslim president. In others it told people to put glue on pizza if cheese does not stick to the base and said it was healthy to eat one rock a day.

The AI overview feature uses data from existing websites to inform its answers, but in many cases Google appears to be relying on joke websites or misinterpreting reliable sources.

Its answer about Mr Obama, who is Christian, referred to an Oxford University Press webpage about a book on American Christianity.


One of the book’s chapters was titled “Barack Hussein Obama: America’s First Muslim President?” – a reference to some Americans’ belief that Mr Obama is a Muslim – which Google’s AI appears to have misinterpreted as truth.

The advice to eat rocks was based on an article in The Onion, a satirical news website, and the recommendation to put glue on pizza was based on a joke posted on Reddit more than a decade ago.

Google’s AI overviews, announced last week, have been introduced in the United States in recent days and are due to be added to search results worldwide this year.

Google said the incorrect answers were given in response to “uncommon queries” and that it had tested the feature rigorously before launch.

It said: “The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences.

“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. We conducted extensive testing before launching this new experience, and will use these isolated examples as we continue to refine our systems overall.”

The cases are the latest AI mishap for Google. Earlier this year it suspended its Gemini chatbot’s ability to generate images of people after users found that it would draw pictures of black Nazis and Native American Vikings.

Google suspended its Gemini chatbot's ability to generate images of people after it created pictures of black Nazis and Native American Vikings

The company later apologised after its text chatbot failed to condemn paedophilia and equated Elon Musk with Adolf Hitler. The company’s co-founder Sergey Brin admitted that it had “messed up”.

Google is under pressure to add more AI features to its products in response to Microsoft, whose partnership with ChatGPT developer OpenAI has allowed it to pull ahead in AI.

However, Google’s response has been controversial because many websites fear that the search engine answering questions directly will mean fewer visits to their pages.