Google is taking reservations to talk to its supposedly sentient chatbot

There's a pretty good chance it won't even say anything racist.

Carol Yepes via Getty Images

At the I/O 2022 conference this past May, Google CEO Sundar Pichai announced that the company would, in the coming months, gradually make its experimental LaMDA 2 conversational AI model available to select beta users. Those months have come. On Thursday, researchers at Google's AI division announced that interested users can register to explore the model as access gradually rolls out.

Regular readers will recognize LaMDA as the supposedly sentient natural language processing (NLP) model that a Google researcher got himself fired over. NLP models are a class of AI designed to parse human speech into actionable commands; they power digital assistants and chatbots like Siri and Alexa, and do the heavy lifting for real-time translation and subtitling apps. Basically, whenever you're talking to a computer, it's using NLP tech to listen.

"I'm sorry, I didn't quite get that" is a phrase that still haunts many early Siri adopters' dreams, though in the past decade NLP technology has advanced at a rapid pace. Today's models are trained on hundreds of billions of parameters, can translate hundreds of languages in real time and even carry lessons learned in one conversation through to subsequent chats.


Google's AI Test Kitchen will let beta users experiment with and explore interactions with the model in a controlled, presumably supervised, sandbox. Access will begin rolling out to small groups of US Android users today before reaching iOS devices in the coming weeks. The program offers a set of guided demos that showcase LaMDA's capabilities.

"The first demo, 'Imagine It,' lets you name a place and offers paths to explore your imagination," Tris Warkentin, Group Product Manager at Google Research, and Josh Woodward, Senior Director of Product Management for Labs at Google, wrote in a Google AI blog Thursday. "With the 'List It' demo, you can share a goal or topic, and LaMDA will break it down into a list of helpful subtasks. And in the 'Talk About It (Dogs Edition)' demo, you can have a fun, open-ended conversation about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to veer off-topic."

The focus on safe, responsible interactions is a common one in an industry where there's already a name for chatbot AIs that go full Nazi, and that name is Tay. Thankfully, that exceedingly embarrassing incident was a lesson that Microsoft and much of the rest of the AI field have taken to heart, which is why we see such stringent restrictions on what users can have Midjourney or DALL-E 2 conjure, or what topics Facebook's BlenderBot 3 can discuss.

That's not to say the system is foolproof. "We’ve run dedicated rounds of adversarial testing to find additional flaws in the model," Warkentin and Woodward wrote. "We enlisted expert red teaming members... who have uncovered additional harmful, yet subtle, outputs." Those include the model failing "to produce a response when they’re used because it has difficulty differentiating between benign and adversarial prompts," and producing "harmful or toxic responses based on biases in its training data." As many AIs these days are wont to do.