Human convincingly beats AI at Go with help from a bot

A flaw let the player surround his AI victim while distracting it elsewhere.


A strong amateur Go player has beaten a highly ranked AI system after exploiting a weakness discovered by a second computer, the Financial Times has reported. By exploiting the flaw, American player Kellin Pelrine defeated the KataGo system decisively, winning 14 of 15 games without further computer help. It's a rare Go win for humans since AlphaGo's milestone 2016 victory, which helped pave the way for the current AI craze. It also shows that even the most advanced AI systems can have glaring blind spots.

Pelrine's victory was made possible by a research firm called FAR AI, which developed a program to probe KataGo for weaknesses. After playing over a million games, it found a flaw that a decent amateur player could exploit. The technique is "not completely trivial but it's not super-difficult" to learn, said Pelrine, who used the same method to beat Leela Zero, another top Go AI.
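The article doesn't describe how FAR AI's probing program actually works, but the underlying idea, playing a large number of games against a fixed engine and searching for a policy that reliably beats it, can be sketched in miniature. Everything below is invented for illustration: a toy take-1-to-3 counting game and an engine given a deliberate blind spot, not FAR AI's software or the rules of Go.

```python
import random

def engine_move(remaining):
    """Toy engine for the take-1-to-3 race-to-zero game (taking the
    last stone wins). It plays optimally -- always leaving a multiple
    of four -- except for a deliberate blind spot at seven stones."""
    if remaining == 7:
        return 1  # blind spot: the winning move here is 3
    return remaining % 4 or random.randint(1, 3)

def play(policy, start=20):
    """Play one game; the adversary (using `policy`, a dict mapping
    stones-remaining to a move) goes first. Returns True on a win."""
    remaining = start
    while True:
        take = policy.get(remaining, random.randint(1, min(3, remaining)))
        remaining -= min(take, remaining)
        if remaining <= 0:
            return True   # adversary took the last stone
        remaining -= engine_move(remaining)
        if remaining <= 0:
            return False  # engine took the last stone

def find_exploit(trials=2000, games=20):
    """Crude probe: sample random policies and keep the best win rate.
    Against the flawed engine, a perfect exploit (win rate 1.0) turns
    up quickly; remove the blind spot and nothing beats the engine."""
    best = 0.0
    for _ in range(trials):
        policy = {n: random.randint(1, 3) for n in range(1, 21)}
        best = max(best, sum(play(policy) for _ in range(games)) / games)
    return best
```

For example, the hand-built policy `{8: 1, 6: 2, 3: 3, 2: 2, 1: 1}` steers the game into the engine's blind spot at seven stones and wins every time, while the same search against a flawless engine never finds a winning policy. The real system works on an incomparably larger scale, but the shape is similar: brute-force play surfaces a narrow, repeatable weakness a human can then execute.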

Here's how it works: the goal is to create a large "loop" of stones encircling one of the computer's groups, while distracting it with moves in other areas of the board. Even with its group nearly surrounded, the computer failed to notice the strategy. "As a human, it would be quite easy to spot," Pelrine said, since the encircling stones stand out clearly on the board.

The flaw demonstrates that AI systems can't really "think" beyond their training, so they often do things that look incredibly stupid to humans. We've seen similar behavior from chatbots like the one employed by Microsoft's Bing search engine. While it handled routine tasks like coming up with a travel itinerary well, it also gave incorrect information, berated users for wasting its time and even exhibited "unhinged" behavior, likely a byproduct of the material it was trained on.

Lightvector (the developer of KataGo) is certainly aware of the problem, which players have been exploiting for several months now. In a GitHub post, it said it's been working on a fix for a variety of attack types that use the exploit.