This robot dog learned how to get up after being knocked down

AI reinforcement learning could allow robots to handle a wide variety of tasks and situations.

At some point when you were a toddler, you learned how to pick yourself up after falling and eventually how to walk on your own two feet. You likely had encouragement from your parents, but for the most part, you learned through trial and error. That's not how robots like Spot and Atlas from Boston Dynamics learn to walk and dance. They're meticulously coded to tackle the tasks we throw at them. The results can be impressive, but the approach can also leave them unable to adapt to situations that aren't covered by their software. A joint team of researchers from Zhejiang University and the University of Edinburgh claims it has developed a better way.

Jueying SIM (Yang et al)

In a recent paper published in the journal Science Robotics, they detailed a reinforcement learning approach they used to allow their dog-like robot, Jueying, to learn how to walk and recover from falls on its own. The team told Wired they first trained software that could guide a virtual version of the robot. It consisted of eight AI "experts," each trained to master a specific skill. For instance, one became fluent in walking, while another learned how to balance. Each time the digital robot successfully completed a task, the team rewarded it with a virtual point. If all of that sounds familiar, it's because it's the same approach Google recently used to train its groundbreaking MuZero algorithm.
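The reward-a-virtual-point idea is the core of reinforcement learning. As a rough illustration only (this is not the authors' code, and the task, states, and hyperparameters here are invented), a minimal tabular Q-learning loop shows how an "expert" can learn a skill purely from a success reward, here a toy "stand up" chain where state 0 is fallen and the top state is upright:

```python
# Toy sketch of reward-driven training for one "expert" (illustrative only).
# State 0 = fallen, state n_states = upright; action 1 pushes up, 0 pushes down.
import random

def train_expert(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    random.seed(0)  # deterministic for the example
    # Q[state][action] estimates long-run reward for each action in each state.
    Q = [[0.0, 0.0] for _ in range(n_states + 1)]
    for _ in range(episodes):
        s = 0  # every episode starts from a fall
        while s < n_states:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, min(n_states, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == n_states else 0.0  # the "virtual point" on success
            # Standard Q-learning update toward reward plus discounted future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train_expert()
# The learned greedy policy picks "up" (action 1) in every non-terminal state.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(5)]
```

The real system trains neural-network policies in a physics simulator rather than a table, but the loop is the same shape: act, observe a reward, nudge the policy toward whatever earned the point.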

Once they successfully trained the eight experts, they developed an additional network to act as a kind of head coach. It would manage the inputs of the eight other algorithms, prioritizing one or more as needed in a given situation. You can see it in action in the GIF above. They then ported their software to one of their prototypes and put it to the test. You can see the punishment they put it through as they kicked and pushed it to the ground. Each time, it got back up and started walking again.
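The "head coach" is essentially a gating network over the experts. As a hedged sketch of the general mixture-of-experts idea (the function names, dimensions, and fixed gate scores below are invented for illustration, not taken from the paper), a gate can softmax-weight each expert's proposed action and blend them into one command:

```python
# Illustrative mixture-of-experts gating (not the published architecture).
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def gated_action(expert_actions, gate_logits):
    # expert_actions: one proposed action vector per expert.
    # gate_logits: the gate's situational score for each expert
    # (in the real system these would come from a learned network).
    w = softmax(gate_logits)
    dim = len(expert_actions[0])
    # Weighted blend of all experts' proposals into a single action.
    return [sum(w[i] * expert_actions[i][d] for i in range(len(w)))
            for d in range(dim)]

# Two experts proposing different joint commands; the gate strongly
# favors expert 0, so the blended action is close to its proposal.
blend = gated_action([[1.0, 0.0], [0.0, 1.0]], [5.0, 0.0])
```

Because the weights are recomputed every timestep, the gate can shift priority between, say, the balance expert and the walking expert as the robot is shoved, which is what produces the recovery behavior described above.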

Jueying recovery (Yang et al)

Zhibin Li, one of the authors of the paper, told Wired the goal of the team's research is to create "more intelligent machines, which are able to combine flexible and adaptive skills on the fly, to handle a variety of different tasks that they have never seen before." However, it may be a while before we see Jueying and Spot sparring for the best robot dog award. One of the challenges ahead of the team is reducing the amount of computational power it takes to simulate the robot's training, which it will need to do to make the approach practical for real-world applications.