Meta is teaching robots how to move on their own

Microsoft (MSFT), OpenAI, and Google (GOOG, GOOGL) might be getting all the attention when it comes to generative AI like ChatGPT, Bing, and Bard. But Facebook parent Meta (META) isn’t sitting out the AI wars.

The social media and metaverse giant has announced that it has developed what could eventually be a means to train AI-powered robots: Letting them watch videos of humans performing tasks and then, well, having them copy us.

The breakthrough comes in the form of two research projects focusing on embodied AI, or AI that powers a robot. The first is what's called an artificial visual cortex. The second is what Meta refers to as adaptive skill coordination.

Currently, getting robots to perform tasks requires a massive amount of effort. Researchers have to either program them to recognize things like obstacles or use special markers that the robots can read and follow to a set destination. That’s where the artificial visual cortex and adaptive skill coordination come in.

Meta is creating AI systems to help robots navigate on their own with the hope that they'll be able to help with everything from rescue operations to handling household chores. (Image: Meta)

The artificial visual cortex is a module for an AI system developed by letting the system watch what Meta refers to as egocentric videos, those shot from a person's perspective, of people doing things like grocery shopping or cooking lunch. The idea is to eventually let robots learn how to navigate the world without extensive programming, simply by watching videos of how people get around.
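The article doesn't detail Meta's training method, but visual encoders of this kind are commonly pretrained self-supervised on video frames by hiding most of each frame and learning to work from the visible remainder. As a rough, hypothetical illustration of just the masking step (not Meta's actual pipeline), here is how a frame can be split into patches with most of them hidden from the model:

```python
import numpy as np

def patchify(frame, patch=4):
    """Split an H x W frame into non-overlapping patch vectors."""
    h, w = frame.shape
    rows, cols = h // patch, w // patch
    return (frame[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch)
            .transpose(0, 2, 1, 3)
            .reshape(rows * cols, patch * patch))

def mask_patches(patches, mask_ratio=0.75, seed=0):
    """Hide a random subset of patches; the encoder sees only the rest."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(patches) * (1 - mask_ratio)))
    keep = np.sort(rng.choice(len(patches), size=n_keep, replace=False))
    return patches[keep], keep

frame = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for one video frame
patches = patchify(frame, patch=4)                 # 4 patches of 16 pixels each
visible, kept = mask_patches(patches, mask_ratio=0.75)
print(patches.shape, visible.shape)                # (4, 16) (1, 16)
```

In a real system, an encoder would be trained to reconstruct the hidden patches from the visible ones, forcing it to learn general-purpose visual features from the video alone.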


Adaptive skill coordination, meanwhile, is a means of teaching a robot to navigate the world using virtual settings alone. Using one of Boston Dynamics' Spot robots, Meta taught an AI to navigate a home it had never seen before by first feeding it indoor scans of 1,000 homes. Using those scans, the robot was able to understand how to avoid things like chairs, how to pick up an object, and then how to move it to a new location.
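Meta's system learns these behaviors from simulation rather than hand-written rules, but the underlying navigation task (reach a goal while routing around furniture) can be sketched with a classical stand-in. This toy example, not Meta's method, plans a path across a small occupancy grid built from a floor plan:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over an occupancy grid.
    0 = free floor, 1 = obstacle (e.g. a chair).
    Returns the list of cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:                     # walk back through predecessors
            path, cell = [], goal
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# Toy floor plan: the 1s are furniture the robot must route around.
plan = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = shortest_path(plan, (0, 0), (2, 0))
print(route)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The learned approach replaces this kind of explicit map-and-plan logic with a policy trained across thousands of simulated homes, which is what lets it generalize to a home it has never seen.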

“This robot has never been provided any sort of demonstration of how a human does a task,” explained Dhruv Batra, research director for Meta’s embodied AI and robotics group. “It has not been teleoperated, it is fully autonomous, it has been trained entirely in simulation and then deployed in the real world.”


The idea behind these two kinds of technologies is to eventually develop robots that can perform dangerous tasks, such as mining, or assist humans with their daily chores. Imagine being able to tell your robot to move the laundry from the washing machine to the dryer. Or, heck, fold your clothes!

The problem, Batra said, is that getting robots to do even the most mundane things on their own (walking to an object, grabbing it, and moving it to another location) is incredibly difficult.

“We seem to be able to develop intelligent systems that can play chess, that can play Go. These problems seem very hard for humans, but we can tackle them with machines,” he said. “But the simplest things that a 5-year-old or a 3-year-old can do, we do not have a robot or an intelligence system that can do it.”

Why? Because, as Batra explains it, it’s taken millions of years of evolution for biological organisms to be able to perform what we now consider simple physical tasks. Think about it. Grabbing a cup from a counter and putting it in a cupboard requires you to move to the location, know where the cup is, understand how far to move your arm, how hard to grasp the cup, how high to lift it, and where to place it. That’s simply too much for most current AI technology to handle.

For now, Batra says that Meta's work is still quite a way off from being implemented in an actual robot. But the work the company is doing could eventually lead us to a world where robots assist in everything from dangerous rescue operations to putting away our groceries.

But it’ll take time.

By Daniel Howley, tech editor at Yahoo Finance. Follow him @DanielHowley
