Facebook Research is developing touchy-feely curious robots

We're tantalizingly close to AI with all five senses.

As a social media platform with global reach, Facebook leans extensively on its artificial intelligence and machine-learning systems to keep the site online and harmful content off it (at least, some of the time). Following its announcement at the start of the month regarding self-supervised learning, computer vision, and natural language processing, Facebook on Monday shared details about three additional areas of research that could eventually lead to more capable and curious AI.

"Much of our work in robotics is focused on self-supervised learning, in which systems learn directly from raw data so they can adapt to new tasks and new circumstances," a team of researchers from FAIR (Facebook AI Research) wrote in a blog post. "In robotics, we're advancing techniques such as model-based reinforcement learning (RL) to enable robots to teach themselves through trial and error using direct input from sensors."

Specifically, the team has been trying to get a six-legged robot to teach itself to walk without any outside assistance. "Generally speaking, locomotion is a very difficult task in robotics and this is what makes it very exciting from our perspective," Roberto Calandra, a FAIR researcher, told Engadget. "We have been able to design algorithms for AI and actually test them on a really challenging problem that we otherwise don't know how to solve."

The hexapod begins its existence as a pile of legs with no understanding of its surroundings. Using a reinforcement-learning algorithm, the robot slowly figures out a controller that will help it meet its goal of forward locomotion. And since the algorithm utilizes a recursive self-improvement function, the robot can monitor the information it gathers and further optimize its behavior over time. That is, the more experience the robot gains, the better it performs.
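
Facebook hasn't released the hexapod's training code, but the loop Calandra describes follows the standard model-based RL recipe: act, record the transition, refit a dynamics model, and plan against it, so every round of experience sharpens the next. Here's a minimal Python sketch of that recipe on a toy system; the tiny dimensions, the linear model and the random-shooting planner are illustrative stand-ins, not FAIR's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 3, 2, 8  # toy state dim, action dim, planning horizon (assumed sizes)

def true_dynamics(s, a):
    """Stand-in for the real robot: the learner never sees this function,
    only the transitions it produces."""
    return 0.9 * s + 0.3 * np.tanh(a) @ np.ones((A, S)) + 0.01 * rng.standard_normal(S)

def fit_model(states, actions, next_states):
    """Least-squares fit of a linear dynamics model: s' ~ [s, a] @ W."""
    X = np.hstack([states, actions])
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return W

def plan(W, s, n_candidates=128):
    """Random-shooting planner: roll candidate action sequences through the
    learned model and keep the first action of the best-scoring sequence."""
    best_action, best_progress = None, -np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, size=(H, A))
        sim = s
        for a in seq:
            sim = np.hstack([sim, a]) @ W   # predicted next state
        if sim[0] > best_progress:          # reward: forward distance along x
            best_progress, best_action = sim[0], seq[0]
    return best_action

# The self-improvement loop: act, record the transition, refit the model.
# More experience -> better model -> better plans.
states, actions, next_states = [], [], []
s = np.zeros(S)
for step in range(150):
    if len(states) < 20:                    # bootstrap with random exploration
        a = rng.uniform(-1, 1, A)
    else:
        W = fit_model(np.array(states), np.array(actions), np.array(next_states))
        a = plan(W, s)
    s_next = true_dynamics(s, a)
    states.append(s); actions.append(a); next_states.append(s_next)
    s = s_next

print(f"forward position after training: {s[0]:.2f}")
```

The key property is the one FAIR highlights: because the model is refit on everything the robot has seen, planning improves as experience accumulates.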

This is easier said than done, given that the robot is expected to figure out not only its location and orientation in space but its balance and momentum as well -- all from a series of sensors located in the machine's knees. By optimizing the robot's behavior and focusing on getting it walking in as few steps as possible, Facebook taught the robot how to "walk" in a matter of hours, rather than days.

But what's a hexapod to do once it figures out how to move? Go exploring, obviously. But it's not so easy to induce wanderlust in robots that are typically trained to achieve a narrowly defined goal. Yet this is exactly what Facebook is trying to do, with some help from its colleagues at NYU and a robotic arm.

asdf
asdf

Previous research into imparting a sense of curiosity to AI has focused on reducing uncertainty. Facebook's latest efforts strive for the same goal but do so in a more structured manner.

"We actually started with a model that doesn't know much about itself," FAIR researcher Franziska Meier told Engadget. "At this point, the robot knows how to hold its arm, but it doesn't actually know what actions to apply to reach a certain target." But as the robot learns which torques need to be applied to move the arm into the next target configuration, it can eventually begin to optimize its planning.

"We use this model that tells us this, to plan ahead for a number of time steps," Meier continued. "And we try to use this planning procedure to optimize the action sequence to achieve the task." To prevent the robot from optimizing its routines too highly and getting caught in a loop, the research team rewarded the robot for actions that resolved uncertainty.

"We do this exploration, we actually learn a better model faster, achieve the task faster, and we learn a model that generalizes better to new tasks," Meier concluded.

Finally, Facebook has been hard at work teaching robots how to feel. Not emotionally, but physically. And it's leveraging a predictive deep-learning model originally designed for video. "It's essentially a technique that can predict videos from the current state, from the current image and an action," Calandra explained.

The team trained the AI to operate directly on raw data, in this case readings from a high-resolution tactile sensor, rather than through a hand-built model. "Our work shows that such policies may be learned entirely without rewards, through diverse unsupervised exploratory interactions with the environment," the researchers concluded. During the experiment, the robot was able to successfully manipulate a joystick, roll a ball and identify the correct face of a 20-sided die.

"We show that we can essentially have a robot manipulating small objects in an unsupervised manner," Calandra said. "And what it means in practice is... we can actually predict accurately what's going to be the output of [a given] action. This allows us to start planning into the future. We can optimize for the sequence of actions that will actually yield the desired outcome."

Combining visual and tactile inputs could greatly improve the functionality of future robotic platforms and sharpen the learning techniques behind them. "To build machines that can learn by interacting with the world independently, we need robots that can leverage data from multiple senses," the team concluded. We can only imagine what Facebook has in store for this. However, the company declined to comment on potential practical applications for the research.