Google wants robots to generate their own code

There are countless big problems left to solve in the world of automation, and robotic learning sits somewhere near the top. While it’s true that humans have gotten pretty good at programming systems for specific tasks, a big, open-ended question remains: and then what?

New research demonstrated at Google’s AI event in New York City this morning proposes the notion of letting robotic systems effectively write their own code. The concept is designed to save human developers the hassle of having to go in and reprogram things as new information arises.

Image Credits: Google

The company notes that existing research and trained models can be effective in implementing the concept. All of that work can prove foundational for developing systems that continue to generate their own code based on the objects and scenarios they encounter in the real world. The new work on display today is called Code as Policies (CaP).

Image Credits: Google

Google Research Intern Jacky Liang and Robotics Research Scientist Andy Zeng note in a blog post:

With CaP, we propose using language models to directly write robot code through few-shot prompting. Our experiments demonstrate that outputting code led to improved generalization and task performance over directly learning robot tasks and outputting natural language actions. CaP allows a single system to perform a variety of complex and varied robotic tasks without task-specific training.
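The mechanism the researchers describe can be sketched roughly like this: a language model is shown a handful of (instruction, policy code) examples, then asked to complete code for a new instruction, and the generated code is executed against a small library of control primitives. The snippet below is an illustrative mock-up of that loop, not Google's actual API; the function names, the prompt format, and the canned "model output" are all assumptions made for the sake of a runnable example.

```python
# Hypothetical sketch of the Code-as-Policies loop. Primitive names and the
# prompt format are invented for illustration; a real system would send the
# prompt to a code-writing language model instead of returning canned output.

FEW_SHOT_PROMPT = """\
# stack the green block on the blue block.
pick_and_place("green block", target="blue block")

# move the red block to the left side.
pick_and_place("red block", target="left side")
"""

actions = []  # log of primitive calls, so we can inspect what the policy did


def pick_and_place(obj, target):
    """Control primitive: pick up `obj` and place it at `target`."""
    actions.append(("pick_and_place", obj, target))


def generate_policy_code(instruction):
    """Stand-in for the LLM call: append the new instruction to the few-shot
    examples and (in a real system) ask the model to continue the pattern.
    Here we return hard-coded output so the sketch runs without a model."""
    prompt = FEW_SHOT_PROMPT + f"# {instruction}\n"  # what the model would see
    return (
        'for block in ["red block", "green block"]:\n'
        '    pick_and_place(block, target="bowl")\n'
    )


# Generate a policy for an unseen instruction, then execute it against the
# primitives -- the code itself is the robot's policy.
code = generate_policy_code("put every block in the bowl.")
exec(code, {"pick_and_place": pick_and_place})
print(actions)
```

The point of the design is that the model's output is code, not a natural-language action label, so loops, conditionals, and composition of primitives come for free from the programming language itself.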

The system, as described, also relies on third-party libraries and APIs to generate code best suited to a specific scenario, along with support for multiple languages and (why not?) emojis. The information accessible through those APIs is one of the existing limitations at present. The researchers note, “These limitations point to avenues for future work, including extending visual language models to describe low-level robot behaviors (e.g., trajectories) or combining CaPs with exploration algorithms that can autonomously add to the set of control primitives.”

As part of today’s announcement, Google is releasing open source versions of the code through its GitHub site, so others can build on the research it has presented thus far. So, you know, all of the usual caveats about early-stage research apply here.