MIT researchers develop a new 'liquid' neural network that's better at adapting to new info

A new type of neural network that can adapt its underlying behavior after the initial training phase could be the key to big improvements in situations where conditions change quickly -- like autonomous driving, controlling robots or diagnosing medical conditions. These so-called "liquid" neural networks were devised by Ramin Hasani and his team at MIT's Computer Science and Artificial Intelligence Lab (CSAIL), and they have the potential to greatly expand the flexibility of AI technology after the training phase, when it's engaged in the actual practical inference work done in the field.

Typically, a neural network's parameters are essentially fixed after the training phase, during which the algorithm is fed a large volume of relevant target data and rewarded for correct responses in order to hone its inference capabilities. But Hasani's team developed a means by which their "liquid" neural net can keep adapting its parameters over time in response to new information. That means if a neural net tasked with perception on a self-driving car goes from clear skies into heavy snow, for instance, it's better able to deal with the shift in circumstances and maintain a high level of performance.
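Hasani's published work frames these as "liquid time-constant" (LTC) networks: each neuron is a small differential equation whose effective time constant depends on the current input, so the unit's dynamics themselves shift as conditions shift. The following is a minimal, illustrative sketch of one such cell in Python/NumPy, using the fused Euler update described in the LTC paper; the class name, layer sizes and the sigmoid choice for f are assumptions made here for clarity, not the team's actual code.

    import numpy as np

    class LiquidCell:
        """Illustrative liquid time-constant (LTC) cell.

        Each hidden unit follows dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A,
        so its effective time constant 1 / (1/tau + f(x, u)) changes with the
        input u -- the "liquid" part.
        """

        def __init__(self, n_inputs, n_hidden, tau=1.0, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
            self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
            self.bias = np.zeros(n_hidden)
            self.A = rng.normal(0.0, 0.1, n_hidden)  # per-unit bias state
            self.tau = tau

        def step(self, x, u, dt=0.05):
            """Advance the hidden state x by one fused (semi-implicit) Euler step."""
            # A sigmoid keeps f > 0, so the effective time constant stays positive.
            pre = self.W_rec @ x + self.W_in @ u + self.bias
            f = 1.0 / (1.0 + np.exp(-pre))
            # Fused Euler update: x' = (x + dt*f*A) / (1 + dt*(1/tau + f))
            return (x + dt * f * self.A) / (1.0 + dt * (1.0 / self.tau + f))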

The main difference in the method introduced by Hasani and his collaborators is its focus on time-series adaptability: rather than being built on training data made up of snapshots, or static moments fixed in time, liquid networks inherently consider time-series data -- sequences of images rather than isolated slices.
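In use, a cell like this consumes a sequence one step at a time and carries its hidden state forward, rather than treating each frame as an independent snapshot. A rough usage example, continuing the hypothetical LiquidCell sketch above:

    # Run the cell over a time series of feature vectors (say, per-frame
    # features from a dashcam), carrying the hidden state across steps.
    cell = LiquidCell(n_inputs=8, n_hidden=16)
    x = np.zeros(16)                    # hidden state persists across frames
    frames = np.random.default_rng(1).normal(size=(100, 8))
    for u in frames:
        x = cell.step(x, u)             # dynamics adapt to each new input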

Because of the way the system is designed, it's also more open to observation and study by researchers than a traditional neural network. Neural networks are typically referred to as "black boxes": those developing the algorithms know the inputs and the criteria for determining and encouraging successful behavior, but they typically can't tell what exactly goes on inside the network that leads to success. The "liquid" model offers more transparency there, and it's less costly to compute because it relies on fewer, but more sophisticated, computing nodes.

Meanwhile, performance results indicate that it beats other alternatives at accurately predicting the future values of known data sets. The next step for Hasani and his team is to determine how best to improve the system further and ready it for use in actual practical applications.