Intel, Arm and Nvidia propose new standard to make AI processing more efficient


In pursuit of faster and more efficient AI system development, Intel, Arm and Nvidia today published a draft specification for what they refer to as a common interchange format for AI. While voluntary, the proposed "8-bit floating point (FP8)" standard, they say, has the potential to accelerate AI development by optimizing hardware memory usage and serving both AI training (i.e., engineering AI systems) and inference (running the systems).

When developing an AI system, data scientists are faced with key engineering choices beyond simply collecting data to train the system. One is selecting a format to represent the weights of the system -- weights being the factors learned from the training data that influence the system's predictions. Weights are what enable a system like GPT-3 to generate whole paragraphs from a sentence-long prompt, for example, or DALL-E 2 to create photorealistic portraits from a caption.

Common formats include half-precision floating point, or FP16, which uses 16 bits to represent the weights of the system, and single precision (FP32), which uses 32 bits. Half-precision and lower reduce the amount of memory required to train and run an AI system while speeding up computations and even reducing bandwidth and power usage. But they sacrifice some accuracy to achieve those gains; after all, 16 bits is less to work with than 32.
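The memory-versus-accuracy trade-off described above is easy to see directly. As a minimal sketch (the array size and NumPy usage here are illustrative, not from the standard), casting a million FP32 weights down to FP16 halves the storage but introduces small rounding errors:

```python
# Sketch: FP32 vs. FP16 weight storage -- half the memory,
# at the cost of some rounding error. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal(1_000_000).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)  # round each weight to 16 bits

print(weights_fp32.nbytes)  # 4000000 bytes (4 bytes per weight)
print(weights_fp16.nbytes)  # 2000000 bytes (2 bytes per weight)

# The rounding to 16 bits sacrifices some accuracy:
max_err = np.abs(weights_fp32 - weights_fp16.astype(np.float32)).max()
```

In practice the lost precision is often tolerable for neural-network weights, which is exactly the bet FP8 pushes one step further.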

Many in the industry -- including Intel, Arm and Nvidia -- are coalescing around FP8 (8 bits) as the sweet spot, however. In a blog post, Nvidia director of product marketing Shar Narasimhan notes that the proposed FP8 format shows "comparable accuracy" to 16-bit precisions across use cases including computer vision and image-generating systems while delivering "significant" speedups.

Nvidia, Arm and Intel say they're making their FP8 specification available in an open, license-free format. A white paper describes it in more detail; Narasimhan says the specification will be submitted to the IEEE, the professional organization that maintains standards across a number of technical domains, for consideration at a later date.

"We believe that having a common interchange format will enable rapid advancements and the interoperability of both hardware and software platforms to advance computing," Narasimhan.

The trio isn't pushing for parity out of the goodness of their hearts, necessarily. Nvidia's GH100 Hopper architecture natively implements FP8, as does Intel's Gaudi2 AI training chipset.

But a common FP8 format would also benefit rivals like SambaNova, AMD, Groq, IBM, Graphcore and Cerebras -- all of which have experimented with or adopted some form of FP8 for system development. In a blog post this July, Graphcore co-founder and CTO Simon Knowles wrote that the "advent of 8-bit floating point offers tremendous performance and efficiency benefits for AI compute," asserting that it's also "an opportunity" for the industry to settle on a "single, open standard" rather than ushering in a mix of competing formats.