Nvidia CEO explains why Tesla's use of AI is 'revolutionary'

Nvidia's (NVDA) first quarter results beat analyst expectations, with revenue rising 262% to $26.0 billion. The company also announced a 10-for-1 stock split and a dividend increase.

In a Yahoo Finance exclusive interview, Nvidia founder and CEO Jensen Huang spoke about the results and how the demand for his company's products is "just so strong." He also weighed in on how companies like Meta (META) and Tesla (TSLA) are pushing AI technology forward.

Huang says Meta's Llama large language models are "really, really important" given how they are "activating large language models and generative AI work all over the world."

On Tesla, Huang describes how the company's latest Full Self-Driving technology is "an end-to-end generative model," saying it "learns from watching videos, surround video, and it learns about how to drive... using generative AI [to] predict the path... how to understand and how to steer the car. And so the technology is really revolutionary."


Watch the video to hear why Huang says "that learning from video directly is the most effective way to train" AI systems for autonomous vehicles.

Be sure to check out the full interview with Nvidia CEO Jensen Huang.

This post was written by Stephanie Mikulich.

For more Yahoo Finance coverage of Nvidia:

Nvidia stock pops 4% after earnings beat forecasts, announces stock split and dividend hike

Nvidia CEO Jensen Huang is the 'man of the year': Investor

Why this analyst says Nvidia is not a stock to buy

How Nvidia earnings are impacting the chip market

Beyond the Ticker: Nvidia

Video transcript

Jensen, I want to ask about the cloud providers versus the other industries that you said are getting into the AI game, or getting Nvidia chips. You had mentioned in comments in the release, and then we heard from CFO Colette Kress, that the mid-40% range of data center revenue comes from those cloud providers. As we start to see these other industries open up, what does that mean for Nvidia? Will the cloud providers shrink their share, and will these other industries pick up where those cloud providers were?

I expect them both to grow, in a couple of different areas. Of course, the consumer internet service providers: this last quarter, the big story was from Meta, and the incredible scale that Mark is investing in. Llama 2 was a breakthrough; Llama 3 was even more amazing. They're creating models that are activating large language model and generative AI work all over the world. And so the work that Meta is doing is really, really important.

You also saw Elon talking about the incredible infrastructure that he's building, and one of the things that's really revolutionary about version 12 of Tesla's Full Self-Driving is that it's an end-to-end generative model. It learns from watching videos, surround video, and it learns how to drive end to end, using generative AI to predict the path, how to understand and how to steer the car. And so the technology is really revolutionary, and the work that they're doing is incredible.

So I gave you two examples. A startup company that we work with called Recursion has built a supercomputer for understanding proteins and generating molecules for drug discovery. The list goes on; we could go on all afternoon. People in so many different areas are now recognizing that we have software and AI models that can understand and learn almost any language: the language of English, of course, but also the language of images and video, of chemicals and proteins, and even physics, and that can generate almost anything.

And so it's basically like machine translation, and that capability is now being deployed at scale in so many different industries.

Jensen, just one more quick last question.

I'm glad you talked about the auto business and what you're seeing there. You mentioned that automotive is now the largest enterprise vertical within data center, and you talked about the Tesla business. But what is that all about? Is it self-driving among other automakers, too? Are there other functions that automakers are using within data center? Help us understand that a little bit better.

Well, Tesla is far ahead in self-driving cars, but every single car, someday, will have to have autonomous capability. It's safer, it's more convenient, it's more fun to drive. And in order to do that, it is now very well known, very well understood, that learning from video directly is the most effective way to train these models.

We used to train based on images that are labeled. We would say: this is a car, this is a sign, this is a road, and we would label that manually. It's incredible. And now we just put video right into the car and let the car figure it out by itself. This technology is very similar to the technology of large language models, but it requires an enormous training facility, and the reason for that is that the data rate of video, the sheer amount of data in video, is so high.
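Huang's point about video data rates can be made concrete with some back-of-the-envelope arithmetic. The short Python sketch below estimates the uncompressed data rate of a surround-camera rig; the camera count, resolution, and frame rate are purely illustrative assumptions, not actual Tesla specifications.

```python
# Back-of-the-envelope estimate of the raw data rate of a surround-camera rig.
# Camera count, resolution, bit depth, and frame rate are illustrative
# assumptions, not Tesla specifications.

def raw_video_rate_bytes_per_sec(cameras: int, width: int, height: int,
                                 bytes_per_pixel: int, fps: int) -> int:
    """Uncompressed bytes per second produced by a multi-camera video rig."""
    return cameras * width * height * bytes_per_pixel * fps

# Assume 8 cameras at 1280x960, 24-bit RGB, 36 frames per second.
rate = raw_video_rate_bytes_per_sec(cameras=8, width=1280, height=960,
                                    bytes_per_pixel=3, fps=36)
print(f"{rate / 1e9:.2f} GB/s uncompressed")  # prints "1.06 GB/s uncompressed"
```

Even under these modest assumptions, a single car streams on the order of a gigabyte of raw pixels per second, which is why labeled still images were long the cheaper training proxy and why training on video at fleet scale demands such large facilities.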

The same approach that's used in self-driving cars for learning physics, the physical world, from video is essentially the same AI technology used for grounding large language models in an understanding of the physical world. Technologies like Sora, which is just incredible, and Veo from Google, also incredible, generate video that makes sense, conditioned on a human prompt, and they need to learn from video to do that.

And so the next generation of AIs needs to be grounded in physical AI; it needs to understand the physical world. And the best way to teach these AIs how the physical world behaves is through video, just watching tons and tons and tons of video. So this combination of multimodality training capability is going to require a lot of computing demand in the years to come.