This is the ‘single largest danger’ of A.I. according to expert Kai-Fu Lee

Sinovation Ventures CEO Kai-Fu Lee joins 'Influencers with Andy Serwer' to explain the top 4 dangers of artificial intelligence.

Video transcript

ANDY SERWER: There have got to be some concerns that are potentially serious. What might those be?

KAI-FU LEE: OK, so in the book, there is one set that we call externalities. Externalities happen when A.I. is told to do something, and it's so good at doing that thing that it forgets, or actually ignores, other externalities or negative impacts it may cause. So when YouTube keeps sending us videos that we're most likely to click on, it's not only not thinking about serendipity, it's also potentially sending me very negative or very one-sided views that might shape my thinking. So that would be one form of externality: an unintended consequence for the user, because the system maniacally tries to optimize something else.
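
A rough sketch of that "maniacal" optimization, assuming a toy objective; the Video class and the scores below are hypothetical, not YouTube's actual system:

    # Illustrative only: a toy ranker whose sole objective is predicted clicks.
    from dataclasses import dataclass

    @dataclass
    class Video:
        title: str
        topic: str
        click_prob: float  # stand-in for a learned model's click prediction

    def recommend(candidates: list[Video], k: int = 3) -> list[Video]:
        # The whole objective: sort by click probability and take the top k.
        # Topic diversity and viewpoint balance never enter the ranking, so
        # the one-sided feed Lee describes is, to this optimizer, the best answer.
        return sorted(candidates, key=lambda v: v.click_prob, reverse=True)[:k]

    feed = recommend([
        Video("Outrage clip A", "politics", 0.91),
        Video("Outrage clip B", "politics", 0.88),
        Video("Outrage clip C", "politics", 0.86),
        Video("Calm explainer", "science", 0.40),
    ])
    print([v.title for v in feed])  # three near-identical outrage clips win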

Another is personal data, if that is compromised. Another is bias and fairness. Another is, can A.I. explain to us why it made the decisions that it made? For key things like driving autonomous vehicles, or medical decision-making and surgeries, it gets serious. But the single largest danger, as I describe in the book, is autonomous weapons. And that's when A.I. can be trained to kill, and more specifically, trained to assassinate.

Imagine a drone that can fly itself and seek specific people out, either with facial recognition, or cell signals, or whatever, and then it has a bullet, a small piece of dynamite that it can shoot point-blank at the person's forehead. And you know how fast drones move, so the danger is that this targeted assassination weapon can be built by an experienced hobbyist for $1,000. And I think that changes the future of terrorism, because terrorists no longer have to risk their lives to do something bad.

It also allows a terrorist group to use 10,000 of these drones to perform something as terrible as genocide. And, of course, it changes the future of warfare, because between country and country this can create havoc and damage, but perhaps anonymously, so people don't know who did the attack. So it's also quite different from the nuclear arms race, which at least has deterrence built in.

You don't attack someone for fear of retaliation and annihilation, but autonomous weapons could be used in a surprise attack, and people might not even know who did it. So I think that is, from my perspective, the ultimate, greatest danger that I can think of, and we need to be cautious and figure out how to ban or regulate it.

ANDY SERWER: Yeah, that is scary. And I read an article about that fairly recently, about the future of warfare. It was terrifying, and it described various weapons and scenarios in which they were used. So just to drill down on that a little bit, how would we prevent these types of weapons from being deployed, or even developed?

KAI-FU LEE: So one example is to look at history, at how chemical weapons and biological weapons were banned. There could be a global treaty that is enforced. With today's technology, the easiest, cheapest way to build such a weapon is a drone, not a robot. Robots are much more expensive, clumsier, and harder to control. Drones are the most dangerous. So perhaps we need stronger laws of the air, governing where and how drones can be deployed.

And perhaps we need some defensive mechanisms: where there are a lot of people, or a lot of government functions, defensive systems that would basically shoot down drones in areas where they aren't permitted. So I'm not an expert in the domain, but just to brainstorm, these are some ideas. I'm sure there are other, better ideas.
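
A minimal sketch of the "laws of the air" idea Lee is brainstorming, assuming a simple circular geofence; the zone coordinates and function names below are hypothetical, for illustration only:

    # Illustrative only: flag a drone detected inside a no-fly zone.
    import math

    NO_FLY_ZONES = [
        # (latitude, longitude, radius in km) around a sensitive site
        (-35.3081, 149.1245, 5.0),
    ]

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points on Earth, in kilometres.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    def in_no_fly_zone(lat, lon):
        # True if a detected position falls inside any protected circle;
        # a real counter-drone system would act on this signal.
        return any(haversine_km(lat, lon, zlat, zlon) <= zr
                   for zlat, zlon, zr in NO_FLY_ZONES)

    print(in_no_fly_zone(-35.31, 149.13))  # True: inside the protected radius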