Driving AI innovation in tandem with regulation


In April, the European Commission announced first-of-its-kind legislation regulating the use of artificial intelligence. The announcement unleashed criticism that the regulations could slow AI innovation, hamstringing Europe in its competition with the U.S. and China for leadership in AI.

For example, Andrew McAfee wrote an article titled “EU proposals to regulate AI are only going to hinder innovation.”

Anticipating this criticism and mindful of the example of GDPR, where Europe’s thought-leadership position didn’t necessarily translate into data-related innovation, the EC has tried to address AI innovation directly by publishing a new Coordinated Plan on AI.

Released in conjunction with the proposed regulations, the plan is full of initiatives intended to help the EU become a leader in AI technology. So will the combination of regulation and pro-innovation policies be enough to accelerate the EU's path to AI leadership?

AI innovation can be accelerated with the right laws

While the combination is well considered and targets improvements in both regulation and innovation, there is a problem: The pro-innovation initiatives are R&D-focused and not targeted at increasing adoption in the “high-risk” AI use cases to be regulated.

Spurring adoption is a key missing element. Many research studies have shown that well-designed “hard law” regulations can actually increase innovation, especially when employed with incentives that accelerate adoption. If the EC were to follow such a strategy, the EU could become a hotbed of AI innovation.

High-risk AI regulation and investment in innovation

The main thrust of the EC regulations is to place new requirements on “high-risk” AI systems. These include AI systems used for remote biometric identification, public infrastructure management, hiring and employment, creditworthiness assessment, and education, as well as for various public-sector use cases, such as dispatching first responders.

The legislation requires developers of these systems to deploy an AI quality management system that addresses requirements around high-quality data sets, record keeping, transparency, human oversight, accuracy, robustness and security. Providers of AI systems not yet designated as high risk are encouraged to create voluntary codes of conduct to achieve similar goals.

It’s clear that the crafters of the proposal were cognizant of the balance between regulation and innovation.

First, the legislation limits the number of AI systems deemed to be high risk, excluding systems that could plausibly have been included, such as insurance, and mostly including AI systems that already have some amount of regulatory oversight, such as employment and lending.

Second, the legislation defines high-level requirements without dictating how they are achieved. It also creates a compliance system based on self-reporting instead of something more onerous.

Finally, the Coordinated Plan is chock-full of R&D-supporting initiatives, including spaces for data-sharing, testing and experimentation facilities, investment in research and AI excellence centers, digital innovation hubs, funding for education, and targeted, programmatic investments in AI for climate change, health, robotics, the public sector, law enforcement and sustainable agriculture.

However, the proposal lacks adoption-driving policies that have led to faster innovation in combination with regulation in other sectors.

A motivating precedent: EV incentives in the U.S.

So how could the EC promote much faster AI innovation while enacting regulatory guardrails? The example of electric vehicles in the United States provides a guide.

The U.S. has become a leader in electric car production because of a combination of entrepreneurship, regulations and smart market creation incentives.

Tesla invigorated the electric car industry with the insight that the new vanguard of electric cars should be desirable, high-performance sports cars.

The Corporate Average Fuel Economy (CAFE) regulations created a stick that required the development of more efficient vehicles. Generous tax credits for the purchase of electric vehicles helped directly accelerate vehicle sales without interfering with the natural, competitive market dynamics. The combination of CAFE regulations, tax credits and entrepreneurial companies like Tesla has created such a massive boost to innovation that electric powertrains are poised to become less expensive than internal combustion engines.

Getting AI incentives right: Three additional initiatives to pursue

The EC has an opportunity to achieve something similar with AI. Specifically, the EC should consider combining these current regulations with three additional initiatives.

Create tax incentives for companies to build or buy high-risk AI systems that adhere to these regulations. The EC should seek to proactively use AI to help meet economic and societal goals.

For example, some banks are using AI to better assess the creditworthiness of individuals with limited credit histories, while simultaneously working to ensure that banking activities are free from bias. This increases financial inclusion, a goal shared by governments, and represents a win-win AI innovation.
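A bias check of the kind such banks might run can be sketched in a few lines. The following is a hypothetical illustration, not anything prescribed by the EC proposal: the toy approval data and the 0.8 cutoff (the common "80% rule") are assumptions, and the metric shown is the widely used disparate impact ratio.

```python
# Hypothetical illustration of one common fairness check: the
# "disparate impact ratio" -- the approval rate of a protected group
# divided by that of a reference group. Values near 1.0 suggest
# parity; the conventional "80% rule" flags ratios below 0.8.

def approval_rate(decisions):
    """Fraction of approved (True) decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of group_a's approval rate to group_b's."""
    return approval_rate(group_a) / approval_rate(group_b)

# Toy data: True = loan approved (invented for illustration)
group_a = [True, False, True, True, False]   # protected group
group_b = [True, True, True, True, False]    # reference group

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.75 -> below 0.8, worth review
```

In practice a bank would compute this over thousands of decisions and alongside other metrics, but the basic shape of the check is this simple.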

Further reduce uncertainty around EC legislative implementation. Part of this can be done directly by the EC, through the development of more specific standards around AI quality management and fairness. However, there may be even greater value in bringing together a coalition of AI technology providers and user organizations to translate these standards into practical steps for compliance.

For example, the Monetary Authority of Singapore has orchestrated an industry consortium for banks, insurers and AI technology providers called Veritas to achieve similar goals for its Fairness, Ethics, Accountability and Transparency (FEAT) guidelines.

Consider accelerating the adoption of the AI quality management systems that the legislation requires by funding companies to build or buy these systems. There is significant academic and commercial activity already in this space, in areas such as explainability of black box models, assessment of potential discrimination due to data or algorithmic bias, and testing and monitoring of AI systems for their ability to survive significant changes in data.
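One of the monitoring techniques alluded to above, testing whether a system's live data has shifted significantly from its training data, can be sketched with the Population Stability Index (PSI). This is a hypothetical illustration: the sample data and the conventional 0.1/0.25 thresholds are assumptions, not requirements from the legislation.

```python
# Minimal sketch of data-shift monitoring via the Population
# Stability Index (PSI), a common score for comparing a model's
# training distribution against live data. Rules of thumb (not
# mandated anywhere): PSI < 0.1 stable, 0.1-0.25 moderate shift,
# > 0.25 major shift warranting investigation.
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between two numeric samples."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins
    score = 0.0
    for i in range(bins):
        left = lo + i * width
        right = hi + 1e-9 if i == bins - 1 else left + width
        # Bin proportions, floored to avoid log(0) / division by zero
        e = max(sum(left <= x < right for x in expected) / len(expected), 1e-4)
        a = max(sum(left <= x < right for x in actual) / len(actual), 1e-4)
        score += (a - e) * math.log(a / e)
    return score

training = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live     = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(f"PSI on identical data: {psi(training, live):.3f}")  # 0.000
```

Production quality-management systems would wrap a check like this in scheduled jobs and alerting, but the underlying statistic is small enough that funding its adoption, as suggested here, is a matter of engineering integration rather than research.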

By creating the conditions to encourage widespread adoption of such technologies, the EC should be able to meet the dual objectives of encouraging innovation and enabling compliance with the new legislation in a sustainable manner.

If the EC assertively reduces uncertainty, promotes the use of regulated, “high-risk” AI and encourages the use of AI quality management techniques, then it has the chance to become the global leader in AI innovation while providing critical protection to its citizens. We should all be pulling for them to be successful, as it would set an example for the world to follow.