Report details how Big Tech is leaning on EU not to regulate general purpose AIs

It's still pretty early in the year, but the disruptive power of general purpose AI (GPAI) already looks cemented as the big tech story of 2023, with tech giants including Microsoft and Google duking it out to fast-follow OpenAI's viral conversational chatbot, ChatGPT, by productizing large language models (LLMs) in interfaces of their own -- such as OpenAI investor Microsoft's search-with-AI combo, New Bing; or Google's conversational search offering, Bard AI, shown off in preview earlier this month as the company scrambles to respond to Redmond's challenge to its online search cash-cow.

Big Tech's haste to productize general purpose AI has offered a high-speed and very public lesson in the risk embedded in this flavor of AI -- which typically requires vast amounts of data to train models (in the case of OpenAI's GPT, for instance, this has included pulling data from Internet forums like Reddit). Google's Bard AI produced an erroneous answer to a pretty simple search query in the company's own official demo of the tech; while, unleashed onto early users, Microsoft's New Bing was quickly encouraged to spew forth the kind of conspiracy nonsense, reprehensible bile and random threats that's easy to run into on the average (under-moderated) online forum or comment thread -- as well as making basic errors.

Despite fierce rivalry between the tech giants to be first to milk what they hope will be a new generation of general purpose AI cash-cow -- hence the pair's unseemly rush to unbox half-baked products that have been caught feeding users abject nonsense while swearing it's fact, and skewing into aggressive gaslighting as the toxic cherry on top -- a report published today by European lobbying transparency group Corporate Europe Observatory (CEO) shows how, behind the scenes, these self-same rivals have been united in lobbying European Union lawmakers not to apply the bloc's forthcoming AI rulebook to general purpose AIs.

Google and Microsoft are among a number of tech giants named in the report as pushing the bloc's lawmakers for a carve-out for general purpose AI -- arguing that the forthcoming risk-based framework for regulating applications of artificial intelligence (aka the AI Act, or AIA) should not apply to the source providers of large language models (LLMs) or other general purpose AIs. Rather, they advocate for rules to be applied only downstream, on those deploying these sorts of models in 'risky' ways.

The EU is ahead of other regions in drafting laws to regulate the use of AI. The AIA does not aim to wrap rules around every single use of the tech. Rather, it takes a risk-based approach -- designating certain applications (such as in justice, education, employment, immigration, etc.) "high risk" and subject to the tightest level of regulation; while other, more limited-risk apps face lesser requirements; and low-risk apps can simply self-regulate under a code of conduct.
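To make the tiering concrete, here's a minimal sketch, in Python, of the risk-based logic the Act applies -- with the caveat that the domain lists and obligation names below are simplified assumptions for illustration, not the draft law's actual legal categories or requirements:

    # Toy model of the AIA's risk-based tiering -- illustrative only.
    # Domains and obligations are simplified assumptions, not legal text.
    HIGH_RISK_DOMAINS = {"justice", "education", "employment", "immigration"}
    LIMITED_RISK_DOMAINS = {"chatbots", "deepfakes"}

    def obligations_for(domain: str) -> list[str]:
        """Return the (hypothetical) compliance tier for an AI application."""
        if domain in HIGH_RISK_DOMAINS:
            # Tightest tier: hard requirements before and after deployment
            return ["risk management", "data governance", "human oversight",
                    "transparency", "conformity assessment"]
        if domain in LIMITED_RISK_DOMAINS:
            # Lighter tier: mainly disclosing that users face an AI system
            return ["transparency disclosures"]
        # Everything else: voluntary self-regulation under a code of conduct
        return ["voluntary code of conduct"]

    print(obligations_for("employment"))  # -> the high-risk obligations list

The lobbying fight the report describes is, in effect, over whether the makers of general purpose models should fall within this tiering at all -- or whether only their downstream deployers should.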

This approach means that if GPAI model makers end up not facing any hard requirements under the AIA -- such as requirements to use unbiased training data or proactively tackle safety concerns -- the law risks setting up a constant battle at the decentralized edge where AI is being applied, with responsibility for safety and trust piled onto the users of general purpose AI models.

These smaller players are clearly not going to have the same scale of resources as the model makers themselves to direct toward cleaning up AI-fueled toxicity -- suggesting it'll be end users who are left exposed to biased and/or unsafe tech (while the appliers get the bill for any law breaches, and indeed the broader product liability attached to AI harms).

The AIA is not yet law. It remains under the EU's co-legislative negotiation process -- so the final shape of the bloc's flagship rulebook remains to be seen. But CEO's report raises concerns that the framework will face a further concerted squeeze, and a watering down of safety obligations, in the coming months -- under "intense lobbying" from US tech companies.

It also notes the US government made its own intervention on the GPAI issue last fall -- pressing Europe against "requiring all general purpose AI providers with the risk-management obligations of the AI Act", as it argued this would be "very burdensome, technically difficult, and in some cases impossible". So there has been (additional) alignment between US tech giants and their own government when it comes to shielding GPAI from foreign regulators.

"Documents obtained by [COE] show how tech companies, particularly from the US, sought to reduce requirements for high risk AI systems and limit the scope of the regulation," the report notes. "In particular Big Tech lobbyists sought to exclude the newly introduced concept of ‘general purpose’ AI systems from regulation (where AI systems – usually produced by Silicon Valley giants – are used or incorporated into a variety of uses by other companies; these same tech giants want the regulations not to apply to the originator of the tech, but only to the companies deploying them in various ways)."

"The AI Act is now nearing its final stages, which are the secretive trilogue negotiations, which tend to benefit well-connected and well-funded lobbyists. As the Council, Parliament, and Commission, set out to reach agreements on EU policy proposals, the stakes for this world-first attempt to regulate AI remain high," it adds. "While MEPs are pushing for stronger fundamental rights protections in the AI Act, the Council introduced several concerning carve-outs for law enforcement and security. It is highly likely that the discussion on general purpose AI will be pushed into the future."

CEO's report details how lobbying on the GPAI issue zoomed into action after the Commission revised an earlier position to favor including general purpose AI in the framework: Last year the Council -- under the French presidency -- proposed adding requirements for general purpose systems, with the Commission now supporting a definition and inclusion of the concept -- which is itself a testament to how quickly perceptions around this field of AI are developing. (The draft AI Act, which had not considered the need to put guardrails around GPAI, was presented by the Commission only in April 2021.)

"When general purpose AI entered the lexicon of the EU’s AI Act, Big Tech’s well-funded European lobby networks took notice -- and action," it observes. "Several sources who closely followed the proceedings in the European Council and Parliament, interviewed by Corporate Europe Observatory for this report, said Big-Tech lobbyists were working full-time on influencing decision-making on general purpose AI."

According to CEO's analysis, tech giants have deployed an expansive playbook of both direct and indirect lobbying methods in their bid to influence the final shape of the EU's AI rulebook -- including, it suggests, a series of "covert" tactics to try to defang the regulation, such as lobbying via groups that claim to represent the interests of startups yet take funding from Big Tech backers; or via a high-level expert group set up by the Commission to inform its AI policymaking, which the report notes is dominated by industry representatives (including from Google).

Recounting one private meeting Google secured with the Commission, CEO said the search giant lobbied against the French proposal in the Council to put requirements on the makers of GPAI models -- complaining that it "completely shifts the burden to GPAI providers", and expressing concerns that "colegislators might add too many new criteria for the risk assessment" or expand the list of high-risk uses.

"A paper Google submitted to Commission, obtained by CEO through FOI requests, reiterated that 'general purpose AI systems... are not themselves high-risk' and that compliance of general purpose systems with the AI Act’s rules on data governance, human governance, and transparency 'would be difficult or impossible to meet in practice',” the report goes on -- quoting from the paper in which Google suggests others in the "value chain" should "assume the responsibilities of a provider, and the developer of the general-purpose system is not a provider under the AIA". (And for "responsibilities", there, read: Costs and risk.)

The report notes that Microsoft set out its (aligned) position in an open letter sent to the Czech Presidency of the Council -- with the tech giant writing that it saw “no need for the AI Act to have a specific section on [general purpose AI]”, and also arguing that “without knowing the purpose of a general purpose tool, there is no way of being able to comply with any of the requirements for high risk”.

In private meetings with EU lawmakers, Microsoft also seemingly pushed a dubious claim that startups and SMEs would be negatively affected by the AI Act, per CEO.

"A document obtained for this investigation details a July 2021 exchange between Microsoft lobbyists and Roberto Viola of DG CNECT, the Commission department overseeing the drafting of the AI Act, which notes 'a discussion on the EU and US position on the draft AI regulation took place, including the possible impact on start-ups and SMEs'," it writes.

Additionally, the report notes a number of "indirect" AIA lobbying efforts by Big Tech -- which it says were conducted via "affiliates", aka third-party industry associations that position themselves as broadly representative but count tech giants among their members.

"A September 2022 letter pushed by BSA | The Software Alliance 'strongly urge[d] EU institutions to reject the recent proposals on General Purpose AI' as it would ‘impact’ AI development in Europe and ‘hamper innovation’. BSA was created by Microsoft in 1988 and has in the past been accused of operating on behalf of the tech giant, specifically targeting small and medium enterprises (SMEs) to back Big Tech’s cause," COE writes, citing claims by the BSA that "including general purpose AI, which is used mainly in low-risk cases, in the scope of the Act would create disproportionate obligations for developers and discourage AI development in the EU" and also that this would "negatively impact users of AI -- large and small, who would not have access to these digital tools; and developers of AI -- large and small, who would face significant and sometimes technically insurmountable requirements to enter the market".

Per the report, the secretary general of the European Digital SME Alliance told CEO that some of the SMEs in its network had been approached by Big Tech to sign up to the letter -- and that the Alliance had advised them against doing so, as it saw no benefits for SMEs or start-ups.

"There would be no benefits because excluding general purpose AI systems would place the hefty obligations for compliance on Europe’s SMEs, rather than on big tech companies," COE argues, adding: "This made it surprising that Allied for Startups, a self-described network of advocacy organisations focused on improving the policy environment for start-ups across the globe, did sign up to BSA’s letter." While its report goes on to point out that Allied for Startups' sponsors include Google, Apple, Microsoft, Amazon, and Meta, adding: "Though the organisation claims their sponsors have no voting rights, observers note their positions have closely aligned with those of Big Tech."

CEO argues that a carve-out for GPAI would open up a "massive hole" in the EU's flagship AI regulation by shielding tech giants from the responsibility to tackle problems like bias and toxicity, which their methods may be thoughtlessly baking in as they rush to dominate a new form of applied AI.

And it is tech giants doing the dominating -- since these are the entities with the resources for the kind of industrial-scale data processing that's required to develop general purpose technologies like LLMs (not to mention the legal firepower to battle the allegations of massive copyright infringement that are fast-following developments in generative AI, too).

So while OpenAI might not be a household name like Google or Microsoft, it's certainly a tech giant in funding and resource terms -- having amassed a war chest of some $11 billion since being founded back in 2015 (with backers including the sometime world's richest man and now owner of Twitter, Elon Musk). It was set up with the explicit goal of becoming a disruptive AI superpower, accelerating developments in artificial intelligence into the mainstream. Less than a decade later, OpenAI's mission is really coming into focus -- as the battleground on which current-gen AI giants are waging a massive new market share war.

"Given that general purpose AI is likely to be increasingly commonly used -- after all, how many small companies will develop their own AI models from scratch? -- fencing off the Big Tech companies that produce the initial models from responsibility tears a massive hole in the regulation," COE argues, adding: "It also offers little accountability to those who might be discriminated against by the uses of such AI."

We reached out to Google and Microsoft for comment on CEO's report.

A Microsoft spokesperson told us:

The European Union has been and remains an important stakeholder for Microsoft. We seek to be a constructive and transparent partner to European policymakers.

At the time of writing, Google had not responded to the request for comment.

In a press release accompanying the report, CEO warns that EU lawmakers stand at a crunch point with the AIA as negotiations enter a new closed-door stage (aka trilogue discussions).

"As it stands, the lobby blitz generated the desired results. In their latest positions both the Parliament and the Council have postponed the discussion on regulating general purpose AI. Institutions also narrowed the definition of AI systems, limiting the number of systems to which scrutiny would be applied," it writes. "The AI Act is now nearing its final stages, which are the secretive trilogue negotiations, which tend to benefit well-connected and well-funded lobbyists. As the Council, Parliament, and Commission, set out to reach agreements on EU policy proposals, the stakes for this world-first attempt to regulate AI remain high."

Concern about underhand and anti-democratic tactics being deployed by tech giants to shape the bloc's digital regulations recently led to the launch of a tips hotline, which aims to push back by giving European Commission staffers a channel to report dubious efforts to influence lawmaking.

And last year a group of MEPs filed complaints with the EU's Transparency Register against a number of tech giants -- accusing them of breaching its rules.

The complaints followed concerted efforts by tech giants to influence the final shape of major new platform and digital services regulations that are coming into application in the EU this year -- and an earlier report by CEO detailing how fiercely Big Adtech (especially) fought against efforts by MEPs to more tightly regulate online tracking and profiling.

CEO is running a petition calling for increased transparency on EU trilogues -- urging signatories to mobilize and "not permit Big Tech to kill the AI Act in the dark".