Europe to press the adtech industry to help fight online disinformation

The European Union plans to beef up its response to online disinformation, with the Commission saying today it will step up efforts to combat harmful but not illegal content -- including by pushing for smaller digital services and adtech companies to sign up to voluntary rules aimed at tackling the spread of this type of manipulative and often malicious content.

EU lawmakers pointed to risks such as the threat to public health posed by the spread of harmful disinformation about COVID-19 vaccines as driving the need for tougher action.

Concerns about the impacts of online disinformation on democratic processes are another driver, they said.

Commenting in a statement, Thierry Breton, commissioner for Internal Market, said: “We need to rein in the infodemic and the diffusion of false information putting people's life in danger. Disinformation cannot remain a source of revenue. We need to see stronger commitments by online platforms, the entire advertising ecosystem and networks of fact-checkers. The Digital Services Act will provide us with additional, powerful tools to tackle disinformation.”

A new, more expansive code of practice on disinformation is being prepared -- and will, the Commission hopes, be finalized in September, ready for application at the start of next year.

Its gear change is a fairly public acceptance that the EU's voluntary code of practice -- an approach Brussels has taken since 2018 -- has not worked out as hoped. And, well, we did warn them.

A push to get the adtech industry on board with demonetizing viral disinformation is certainly overdue.

It's clear the online disinformation problem hasn't gone away. Some reports have suggested problematic activity -- like social media voter manipulation and computational propaganda -- has been getting worse in recent years, rather than better.

However, getting visibility into the true scale of the disinformation problem remains a huge challenge given that those best placed to know (ad platforms) don't freely open their systems to external researchers. But that's something else the Commission would like to change.

Signatories to the EU's current code of practice on disinformation are:

Google, Facebook, Twitter, Microsoft, TikTok, Mozilla, DOT Europe (Former EDiMA), the World Federation of Advertisers (WFA) and its Belgian counterpart, the Union of Belgian Advertisers (UBA); the European Association of Communications Agencies (EACA), and its national members from France, Poland and the Czech Republic -- respectively, Association des Agences Conseils en Communication (AACC), Stowarzyszenie Komunikacji Marketingowej/Ad Artis Art Foundation (SAR), and Asociace Komunikacnich Agentur (AKA); the Interactive Advertising Bureau (IAB Europe), Kreativitet & Kommunikation, and Goldbach Audience (Switzerland) AG.

EU lawmakers said they want to broaden participation by getting smaller platforms to join, as well as recruiting all the various players in the adtech space whose tools provide the means for monetizing online disinformation.

Commissioners said today that they want to see the code covering a "whole range" of actors in the online advertising industry (i.e. rather than the current handful).

In its press release the Commission also said it wants platforms and adtech players to exchange information on disinformation ads that have been refused by one of them -- so there's a more coordinated response to shut out bad actors.
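The press release doesn't spell out a format for that exchange, but the core idea is a shared record of a rejected ad that any participating platform or adtech vendor can consult before serving the same creative. A minimal sketch of what such a record and lookup might look like -- every name and field below is a hypothetical illustration, not anything from the Commission's proposal:

    # Hypothetical sketch of a shared "refused ads" registry -- not the
    # Commission's actual design, just an illustration of the concept.
    import hashlib
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RefusedAdRecord:
        creative_hash: str   # fingerprint of the ad creative
        advertiser_id: str   # account that submitted the ad
        refused_by: str      # platform that rejected it
        reason: str          # e.g. "health disinformation"

    class RefusedAdRegistry:
        """In-memory stand-in for a registry shared between signatories."""

        def __init__(self) -> None:
            self._records: dict[str, RefusedAdRecord] = {}

        def report(self, record: RefusedAdRecord) -> None:
            # One signatory publishes a refusal for the others to see.
            self._records[record.creative_hash] = record

        def check(self, creative: bytes) -> RefusedAdRecord | None:
            # Another signatory checks a creative before serving it.
            digest = hashlib.sha256(creative).hexdigest()
            return self._records.get(digest)

    registry = RefusedAdRegistry()
    ad = b"...ad creative bytes..."
    registry.report(RefusedAdRecord(
        creative_hash=hashlib.sha256(ad).hexdigest(),
        advertiser_id="acct-123",
        refused_by="platform-a",
        reason="health disinformation",
    ))
    hit = registry.check(ad)   # a second platform sees the earlier refusal
    if hit:
        print(f"Already refused by {hit.refused_by}: {hit.reason}")

Fingerprinting the creative with a hash, rather than circulating the ad itself, is one way such an exchange could flag repeat submissions without signatories passing the rejected content around.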

As for those who are signed up already, the Commission's report card on their performance was bleak.

Speaking during a press conference, Breton said that only one of the five platform signatories to the code has "really" lived up to its commitments -- which was presumably a reference to the first five tech giants in the above list (aka: Google, Facebook, Twitter, Microsoft and TikTok).

Breton demurred on doing an explicit name-and-shame of the four others -- who he said have not "at all" done what was expected of them -- saying it's not the Commission's place to do that.

Rather, he said people should decide among themselves which of the platform giants that signed up to the code have failed to live up to their commitments. (Signatories since 2018 have pledged to take action to disrupt the ad revenues of accounts and websites that spread disinformation; to enhance transparency around political and issue-based ads; to tackle fake accounts and online bots; to empower consumers to report disinformation and access different news sources, while improving the visibility and discoverability of authoritative content; and to empower the research community, so outside experts can help monitor online disinformation through privacy-compliant access to platform data.)

Frankly it's hard to imagine which of the five tech giants from the above list might actually be meeting the Commission's bar. (Microsoft perhaps, on account of its relatively modest social activity versus the rest.)

Safe to say, there's been a lot more hot air (in the form of selective PR) than hard accountability from the major social platforms on the charged topic of disinformation over the past three years.

So it's perhaps no accident that Facebook chose today to puff up its historical efforts to combat what it refers to as "influence operations" -- aka "coordinated efforts to manipulate or corrupt public debate for a strategic goal" -- by publishing what it couches as a "threat report" detailing what it's done in this area between 2017 and 2020.

Influence ops refer to online activity, potentially conducted by hostile foreign governments or by malicious agents, that seeks in this case to use Facebook's ad tools for mass manipulation -- perhaps to try to skew an election result or influence the shape of looming regulations. And Facebook's "threat report" states that the tech giant took down and publicly reported only 150 such operations over the report period.

Yet as we know from Facebook whistleblower Sophie Zhang, the scale of the problem of mass malicious manipulation activity on Facebook's platform is vast and its response to it is both under-resourced and PR-led. (A memo written by the former Facebook data scientist, covered by BuzzFeed last year, detailed a lack of institutional support for her work and how takedowns of influence operations could almost immediately respawn -- without Facebook doing anything.)

(NB: If it's Facebook's "broader enforcement against deceptive tactics that do not rise to the level of [Coordinated Inauthentic Behavior]" that you're looking for, rather than efforts against "influence operations", it has a whole other report for that -- the Inauthentic Behavior Report! -- because of course Facebook gets to mark its own homework when it comes to tackling fake activity, and shapes its own level of transparency exactly because there are no legally binding reporting rules on disinformation.)

Legally binding rules on handling online disinformation aren't in the EU's pipeline either -- but commissioners said today that they wanted a beefed-up and "more binding" code.

They do have some levers to pull here via a wider package of digital reforms that's working its way through the EU's co-legislative process right now (aka the Digital Services Act).

The DSA will bring in legally binding rules for how platforms handle illegal content. And the Commission intends its tougher disinformation code to plug into that (in the form of what they call a "co-regulatory backstop").

It still won't be legally binding, but it may earn willing platforms extra DSA compliance "cred". So it looks like disinformation muck-spreaders' arms are set to be twisted in a regulatory pincer move, with the EU making sure the voluntary code is looped, as an adjunct, into the legally binding regulation.

At the same time, Brussels maintains that it does not want to legislate around disinformation. The risk is that a centralized approach could smell like censorship -- and it sounds keen to avoid that charge at all costs.

The digital regulation packages that the EU has put forward since the current college of commissioners took up its mandate in 2019 are generally aimed at increasing transparency, safety and accountability online, its values and transparency commissioner, Vera Jourova, said today.

Breton also said that now is the "right time" to deepen obligations under the disinformation code -- with the DSA incoming -- and also to give the platforms time to adapt (and involve themselves in discussions on shaping additional obligations).

In another interesting remark, Breton talked about regulators needing to "be able to audit platforms" -- in order to be able to "check what is happening with the algorithms that push these practices".

Though quite how audit powers can be made to fit with a voluntary, non-legally binding code remains to be seen.

Discussing areas where the current code has fallen short, Jourova pointed to inconsistencies of application across different EU Member States and languages.

She also said the Commission is keen for the beefed-up code to do more to empower users to act when they see something dodgy online -- such as by providing users with tools to flag problem content. Platforms should also provide users with the ability to appeal disinformation content takedowns (to avoid the risk of opinions being incorrectly removed), she said.
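Neither mechanism is specified in detail yet, but what Jourova describes amounts to a pair of user-facing actions: flag content you suspect is disinformation, and appeal a takedown you think got it wrong. A toy sketch of that pair, with entirely hypothetical names:

    # Hypothetical flag/appeal pair -- illustrative only, not any
    # platform's actual moderation API.
    from dataclasses import dataclass, field

    @dataclass
    class ContentCase:
        content_id: str
        flags: list[str] = field(default_factory=list)  # user reports
        taken_down: bool = False
        appealed: bool = False

    def flag(case: ContentCase, reason: str) -> None:
        """A user reports suspected disinformation for review."""
        case.flags.append(reason)

    def appeal(case: ContentCase) -> bool:
        """A user contests a takedown; opens an appeal if one is possible."""
        if not case.taken_down or case.appealed:
            return False   # nothing to appeal, or already appealed
        case.appealed = True
        return True

The guard in appeal() is there because the appeal route only applies once something has actually been removed -- the failure mode Jourova flags being opinions incorrectly taken down as false facts.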

The focus for the code would be on tackling false "facts not opinions", she emphasized, saying the Commission wants platforms to "embed fact-checking into their systems" -- and for the code to work toward a "decentralized care of facts".

She went on to say that the current signatories to the code haven't provided external researchers with the kind of data access the Commission would like to see -- to support greater transparency into (and accountability around) the disinformation problem.

The code does require either monthly (for COVID-19 disinformation), six-monthly or yearly reports from signatories (depending on the size of the entity). But what's been provided so far doesn't add up to a comprehensive picture of disinformation activity and platform reaction, she said.
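As a rough encoding of that tiered schedule (the size threshold is the code's to define; all names here are illustrative):

    # Hypothetical encoding of the code's tiered reporting cadence.
    def reporting_interval_months(covid_disinfo: bool, large_entity: bool) -> int:
        if covid_disinfo:
            return 1    # monthly reporting on COVID-19 disinformation
        return 6 if large_entity else 12   # six-monthly vs. yearly otherwise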

She also warned that online manipulation tactics are fast evolving and highly innovative -- while also saying the Commission would like to see signatories agree on a set of identifiable "problematic techniques" to help speed up responses.

In a separate but linked move, EU lawmakers will come forward with a specific plan for tackling political ads transparency in November, she noted.

They are also, in parallel, working on how to respond to the threat posed to European democracies by foreign interference CyberOps -- such as the aforementioned influence operations which are often found thriving on Facebook's platform.

The commissioners did not give many details on those plans today, but Jourova said it's "high time to impose costs on perpetrators" -- suggesting some interesting possibilities may be under consideration, such as trade sanctions for state-backed DisOps (although attribution would be one challenge).

Breton said countering foreign influence over the "informational space", as he referred to it, is important work to defend the values of European democracy.

He also said the Commission's anti-disinformation efforts will focus on supporting education, to help equip EU citizens with the critical thinking capabilities needed to navigate the huge quantities of (variable quality) information that now surround them.

This report was updated with a correction: we originally misstated that the IAB was not a signatory of the code; in fact it joined in May 2018.