Europe's CSAM scanning plan unpicked

The European Union has formally presented its proposal to move from a situation in which some tech platforms voluntarily scan for child sexual abuse material (CSAM) to something more systematic -- publishing draft legislation that will create a framework which could obligate digital services to use automated technologies to detect and report existing or new CSAM, and also identify and report grooming activity targeting kids on their platforms.

The EU proposal -- for "a regulation laying down rules to prevent and combat child sexual abuse" (PDF) -- is intended to replace a temporary and limited derogation from the bloc's ePrivacy rules, which was adopted last year in order to enable messaging platforms to continue long-standing CSAM scanning activity which some undertake voluntarily.

However that was only ever a stop-gap measure. EU lawmakers say they need a permanent solution to tackle the explosion of CSAM and the abuse the material is linked to -- noting that reports of child sexual abuse online rose from 1M+ back in 2014 to 21.7M in 2020, when 65M+ CSAM images and videos were also discovered -- and also pointing to an increase in online grooming seen since the pandemic.

The Commission also cites a claim that 60%+ of sexual abuse material globally is hosted in the EU as further underpinning its impetus to act.

Some EU Member States are already adopting their own proposals for platforms to tackle CSAM at a national level so there's also a risk of fragmentation of the rules applying to the bloc's Single Market. The aim for the regulation is therefore to avoid that risk by creating a harmonized pan-EU approach.

EU law contains a prohibition on placing general monitoring obligations on platforms because of the risk of interfering with fundamental rights like privacy -- but the Commission's proposal aims to circumvent that hard limit by setting out what the regulation's preamble describes as "targeted measures that are proportionate to the risk of misuse of a given service for online child sexual abuse and are subject to robust conditions and safeguards".

What exactly is the bloc proposing? In essence, the Commission's proposal seeks to normalize CSAM mitigation by pushing services to put addressing this risk on the same operational footing as tackling spam or malware -- creating a targeted framework of supervised risk assessments combined with a permanent legal basis that authorizes (and may require) detection technologies to be implemented, while also baking in safeguards over how, and indeed whether, detection must be done, including time limits and multiple layers of oversight.

The regulation itself does not prescribe which technologies may or may not be used for detecting CSAM or 'grooming' (aka, online behavior that's intended to solicit children for sexual abuse).

"We propose to make it mandatory for all providers of service and hosting to make a risk assessment: If there's a risk that my service, my hosting will be used or abused for sharing CSAM. They have to do the risk assessment," said home affairs commissioner Ylva Johansson, explaining how the Commission intends the regulation to function at a press briefing to announce the proposal today. "They have also to present what kind of mitigating measures they are taking -- for example if children have access to this service or not.

"They have to present these risk assessments and the mitigating measures to a competent authority in the Member State where they are based or in the Member State where they appointed a legal representative authority in the EU. This competent authority will assess this. See how big is the risk. How effective are the mitigating measures and is there a need for additional measures," she continued. "Then they will come back to the company -- they will consult the EU Centre, they will consult their data protection agencies -- to say whether there will be a detection order and if they find there should be a detection order then they should ask another independent authority -- it could be a court in that specific Member State -- to issue a detection order for a specific period of time. And that could take into account what kind of technology they are allowed to use for this detection."

"So that's how we put the safeguards [in place]," Johansson went on. "It's not allowed to do a detection without a detection order. But when there is a detection order you're obliged to do it and you're obliged to report when and if you find CSAM. And this should be reported to the EU Centre which will have an important role to assess whether [reported material] will be put forward to law enforcement [and to pick up what the regulation calls "obviously false positives" to prevent innocent/non-CSAM from being forward to law enforcement]."

The regulation will "put the European Union in the global lead on the fight on online sexual abuse", she further suggested.

Stipulations and safeguards

The EU's legislation-proposing body says the regulation is based on both the bloc's existing privacy framework (the General Data Protection Regulation; GDPR) and the incoming Digital Services Act (DSA), a recently agreed horizontal update to rules for ecommerce and digital services and platforms which sets governance requirements in areas like illegal content.

CSAM is already illegal across the EU but the problem of child sexual abuse is so grave -- and the role of online tools, not just in spreading and amplifying but also potentially facilitating abuse -- that the Commission argues dedicated legislation is merited in this area.

It adopted a similarly targeted regulation aimed at speeding up takedowns of terrorism content last year -- and the EU approach is intended to support continued expansion of the bloc's digital rulebook by bolting on other vertical instruments, as needed.

"This comes of course with a lot of safeguards," emphasized Johansson of the latest proposed addition to EU digital rules. "What we are targeting in this legislation are service providers online and hosting providers... It's tailored to target this child sexual abuse material online."

As well as applying to messaging services, the regime includes some targeted measures for app stores which are intended to help prevent kids downloading risky apps -- including a requirement that app stores use "necessary age verification and age assessment measures to reliably identify child users on their services".

Johansson explained that the regulation bakes in multiple layers of requirements for in-scope services -- starting with an obligation to conduct a risk assessment that considers any risks their service may present to children in the context of CSAM, and a requirement to present mitigating measures for any risks they identify.

This structure looks intended to encourage services to proactively adopt a robust, security- and privacy-minded approach to safeguarding minors from abuse and predatory attention -- both to shrink their regulatory risk and to avoid more heavy-handed interventions, such as having to warn all their users that they are scanning for CSAM (which wouldn't exactly do wonders for a service's reputation).

It looks to be no accident that -- also today -- the Commission published a new strategy for a "better Internet for kids" (BI4K) which will encourage platforms to conform to a new, voluntary "EU code for age-appropriate design"; as well as fostering development of "a European standard on online age verification" by 2024 -- which the bloc's lawmakers also envisage looping in another plan for a pan-EU 'privacy-safe' digital ID wallet (i.e. as a non-commercial option for certifying whether a user is underage or not).

The BI4K strategy doesn't contain legally binding measures but adherence to approved practices, such as the planned age-appropriate design code, could be seen as a way for digital services to earn brownie points towards compliance with the DSA -- which is legally binding and carries the threat of major penalties for infringers. So the EU's approach to platform regulation should be understood as intentionally broad and deep; with a long-tail cascade of stipulations and suggestions which both require and nudge.

Returning to today's proposal to combat child sexual abuse, if a service provider ends up being deemed to be in breach the Commission has proposed fines of up to 6% of global annual turnover -- although it would be up to the Member State agencies to determine the exact level of any penalties.

These local regulatory bodies will also be responsible for assessing the service provider's risk assessment and existing mitigations -- and, ultimately, deciding whether or not a detection order is merited to address specific child safety concerns.

Here the Commission looks to have its eye on avoiding forum shopping and enforcement blockages/bottlenecks (as have hampered GDPR), as the regulation requires Member State-level regulators to consult with a new, centralized but independent EU agency -- called the "European Centre to prevent and counter child sexual abuse" (aka the "EU Centre" for short) -- a body lawmakers intend to support their fight against child sexual abuse in a number of ways.

Among the Centre's tasks will be receiving and checking reports of CSAM from in-scope services (and deciding whether or not to forward them to law enforcement); maintaining databases of "indicators" of online CSAM which services could be required to use on receipt of a detection order; and developing (novel) technologies that might be used to detect CSAM and/or grooming.

"In particular, the EU Centre will create, maintain and operate databases of indicators of online child sexual abuse that providers will be required to use to comply with the detection obligations," the Commission writes in the regulation preamble.

"The EU Centre should also carry out certain complementary tasks, such as assisting competent national authorities in the performance of their tasks under this Regulation and providing support to victims in connection to the providers’ obligations. It should also use its central position to facilitate cooperation and the exchange of information and expertise, including for the purposes of evidence-based policy-making and prevention. Prevention is a priority in the Commission’s efforts to fight against child sexual abuse."

The prospect of apps having to incorporate CSAM detection technology developed by a state agency has, unsurprisingly, caused alarm among a number of security, privacy and digital rights watchers.

The alarm isn't limited to that one component, though; Pirate Party MEP Patrick Breyer -- a particularly vocal critic -- dubs the entire proposal "mass surveillance" and "fundamental rights terrorism" on account of the cavalcade of risks he says it presents, from mandating age verification to eroding privacy and confidentiality of messaging and cloud storage for personal photos.

Re: the Centre's listed detection technologies, it's worth noting that Article 10 of the regulation includes this caveated line on obligatory use of its tech -- which states [emphasis ours]: "The provider shall not be required to use any specific technology, including those made available by the EU Centre, as long as the requirements set out in this Article are met" -- which, at least, suggests providers have a choice over whether or not they apply its centrally devised technologies to comply with a detection order vs using some other technologies of their choice.

(Okay, so what are the requirements that must be "met", per the rest of the Article, to be freed from the obligation to use EU Centre approved tech? These include that selected technologies are "effective" at detection of known/new CSAM and grooming activity; are unable to extract other information from comms other than what is "strictly necessary" for detecting the targeted CSAM content/behavior; are "state of the art" and have the "least intrusive" impact on fundamental rights like privacy; and are "sufficiently reliable, in that they limit to the maximum extent possible the rate of errors regarding the detection"... So the primary question arising from the regulation is probably whether such subtle and precise CSAM/grooming detection technologies exist anywhere at all -- or even could ever exist outside the realms of sci-fi.)

That the EU is essentially asking for the technologically impossible has been another quick criticism of the proposal.
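
Part of that criticism rests on simple base-rate arithmetic: at the scale of a large communications service, even a very small error rate yields an enormous volume of wrongly flagged content for the EU Centre and law enforcement to wade through. The figures in the Python back-of-envelope below are purely illustrative assumptions, not numbers from the proposal or from any provider:

    # Back-of-envelope illustration of the base-rate problem; all numbers are
    # assumptions chosen for illustration only.
    items_scanned_per_day = 10_000_000_000    # hypothetical large messaging service
    false_positive_rate = 0.001               # 0.1% of benign items wrongly flagged
    actual_csam_rate = 0.000001               # assume 1 in a million items is illegal

    false_alarms = items_scanned_per_day * (1 - actual_csam_rate) * false_positive_rate
    true_hits = items_scanned_per_day * actual_csam_rate    # assuming perfect recall

    print(f"False alarms per day: {false_alarms:,.0f}")     # ~10 million
    print(f"True detections per day: {true_hits:,.0f}")     # ~10 thousand
    print(f"Share of flags that are wrong: {false_alarms / (false_alarms + true_hits):.1%}")  # ~99.9%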

Crucially for anyone concerned about the potential impact to (everybody's) privacy and security if messaging comms/cloud storage etc are compromised by third party scanning tech, local oversight bodies responsible for enforcing the regulation must consult EU data protection authorities -- who will clearly have a vital role to play in assessing the proportionality of proposed measures and weighing the impact on fundamental rights.

Per the Commission, technologies developed by the EU Centre will also be assessed by the European Data Protection Board (EDPB), a steering body for application of the GDPR, which it stipulates must be consulted on all detection techs included in the Centre's list. ("The EDPB is also consulted on the ways in which such technologies should be best deployed to ensure compliance with applicable EU rules on the protection of personal data," the Commission adds in a Q&A on the proposal.)

There's a further check built in, according to EU lawmakers, as a separate independent body (which Johansson suggests could be a court) will be responsible for finally issuing -- and, presumably, considering the proportionality of -- any detection order. (But if this check doesn't include a wider weighing of proportionality/necessity it might just amount to a procedural rubber stamp.)

The regulation further stipulates that detection orders must be time limited, which implies that requiring indefinite detection would not be possible under the plan. Consecutive detection orders might have a similar effect in practice -- although you'd hope the EU's data protection agencies would advise against that, or the risk of a legal challenge to the whole regime would certainly crank up.

Whether all these checks and balances and layers of oversight will calm the privacy and security fears swirling around the proposal remains to be seen.

A version of the draft legislation which leaked earlier this week quickly sparked loud alarm klaxons from a variety of security and industry experts -- who reiterated (now) perennial warnings over the implications of mandating content scanning in a digital ecosystem that contains robustly encrypted messaging apps.

The concern centers especially on what the move might mean for end-to-end encrypted services -- with industry watchers querying whether the regulation could force messaging platforms to bake backdoors into their systems to enable the 'necessary' scanning, since they don't have access to content in the clear.
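
The sticking point is structural: on a genuinely end-to-end encrypted service the operator only ever relays ciphertext and never holds the decryption keys, so there is nothing intelligible on the server to run an indicator match against. The Python sketch below illustrates this using Fernet from the third-party cryptography package as a stand-in for a real messaging protocol (key exchange between the endpoints is omitted; everything here is illustrative):

    # pip install cryptography -- Fernet stands in for a real E2EE protocol here;
    # in practice keys are negotiated between the two endpoints, never the server.
    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()       # held only by sender and recipient
    alice, bob = Fernet(shared_key), Fernet(shared_key)

    ciphertext = alice.encrypt(b"hello")     # all the service ever relays or stores

    def server_side_scan(blob: bytes) -> bool:
        """The provider holds no key, so any matching runs against opaque bytes."""
        return b"hello" in blob              # effectively never true for ciphertext

    print(server_side_scan(ciphertext))      # False: nothing intelligible to match
    print(bob.decrypt(ciphertext))           # only the other endpoint recovers b'hello'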

E2EE messaging platform WhatsApp's chief, Will Cathcart, was quick to amplify concerns of what the proposal might mean in a tweet storm.

Some critics also warned that the EU's approach looked similar to a controversial proposal by Apple last year to implement client-side CSAM scanning on users' devices -- which was dropped by the tech giant after another storm of criticism from security and digital rights experts.
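
Client-side scanning moves the indicator check onto the user's device, so matching happens before content is encrypted for transport -- which is exactly why critics argue it hollows out end-to-end guarantees even if the ciphertext itself is untouched. The deliberately simplified Python sketch below shows only that ordering; Apple's abandoned design was far more elaborate (a perceptual 'NeuralHash' plus threshold cryptography), and nothing here describes any real implementation:

    import hashlib

    # Deliberately simplified illustration of the client-side scanning pattern:
    # the indicator check runs on-device, *before* encryption. The indicator list
    # and the "encryption" below are placeholders, not a real scheme.
    ON_DEVICE_INDICATOR_LIST = {"0" * 64}     # placeholder hash of known material

    def toy_encrypt(plaintext: bytes) -> bytes:
        return bytes(b ^ 0x42 for b in plaintext)    # NOT real cryptography

    def send_message(plaintext: bytes) -> bytes:
        digest = hashlib.sha256(plaintext).hexdigest()
        if digest in ON_DEVICE_INDICATOR_LIST:
            print("match flagged on device, before encryption")   # would trigger a report
        return toy_encrypt(plaintext)         # only ciphertext leaves the device

    print(send_message(b"an ordinary message"))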

Assuming the Commission proposal gets adopted (and the European Parliament and Council have to weigh in before that can happen), one major question for the EU is what happens if/when services ordered to carry out detection of CSAM are using end-to-end encryption -- meaning they are not in a position to scan message content to detect CSAM or potential grooming in progress, since they do not hold keys to decrypt the data.

Johansson was asked about encryption during today's presser -- and specifically whether the regulation poses the risk of backdooring encryption. She sought to close down the concern, but the Commission's circuitous logic on this topic makes that task perhaps as difficult as inventing a perfectly effective and privacy-safe CSAM-detecting technology.

"I know there are rumors on my proposal but this is not a proposal on encryption. This is a proposal on child sexual abuse material," she responded. "CSAM is always illegal in the European Union, no matter the context it is in. [The proposal is] only about detecting CSAM -- it's not about reading or communication or anything. It's just about finding this specific illegal content, report it and to remove it. And it has to be done with technologies that have been consulted with data protection authorities. It has to be with the least privacy intrusive technology.

"If you're searching for a needle in a haystack you need a magnet. And a magnet will only see the needle, and not the hay, so to say. And this is how they use the detection today -- the companies. To detect for malware and spam. It's exactly the same kind of technology, where you're searching for a specific thing and not reading everything. So this is what this about."

"So yes I think and I hope that it will be adopted," she added of the proposal. "We can't continue leaving children without protection as we're doing today."

As noted above, the regulation does not stipulate exact technologies to be used for detection of CSAM. So EU lawmakers are -- essentially -- proposing to legislate a fudge. Which is certainly one way to try to sidestep the inexorable controversy of mandating privacy-intrusive detection without fatally undermining privacy and breaking E2EE in the process.

During the brief Q&A with journalists, Johansson was also asked why the Commission had not made it explicit in the text that client-side scanning would not be an acceptable detection technology -- given the major risks that particular 'state of the art' technology is perceived to pose to encryption and to privacy.

She responded by saying the legislation is "technology neutral", before reiterating a now familiar refrain: that the regulation has been structured to limit interventions so as to ensure they have the least intrusive impact on privacy.

"I think she is extremely important in these days. Technology is developing extremely fast. And of course we have been listening to those that have concerns about the privacy of the users. We've also been listening to those that have concerns about the privacy of the children victims. And this is the balance to find," she suggested. "That's why we set up this specific regime with the competent authority and they have to make a risk assessment -- mitigating measures that will foster safety by design by the companies.

"If that's not enough -- if detection is necessary -- we have built in the consultation of the data protection authorities and we haver built in a specific decision by another independent authority, it could be a court, that will take the specific detection order. And the EU Centre is there to support and to help with the development of the technology so we have the least privacy intrusive technology.

"But we choose not to define the technology because then it might be outdated already when it's adopted because the technology and development goes so fast. So the important [thing] is the result and the safeguards and to use the least intrusive technology to reach that result that is necessary."

There is, perhaps, a little more reassurance to be found in the Commission's Q&A on the regulation where -- in a section responding to the question of how the proposal will "prevent mass surveillance" -- it writes [emphasis ours]:

"When issuing detection orders, national authorities have to take into account the availability and suitability of relevant technologies. This means that the detection order will not be issued if the state of development of the technology is such that there is no available technology that would allow the provider to comply with the detection order."

That said, the Q&A does confirm that encrypted services are in-scope -- with the Commission writing that had it explicitly excluded those types of services "the consequences would be severe for children". (Even as it also gives a brief nod to the importance of encryption for "the protection of cybersecurity and confidentiality of communications".)

On E2EE specifically, the Commission writes that it continues to work "closely with industry, civil society organisations, and academia in the context of the EU Internet Forum, to support research that identifies technical solutions to scale up and feasibly and lawfully be implemented by companies to detect child sexual abuse in end-to-end encrypted electronic communications in full respect of fundamental rights".

"The proposed legislation takes into account recommendations made under a separate, ongoing multi-stakeholder process exclusively focused on encryption arising from the December 2020 Council Resolution," it further notes, adding [emphasis ours]: "This work has shown that solutions exist but have not been tested on a wide scale basis. The Commission will continue to work with all relevant stakeholders to address regulatory and operational challenges and opportunities in the fight against these crimes."

So -- the tl;dr looks to be that, in the short term, E2EE services are likely to dodge a direct detection order, given there's likely no (legal) way for them to detect CSAM without fatally compromising user privacy and security -- so the EU's plan could, in the first instance, end up encouraging further adoption of strong encryption (E2EE) by in-scope services, i.e. as a means of managing regulatory risk. (What that might mean for services whose business models depend on intentionally scanning their users is another question.)

That said, the proposed framework has been set up in such a way as to leave the door open to a pan-EU agency (the EU Centre) being positioned to consult on the design and development of novel technologies that could, one day, tread the line -- or thread the needle, if you prefer -- between risk and rights.

Or else that theoretical possibility is being entertained as another stick for the Commission to hold over unruly technologists to encourage them to engage in more thoughtful, user-centric design as a way to combat predatory behavior and abuse on their services.