UK names five projects to get funding for CSAM detection

The U.K. government has named five projects that have scored public funding under a "tech safety" challenge announced in September -- when the Home Office said it wanted to encourage the tech industry and academia to develop novel AI/scanning technologies that could be implemented on end-to-end encrypted (e2ee) services to detect child sexual abuse material (CSAM).

A number of mainstream messaging services, such as Facebook-owned WhatsApp and Apple's iMessage, already use e2ee.

The Home Office has claimed it's looking for a middle ground: an approach that doesn't require digital service providers to abandon end-to-end encryption but still allows CSAM to be detected and passed to law enforcement, all without the security being backdoored. At least that's the claim.
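This "middle ground" is generally understood to mean client-side scanning: checking content on the sender's device before it is encrypted, leaving the encryption protocol itself untouched. Since the government hasn't specified any particular design, the sketch below is purely illustrative -- every function in it is a hypothetical stand-in, with a toy XOR "cipher" in place of a real e2ee protocol:

```python
# Purely illustrative client-side scanning flow. None of these functions
# reflect any real product; the detector, reporter and cipher are stubs.

def local_csam_scan(plaintext: bytes) -> bool:
    """Stand-in detector; real proposals use perceptual hashes or ML models."""
    return False  # stub: never flags anything

def report_to_authority(plaintext: bytes) -> None:
    """Stand-in hook for routing flagged content to law enforcement."""
    print("flagged content would be reported here")

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' -- a real service would use its e2ee protocol."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def send_message(plaintext: bytes, recipient_key: bytes) -> bytes:
    # The crux of the claimed middle ground: scanning happens on-device,
    # before encryption, so the e2ee channel itself is never weakened.
    if local_csam_scan(plaintext):
        report_to_authority(plaintext)
    return encrypt(plaintext, recipient_key)
```

Note that moving the scan on-device doesn't make the trust question disappear; it relocates it from the communications channel to the endpoint.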

The Safety Tech Challenge Fund is being administered by the Department for Digital, Culture, Media and Sport (DCMS) -- and following the announcement of the funding awards yesterday, digital minister Chris Philp said in a statement: "It’s entirely possible for social media platforms to use end-to-end encryption without hampering efforts to stamp out child abuse. But they’ve failed to take action to address this problem so we are stepping in to help develop the solutions needed. It is not acceptable to deploy E2EE without ensuring that enforcement and child protection measures are still in place."

The five projects, which have each been awarded an initial £85,000 by the U.K. government -- with a further £130,000 potentially available to be divided between the "strongest" projects (bringing the total funding pot to £555,000) -- are as follows:

  • Edinburgh-based digital forensics firm Cyan Forensics and real-time risk intelligence firm Crisp Thinking, in partnership with the University of Edinburgh and the not-for-profit Internet Watch Foundation, which will develop a plug-in to be integrated within encrypted social platforms to "detect CSAM by matching content against known illegal material" (a toy sketch of this kind of hash matching follows the list below)

  • Parental control app maker SafeToNet and Anglia Ruskin University, which will develop a suite of live video-moderation AI technologies that can run on any smart device -- and are intended to "prevent the filming of nudity, violence, pornography and CSAM in real-time, as it is being produced"

  • Enterprise security firm GalaxKey, based in St Albans, which will work with Poole-based content moderation software maker Image Analyser and digital identity and age assurance firm Yoti to "develop software focusing on user privacy, detection and prevention of CSAM and predatory behavior, and age verification to detect child sexual abuse before it reaches an E2EE environment, preventing it from being uploaded and shared"

  • Content moderation startup DragonflAI, based in Edinburgh, which will also work with Yoti to combine its on-device nudity AI detection tech with the latter's age assurance technologies -- in order to "spot new indecent images within E2EE environments"

  • Austria-based digital forensics firm T3K-Forensics, which will implement its AI-based child sexual abuse detection technology on smartphones to detect newly created material -- providing what the government bills as "a toolkit that social platforms can integrate with their E2EE services"
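The first project's "matching content against known illegal material" is the classic hash-list approach, as used by Microsoft's PhotoDNA and Apple's NeuralHash (see below): images are reduced to perceptual hashes and compared against a list of hashes of known CSAM, such as the one maintained by the Internet Watch Foundation. As a toy illustration only -- real systems use far more robust hash functions -- here is a minimal "average hash" with Hamming-distance matching:

```python
from typing import Iterable

def average_hash(pixels: list[list[int]]) -> int:
    """Toy 64-bit 'average hash' of an 8x8 grayscale grid (values 0-255).

    Each bit records whether a pixel is brighter than the grid's mean, so
    small edits to an image change only a few bits of its hash.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for value in flat:
        bits = (bits << 1) | (1 if value >= mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_material(image_hash: int,
                           known_hashes: Iterable[int],
                           threshold: int = 5) -> bool:
    """Flag an image whose hash is within `threshold` bits of any known hash."""
    return any(hamming_distance(image_hash, h) <= threshold for h in known_hashes)
```

The `threshold` parameter is where the accuracy question discussed below bites: set it too loose and innocent images start matching; set it too tight and trivially altered copies of known material slip through.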

The winning projects will be evaluated at the end of a five-month delivery phase by an external evaluator -- which the government said will look at success criteria including "commercial viability to determine deployability into the market, and long term impact".

In a joint statement, DCMS and the Home Office also touted the forthcoming Online Safety Bill -- which they claimed will transform how illegal and harmful online content is dealt with by placing a new duty of care on social media and other tech companies toward their U.K. users.

"This will mean there will be less illegal content such as child sexual abuse and exploitation online and when it does appear it will be removed quicker. The duty of care will still apply to companies that choose to use end-to-end encryption," they added.

Prior guidance put out by DCMS this summer urged social media and messaging firms to "prevent" the use of e2ee on child accounts. So the government appears to be evolving its approach (or at least its messaging) -- and banking on CSAM detection tools being baked into e2ee services themselves.

Assuming, of course, that any of the aforementioned projects delivers the claimed CSAM detection/prevention functionality at acceptable levels of accuracy (i.e. without ruinous false positives).

Another salient question is whether the novel AI/scanning techs could introduce vulnerabilities, or even backdoors, into end-to-end encrypted systems -- thereby undermining everyone's security.

There is also the question of whether U.K. citizens will be happy with state-mandated scanning of their electronic devices -- given all the privacy and liberty issues that entails.

While the U.K. public has generally been happy to get behind the notion of improving online child safety, it might be rather less happy to discover that this means blanket device scanning -- especially if novel technologies end up sending alerts to law enforcement about people's innocent holiday or bath-time snaps.

The political backlash around misfiring 'safety' tech could be swift and substantial.

iPhone maker Apple put the rollout of its own on-device CSAM scanning tech -- "NeuralHash" -- on hold this fall after a privacy backlash.