LinkedIn has stopped grabbing UK users' data for AI

Image Credits: Smith Collection/Gado / Getty Images

The U.K.'s data protection watchdog has confirmed that Microsoft-owned LinkedIn has stopped processing user data for AI model training for now.

Stephen Almond, executive director of regulatory risk for the Information Commissioner's Office, wrote in a statement on Friday: "We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO."

Eagle-eyed privacy experts had already spotted a quiet edit LinkedIn made to its privacy policy after a backlash over grabbing people's info to train AIs — adding the U.K. to the list of European regions where it does not offer an opt-out, as it says it is not processing local users' data for this purpose.

"At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice," LinkedIn general counsel Blake Lawit wrote in an updated company blog post originally published on September 18.

The professional social network had previously specified it was not processing information of users located in the European Union, EEA, or Switzerland — where the bloc's General Data Protection Regulation (GDPR) applies. However, U.K. data protection law is still based on the EU framework, so when it emerged that LinkedIn was not extending the same courtesy to U.K. users, privacy experts were quick to cry foul.

U.K. digital rights nonprofit the Open Rights Group (ORG) channeled its outrage at LinkedIn's action into a fresh complaint to the ICO about consentless data processing for AI. But it was also critical of the regulator for failing to stop yet another AI data heist.

In recent weeks, Meta, the owner of Facebook and Instagram, lifted an earlier pause on processing its own local users' data for training its AIs and resumed harvesting U.K. users' info by default. That means users with accounts linked to the U.K. must once again actively opt out if they don't want Meta using their personal data to enrich its algorithms.

Despite the ICO previously raising concerns about Meta's practices, the regulator has so far stood by and watched the adtech giant resume this data harvesting.

In a statement put out on Wednesday, ORG's legal and policy officer, Mariano delli Santi, warned about the imbalance of letting powerful platforms get away with doing what they like with people's information so long as they bury an opt-out somewhere in settings. Instead, he argued, they should be required to obtain affirmative consent up front.

“The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI," he wrote. "Opt-in consent isn't only legally mandated, but a common-sense requirement.”

We've reached out to the ICO and Microsoft with questions and will update this report if we get a response.

Update: LinkedIn spokesperson Leonna Spilman emailed a statement in which the company wrote: "At this time, we are not enabling training for generative AI on member data from the European Economic Area (EEA), Switzerland and the United Kingdom. We welcome the opportunity to continue our constructive engagement with the ICO."

In further remarks about its use of user data for AI training, LinkedIn added: "We believe that our members should have the ability to exercise control over their data, which is why we are making available an opt out setting for training AI models used for content generation in the countries where we do this. We’ve always used some form of automation in LinkedIn products, and we’ve always been clear that users have the choice about how their data is used. The reality of where we're at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity. At the end of the day, people want that edge in their careers and what our gen-AI services do is help give them that assist."