Slack under attack over sneaky AI training policy

Image Credits: TechCrunch

On the heels of ongoing issues around how big tech is appropriating data from individuals and businesses in the training of AI services, a storm is brewing among Slack users upset over how the Salesforce-owned chat platform is charging ahead with its AI vision.

The company, like many others, is tapping its own user data to train some of its new AI services. But it turns out that if you don't want Slack to use your data, you have to email the company to opt out.

And the terms of that engagement are tucked away in what appears to be an out-of-date, confusing privacy policy that no one was paying attention to. That was the case with Slack, until a miffed user posted about the terms on a community site hugely popular with developers and the post went viral.

It all kicked off last night, when a note on Hacker News raised the issue of how Slack trains its AI services by way of a straight link to its privacy principles -- no additional comment was needed. That post sparked a longer conversation -- and what seemed like news to current Slack users -- that Slack opts users in to its AI training by default, and that you need to email a specific address to opt out.

That Hacker News thread then spurred multiple conversations and questions on other platforms: There is a newish, generically named product called "Slack AI" that lets users search for answers and summarize conversation threads, among other things, so why is it not mentioned by name anywhere on that privacy principles page, even to make clear whether the policy applies to it? And why does Slack reference both "global models" and "AI models"?

Between people being confused about where Slack is applying its AI privacy principles, and people being surprised and annoyed at having to email to opt out -- at a company that makes a big deal of touting that "you control your data" -- Slack does not come off well.

The shock might be new, but the terms are not. According to pages on the Internet Archive, the terms have been applicable since at least September 2023. (We have asked the company to confirm.)

Per the privacy policy, Slack is using customer data specifically to train "global models," which Slack uses to power channel and emoji recommendations and search results. Slack tells us that its usage of the data has specific limits.

"Slack has platform-level machine learning models for things like channel and emoji recommendations and search results. We do not build or train these models in such a way that they could learn, memorize or be able to reproduce some part of customer data," a company spokesperson told TechCrunch. However, the policy does not appear to address the overall scope and the company's wider plans for training AI models.

In its terms, Slack says that if customers opt out of data training, they will still benefit from the company's "globally trained AI/ML models." But in that case, it's not clear why the company is using customer data in the first place to power features like emoji recommendations.

The company also said it doesn't use customer data to train Slack AI.

"Slack AI is a separately purchased add-on that uses large language models (LLMs) but does not train those LLMs on customer data. Slack AI uses LLMs hosted directly within Slack's AWS infrastructure, so that customer data remains in-house and is not shared with any LLM provider. This ensures that customer data stays in that organization's control and exclusively for that organization's use," a spokesperson said.

Some of the confusion is likely to be addressed sooner rather than later. In a reply to one critical take on Threads from engineer and writer Gergely Orosz, Slack engineer Aaron Maurer conceded that the company needs to update the page to reflect "how these privacy principles play with Slack AI."

Maurer added that these terms were written at a time when the company didn't have Slack AI, and that the rules reflect its work around search and recommendations. It will be worth examining the terms for future updates, given the confusion around what Slack is currently doing with its AI.

The issues at Slack are a stark reminder that, in the fast-moving world of AI development, user privacy should not be an afterthought, and a company's terms of service should clearly spell out how and when data is used -- and when it is not.
