Slack trains AI models on user data
Dr. Martin Shelton
May 23, 2024
Illustration by the Electronic Frontier Foundation. (CC BY 2.0)
It’s the Digital Security Training team at Freedom of the Press Foundation (FPF), with security news that keeps you, your sources, and your devices safe. If someone has shared this newsletter with you, please subscribe here.
Over the past week, Slack published a blog post defending its privacy practices following widespread criticism of its use of customer data to train its global AI models. At the moment, organizations must opt out to prevent their messages, content, and files from being mined to develop Slack’s AI. The company has updated its privacy principles to clarify that it does not develop “generative models” (e.g., chatbots) using customer data, suggesting it instead uses such data to train “non-generative” models “for features such as emoji and channel recommendations.” Adding to the confusion, Slack’s messaging is inconsistent: its AI landing page reads, “Your data is your data. We don't use it to train Slack AI,” yet the company explicitly requires organizations to opt out to prevent their data from being used in this way. Read more here.
Our team is always ready to assist journalists with digital security concerns. Reach out here, and stay safe and secure out there.
Best,
Martin