The controversy began when Slack users discovered that the company's privacy principles might allow their data to be used in ways they hadn't explicitly agreed to, including training the large language models (LLMs) that power Slack's AI features. Although Slack maintains that it does not use customer data to train these models, the ambiguity of its policy language has eroded user trust.
Engineer and writer Gergely Orosz highlighted this issue in a post on Threads, arguing that Slack’s policy should explicitly state its data use practices rather than relegating explanations to a blog post.
“An ML engineer at the company says they don’t use messages to train LLM models,” Orosz wrote. “My response is that the current terms allow them to do so. I’ll believe this is the policy when it’s in the policy. A blog post is not the privacy policy: every serious company knows this.”
Slack’s Response and Policy Revision
In response to the backlash, Slack announced imminent changes to its privacy principles to clarify how customer data relates to AI development on its platform. A spokesperson for Salesforce, Slack's parent company, said the updates would make clear that Slack's AI initiatives do not use customer data to develop or train generative models.
“We’ll be updating those principles today to better explain the relationship between customer data and generative AI in the app,” said the Salesforce spokesperson. These updates are intended to ensure that “customer data never leaves the app’s trust boundary, and the providers of the LLM never have any access to the customer data.”
User Reactions and Ongoing Concerns
Despite Slack's commitment to updating its policies, the initial discovery of the data use practices led to a significant outcry among users. Hacker News threads and social media posts filled with comments from concerned individuals questioning the ethics and legality of Slack's data usage.
Corey Quinn from Duckbill Group voiced a common sentiment among dismayed users: “I’m sorry Slack, you’re doing fucking WHAT with user DMs, messages, files, etc.? I’m positive I’m not reading this correctly.”
To mitigate concerns, SlackHQ took to social media to clarify its position, emphasizing that while Slack AI uses off-the-shelf LLMs, those models are not trained on customer data.
“Customer data belongs to the customer,” SlackHQ reiterated, confirming that users can opt out of the data sharing that feeds its non-generative ML models, which power internal features such as channel recommendations and search.
Looking Ahead: Implications for Slack and Its Users
As Slack navigates this challenging situation, the implications for user trust and corporate transparency are significant. The incident highlights the delicate balance tech companies must strike between using AI to enhance the user experience and respecting privacy.
For Slack, the way forward involves not only adjusting its policies but also restoring its users' confidence that their data is handled with integrity.
The upcoming policy revisions will be a crucial step for Slack in demonstrating its commitment to transparency and user privacy. As the tech community watches closely, whether these changes quell user concerns and set a precedent for data privacy in AI applications will become clear in the coming months.