
The collaboration software provider says its latest generative AI features do not rely on customer data. Unless customers opt out, however, user files and messages are still used to train its other AI models.

Slack has amended its “privacy principles” in response to concerns about the use of customer data to train its generative AI (genAI) models.

In a blog post on Friday, the company said that the large language models (LLMs) underpinning the genAI capabilities in its collaboration software are not developed using customer data such as files and Slack messages. By default, however, customer data is still used for its machine learning-based recommendations, and customers must actively opt out.

Criticism of Slack’s privacy stance appears to have begun last week, when a Slack user pointed out on X that the company uses customer data for its AI models and requires users to opt out. Several people also voiced their outrage in a Hacker News thread. Slack took note of the complaints and acknowledged that the existing wording of its privacy principles contributed to the situation. “We value the feedback, and as we looked at the language on our website, we realized that they were right,” Slack said in Friday’s blog post. “We could have done a better job of explaining our approach, especially regarding the differences in how data is used for traditional machine-learning (ML) models and in generative AI.”

“Slack’s privacy principles should help it address concerns that could potentially stall adoption of genAI initiatives,” said Raúl Castañón, a senior research analyst at S&P Global Market Intelligence’s 451 Research.

Slack still opts users in by default when it comes to sharing their data with its AI/ML algorithms. To opt out, a customer organization’s Slack administrator must email the company and request that the organization’s data be excluded.

“Ideally, training would be opt-in, not opt-out, and companies like Slack/Salesforce would proactively inform customers of the specifics of what data is being used and how it is being used,” said Irwin Lazar, president and principal analyst at Metrigy. “I think that privacy concerns related to AI training are only going to grow, and companies are increasingly going to face backlash if they don’t clearly communicate data use and training methods.”
