LinkedIn, the professional networking giant, has found itself at the centre of a fresh controversy over the use of its users’ data to train AI models without explicit consent. The company recently acknowledged that it uses personal data to develop AI models, sparking concern among its vast user base. The episode highlights a broader tension playing out across tech platforms: the delicate balance between AI innovation and user privacy.
In a blog post, Blake Lawit, LinkedIn’s general counsel, announced changes to the platform’s user agreement and privacy policy taking effect on November 20. The amendments aim to spell out how personal data fuels AI features on LinkedIn, promising a more transparent approach going forward.
What Changes Are Coming?
LinkedIn’s updated privacy policy will explicitly state that personal data may be used to “develop and train AI models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences.” The stated goal is to make the service more relevant and useful to members. However, the policy also stipulates that opting out applies only to future AI training, not to data that has already been used, a limitation that has drawn criticism from users.
The Scope and Storage of Collected Data
Whenever a user interacts with AI features, such as composing posts or modifying settings, their data is collected and used for AI training. That data remains stored until the user deletes the AI-generated content associated with it. For those concerned about past data use, LinkedIn recommends its data access tool to review or remove previously collected information.
Potential Risks and Protections
One significant risk LinkedIn highlights is that generative AI features could output personal data. The platform says it mitigates this by employing privacy-enhancing technologies to redact or obscure personal data in training datasets. Despite these measures, the fact that opting out does not affect data already used in AI training remains a point of contention. LinkedIn justifies this decision by pointing to the benefits AI training offers all members, particularly in enhancing job and networking opportunities through AI-driven features.
How to Opt-Out of AI Training on LinkedIn
For users looking to exercise more control over their data, LinkedIn provides an option to opt out of AI training. This can be done by navigating to the “Data privacy” section under account settings and switching off the “Data for Generative AI Improvement” toggle. The setting is on by default for most users, with exceptions in regions such as the European Economic Area and Switzerland, where stricter privacy laws apply.
Beyond Opt-Out: Legislative Protections and Future Outlook
As legislative frameworks like the European Union’s AI Act and the GDPR evolve, stronger protections are anticipated that would leave all users better informed and less exposed to unwelcome surprises about how their data is used. LinkedIn’s latest policy updates signal a move toward greater transparency and user control. But as AI weaves itself deeper into the fabric of digital platforms, the debate over ethical AI use, user consent, and privacy is only set to intensify.
In the meantime, users are advised to stay vigilant about how their data is used and to take advantage of the available tools to safeguard their personal information on LinkedIn and beyond. Understanding the mechanisms behind AI features, and the policies governing them, puts users in a better position to weigh the benefits and risks of these advances in professional networking.