TLDR: LinkedIn has updated its generative AI policy, effective November 3, 2025, to use a broader range of member data for training its AI models. This change affects legal professionals who use the platform for networking and thought leadership, as their public profile details, posts, and activity may now contribute to AI training. Users can opt out of future data use, but previously collected data will remain in the AI systems.
Effective November 3, 2025, LinkedIn has implemented a significant update to its generative artificial intelligence (AI) policy, broadening the scope of member data utilized to train its AI models. This strategic shift is poised to enhance the platform’s AI-driven features, offering more accurate suggestions, smarter prompts, and improved content interpretation for its users. The policy change holds particular relevance for legal professionals who leverage LinkedIn for sharing insights, building professional visibility, and maintaining their networks.
According to LinkedIn’s updated data-use terms, the company can now employ a wider array of member data from various regions, including the UK, EU/EEA, Switzerland, Canada, and Hong Kong, to train its content-generating AI models. This expanded dataset encompasses:
LinkedIn profile details: Including job titles, skills, education, and location.
Public content: Such as posts, comments, articles, and poll responses.
Platform activity: Interactions within LinkedIn groups and feedback on AI features.
Aggregated job-related data: Resumes and responses to screening questions.
Furthermore, any data input or interaction with LinkedIn’s built-in AI or content-creation tools (e.g., post-drafting, message suggestions, profile prompts) may also contribute to these models. However, certain sensitive data categories, including private messages, login credentials, and payment information, remain explicitly excluded from generative AI training.
A notable aspect of the new policy is the provision for data sharing with LinkedIn affiliates, including Microsoft Corporation and its subsidiaries. Under this arrangement, those affiliates may use the shared data to train their own generative AI models and to enable more personalized advertising and professional matching.
LinkedIn’s rationale behind this update is to strengthen its generative AI capabilities by training models on real professional activity. This approach is expected to provide the necessary context for delivering more reliable and relevant outputs across the platform. The company anticipates that a broader training dataset will enable better understanding of content relevance, topic relationships, and opportunities pertinent to individual users. This move aligns with LinkedIn’s ongoing investment in AI, ensuring that its generative models perform consistently and at scale.
For members concerned about their data privacy, LinkedIn has established an opt-out mechanism. Users can prevent their future data from being used for generative AI training by navigating to ‘Settings & Privacy’ → ‘Data Privacy’ → ‘Data for Generative AI Improvement’. It is crucial to note, however, that this opt-out does not apply retroactively: any data already collected and integrated into LinkedIn’s AI training systems will remain there.