TLDR: LinkedIn has suspended the use of data from users in the European Economic Area (EEA), Switzerland, and the United Kingdom for training its generative AI models. The decision, announced on September 18, 2024, follows privacy concerns raised by regulatory bodies, notably the UK’s Information Commissioner’s Office (ICO). Users in other regions can opt out, but privacy advocates criticize the opt-out model, under which data use is enabled by default.
LinkedIn, the professional networking platform owned by Microsoft, has announced a temporary halt to the training of its generative artificial intelligence models on data from users within the European Economic Area (EEA), Switzerland, and the United Kingdom. The move, confirmed in a blog post on September 18, 2024, and subsequently reported by various outlets, follows concerns raised by privacy watchdogs, particularly the UK’s Information Commissioner’s Office (ICO).
Previously, LinkedIn had specified that data from users in the EU, EEA, and Switzerland was excluded from AI training, but this exclusion did not explicitly extend to UK data. The recent announcement clarifies that the UK is now among the regions whose user data will not be used for generative AI training. Blake Lawit, Senior Vice President and General Counsel at LinkedIn, stated, ‘At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice.’
This development was met with approval from the UK’s ICO. Stephen Almond, Executive Director of Regulatory Risk at the ICO, commented, ‘We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.’ Almond emphasized the importance of public trust in privacy rights being respected from the outset.
For users outside the EEA, Switzerland, and the UK, LinkedIn continues to offer an opt-out setting for data used in generative AI training. However, this feature is often enabled by default, a practice that has drawn criticism from privacy advocates. Mariano delli Santi, legal and policy officer at the Open Rights Group, voiced strong opposition to this approach, stating, ‘The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated, but a common-sense requirement.’
LinkedIn maintains that it minimizes the personal data in its training datasets by employing ‘privacy enhancing technologies to redact or remove personal data.’ The company’s updated terms of service, which formalize the use of member data for generative AI models, are set to come into effect on November 20, 2024. The situation mirrors challenges faced by other tech giants: Meta and X (formerly Twitter) have also paused AI training on EU/EEA user data following regulatory pressure.