TLDR: File-sharing giant WeTransfer recently faced significant user backlash after quietly updating its terms of service with a clause permitting the use of user content for ‘machine learning models.’ The move sparked widespread concern among its creative user base over intellectual property and data exploitation. WeTransfer quickly reversed the change, clarifying that it does not use user content for AI training and emphasizing its commitment to user trust. The episode underscores a growing tension between AI development and user data privacy.
Amsterdam, Netherlands – File-sharing platform WeTransfer found itself at the center of a digital trust storm this week, following a swift and decisive backtrack on controversial updates to its terms of service. The incident has cast a spotlight on the escalating tensions between artificial intelligence ambitions, user data privacy, and the critical importance of digital trust in the tech industry.
Earlier this month, WeTransfer quietly introduced a new clause, specifically section 6.3, into its terms of service. This update, which became visible around July 14th, granted the company a ‘perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license… including to improve performance of machine learning models that enhance our content moderation process.’ This seemingly innocuous legal phrasing quickly ignited a firestorm among WeTransfer’s global user base, particularly within the creative community—artists, writers, filmmakers, and journalists—who rely on the platform for sharing sensitive and proprietary work.
Concerns rapidly mounted over the potential for their intellectual property to be used without consent or compensation to train AI models, or even to develop systems that could eventually compete with their own creative output. Social media, notably X (formerly Twitter), became a hub for users voicing their unease and outrage, many of whom perceived the clause as a ‘dystopian data grab’ rather than standard risk management.
In response to the widespread outcry, WeTransfer acted with remarkable speed. By Tuesday, July 16th, the company had revised its terms, removing all references to machine learning and AI. A clarifying statement was issued, asserting, ‘We don’t use customer content to train AI, we never have, and we don’t sell or share user data.’ The company explained that the original language was intended to cover the ‘possibility of using AI to improve content moderation and further enhance our measures to prevent the distribution of illegal or harmful content on the WeTransfer platform,’ but conceded that the wording had been ‘unclear’ and caused ‘unnecessary confusion.’
The revised clause now simply states: ‘You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.’ This significant narrowing of permissions aims to reassure users that their content will only be used for core service operations.
This episode, however, extends beyond WeTransfer itself, serving as a potent illustration of a broader ‘trust deficit’ emerging between users and technology providers in the era of artificial intelligence. It underscores fundamental questions about data ownership, control, and the legal parameters of its use. Experts suggest that in a landscape where even routine terms and conditions can carry profound AI implications, companies can no longer afford to rely on vague or ‘industry standard’ legalese. Transparency, clear consent, and robust data protection are becoming non-negotiable pillars of digital trust.
WeTransfer, a company with deep roots in the creative community, has stated its commitment to earning back trust, emphasizing its high regard for customers and their work. The incident serves as a stark reminder that in 2025, trust is not merely a marketing asset but a foundational element that must be meticulously earned, built into, and protected within digital infrastructure.


