TLDR: Recent studies from Harvard Business Review, BetterUp Labs, and Stanford University reveal that ‘AI-generated workslop’—superficial AI content lacking substance—is significantly eroding workplace productivity, costing companies millions, and damaging professional relationships. The research highlights that a substantial portion of employees are dealing with this low-quality output, leading to increased rework and decreased trust among colleagues.
New research from Harvard Business Review, BetterUp Labs, and Stanford University indicates that the rapid adoption of generative AI in corporate environments is eroding productivity through what researchers term ‘workslop’: AI-produced content that appears polished on the surface but lacks the depth and substance to meaningfully advance a task, ultimately offloading more work onto human colleagues.
The phenomenon is not merely anecdotal. A study highlighted in the Harvard Business Review found that 41% of workers have encountered such subpar AI outputs in the past month. On average, employees estimate that 15.4% of the material they receive at work falls into this category. Each instance of ‘workslop’ costs employees nearly two hours of rework, translating to an estimated additional cost of $186 per month per worker. For a large organization with 10,000 employees, this could amount to over $9 million annually in lost productivity.
The impact extends beyond financial costs, significantly affecting professional relationships and morale. According to the research, 53% of participants reported irritation upon receiving ‘workslop’, 38% felt confusion, and 22% were offended. Half of the surveyed individuals began to perceive colleagues who submitted such material as less creative, capable, and trustworthy. Furthermore, 42% considered them less reliable, and 37% judged them as less intelligent. A notable 34% reported these incidents to their teams or managers, and 32% stated they would be less inclined to collaborate with the sender again.
Misuse often stems from top-down mandates for broad AI adoption without adequate guidance, encouraging a focus on quantity over quality. This approach can leave employees demotivated and bored when they return to non-AI tasks, potentially stifling long-term innovation. The studies also uncovered a ‘competence penalty,’ where work perceived to be AI-assisted was rated 9% lower, even when identical to human-generated output, with this bias being more pronounced against women and older workers.
To counteract this trend, leaders are urged to model purposeful AI integration. Recommendations include establishing clear quality norms, fostering a ‘pilot mindset’ for experimentation tied to verifiable outcomes, and promoting transparency in AI use to maintain professional credibility. The Penn Wharton Budget Model projects that AI could boost productivity by 1.5% by 2035, but only if sectoral shifts are managed effectively; otherwise, gains could be negligible. Industry leaders are advised to prioritize substance over superficial shine to ensure AI truly amplifies human strengths.