TL;DR: A research paper by Taitler and Ben-Porat analyzes how Generative AI (GenAI) affects team collaboration. It finds that GenAI can reduce worker effort, even when it is only minimally effective, because it alters incentives. Managers face an NP-complete problem in selecting optimal teams, and removing a seemingly low-contributing worker can destabilize the entire group. Simulations show that optimal teams are often very large or very small, and many are unstable.
Generative AI (GenAI) is rapidly changing the landscape of how we work together on projects. This new research paper, “Collaborating with GenAI: Incentives and Replacements” by Boaz Taitler and Omer Ben-Porat, delves into the complex dynamics that emerge when GenAI becomes a part of team collaboration. It explores how workers might use AI to their advantage, and how managers might decide to integrate or even replace human workers with these powerful tools.
The paper highlights two main ways GenAI can influence work. Firstly, it can act as an augmentation tool, boosting a worker’s productivity and allowing them to achieve more with less effort. Imagine a writer using AI to quickly generate drafts, or a designer using it to create variations of a concept. Secondly, GenAI can serve as a substitute, taking over parts of a worker’s tasks, or even entire roles. This raises critical questions about how much genuine human effort is still being contributed and how much is effectively handled by AI.
A central focus of the research is the manager’s perspective. In a scenario where a manager needs to build a team for a shared project, they must weigh the benefits of human workers against the capabilities of GenAI. If AI can match or exceed a worker’s output, or if a worker becomes overly reliant on AI without investing meaningful effort, the manager might opt to replace that worker with AI. This decision is not just about cost-saving; it’s about optimizing the overall project outcome while considering the incentives of the remaining human team members.
The “Managed Shared Benefit” Game
To analyze these interactions, the authors introduce a theoretical framework called the “Managed Shared Benefit” (MSB) game. In this model, players (workers) decide how much effort to invest and whether to use GenAI, incurring a cost for their effort. A manager (Principal) then selects a subset of these players to form a coalition. Any unselected players are effectively replaced by GenAI. The shared benefit from the project is then distributed among the selected players and the manager.
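To make this setup concrete, here is a minimal sketch of the structure in Python. The functional forms are my own simplifying assumptions, not the paper's actual specification: the benefit is additive in the selected workers' efforts, GenAI output stands in for each replaced worker, and the benefit is split equally among the coalition members and the manager.

```python
def shared_benefit(efforts, genai_level, coalition):
    # Selected workers contribute their chosen effort; every worker
    # left out of the coalition is replaced by GenAI output.
    replaced = len(efforts) - len(coalition)
    return sum(efforts[i] for i in coalition) + genai_level * replaced

def worker_utility(i, efforts, genai_level, coalition, cost=0.5):
    # The benefit is split equally among coalition members and the
    # manager; selected workers also pay a quadratic effort cost.
    if i not in coalition:
        return 0.0
    share = shared_benefit(efforts, genai_level, coalition) / (len(coalition) + 1)
    return share - cost * efforts[i] ** 2

# Two workers; only worker 0 is selected, worker 1 is replaced by GenAI.
print(round(worker_utility(0, [1.0, 0.5], 0.2, {0}), 6))  # 0.1
```

Even this toy version exposes the core tension: a worker's share grows with the total benefit but shrinks as the coalition expands, and effort is costly while GenAI output is free.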
One of the paper’s striking findings is that GenAI, even if it’s only slightly effective, can lead workers to exert no effort at all. This phenomenon, termed the “Price of Generativity” (PoG), quantifies the inefficiency introduced when players are allowed to use GenAI. Surprisingly, even a minimal improvement from GenAI can cause a significant drop in collective effort and shared benefit. This suggests that the availability of AI can sometimes make players more selfish, reducing their incentive to contribute actively.
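A stylized example of this crowding-out effect (a hypothetical toy model, not the paper's): suppose a worker's output is the better of their own effort and the AI's output, the benefit is split equally among the team, and effort has a quadratic cost. The free-riding threshold then shrinks as the team grows, so an AI level too weak to tempt a pair of workers can still collapse effort in a ten-person team.

```python
def best_effort(team_size, genai_level):
    """Best response of one worker in a stylized shared-benefit game.

    Toy assumptions (mine, not the paper's): output is
    max(effort, genai_level), the benefit is split equally among
    `team_size` workers, and effort costs effort**2 / 2.
    """
    k = team_size
    e_star = 1.0 / k                        # optimal effort without AI
    u_work = e_star / k - e_star ** 2 / 2   # utility from working
    u_free = genai_level / k                # utility from riding on AI
    return 0.0 if u_free >= u_work else e_star

# The same weak AI (0.06) leaves a pair working but idles a team of ten.
print(best_effort(2, 0.06), best_effort(10, 0.06))  # 0.5 0.0
```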
Challenges in Team Formation
From the manager’s viewpoint, selecting the optimal team is a hard combinatorial problem. The research shows that this optimization problem is NP-complete, meaning no efficient (polynomial-time) algorithm is believed to exist, and exact solutions generally require searching an exponential number of candidate coalitions. This difficulty arises from a fundamental trade-off: the manager wants to maximize the project’s shared benefit, which often means including more skilled players, but each additional player in the coalition reduces the manager’s share of that benefit. The paper does, however, offer an efficient algorithm for specific “almost-linear” scenarios, where player contributions are more straightforward.
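The obvious exact approach is brute force over all 2^n coalitions, which is tractable only for small teams, consistent with the hardness result. The payoff function below is my own toy assumption (additive benefit, equal split among coalition and manager), not the paper's model.

```python
from itertools import combinations

def manager_payoff(efforts, genai_level, coalition):
    # Toy payoff: additive benefit, with GenAI filling in for each
    # unselected worker, split equally among coalition and manager.
    benefit = (sum(efforts[i] for i in coalition)
               + genai_level * (len(efforts) - len(coalition)))
    return benefit / (len(coalition) + 1)

def best_coalition(efforts, genai_level):
    # Exhaustive search over all 2**n coalitions -- feasible only
    # for small n, in line with the NP-completeness result.
    n = len(efforts)
    best, best_value = (), manager_payoff(efforts, genai_level, ())
    for size in range(1, n + 1):
        for c in combinations(range(n), size):
            value = manager_payoff(efforts, genai_level, c)
            if value > best_value:
                best, best_value = c, value
    return best, best_value

# One strong worker and three weak ones: the manager keeps only the
# strong worker and lets GenAI replace the rest.
print(best_coalition([3, 1, 1, 1], 0.5))  # ((0,), 2.25)
```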
The study also provides intriguing insights into coalition stability. An optimal team might not always be “stable,” meaning the manager could be tempted to remove a member, even if that decision ultimately leads to a worse outcome. Furthermore, the research reveals that even a worker who seems to contribute very little (a “near-zero” contributor) can be crucial. Their presence might encourage other team members to maintain higher effort levels; removing them could trigger a cascade of reduced effort across the team, significantly harming the project’s overall output.
Simulation Insights
Extensive simulations conducted by the authors illustrate these theoretical findings. They show that optimal coalitions tend to be either very large or very small, with medium-sized teams being rare. This suggests that once a manager starts removing members, the “Value-to-Share Ratio” (VSR) of remaining players can drop, leading to further exclusions. The simulations also highlight that while many optimal coalitions are large, a significant portion of them are not stable, indicating a potential for managers to make short-sighted decisions that undermine long-term team effectiveness.
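The removal cascade can be sketched with a myopic manager who keeps dropping whichever member's removal most raises their payoff. The payoff function here is an illustrative assumption, not the paper's model; the point is only how single removals compound.

```python
def manager_payoff(efforts, genai_level, coalition):
    # Illustrative payoff: additive benefit with GenAI standing in
    # for unselected workers, split equally among coalition members
    # and the manager.
    benefit = (sum(efforts[i] for i in coalition)
               + genai_level * (len(efforts) - len(coalition)))
    return benefit / (len(coalition) + 1)

def greedy_shrink(efforts, genai_level):
    # Myopically drop whichever member's removal most raises the
    # manager's payoff; stop when no single removal helps.  Each
    # removal can make the next one attractive, so the coalition
    # may cascade down to an extreme size.
    coalition = set(range(len(efforts)))
    while coalition:
        current = manager_payoff(efforts, genai_level, coalition)
        value, worst = max((manager_payoff(efforts, genai_level,
                                           coalition - {i}), i)
                           for i in coalition)
        if value <= current:
            break
        coalition.remove(worst)
    return sorted(coalition)

# Starting from five workers, removals cascade down to a single member.
print(greedy_shrink([3, 1, 1, 1, 1], 0.5))  # [0]
```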
This research by Boaz Taitler and Omer Ben-Porat offers a valuable theoretical lens on the evolving dynamics of human-AI collaboration. It underscores the need to weigh incentives and team structure carefully as GenAI becomes more integrated into our professional lives. For a deeper dive into their findings, see the full paper.


