TLDR: A research paper titled “The Economics of AI Foundation Models: Openness, Competition, and Governance” explores how foundation model developers strategically choose their level of model openness. It introduces the “data flywheel effect,” where deployers’ fine-tuning efforts create lock-in. The paper identifies three incumbent strategies—Harvest, Defend, and Dominate—based on the strength of this effect. It also reveals an “openness trap,” where transparency mandates can backfire, reducing investment and welfare. Furthermore, it analyzes how vertical integration and government subsidies can have contingent welfare effects, often being captured by incumbents rather than benefiting the broader AI ecosystem, and calls for nuanced policy approaches.
The landscape of Artificial Intelligence is undergoing a profound transformation with the emergence of foundation models (FMs). These powerful, general-purpose AI systems, such as GPT-5 and Gemini 2.5, are pre-trained on vast datasets and serve as the bedrock for a wide array of downstream applications. This shift has created a distinct AI value chain, where upstream developers build the core models, and downstream deployers adapt and specialize them for end-users through a crucial process known as fine-tuning.
At the heart of this evolving ecosystem lies a critical strategic decision for foundation model developers: the degree of “model openness.” This isn’t a simple yes or no choice but a spectrum, ranging from fully closed, API-only models (like OpenAI’s GPT-5) to those with publicly released weights allowing deep modification (such as Meta’s Llama). This decision creates a dual effect that profoundly shapes the entire AI value chain.
On one hand, greater openness can amplify knowledge spillovers, enabling new entrants to learn from the incumbent’s technology and intensifying future competition. On the other hand, the same openness can significantly lower the costs of downstream fine-tuning, encouraging deployer investment, accelerating adoption, and stimulating overall ecosystem growth. This dual effect gives rise to a powerful feedback mechanism known as the “data flywheel effect.” As a deployer fine-tunes and operates an incumbent’s model, every interaction, from user feedback to prompt adjustments, enhances their expertise. This accumulated knowledge reduces future fine-tuning costs and makes switching to an unfamiliar model increasingly expensive, creating a form of learning-based lock-in. Think of GitHub Copilot, which improves with each accepted or rejected code suggestion, reinforcing its connection to OpenAI’s models.
A recent research paper, “The Economics of AI Foundation Models: Openness, Competition, and Governance”, by Fasheng Xu, Xiaoyu Wang, Wei Chen, and Karen Xie, delves into these complex economic drivers. The authors construct a two-period game-theoretic model to analyze how openness shapes competition and to explore the implications for policy and strategy in the AI value chain.
Strategic Openness: Three Paths for Incumbents
The paper reveals that an incumbent developer’s optimal level of openness is surprisingly non-monotonic, meaning it doesn’t simply increase or decrease with its competitive advantage. Instead, it depends on the strength of its “data flywheel effect,” leading to three distinct strategic regimes:
- Harvest Strategy: When the data flywheel effect is weak, the incumbent recognizes it cannot sustain a long-term competitive advantage. It opts for maximum openness and a high license price to extract as much short-term profit as possible before ceding the future market. This was observed with early image generation models like DALL·E, which, once surpassed, shifted to monetizing existing assets.
- Defend Strategy: For an intermediate data flywheel effect, winning is possible but requires a strategic gambit. The incumbent restricts openness precisely to a level that impairs an entrant’s learning, securing a future advantage even at the expense of some short-term revenue. OpenAI’s approach with its advanced GPT models, keeping them proprietary and API-only, exemplifies this strategy against fast-following open-source alternatives.
- Dominate Strategy: When the data flywheel effect is very strong, the incumbent can confidently pursue high openness and a low license price. This aggressively encourages adoption and accelerates the data flywheel, aiming to establish its technology as the industry standard. Meta’s Llama series, released with widely available model weights and a permissive license, is a prime example of this long-term market dominance play.
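The three regimes above can be summarized as a threshold rule on flywheel strength. The sketch below is a toy illustration, not the paper’s formal model: the cutoff values and the (openness, price) pairs attached to each regime are hypothetical assumptions chosen only to make the structure concrete.

```python
# Toy sketch of the three-regime structure. The cutoffs (0.3, 0.7) and the
# openness/price values are illustrative assumptions, not derived quantities.
from dataclasses import dataclass


@dataclass
class Strategy:
    name: str
    openness: float      # 0.0 = fully closed (API-only), 1.0 = open weights
    license_price: str


def incumbent_strategy(flywheel: float,
                       weak_cutoff: float = 0.3,
                       strong_cutoff: float = 0.7) -> Strategy:
    """Map a flywheel strength in [0, 1] to a strategy regime."""
    if flywheel < weak_cutoff:
        # Weak lock-in: extract short-term profit before ceding the market.
        return Strategy("Harvest", openness=1.0, license_price="high")
    if flywheel < strong_cutoff:
        # Intermediate: restrict openness to impair the entrant's learning.
        return Strategy("Defend", openness=0.2, license_price="moderate")
    # Strong lock-in: maximize adoption to entrench the standard.
    return Strategy("Dominate", openness=1.0, license_price="low")


print(incumbent_strategy(0.1).name)  # Harvest
print(incumbent_strategy(0.5).name)  # Defend
print(incumbent_strategy(0.9).name)  # Dominate
```

The non-monotonicity the paper describes shows up directly: openness is high at both extremes of flywheel strength and dips in the middle, where the Defend regime sacrifices short-term revenue for a future advantage.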
The “Openness Trap” and Policy Paradoxes
The research highlights a critical policy paradox termed the “openness trap.” While regulators often assume greater openness is always beneficial for competition, mandating full transparency can backfire. If a developer in the “Defend” regime is forced to be fully open, it loses its strategic flexibility. Faced with a guaranteed future loss, the incumbent may pivot to a short-term “Harvest” strategy, charging high prices and reducing investment in fine-tuning. This ultimately leads to a collapse in downstream innovation, reduced consumer surplus, and lower social welfare. The paper suggests that nuanced approaches, such as private model registration with regulators, might be superior to public disclosure mandates, preserving strategic flexibility while ensuring oversight.
Vertical Integration and Government Subsidies: Double-Edged Swords
The paper also examines the welfare implications of other common corporate strategies and policy tools:
- Vertical Integration: The blurring lines between model developers and application deployers (e.g., Microsoft’s integration of OpenAI models) can be either beneficial or harmful. When the data flywheel effect is strong, integration can enhance efficiency by eliminating internal markups and streamlining operations, ultimately benefiting the ecosystem. However, if the flywheel is weak, integration can become anti-competitive by foreclosing more efficient entrants, leading to reduced investment and overall welfare. Regulators must assess the underlying market dynamics, not just concentration, to determine the impact of such mergers.
- Government Subsidies: Programs designed to spur AI adoption (like the U.S. National AI Research Resource or European “AI Adoption Vouchers”) are vulnerable to “strategic capture.” An incumbent developer might respond to a subsidy by strategically raising its license fees or reducing model openness, effectively absorbing the subsidy’s value and leaving downstream deployers no better off. The research shows that subsidies can perversely incentivize incumbents to maintain defensive strategies longer, delaying pro-adoption moves. To be effective, such policies need conditional frameworks, requiring commitments to stable pricing or specific openness levels from developers.
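The “strategic capture” mechanism can be seen in a back-of-the-envelope calculation. This is a hedged toy illustration, not the paper’s equilibrium analysis: the fee and subsidy figures are made up, and the assumption is simply that the incumbent can reprice freely after the subsidy is announced.

```python
# Toy illustration of subsidy capture: if the incumbent raises its license
# fee by the subsidy amount, the deployer's net cost is unchanged and the
# subsidy's full value flows to the incumbent. All numbers are hypothetical.

def deployer_net_cost(license_fee: float, subsidy: float) -> float:
    """What the deployer actually pays after applying the subsidy."""
    return license_fee - subsidy


base_fee = 100.0
subsidy = 20.0

# If the incumbent cannot reprice, the subsidy reaches the deployer.
without_repricing = deployer_net_cost(base_fee, subsidy)            # 80.0

# If the incumbent raises its fee by the subsidy amount, the deployer
# is no better off and the incumbent captures the subsidy in full.
with_repricing = deployer_net_cost(base_fee + subsidy, subsidy)     # 100.0

print(without_repricing, with_repricing)
```

This is why the conditional frameworks mentioned above matter: tying a subsidy to commitments on pricing or openness removes the incumbent’s ability to neutralize it through repricing.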
In conclusion, the paper underscores that model openness is a complex strategic variable with far-reaching economic consequences. It provides a robust framework for understanding the intricate trade-offs faced by AI foundation model developers and offers crucial insights for policymakers aiming to foster a healthy, competitive, and innovative AI ecosystem without falling into unintended “openness traps” or allowing strategic capture of public incentives.