TL;DR: According to a Goldman Sachs executive, artificial intelligence is facing a critical shortage of human-generated training data. The executive argues, however, that a large volume of untapped data remains, and that synthetic data generated by AI itself is poised to fuel future development. This shift raises concerns about data quality and the potential for ‘model collapse’ if AI systems are predominantly trained on their own outputs.
Artificial intelligence development has reached a pivotal juncture, with a Goldman Sachs executive indicating that the industry has largely exhausted its supply of human-generated training data. George Lee, co-head of the Goldman Sachs Global Institute, emphasized that while this presents a challenge, a significant volume of untapped data remains available to drive future advancements in AI. This includes a growing reliance on synthetic data, where AI systems are now being employed to generate data for their own pre- and post-training phases, a phenomenon Lee describes as ‘AI building AI.’
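To make the ‘AI building AI’ pattern concrete, the sketch below samples synthetic training examples from an existing language model and writes them to disk. It is a minimal illustration only: the model (gpt2), the prompts, and the file name are assumptions chosen for demonstration, not details from Lee’s remarks, and real pipelines add filtering and deduplication on top of this step.

```python
# Illustrative sketch of "AI building AI": sampling synthetic training
# examples from an existing model. Model choice (gpt2), prompts, and the
# output file are assumptions for demonstration, not reported details.
import json
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

seed_prompts = [
    "Explain why the sky is blue:",
    "Summarize the water cycle in one sentence:",
]

with open("synthetic_corpus.jsonl", "w") as f:
    for prompt in seed_prompts:
        # Draw several completions per prompt to build a synthetic corpus.
        outputs = generator(
            prompt,
            max_new_tokens=60,
            num_return_sequences=3,
            do_sample=True,
            truncation=True,
        )
        for out in outputs:
            record = {"prompt": prompt, "completion": out["generated_text"]}
            f.write(json.dumps(record) + "\n")
```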
This evolution in data sourcing is not without its complexities. A study by research group Epoch AI projects that the publicly available supply of high-quality training data for AI language models could be exhausted sometime between 2026 and 2032. The anticipated shortage has prompted AI companies to explore alternative data strategies, including the generation of synthetic data.
However, experts caution against the potential pitfalls of this approach. Training generative AI systems predominantly on their own outputs could lead to a degradation in performance, a phenomenon termed ‘model collapse.’ As one researcher noted, training on AI-generated data is ‘like what happens when you photocopy a piece of paper and then you photocopy the photocopy. You lose some of the information.’ Furthermore, this practice risks embedding and amplifying existing mistakes, biases, and unfairness present in the initial training data.
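The photocopy analogy can be made precise with a toy experiment. In the sketch below, a one-dimensional Gaussian stands in for a generative model, and each generation is fitted only to samples drawn from the previous generation’s model. The sample size and generation count are arbitrary choices for demonstration; the point is that estimation error compounds, so the fitted spread typically decays toward zero and the tails of the original distribution are lost.

```python
# Toy numeric illustration of 'model collapse': a 1-D Gaussian stands in
# for a generative model. Each generation "trains" on samples produced by
# the previous generation's model. All numbers are illustrative; real
# language-model collapse is more complex than this caricature.
import numpy as np

rng = np.random.default_rng(42)
n = 50                                  # small samples, so estimation noise compounds
data = rng.normal(0.0, 1.0, size=n)     # generation 0: "human" data

for gen in range(301):
    mu, sigma = data.mean(), data.std()  # "train": fit mean and spread to current data
    if gen % 50 == 0:
        print(f"generation {gen:3d}: std = {sigma:.4f}")
    data = rng.normal(mu, sigma, size=n)  # next generation sees only model output

# The fitted std typically drifts toward zero: each fit loses a little of
# the true spread, and sampling from the narrower model compounds the loss,
# the statistical analogue of photocopying a photocopy.
```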
The scarcity of high-quality human-generated data is exacerbated by increasing restrictions from web sources. A study by the Data Provenance Initiative, affiliated with MIT, revealed a ‘rapid crescendo of data restrictions’ over the past year, with approximately 5% of all data and 25% of data from top-tier sources being restricted from major AI training datasets between April 2023 and April 2024. This trend stems from growing ethical and legal concerns surrounding the use of public data for AI training.
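In practice, many of these restrictions are expressed through robots.txt files that disallow AI crawlers. The sketch below uses only Python’s standard library to check whether a given crawler is permitted to fetch a page; the domain and user-agent strings are illustrative examples, not data from the Data Provenance Initiative study.

```python
# Sketch of how web data restrictions are expressed: sites opt out of AI
# crawlers via robots.txt. Standard library only; the domain and the
# user-agent strings below are examples, not study data.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetches and parses the site's robots.txt

for agent in ("GPTBot", "CCBot", "*"):  # common AI/scraper user agents
    allowed = parser.can_fetch(agent, "https://example.com/articles/")
    print(f"{agent:>6} allowed: {allowed}")
```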
Despite these challenges, the demand for AI services continues to be robust across both consumer and enterprise sectors. Goldman Sachs analysts observe that the AI supply chain remains capacity-constrained, with companies actively seeking solutions to meet the escalating demand. The shift towards synthetic data, while offering a potential solution to the data crunch, underscores the critical need for robust data governance and continuous monitoring to ensure the integrity and ethical operation of advanced AI systems.


