TLDR: SCoGen is a novel framework that synthesizes real-world coding problems to train large language models (LLMs). It extracts domain knowledge, domain skills, coding skills, and application scenarios from datasets like Stack Overflow and Kaggle. These elements are then used to build a scenario-centric knowledge graph, enabling a sampling strategy to generate diverse and complex programming challenges. Experiments show SCoGen significantly improves LLM performance on practical coding tasks, addressing the scarcity of high-quality real-world training data.
Large Language Models (LLMs) have made incredible strides in understanding and generating code, becoming invaluable tools across many software development tasks. However, a significant hurdle remains: the scarcity of high-quality, real-world coding problems needed to further train and improve these models. Traditional synthetic data generation often focuses on simpler, function-level or algorithmic tasks, failing to capture the intricate, multi-domain nature of actual software engineering challenges.
To address this critical gap, researchers have introduced SCoGen, a novel framework designed to synthesize coding problems that truly mirror authentic real-world scenarios. This innovative approach systematically integrates various elements crucial to practical programming: domain knowledge, domain skills, and coding skills. These foundational building blocks are meticulously extracted from vast, real-world programming datasets, including popular platforms like Stack Overflow and Kaggle.
The core idea behind SCoGen is to ground problem generation in realistic application scenarios. These scenarios are also mined from the same datasets and serve as the central organizing principle. They are used to construct a ‘scenario-centric graph’ that intelligently interconnects domain knowledge, domain skills, and coding skills. Imagine a map where a specific application (like a ‘Medical Imaging Diagnostic System’) is linked to all the relevant concepts (e.g., ‘PyTorch Deep Learning Framework’), practical abilities (e.g., ‘Transfer Learning’), and programming techniques (e.g., ‘Medical Image Preprocessing Pipeline’) required to build it.
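To make this concrete, here is a minimal sketch of what such a scenario-centric graph might look like in code, using the example nodes above. The dictionary layout and the `elements_for` helper are illustrative assumptions; the paper does not prescribe this exact representation.

```python
# Illustrative scenario-centric graph: one application scenario linked to
# the knowledge and skill nodes required to build it (node names taken
# from the article's example; the structure itself is an assumption).
scenario_graph = {
    "Medical Imaging Diagnostic System": {
        "domain_knowledge": ["PyTorch Deep Learning Framework"],
        "domain_skills": ["Transfer Learning"],
        "coding_skills": ["Medical Image Preprocessing Pipeline"],
    }
}

def elements_for(scenario):
    """Collect every knowledge/skill node linked to a scenario."""
    node = scenario_graph[scenario]
    return [element for group in node.values() for element in group]

print(elements_for("Medical Imaging Diagnostic System"))
# → ['PyTorch Deep Learning Framework', 'Transfer Learning',
#    'Medical Image Preprocessing Pipeline']
```

With the scenario as the hub, a problem generator can traverse outward to assemble the ingredients a realistic task would require.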
Based on this structured graph representation, SCoGen employs a sophisticated sampling strategy. This strategy allows for precise control over the complexity and diversity of the generated code problems, ensuring they reflect the multifaceted challenges encountered in real-world development. By sampling multiple interrelated combinations of knowledge and skills, the framework can create problems of varying difficulty while maintaining internal coherence.
The methodology begins with curating seed documents from Stack Overflow and Kaggle, followed by a rigorous preprocessing pipeline to ensure data quality. From these documents, SCoGen extracts four fundamental elements: application scenarios, domain knowledge, domain skills, and coding skills. These elements are then used to build the scenario-centric knowledge graph, where an application scenario acts as the central node, connecting to relevant knowledge and skill nodes. The relationships between these nodes are defined by their co-occurrence in the original documents, forming a rich network of interdependencies.
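The co-occurrence idea in this step can be sketched in a few lines: treat each preprocessed document as a set of extracted elements, and weight an edge between two elements by how many documents contain both. The toy documents and element names below are invented for illustration; the actual extraction in the paper operates over Stack Overflow and Kaggle content.

```python
from collections import Counter
from itertools import combinations

# Hypothetical preprocessed documents: each is the set of elements
# (scenario, domain knowledge, coding skill) extracted from one post.
docs = [
    {"scenario:web dashboard", "knowledge:Flask", "coding:REST endpoint"},
    {"scenario:web dashboard", "knowledge:Flask", "coding:template rendering"},
    {"scenario:data pipeline", "knowledge:pandas", "coding:CSV parsing"},
]

# Edge weight = number of documents in which two elements co-occur.
edges = Counter()
for doc in docs:
    for a, b in combinations(sorted(doc), 2):
        edges[(a, b)] += 1

print(edges[("knowledge:Flask", "scenario:web dashboard")])  # → 2
```

Sorting each document's elements before pairing keeps edge keys canonical, so `(a, b)` and `(b, a)` never count separately.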
A key aspect of SCoGen is its sampling strategy, which leverages transition probabilities within the knowledge graph to select interconnected elements. This allows for the generation of ‘features’—combinations of domain knowledge, domain skill, and coding skill—that collectively form a programming problem. The complexity of a problem is directly determined by the number of features incorporated. The framework also explores the use of a ‘temperature parameter’ during sampling, which influences the diversity of the generated problems by balancing the influence of high- and low-probability transitions.
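A temperature-scaled walk over such transition weights might look like the sketch below. The exact scaling formula used by SCoGen is not given in this summary, so the power-scaling here (`weight ** (1/T)`) is an assumption borrowed from standard temperature sampling: low temperature concentrates probability on high-weight edges, high temperature flattens the distribution.

```python
import random

def sample_next(neighbors, temperature=1.0, rng=random):
    """Pick the next graph node from edge weights under a temperature.

    `neighbors` maps node name -> transition weight. Power scaling by
    1/temperature is an illustrative choice, not the paper's formula.
    """
    names = list(neighbors)
    weights = [neighbors[n] ** (1.0 / temperature) for n in names]
    total = sum(weights)
    # Inverse-CDF sampling over the normalized weights.
    r, acc = rng.random(), 0.0
    for name, w in zip(names, weights):
        acc += w / total
        if r < acc:
            return name
    return names[-1]

# With T=0.5 the heavier edge dominates; with T=5.0 picks are near-uniform.
neighbors = {"Transfer Learning": 5.0, "Data Augmentation": 1.0}
print(sample_next(neighbors, temperature=0.5))
```

Chaining a few such steps from a scenario node yields the interconnected "features" that seed one generated problem.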
Experimental results have shown that SCoGen consistently achieves superior performance compared to state-of-the-art open-source LLMs, including both specialized coders and general-purpose models. When fine-tuned with SCoGen-generated data, models demonstrated significant improvements on real-world-level benchmarks like BigCodeBench Instruct and NaturalCodeBench. Even on basic algorithm-level problems, SCoGen-trained models remained competitive, indicating their broad applicability.
Ablation studies further highlighted the effectiveness of SCoGen’s components. The random sampling strategy, guided by the knowledge graph’s intrinsic properties, proved more effective than an LLM-based sampling approach, suggesting that relying on LLMs for selection might introduce bias and reduce diversity. The studies also explored the impact of problem complexity and the temperature parameter, finding that a careful balance is crucial for generating coherent and challenging problems.
In essence, SCoGen offers a powerful new way to create high-quality, realistic training data for code LLMs, pushing the boundaries of what these models can achieve in practical software engineering. While current work focuses predominantly on code generation tasks and single-repository challenges, future efforts aim to explore answer verification mechanisms and scale the framework to even larger models. For more in-depth details, you can refer to the full research paper: SCoGen: Scenario-Centric Graph-Based Synthesis of Real-World Code Problems.


