TL;DR: AGENTIQL is a new multi-expert framework for Text-to-SQL generation that uses specialized agents to decompose complex natural language queries into sub-questions, generate sub-queries, and refine column selections. It also features an adaptive router to balance efficiency and accuracy. The framework improves execution accuracy and interpretability on the Spider benchmark, achieving near state-of-the-art performance with smaller LLMs by making reasoning steps explicit.
The ability to translate natural language into SQL queries, known as Text-to-SQL, is a powerful tool that makes data accessible to a wider audience, from beginners to experts. While large language models (LLMs) have significantly advanced this field, they often struggle with the complexities of diverse database schemas and intricate reasoning tasks. Traditional LLM architectures can also be computationally expensive and lack transparency, making it hard to understand how a particular SQL query was generated.
Addressing these challenges, researchers Omid Reza Heidari, Siobhan Reid, and Yassine Yaakoubi from Concordia University have introduced AGENTIQL, an innovative agent-inspired, multi-expert framework for Text-to-SQL generation. Instead of relying on a single, large LLM, AGENTIQL breaks down the complex task of query generation into smaller, manageable parts handled by specialized “expert” components. This modular design aims to improve interpretability, scalability, and overall accuracy.
How AGENTIQL Works: A Multi-Expert Approach
AGENTIQL operates through a sophisticated pipeline that includes several key stages:
Divide-and-Merge Module: This is at the core of AGENTIQL. A “reasoning agent” first takes a natural language question and decomposes it into a series of simpler sub-questions. Then, a “coding agent,” specialized in generating SQL, creates corresponding sub-queries for each sub-question. Finally, these sub-queries are merged into a single, comprehensive SQL query. This process makes the intermediate reasoning steps explicit, enhancing transparency. AGENTIQL explores two merging strategies: “Selecting the Last Sub-query” (simpler but less robust) and “Planner&Executor” (more general but with higher overhead, using a reasoning LLM to plan the merge and a coding LLM to execute it).
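The divide-and-merge flow described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `reasoning_llm` and `coding_llm` are hypothetical stand-ins for the two agents, and the merge shown is the simpler "Selecting the Last Sub-query" strategy.

```python
def divide_and_merge(question, schema, reasoning_llm, coding_llm):
    """Sketch of AGENTIQL's divide-and-merge module (assumed interfaces).

    reasoning_llm: callable that decomposes a question into sub-questions.
    coding_llm:    callable that writes one SQL sub-query per sub-question.
    """
    # 1. The reasoning agent decomposes the question into simpler sub-questions.
    sub_questions = reasoning_llm(
        f"Decompose into sub-questions:\n{question}\nSchema: {schema}"
    )
    # 2. The coding agent generates a SQL sub-query for each sub-question.
    sub_queries = [
        coding_llm(f"Write SQL for: {sq}\nSchema: {schema}")
        for sq in sub_questions
    ]
    # 3. "Selecting the Last Sub-query" merge: the final sub-question is
    #    phrased to answer the full question, so its query is returned.
    #    (The Planner&Executor strategy would instead prompt a reasoning
    #    LLM to plan how the sub-queries combine.)
    return sub_queries[-1]
```

With the Planner&Executor strategy, step 3 would be replaced by two more model calls (a plan, then its SQL realization), trading overhead for generality.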
Column Selection (CS) Refinement: After the initial merge, an intermediate SQL query is produced. A “reasoning LLM” then performs a crucial refinement step, adjusting the SELECT clause to ensure that the output columns and their order precisely match the user’s original intent. This step significantly boosts execution accuracy.
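The mechanical part of this refinement, rewriting the SELECT list of the intermediate query so its output columns match the intent, can be sketched as below. In AGENTIQL the target columns come from a reasoning LLM; here they are passed in as a plain list, which is an assumption made to keep the example self-contained.

```python
import re

def refine_select(intermediate_sql, target_columns):
    """Replace the SELECT list of `intermediate_sql` with `target_columns`.

    `target_columns` (ordered) stands in for the reasoning LLM's judgment
    of which columns, in which order, the user actually asked for.
    """
    select_clause = ", ".join(target_columns)
    # Swap out everything between the first SELECT and its matching FROM.
    return re.sub(
        r"(?is)^select\s+.*?\s+from\b",
        f"SELECT {select_clause} FROM",
        intermediate_sql,
        count=1,
    )
```

A real implementation would need a proper SQL parser to handle nested queries safely; the regex here only illustrates the idea of aligning the projected columns with the question.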
Adaptive Routing Mechanism: To balance efficiency and accuracy, AGENTIQL incorporates an adaptive router. This intelligent component decides whether to send a query through the full, modular AGENTIQL pipeline or to a simpler, one-step baseline parser. This allows the system to allocate resources effectively based on the complexity of the query and the database schema.
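A toy version of such a router is sketched below. The complexity signals and threshold are illustrative assumptions, not the paper's routing criteria; the point is only that cheap features of the question and schema can decide which path a query takes.

```python
def route(question, schema_tables, threshold=2):
    """Decide between the full modular pipeline and the one-step parser.

    The signals below (question length, schema size, aggregation cues)
    are hypothetical stand-ins for whatever features a real router uses.
    """
    signals = sum([
        len(question.split()) > 15,          # long, multi-clause question
        len(schema_tables) > 4,              # wide schema to reason over
        any(k in question.lower()           # cues for grouping/aggregation
            for k in ("each", "per", "most", "average")),
    ])
    return "full_pipeline" if signals >= threshold else "baseline_parser"
```

Simple queries are answered by the cheaper one-step parser, while harder ones pay for the full decompose-generate-merge-refine pipeline.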
Performance and Interpretability
Evaluated on the widely recognized Spider benchmark, AGENTIQL demonstrates significant improvements in execution accuracy and interpretability. The framework achieved up to 86.07% execution accuracy with 14B models when using the Planner&Executor merging strategy combined with Column Selection. This performance narrows the gap to state-of-the-art systems like GPT-4-based solutions (89.65% EX) while utilizing much smaller, open-source LLMs.
The research highlights that the Column Selection step consistently improves performance, often by 2-5% in execution accuracy. The Planner&Executor strategy particularly benefits from CS, showing that precise control over column choices leads to better alignment with user intent. Furthermore, AGENTIQL enhances transparency by exposing the intermediate reasoning steps, making it easier to understand how the final SQL query was derived.
Looking Ahead
While AGENTIQL shows promising results, the authors acknowledge areas for future work. This includes testing the framework on additional benchmarks beyond Spider, exploring larger open-source and closed-source LLMs, and investigating more advanced routing and merging strategies. The paper provides a robust, scalable, and interpretable approach to semantic parsing, pushing the boundaries of Text-to-SQL generation. You can read the full research paper here.