TLDR: A new research paper introduces a multi-LLM (Large Language Model) architecture to automate the complex task of dispatching Active Distribution Networks (ADNs). This system breaks down the problem into stages handled by specialized LLM agents: an Information Extractor, a Problem Formulator, and a Code Programmer. By translating natural language requests into executable code, the framework aims to simplify ADN management for non-expert operators, enhancing efficiency and reliability. The study validates the effectiveness of this approach, emphasizing the importance of external knowledge, relevant examples, and sufficient model parameters for LLM performance.
Active Distribution Networks (ADNs) are becoming increasingly complex with the integration of various distributed energy resources. Managing these networks effectively, known as ADN dispatch, is crucial for safety and economic efficiency. However, many new operators, such as virtual power plant managers and end prosumers, often lack the specialized knowledge in power system operation, modeling, and programming needed for this task. Relying on human experts for dispatch is both costly and time-consuming, creating a significant barrier to efficient grid management.
To overcome this challenge, a new approach proposes using large language models (LLMs) to automate the modeling and optimization of ADN dispatch problems. This method aims to provide a user-friendly interface that allows ADN operators to derive dispatch strategies simply by using natural language queries, effectively removing technical hurdles and boosting efficiency.
The core of this innovative approach is a multi-LLM coordination architecture, designed to mimic how human experts solve these problems. The process is broken down into sequential stages, each handled by a specialized LLM agent:
The Information Extractor
This agent is the first point of contact. It takes natural language dispatch requests from operators, which can be diverse and even colloquial, and distills them into structured, essential information. This includes details about the controlled district, the dispatch objective (e.g., minimizing operational costs or power loss), available equipment (like diesel generators, batteries, or solar panels), and any additional constraints. This structured output ensures consistency for the subsequent LLM agents.
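To make the idea of "structured, essential information" concrete, here is a minimal rule-based stand-in for the Information Extractor. In the actual system this step is performed by a prompted LLM, and the field names below (`district`, `objective`, `equipment`, `constraints`) are assumptions for this sketch, not the paper's exact schema:

```python
import re

def extract_dispatch_info(request: str) -> dict:
    """Distill a colloquial dispatch request into structured fields.

    A rule-based stand-in for the LLM agent described in the text;
    the real system prompts an LLM to produce this kind of structure.
    """
    text = request.lower()
    if "cost" in text:
        objective = "minimize_cost"
    elif "loss" in text:
        objective = "minimize_loss"
    else:
        objective = "unspecified"
    equipment = [kw for kw in ("diesel", "battery", "solar") if kw in text]
    district = re.search(r"district\s+(\w+)", request, re.IGNORECASE)
    return {
        "district": district.group(1) if district else None,
        "objective": objective,
        "equipment": equipment,
        "constraints": [],  # any extra operator-stated limits would go here
    }

info = extract_dispatch_info(
    "Please dispatch district A7 to minimize operating cost; "
    "we have a diesel generator and a battery."
)
print(info)
# {'district': 'A7', 'objective': 'minimize_cost',
#  'equipment': ['diesel', 'battery'], 'constraints': []}
```

The value of this stage is that downstream agents always see the same schema, no matter how informal the operator's original request was.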
The Problem Formulator
Once the information is extracted, this agent takes over. It uses the refined data and pre-defined modeling knowledge to formulate the dispatch request as a formal constrained optimization problem, expressed mathematically. This intermediate step is crucial because translating natural language directly into executable code is highly challenging for LLMs. The Problem Formulator also relaxes any non-convex objectives or constraints so that the resulting problem is tractable for standard solvers.
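As an illustration of what the Problem Formulator might produce (the symbols and constraints here are a generic example, not the paper's exact model), a cost-minimizing dispatch over horizon $T$ could be written as:

```latex
\min_{P_{g,t},\, P_{b,t}} \;
\sum_{t=1}^{T} \sum_{g \in \mathcal{G}} c_g \, P_{g,t} \, \Delta t
\quad \text{s.t.} \quad
\sum_{g \in \mathcal{G}} P_{g,t} + P_{b,t} + P_{\mathrm{pv},t} = D_t,
\qquad
P_g^{\min} \le P_{g,t} \le P_g^{\max},
```

where $c_g$ is each generator's marginal cost, $P_{b,t}$ the battery power, $P_{\mathrm{pv},t}$ the solar output, and $D_t$ the district demand. A non-convex element, such as the complementarity condition that a battery cannot charge and discharge simultaneously, is the kind of constraint the Problem Formulator would relax at this stage.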
The Code Programmer
The final agent in the chain, the Code Programmer, receives the structured requirements and the mathematical optimization problem. Its task is to translate this into executable code. This code is then fed into commercial solvers (like Gurobi or CPLEX) to obtain the final ADN dispatch strategies. This agent is enhanced with external knowledge, including explanations of case formats and a domain-specific modeling language called PyOptInterface, to ensure accurate code generation.
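To show the shape of what the Code Programmer emits, here is a hypothetical, dependency-free stand-in. The paper's agent generates PyOptInterface code solved by Gurobi or CPLEX; rather than reproduce that API, this sketch solves the same kind of problem with a simple merit-order rule, which is optimal only for linear costs with capacity limits and a single power-balance constraint:

```python
# Hypothetical stand-in for code the Code Programmer agent might emit.
# A real run would build a PyOptInterface model and call a commercial
# solver; merit-order dispatch keeps this sketch self-contained.

def dispatch(generators, demand):
    """generators: list of (name, cost_per_mwh, capacity_mw); demand in MW.

    Returns {name: output_mw} minimizing total cost under linear costs.
    """
    schedule = {name: 0.0 for name, _, _ in generators}
    remaining = demand
    # Cheapest units first -- optimal for a linear objective with only
    # capacity limits and one power-balance constraint.
    for name, cost, cap in sorted(generators, key=lambda g: g[1]):
        take = min(cap, remaining)
        schedule[name] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 1e-9:
        raise ValueError("infeasible: demand exceeds total capacity")
    return schedule

plan = dispatch(
    [("solar", 0.0, 3.0), ("battery", 20.0, 2.0), ("diesel", 90.0, 5.0)],
    demand=6.0,
)
print(plan)  # {'solar': 3.0, 'battery': 2.0, 'diesel': 1.0}
```

In the actual pipeline, the generated code also encodes the network constraints from the Problem Formulator's model, which is why a real optimization solver is needed instead of a greedy rule like this one.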
To further improve accuracy and reliability, tailored refinement techniques are developed for each LLM agent. These include detailed prompt methods, multi-round dialogues for incremental problem building, and a novel retrieval-augmented generation (RAG) assisted few-shot learning method. The RAG method dynamically retrieves and embeds examples that are semantically similar to the current problem, ensuring the LLM has relevant context for code generation, even for new scenarios.
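The retrieval step of the RAG-assisted few-shot method can be sketched as follows. This toy version uses bag-of-words cosine similarity in place of the semantic embeddings a real deployment would use, and the example library is invented for illustration:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_examples(query: str, library: list[str], k: int = 2) -> list[str]:
    """Pick the k stored examples most similar to the current problem,
    to be embedded as few-shot context in the code-generation prompt."""
    qv = Counter(query.lower().split())
    ranked = sorted(
        library,
        key=lambda ex: cosine(qv, Counter(ex.lower().split())),
        reverse=True,
    )
    return ranked[:k]

library = [
    "minimize cost with diesel generator and battery",
    "minimize power loss with solar panels",
    "schedule battery charging to minimize cost",
]
shots = retrieve_examples("minimize cost using a battery", library, k=2)
print(shots)
```

Because retrieval happens at query time, the few-shot context adapts to each new request, which is what lets the method generalize to scenarios absent from any fixed prompt.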
Comprehensive comparisons and end-to-end demonstrations across various test cases validate the effectiveness of this proposed architecture and methods. The results show that this multi-LLM framework significantly improves performance in both problem formulation and code programming, achieving high success rates in generating executable code. Key findings also highlight the critical importance of providing complete external knowledge, appropriate examples for few-shot learning, and sufficient model parameters for the LLMs to perform effectively in complex dispatch tasks.
This work represents a significant step forward in applying LLMs to power systems, particularly for decision-making tasks like ADN dispatch. By creating a seamless “natural language to executable code” pipeline, it greatly reduces the technical burden on ADN operators, enabling more intelligent and flexible grid management. For more in-depth technical details, you can refer to the full research paper available at arXiv.org.


