TLDR: Researchers introduce an LLM-guided adaptive compensator that improves existing robot controllers by using LLMs to design adjustments based on observed system behavior and a desired reference. This method, validated on soft and humanoid robots, outperforms traditional adaptive control, simplifies design, and offers strong generalizability and robustness without requiring LLMs to design complex controllers from scratch.
Large Language Models (LLMs) are rapidly changing how we approach various fields, including robotics. While much of the focus has been on high-level tasks like breaking down complex instructions, their application in the nitty-gritty of automatic control systems has been limited. Previous attempts often dealt with overly simplified systems, only allowed for minor adjustments to existing controllers, and lacked real-world testing.
A new research paper introduces an innovative approach to bring adaptability to automatic control systems using LLMs. Instead of making LLMs design complex controllers from scratch, the researchers propose an “LLMs-guided adaptive compensator.” This framework is inspired by Model Reference Adaptive Control (MRAC), a classical technique in which an unknown system is driven to behave like a well-understood reference model.
The core idea is straightforward: the LLM is given information about the differences between an unknown system’s behavior and a desired reference system’s behavior. Based on these discrepancies, the LLM designs a “compensator.” This compensator then works alongside an existing controller to adjust the unknown system’s response, making it align closely with the reference. This method avoids the need for LLMs to derive complex mathematical control laws, transforming the challenge into a more intuitive reasoning task.
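To make that loop concrete, here is a minimal sketch of the structure being described: a reference model, an existing base controller that is left untouched, and a compensator whose output is simply added to the control signal. Everything here (the toy plant, the gains, and the compensator's particular form) is an invented illustration; in the paper, the compensator itself is what the LLM proposes after examining the discrepancy between the system's response and the reference.

```python
# Illustrative sketch only: plant, gains, and the compensator's form
# are assumptions, not the paper's implementation. In the paper, the
# LLM designs the compensator after inspecting the mismatch between
# the unknown system's behavior and the reference model.

def reference_model(y_ref, r, dt, tau=0.5):
    """First-order reference model: the behavior we want to match."""
    return y_ref + dt * (r - y_ref) / tau

def base_controller(r, y, kp=2.0):
    """Existing feedback controller, never modified (here a P law)."""
    return kp * (r - y)

def compensator(e, e_int, kc=1.5, ki=2.0):
    """Stand-in for the LLM-designed compensator: maps the tracking
    discrepancy e = y_ref - y (and its integral) to a correction
    added on top of the existing controller's output."""
    return kc * e + ki * e_int

def plant(y, u, dt):
    """Toy unknown plant with an unmodeled bias (+0.3) that the
    base P controller alone cannot fully reject."""
    return y + dt * (-1.2 * y + 0.8 * u + 0.3)

dt, r = 0.01, 1.0
y = y_ref = e_int = 0.0
for _ in range(2000):                  # simulate 20 seconds
    y_ref = reference_model(y_ref, r, dt)
    e = y_ref - y                      # the discrepancy shown to the LLM
    e_int += e * dt
    u = base_controller(r, y) + compensator(e, e_int)
    y = plant(y, u, dt)
print(round(abs(y_ref - y), 3))        # residual tracking error
```

Note the non-intrusive structure: the base controller is never rewritten; the compensator only contributes an additive correction, which is what makes this kind of scheme easy to retrofit onto an already-deployed control loop.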
Experiments rigorously evaluated this new LLMs-guided adaptive compensator against several traditional adaptive control methods, including indirect adaptive control, learning-based adaptive control, and MRAC. They also compared it to an “LLMs-guided adaptive controller,” where the LLM attempts to design the entire controller from scratch. The tests were conducted on both simulated and real-world robots, specifically a soft robotic arm driven by McKibben Pneumatic Artificial Muscles (PAMs) and a humanoid robot.
The results were compelling. The LLMs-guided adaptive compensator consistently outperformed traditional adaptive controllers. It also significantly reduced the complexity of the reasoning the LLM had to perform compared to designing a full controller. The research highlights that this approach leads to a more structured design process, effectively turning complex mathematical derivations into a logical reasoning problem for the LLM. Furthermore, the compensator demonstrated strong generalizability across different systems and robustness to disturbances.
A key aspect of the research involved a theoretical analysis using Lyapunov stability, a common method to prove system stability. This analysis showed that the compensator can ensure stability within a certain operating region, even for systems with varying characteristics, as long as their error dynamics follow a similar structure. This theoretical backing was supported by practical experiments where the compensator successfully handled various nonlinear and time-varying systems, including those with external disturbances.
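For readers who want the flavor of that argument, here is a generic MRAC-style Lyapunov sketch using conventional notation; it outlines this kind of analysis, not the paper's exact derivation or conditions:

```latex
% Error dynamics between the unknown system and the reference model,
% with A_m Hurwitz (from the reference model), u_c the compensator
% term, and \delta(x, t) collecting unknown dynamics and disturbances:
\dot{e} = A_m e + B\,\bigl(u_c - \delta(x, t)\bigr)

% Quadratic Lyapunov candidate, with P \succ 0 solving the Lyapunov
% equation for some chosen Q \succ 0:
A_m^{\top} P + P A_m = -Q, \qquad V(e) = e^{\top} P e

% Its derivative along the error dynamics:
\dot{V} = -e^{\top} Q e + 2\, e^{\top} P B\,\bigl(u_c - \delta\bigr)

% If the compensator keeps the residual u_c - \delta small enough on
% the operating region, then \dot{V} < 0 there, giving stability
% within that region.
```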
In the robot control tests, the compensator proved its mettle. On the PAMs-driven robot arm, which exhibits complex nonlinearities and unknown loads, the LLMs-guided compensator achieved stable and consistent performance, guiding the system smoothly and rapidly to the reference trajectory with minimal overshoot or delay. This was a marked improvement over conventional PID controllers, direct model-based adaptive controllers (which showed oscillations), MRAC (which is complex to design), and learning-based controllers (which require extensive data and can be unstable during training).
The compensator also tackled a “sim-to-real” challenge on a humanoid robot. Here, the goal was to make the simulated robot’s movements match the real robot’s. Even though the LLM could only indirectly influence the control signal by adjusting the target joint angle, the compensator effectively aligned the simulation with the real-world behavior, showcasing its adaptability even under constrained control interfaces.
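A rough sketch of that constrained interface may help: the low-level PD loop is fixed, and the only available lever is shifting the target joint angle fed into it. All dynamics, gains, and names below are invented for illustration; only the structure mirrors the setup described above.

```python
# Hypothetical sim-to-real sketch: a "real" joint (heavily damped) and
# a simulated joint (lightly damped) both track a step target through
# the same fixed PD loop. The compensator may only shift the sim's
# target angle by the observed sim-vs-real gap. All numbers invented.

def pd_torque(q_target, q, dq, kp=40.0, kd=4.0):
    """Fixed low-level PD controller: not modifiable."""
    return kp * (q_target - q) - kd * dq

def step(q, dq, q_target, damping, dt=0.002, inertia=1.0):
    """One Euler step of a single joint with viscous damping."""
    ddq = (pd_torque(q_target, q, dq) - damping * dq) / inertia
    return q + dt * dq, dq + dt * ddq

def run(gain, steps=2500, target=1.0):
    """Returns the mean sim-vs-real gap over the trajectory.
    gain=0 leaves the sim's target untouched; gain>0 nudges it
    toward the real robot's observed angle."""
    qr = dqr = qs = dqs = 0.0
    total = 0.0
    for _ in range(steps):
        qr, dqr = step(qr, dqr, target, damping=6.0)   # "real" robot
        adj = target + gain * (qr - qs)                # shifted target
        qs, dqs = step(qs, dqs, adj, damping=1.0)      # simulator
        total += abs(qr - qs)
    return total / steps

print(run(0.0), run(0.8))  # mean gap without vs. with target adjustment
```

Even though the torque command is never touched, adjusting the target angle is enough to pull the simulated trajectory toward the real one, which is the same indirect mechanism the humanoid experiment relies on.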
An interesting finding from the study was the difference in reasoning paths taken by the LLMs. When designing the compensator, the LLMs focused on analyzing system-level features like step responses, identifying performance issues (e.g., overshoot, slow response), inferring underlying physical mechanisms, and then proposing compensation strategies. This process was observable and structured. In contrast, when attempting to design a full adaptive controller, the LLMs adopted a more theory-driven, model-based approach, defining error functions and deriving adaptation laws, which proved to be more complex.
This study marks a significant step forward in applying LLMs to automatic control. The LLMs-guided adaptive compensator offers a lightweight, safe, and non-intrusive way to enhance existing feedback controllers, making it particularly valuable for deployed robotic platforms where modifying core control systems might be difficult or costly. For more details, you can read the full paper here.