Enhancing AI Agents with Self-Reflection: Learning from Experience to Refine Software Engineering Tasks

TLDR: SAGE (Self-Abstraction from Grounded Experience) is a framework that allows large language model (LLM) agents to learn and improve from their own task executions. It works in three stages: an initial “exploration” attempt, followed by “plan abstraction” where a high-level plan is distilled from the agent’s experience, and finally “plan-augmented execution” where the agent re-attempts the task using this refined plan as guidance. This process consistently improves performance on software engineering benchmarks, demonstrating the value of self-improvement cycles for AI agents.

Large language models (LLMs) are becoming increasingly capable of complex software engineering tasks, such as fixing bugs and generating code. However, a common limitation of many existing LLM agents is their inability to learn and improve from their own past experiences. They often approach each new task as if it were the first, without a structured way to reflect on previous attempts and refine their strategies.

This is where a new framework called Self-Abstraction from Grounded Experience, or SAGE, comes into play. SAGE is designed to enable AI agents to learn from their own task executions and improve their behavior through a process of self-abstraction. Imagine a human trying to solve a complex problem; they might try a few approaches, reflect on what worked and what didn’t, and then come up with a better plan for their next attempt. SAGE aims to give LLM agents a similar capability.

The SAGE framework operates in three distinct stages:

1. Exploration

In this initial stage, the LLM agent attempts to complete a given task, such as fixing a software bug. It interacts with the environment, executes commands, and observes the outcomes. This process generates a detailed, but often unstructured, record of its actions and observations, known as a trajectory.
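
The sketch below illustrates what this recording might look like in practice. It is a minimal, hypothetical rendering: the `Agent` and `Env` interfaces stand in for whatever scaffold (such as Mini-SWE-Agent or OpenHands) actually drives the task, and none of these names come from the paper itself.

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Step:
    action: str       # a command, code edit, or tool call the agent issued
    observation: str  # what came back: command output, test results, tracebacks

@dataclass
class Trajectory:
    task: str
    steps: list[Step] = field(default_factory=list)

class Agent(Protocol):
    def next_action(self, task: str, history: list[Step]) -> str: ...

class Env(Protocol):
    def execute(self, action: str) -> str: ...
    def done(self) -> bool: ...
    def reset(self) -> None: ...

def explore(task: str, agent: Agent, env: Env, max_steps: int = 50) -> Trajectory:
    """First attempt at the task, with every action/observation pair recorded."""
    traj = Trajectory(task=task)
    for _ in range(max_steps):
        action = agent.next_action(task, traj.steps)
        observation = env.execute(action)
        traj.steps.append(Step(action, observation))
        if env.done():  # e.g. a candidate patch has been submitted
            break
    return traj
```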

2. Plan Abstraction

After the exploration phase, a “plan abstraction agent” takes this raw trajectory and distills it into a concise, high-level plan. This plan summarizes the key steps taken, identifies dependencies, and notes any constraints or limitations encountered. It’s like reviewing a detailed log of events and extracting the most important lessons learned. This abstraction is crucial because raw trajectories can be very long and noisy, making it difficult for an LLM to directly process and learn from them effectively.
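
Continuing the sketch above, plan abstraction might look like the following, assuming a generic `llm_complete(prompt) -> str` completion call. The prompt is an invented illustration of the kind of instruction such an agent might receive, not the one used in the paper.

```python
from typing import Callable

# Illustrative instruction for the plan-abstraction agent (not the paper's prompt).
ABSTRACTION_PROMPT = """\
You are reviewing an agent's first attempt at the task below.
Distill the raw trajectory into a concise, high-level plan:
1. List the key steps in order.
2. Note dependencies between steps.
3. Note constraints, pitfalls, or edge cases that were encountered.

Task: {task}

Trajectory:
{log}
"""

def abstract_plan(traj: Trajectory, llm_complete: Callable[[str], str]) -> str:
    """Compress a long, noisy trajectory into short, reusable guidance."""
    log = "\n".join(
        f"ACTION: {s.action}\nOBSERVATION: {s.observation}" for s in traj.steps
    )
    return llm_complete(ABSTRACTION_PROMPT.format(task=traj.task, log=log))
```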

3. Plan-Augmented Execution

Finally, the agent attempts the same task again, but this time it has access to the newly induced high-level plan as contextual guidance. This guidance refines the agent's policy, helping it act more strategically, avoid past mistakes, and address edge cases it missed in its initial attempt. The plan essentially provides a roadmap for a more informed and structured execution.
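
Putting the three stages together, the whole loop reduces to a few lines. In this sketch, `make_agent(plan)` is a hypothetical factory that builds an agent whose context includes the plan when one is supplied, and `env.reset()` is assumed to restore the repository to its pre-attempt state; neither detail is prescribed by the paper.

```python
from typing import Callable, Optional

def sage(
    task: str,
    env: Env,
    make_agent: Callable[[Optional[str]], Agent],
    llm_complete: Callable[[str], str],
) -> Trajectory:
    # Stage 1: exploration -- an unguided first attempt, fully recorded.
    first_attempt = explore(task, make_agent(None), env)

    # Stage 2: plan abstraction -- distill the attempt into high-level guidance.
    plan = abstract_plan(first_attempt, llm_complete)

    # Stage 3: plan-augmented execution -- retry with the plan in context.
    env.reset()  # assumed to undo the first attempt's changes
    return explore(task, make_agent(plan), env)
```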

The researchers empirically validated SAGE on the SWE-Bench Verified benchmark, which consists of 500 human-verified GitHub issues related to program repair. The results showed consistent performance gains across various LLM backbones (like GPT, Claude, and Gemini) and different agent architectures. For instance, SAGE achieved a notable 7.2% relative performance improvement over a strong baseline when paired with the GPT-5 (high) model. Overall, SAGE reached impressive Pass@1 resolve rates of 73.2% and 74% with the Mini-SWE-Agent and OpenHands CodeAct agent frameworks, respectively.

An interesting finding was that performance could be further improved by using different LLMs for the “plan abstraction” stage compared to the “exploration” and “plan-augmented execution” stages. This suggests that having a fresh perspective for reflection can be beneficial, potentially mitigating self-bias where an LLM might favor its own initial generations.
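
In the sketches above, that finding corresponds to a one-line change: hand the abstraction stage a completion function backed by a different backbone. The wiring below is purely illustrative, with placeholder model names and a stubbed `chat` helper rather than any real provider API.

```python
def chat(model: str, prompt: str) -> str:
    # Stand-in for a real provider call; returns a canned reply here.
    return f"[{model} reply to: {prompt[:40]}...]"

def actor_complete(prompt: str) -> str:      # drives exploration and execution
    return chat("actor-model", prompt)

def reflector_complete(prompt: str) -> str:  # reviews the trajectory with fresh eyes
    return chat("reflector-model", prompt)

# Toy demonstration: the reflector, not the actor, performs the abstraction.
demo = Trajectory(task="fix failing test", steps=[Step("ls", "src tests")])
plan = abstract_plan(demo, llm_complete=reflector_complete)
```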

SAGE represents a significant step forward in enabling LLM agents to learn and improve from their own experiences at test time. By transforming raw experience into actionable, high-level guidance, it allows agents to refine their policies and solve problems more effectively, much like humans do through iterative reflection. You can read the full research paper here: Self-Abstraction from Grounded Experience for Plan-Guided Policy Refinement.

Karthik Mehta (https://blogs.edgentiq.com)
Karthik Mehta is a data journalist known for his data-rich, insightful coverage of AI news and developments. Armed with a degree in Data Science from IIT Bombay and years of newsroom experience, Karthik merges storytelling with metrics to surface deeper narratives in AI-related events. His writing cuts through hype, revealing the real-world impact of Generative AI on industries, policy, and society. You can reach him at: [email protected]
