TLDR: A new tool, ATLantis, automates the generation of plans for Belief-Desire-Intention (BDI) agents using Alternating-Time Temporal Logic (ATL). This innovation tackles the challenges of manual, error-prone plan creation, especially in multi-agent systems. ATLantis pre-generates plans that effectively manage agent cooperation and competition, as demonstrated in a game where agents collaboratively find treasure despite environmental uncertainties. The approach enhances agent rationality and efficiency by providing dynamic, context-aware plans, with future work focusing on overcoming current tool limitations.
In the evolving landscape of Artificial Intelligence, intelligent agents continue to play a crucial role, offering a traditional yet powerful approach to AI. A popular framework for modeling these agents is known as Belief-Desire-Intention (BDI), where agents operate based on their knowledge (beliefs), goals (desires), and chosen actions (intentions). A core component of BDI agents is their ‘plans’ – sequences of actions designed to achieve specific goals.
Historically, generating these plans has been a labor-intensive and error-prone process, particularly challenging in complex multi-agent environments where agents might cooperate or compete. This manual effort often limits the scalability and reliability of BDI systems, making it difficult to debug and ensure optimal agent behavior.
To address these challenges, Dylan Léveillé from Carleton University has developed a novel tool called ATLantis. This tool automates the generation of BDI plans by leveraging Alternating-Time Temporal Logic (ATL). ATL is a powerful logical framework designed for strategic reasoning in multi-agent systems, allowing the generated plans to inherently account for potential competition or cooperation among agents.
How ATLantis Works
ATLantis takes an Interpreted Systems Programming Language (ISPL) file as input, which is then converted into a Concurrent Game Model with Incomplete Information (CGMII). This model represents the agents’ perception of their world, including their certainties and uncertainties about environmental variables. The agents’ desires are expressed as ATL formulas, and ATLantis then derives strategies from these formulas, which become the agents’ intentions.
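For instance, a desire like "BA and RA can cooperate so that the treasure is eventually mined" would be written in ATL as a coalition formula, roughly ⟨⟨BA, RA⟩⟩ F mined. A minimal Python sketch of representing such formulas (the class names and the `treasure_mined` proposition are illustrative, not part of ATLantis):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Eventually:
    """The ATL temporal operator F applied to an atomic proposition."""
    goal: str  # e.g. "treasure_mined"

@dataclass(frozen=True)
class Coalition:
    """An ATL coalition formula <<agents>> objective."""
    agents: tuple          # agents acting together, e.g. ("BA", "RA")
    objective: Eventually  # temporal objective the coalition enforces

    def __str__(self):
        return f"<<{','.join(self.agents)}>> F {self.objective.goal}"

# A desire of the two Goldseeker agents, as a coalition formula.
desire = Coalition(("BA", "RA"), Eventually("treasure_mined"))
print(desire)  # <<BA,RA>> F treasure_mined
```

A model checker evaluating this formula asks whether the named coalition has a joint strategy that guarantees the objective, whatever the other agents and the environment do.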
A key innovation of ATLantis is its ability to pre-generate all possible strategies. This approach significantly reduces the computational overhead that would otherwise occur if plans were generated dynamically at runtime. By evaluating all possible agent desires across various combinations of environmental uncertainties, ATLantis ensures that agents have a comprehensive set of plans ready for execution.
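The pre-generation step can be pictured as enumerating every combination of known/unknown variables offline and producing one plan per combination. A toy Python sketch under that assumption (the variables and plan labels are invented for illustration):

```python
from itertools import product

# Hypothetical uncertainty flags: for each variable, the agent either
# knows its value (True) or does not (False).
variables = ["row", "col"]

# Pre-generate one plan label per knowledge combination, mirroring how
# ATLantis evaluates desires under every uncertainty pattern before runtime.
plan_library = {}
for known in product([True, False], repeat=len(variables)):
    context = dict(zip(variables, known))
    label = "plan_" + "_".join(
        ("know_" if context[v] else "unknown_") + v for v in variables
    )
    plan_library[known] = label

print(len(plan_library))  # 4 combinations for 2 boolean flags
```

At runtime no strategy synthesis is needed; the agent simply looks up the plan matching its current knowledge pattern.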
The tool uses MCMAS (Model Checker for Multi-Agent Systems) to verify the ATL formulas against the CGMII and output the corresponding strategies. These strategies are then translated into plans in AgentSpeak, a concrete BDI-based programming language. Each generated plan includes pre-conditions over the variable values the agent knows and does not know, ensuring that exactly one plan is applicable at any given time.
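The resulting selection behaviour can be sketched in Python: each plan's pre-condition records which variables are known, and because the pre-conditions are mutually exclusive, exactly one plan matches any belief state. All names below are hypothetical, not actual ATLantis output:

```python
# Each generated plan carries a pre-condition over which variables the
# agent currently knows; the conditions partition the belief states.
plans = [
    {"name": "head_to_column", "requires": {"col": True,  "row": False}},
    {"name": "head_to_row",    "requires": {"col": False, "row": True}},
    {"name": "go_direct",      "requires": {"col": True,  "row": True}},
    {"name": "explore",        "requires": {"col": False, "row": False}},
]

def select_plan(beliefs):
    """Return the single plan whose pre-condition matches the beliefs."""
    matches = [p for p in plans if p["requires"] == beliefs]
    assert len(matches) == 1, "pre-conditions must be mutually exclusive"
    return matches[0]["name"]

print(select_plan({"col": True, "row": False}))  # head_to_column
```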
The Goldseeker Game: A Practical Demonstration
The effectiveness of ATLantis is demonstrated through an illustrative game called ‘Goldseeker’. In this game, two agents, BA and RA, must cooperate to mine a treasure located at a specific coordinate. The agents start at randomized positions and are uncertain of their own and each other’s initial locations. However, they have visibility into their current row and column, which helps them infer positions and reduce uncertainty over time.
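This gradual reduction of uncertainty can be pictured as set intersection over candidate positions: each percept rules out the positions inconsistent with it. A toy Python sketch on a 4×4 grid (the particular observations are invented for illustration, not taken from Goldseeker):

```python
# All positions the agent might occupy on a 4x4 grid.
grid = {(r, c) for r in range(4) for c in range(4)}

# Each observation is the set of positions consistent with what was seen.
consistent_with_row_view = {p for p in grid if p[0] == 1}
consistent_with_col_view = {p for p in grid if p[1] == 3}

# Intersecting observations shrinks the belief set.
beliefs = grid & consistent_with_row_view & consistent_with_col_view
print(beliefs)  # {(1, 3)} -- uncertainty fully resolved
```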
As the agents perceive their environment and update their beliefs, ATLantis-generated plans guide their actions. For instance, if an agent initially knows its column but is uncertain about its row, a specific plan is selected. As it moves and gathers more information (e.g., encountering obstacles), its beliefs change, leading to the selection of a new, more appropriate plan. This dynamic plan selection allows agents to adapt their strategies based on evolving knowledge, ultimately leading them to successfully achieve their shared goal of mining the treasure.
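The switch between pre-generated plans as beliefs evolve can be sketched as a lookup keyed on the agent's current knowledge (plan names and belief flags below are hypothetical):

```python
# Pre-generated plans keyed by (knows_col, knows_row); names are invented.
plan_for = {
    (True, False): "move_along_known_column",
    (True, True):  "move_straight_to_treasure",
}

# The agent starts knowing only its column.
beliefs = {"col": True, "row": False}
history = [plan_for[(beliefs["col"], beliefs["row"])]]

# A new percept (e.g. encountering an obstacle) lets it infer its row,
# so a different pre-generated plan becomes the one applicable.
beliefs["row"] = True
history.append(plan_for[(beliefs["col"], beliefs["row"])])

print(history)
# ['move_along_known_column', 'move_straight_to_treasure']
```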
Benefits and Future Directions
The ATLantis tool offers several significant advantages: it automates the tedious process of plan generation, inherently supports multi-agent cooperation and competition, and avoids runtime performance delays by pre-generating plans. This contributes to more rational and efficient BDI agent behavior.
However, the research also acknowledges certain limitations. The generated plans are highly dependent on the accuracy of the input ISPL file and are currently designed for deterministic, epistemic environments. Furthermore, the underlying MCMAS tool limits ATLantis to specifying uncertainty for only a single agent, assuming all other agents have complete certainty. Future work aims to address these limitations, potentially by developing a custom ATL model checker to overcome the constraints of existing tools, and to empirically evaluate ATLantis’s performance on larger, more complex systems.
This work represents a significant step towards making BDI agents more autonomous and adaptable in multi-agent environments, offering a robust method for automated plan generation. You can read the full research paper here.