TL;DR: The ADEPTS framework, introduced by researchers at FAIR at Meta, defines a unified set of six human-centered capabilities for AI agents: Actuation, Disambiguation, Evaluation, Personalization, Transparency, and Safety. It aims to give clear guidance for developing AI agents that are understandable, controllable, and trustworthy in everyday use, moving beyond today's fragmented guidelines. ADEPTS defines progressive tiers for each capability, providing a standardized way to assess and improve agent performance from a user's perspective and fostering a shared language for designers, engineers, and policymakers.
Artificial intelligence agents, powered by large language models, are becoming increasingly integrated into our daily lives, offering powerful and flexible assistance. However, the rapid growth and adoption of these agents highlight a critical need for a unified approach to their development, monitoring, and discussion, especially concerning their human-centered capabilities. Currently, guidance for designing AI agents with human users in mind is fragmented across various fields, from user experience (UX) heuristics to engineering taxonomies and ethical checklists. This scattering of information means there hasn’t been a clear, user-focused vocabulary to define what an AI agent should fundamentally be able to do for people.
To address this gap, researchers from FAIR at Meta have introduced ADEPTS, a new capability framework designed to provide cohesive guidance for developing AI agents. ADEPTS is built upon six core principles for human-centered agent design, outlining the essential user-facing capabilities an AI agent should demonstrate to be understandable, controllable, and trustworthy in everyday use. This framework complements existing guidelines by focusing specifically on the interface between technical development and user experience.
Understanding the ADEPTS Framework
The name ADEPTS is an acronym derived from its six key capabilities:
Actuation: This principle emphasizes that an agent should be able to autonomously execute tasks on a user’s behalf. This involves understanding the user’s intent and translating it into actions while respecting permissions and constraints. The framework categorizes actuation capabilities by prompt complexity (how flexible the agent is in receiving instructions, from simple ‘knobs’ to ‘omni-modal’ inputs combining various modalities) and task complexity (measured by the time a human would take to complete the task, from less than a minute to more than a week).
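The article mentions that actuation tasks are graded by the time a human would need, from under a minute up to more than a week. As a rough illustration (not from the paper; the intermediate bucket names and thresholds below are our own assumptions, only the endpoints come from the article), such a grading could be sketched as:

```python
# Hypothetical helper: bucket task complexity by the time a human would
# need to complete the task. The article only states the endpoints
# ("less than a minute" to "more than a week"); the intermediate
# buckets and all labels here are illustrative assumptions.

def task_complexity(human_minutes: float) -> str:
    """Classify a task by estimated human completion time in minutes."""
    if human_minutes < 1:
        return "trivial (< 1 minute)"
    if human_minutes < 60:
        return "short (minutes)"
    if human_minutes < 60 * 24:
        return "long (hours)"
    if human_minutes < 60 * 24 * 7:
        return "multi-day"
    return "extended (> 1 week)"
```

For example, a task a human could finish in half an hour would land in the "short" bucket, while one needing ten days would be "extended".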
Disambiguation: ADEPTS highlights the agent’s ability to actively clarify and confirm a user’s goal, context, and constraints when there’s uncertainty that could affect the outcome. This is crucial because users may not always provide perfectly clear instructions. Disambiguation tiers range from refusing tasks that fall outside the agent’s physical or digital capabilities, to detecting underspecified instructions, conducting full conversations to clarify goals, and even actively seeking clarification mid-execution when unexpected situations arise.
Evaluation: This capability focuses on the agent’s ability to track task progress and overall context, providing status updates and answers that help users understand the current situation or regain control. Evaluation modes include basic interaction captioning, answering questions about past interactions, detecting task success, and even predicting future success based on the current state or proposed actions. The depth of evaluation can range from a simple binary (success/failure) to a scalar score (e.g., a quality rating) or a multi-dimensional score considering several criteria.
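The three evaluation depths the article describes (binary, scalar, and multi-dimensional) could be modeled as distinct result types that all render into a user-facing status line. This is a minimal sketch; the class and function names are our own, not from the paper:

```python
from dataclasses import dataclass
from typing import Mapping, Union

# Hypothetical types sketching the three evaluation depths described by
# ADEPTS. All names here are illustrative, not taken from the paper.

@dataclass(frozen=True)
class BinaryResult:
    """Simple success/failure verdict."""
    success: bool

@dataclass(frozen=True)
class ScalarResult:
    """A single quality score, e.g. 0.0 (worst) to 1.0 (best)."""
    score: float

@dataclass(frozen=True)
class MultiDimResult:
    """One score per evaluation criterion, e.g. accuracy, speed, cost."""
    scores: Mapping[str, float]

EvaluationResult = Union[BinaryResult, ScalarResult, MultiDimResult]

def summarize(result: EvaluationResult) -> str:
    """Render any evaluation depth as a short status line for the user."""
    if isinstance(result, BinaryResult):
        return "succeeded" if result.success else "failed"
    if isinstance(result, ScalarResult):
        return f"quality {result.score:.2f}"
    return ", ".join(f"{k}={v:.2f}" for k, v in result.scores.items())
```

A tagged union like this lets richer evaluators coexist with simple pass/fail checks behind one reporting interface, which mirrors the framework's idea that evaluation depth can grow without changing what the user sees.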
Personalization: An agent should learn and predict a user’s evolving preferences and abilities, respecting them while executing tasks. This can significantly reduce friction in user experiences. Personalization tiers progress from using static ‘system prompts’ (pre-fed information about user preferences) to inferring preferences within a single session, learning across multiple sessions, and ultimately predicting the user’s next goal, acting as a recommendation engine.
Transparency: This principle requires the agent to expose its inputs, reasoning, plans, and past actions to the user at a suitable depth to inform oversight and build trust. Transparency tiers include ‘algorithmic’ (showing its underlying code or system prompts), ‘verbalized’ (explaining its rationale in natural language), and ‘mechanistic’ (exposing the internal workings of its neural networks for deep understanding).
Safety: ADEPTS frames safety as a proactive capability, where the agent pre-emptively prevents harm to people, data, or property, enforcing privacy, security, and ethical constraints before and during execution. Safety is broken down into user misuse (preventing harm caused by user instructions), agent misbehavior (preventing harm from the agent’s own actions or incompetency), and prompt injection (preventing harm from malicious inputs from the environment). Each of these has tiers ranging from detecting direct harm to providing guaranteed safety with probabilistic confidence. Safety evaluation modes also progress from simply detecting unsafe trajectories to preventing unsafe outcomes based on detecting suspicious states or actions.
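Since each of the six capabilities comes with progressive tiers, an assessment of an agent could be recorded as one tier per capability. The sketch below is a hypothetical data model under that assumption; the paper does not prescribe any such schema, and every name here is ours:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: recording an agent's assessed tier for each ADEPTS
# capability so assessments can be compared. The six capability names come
# from the framework; the class, methods, and numeric tier encoding are
# illustrative assumptions.

CAPABILITIES = (
    "actuation", "disambiguation", "evaluation",
    "personalization", "transparency", "safety",
)

@dataclass
class AdeptsProfile:
    """Tier assessments for one agent (0 = lowest; unassessed counts as 0)."""
    tiers: dict = field(default_factory=dict)

    def set_tier(self, capability: str, tier: int) -> None:
        if capability not in CAPABILITIES:
            raise ValueError(f"unknown capability: {capability}")
        if tier < 0:
            raise ValueError("tier must be non-negative")
        self.tiers[capability] = tier

    def weakest(self) -> str:
        """Capability with the lowest assessed tier, first in ADEPTS order on ties."""
        return min(CAPABILITIES, key=lambda c: self.tiers.get(c, 0))
```

Keeping the six tiers separate, rather than collapsing them into one overall score, reflects the framework's intent: an agent can be highly capable at actuation while still needing work on, say, disambiguation or transparency, and `weakest()` surfaces where to focus next.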
A Shared Language for Progress
ADEPTS aims to condense complex AI-UX requirements into a compact, actionable framework for AI researchers, designers, engineers, and policy reviewers alike. By focusing on what an agent should be able to do from a user’s perspective, rather than prescribing how it should be built, ADEPTS fosters creativity in design while ensuring that all aspects of development contribute to user-facing obligations. The framework also provides reference tiers for each capability, which can serve as benchmarks for assessing the competency of AI agents and inspiring the development of application-specific measurements.
The creators of ADEPTS believe this framework has the potential to accelerate improvements in user-relevant agent capabilities, simplify the design of experiences that leverage these capabilities, and provide a shared language for tracking and discussing progress in AI agent development. For more details, you can read the full research paper: ADEPTS: A Capability Framework for Human-Centered Agent Design.


