TLDR: A new research paper introduces an Active Inference (AIF) model that accurately simulates human mouse point-and-click behavior. Unlike previous models, this AIF agent is fully probabilistic, accounts for perceptual delays, and can adapt to different target difficulties without needing parameter adjustments, showing human-like movement patterns and adherence to Fitts’ Law.
Researchers have introduced a novel computational model that uses Active Inference (AIF) to simulate the intricate process of human mouse point-and-click behavior. This new approach offers a deeper understanding of how users interact with digital interfaces, moving beyond traditional models that often fall short in capturing the full complexity of human movement and decision-making.
Pointing, a fundamental interaction technique in Human-Computer Interaction (HCI), has been studied for decades. Classic models like Fitts’ Law effectively predict movement time from target size and distance, but they do not describe the detailed process of target acquisition or incorporate the dynamics of clicking. Other computational models, typically derived from control theory or machine learning, rely on predefined cost or reward functions and are often deterministic, failing to represent uncertainty explicitly. Crucially, many of these models focus solely on cursor movement, treating clicking as a separate task or omitting it altogether.
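As a quick sketch, the Shannon formulation of Fitts’ Law predicts movement time from target distance D and width W. The constants `a` and `b` below are illustrative placeholders, not values from the paper:

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Shannon formulation of Fitts' Law: MT = a + b * log2(D/W + 1).

    a (intercept) and b (slope) are empirically fitted constants in
    seconds; distance and width share the same unit (e.g. pixels).
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A far, small target has a higher index of difficulty, hence a longer
# predicted movement time than a near, large one.
mt_hard = fitts_movement_time(0.1, 0.15, distance=800, width=20)
mt_easy = fitts_movement_time(0.1, 0.15, distance=200, width=100)
```

Note that the law only predicts a summary statistic (movement time); it says nothing about the trajectory taken or when the click occurs, which is exactly the gap the AIF model targets.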
The new research proposes applying Active Inference to create a pointing model that is fully probabilistic and predictive. Instead of relying on rewards, this AIF model frames the problem in terms of preferences over observations – for instance, the preference for successfully clicking a button. This allows the model to generate diverse behaviors for different target difficulties, not by re-tuning parameters, but by adapting to greater uncertainty in target acquisition, mirroring human responses.
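One way to picture “preferences over observations” is to score each candidate movement by how likely its predicted outcome is under a preference distribution centred on the target. The toy computation below is a hypothetical stand-in for the paper’s expected-free-energy machinery (all names and numbers are invented for illustration); it uses the closed form of the expected log-density of one Gaussian under another:

```python
import math

def expected_log_preference(pred_mean, pred_sd, target, width):
    """E[log N(o; target, width^2)] for a predicted outcome
    o ~ N(pred_mean, pred_sd^2), in closed form.

    Higher is better: it rewards predictions centred on the target and
    penalizes outcome uncertainty. A toy pragmatic-value term only, not
    the paper's actual objective.
    """
    var_pref = width ** 2
    return (-0.5 * ((pred_mean - target) ** 2 + pred_sd ** 2) / var_pref
            - math.log(width * math.sqrt(2 * math.pi)))

# Two candidate movements toward a target centred at 100 (width 10):
# a fast one with uncertain outcome vs. a slower, more precise one.
fast = expected_log_preference(pred_mean=98.0, pred_sd=12.0,
                               target=100.0, width=10.0)
slow = expected_log_preference(pred_mean=99.0, pred_sd=3.0,
                               target=100.0, width=10.0)
```

Because outcome variance enters the score directly, smaller (harder) targets automatically favour more cautious movements, without any reward function or per-condition re-tuning.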
The AIF agent operates with continuous state, action, and observation spaces. It uses a simplified Second-Order Lag model to represent mouse cursor dynamics and a First-Order Lag system for finger dynamics, incorporating realistic perceptual noise and delay. A key feature is its ability to compensate for these perceptual delays, making its behavior more human-like.
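A minimal simulation of the plant models described above might look as follows. This is a sketch under stated assumptions, not the paper’s implementation: the cursor follows a second-order lag driven by a simple proportional control signal computed from a delayed, noisy percept, and the finger press follows a first-order lag. All gains, time constants, and noise levels are illustrative guesses:

```python
import random

def simulate_point_and_click(target, width, dt=0.01, steps=600, seed=1):
    """Simulate one 1D point-and-click trial; returns final cursor
    position, finger press state, and the cursor trajectory."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0            # cursor position and velocity
    finger = 0.0               # finger press state in [0, 1]
    tau_f = 0.08               # finger time constant (s), first-order lag
    omega, zeta = 8.0, 1.0     # cursor natural frequency and damping
    delay_steps = 10           # ~100 ms perceptual delay
    history = [0.0] * delay_steps
    path = []
    for _ in range(steps):
        history.append(x)
        obs = history.pop(0) + rng.gauss(0.0, 0.5)   # delayed, noisy percept
        u = target - obs                              # proportional control
        a = omega ** 2 * u - 2.0 * zeta * omega * v   # second-order lag
        v += a * dt
        x += v * dt
        press = 1.0 if abs(target - x) < width / 2.0 else 0.0
        finger += (press - finger) * dt / tau_f       # first-order lag
        path.append(x)
        if finger > 0.9:                              # click registered
            break
    return x, finger, path

final_x, final_finger, path = simulate_point_and_click(target=300.0,
                                                       width=40.0)
```

Note that this simple controller reacts to the delayed percept as-is; the AIF agent goes further by predicting forward across the perceptual delay, which is what makes its behavior more human-like.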
The study’s findings are compelling. The AIF agent produces plausible pointing movements and clicks, with end-point variance similar to that observed in human users. Importantly, the model adheres to Fitts’ Law, showing a linear relationship between movement time and the index of difficulty, with a slope matching that of human users. Furthermore, the agent exhibits distinct behavioral strategies for varying target difficulties, such as corrective sub-movements for smaller or more distant targets, all without adjusting its core parameters for each scenario.
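Adherence to Fitts’ Law is typically checked by regressing movement time on the index of difficulty and inspecting the slope. A minimal least-squares fit might look like the sketch below; the condition set and movement times are synthetic, invented for illustration, not the paper’s measurements:

```python
import math

def fit_fitts_law(conditions, movement_times):
    """Ordinary least-squares fit of MT = a + b * ID, where
    ID = log2(D/W + 1). Returns (intercept a, slope b)."""
    ids = [math.log2(d / w + 1) for d, w in conditions]
    n = len(ids)
    mean_id = sum(ids) / n
    mean_mt = sum(movement_times) / n
    b = (sum((i - mean_id) * (t - mean_mt)
             for i, t in zip(ids, movement_times))
         / sum((i - mean_id) ** 2 for i in ids))
    a = mean_mt - b * mean_id
    return a, b

# Synthetic (distance, width) conditions and movement times in seconds.
conditions = [(200, 100), (400, 50), (800, 25), (800, 10)]
times = [0.35, 0.60, 0.88, 1.08]
a, b = fit_fitts_law(conditions, times)
```

Running the same regression on the agent’s trials and on human trials, and comparing the two slopes, is the kind of check the reported result rests on.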
These results suggest that Active Inference is a promising framework for developing fine-grained interaction models in HCI. Its inherent probabilistic nature allows for a wider variety of realistic cursor trajectories compared to many existing methods. While the current model uses simplified representations of human perception and motor control, and its parameters were hand-tuned based on a single user, it represents a significant step forward.
Future work aims to develop reusable software components for AIF-based HCI modeling, explore more authentic biomechanical and perceptual models, and implement automatic optimization of agent parameters to better match observed user data. The researchers also envision extending the model to 2D or 3D pointing tasks. This work lays the groundwork for creating more sophisticated, AI-driven synthetic users that can help design better, more intuitive human-computer interfaces. You can read the full research paper here.