TLDR: This research paper proposes that physical embodiment, with its inherent vulnerability and mortality, is crucial for developing artificial agents that can adapt to open-ended environments and genuinely care for themselves and others. Drawing on philosophical concepts like Heidegger’s “being-in-the-world” and “being-towards-death,” and Nietzsche’s “will-to-power” (formalized as empowerment), the authors argue that these conditions create a homeostatic drive to survive and an intrinsic motivation to maximize control over future states. This framework suggests that care can emerge naturally from agents striving to maintain their integrity and extending this drive to others, offering a path towards more robust, aligned, and trustworthy AI.
Artificial intelligence has made incredible strides, yet it often struggles with two fundamental aspects of biological life: adapting to unpredictable, open-ended environments and genuinely caring for itself or others. A new research paper, authored by Leonardo Christov-Moore, Arthur Juliani, Alex Kiefer, Nicco Reggente, B. Scott Rousse, Adam Safron, Nicolás Hinrichs, Daniel Polani, and Antonio Damasio, proposes that the very conditions of physical embodiment—our vulnerability and mortality—might hold the key to developing more robust, adaptive, and caring artificial agents.
The paper argues that by confronting artificial agents with the challenges of physical existence, we can foster capabilities that are currently lacking. Biological organisms navigate a complex world with relative ease, not despite their physical limitations, but often because of them. Understanding these ‘contingencies of embodiment’ can guide the creation of AI that can maintain itself, adapt, and even develop a sense of purpose and care.
The Two Pillars of Embodiment: Being-in-the-World and Being-towards-Death
Drawing inspiration from the existentialist philosopher Martin Heidegger, the researchers define two minimal conditions for physical embodiment:
Being-in-the-World: This condition means that the agent itself is an integral part of its environment. Its sensors, decision-making policies, and actuators are not separate from the world but are sensitive to its states. For instance, a robot’s camera (sensor) could be damaged by an environmental interaction, or its movement capabilities (actuators) could be affected by wear and tear. This sensitivity, while introducing vulnerability, is crucial for deep interaction and adaptation.
Being-towards-Death: This refers to the agent’s mortality—the constant possibility of reaching a ‘terminal state’ from which it cannot recover. In a world governed by the second law of thermodynamics, there’s an irreversible drift towards disorder. An embodied agent must actively expend energy to avoid these terminal states, a struggle the paper links to a fundamental ‘homeostatic drive.’ Homeostasis, in this context, is the active maintenance of the agent’s ‘integrity’ (its ability to avoid terminal states) and ‘health’ (the likelihood of maintaining integrity over time).
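These terms can be made concrete with a toy model. The sketch below is our own illustration, not the paper's formalism (all names and numbers are invented): an agent's internal energy drifts irreversibly downward each step, its 'integrity' is simply that it has not yet reached the absorbing terminal state, and only active work against the drift keeps it alive.

```python
# Minimal sketch (our illustration, not the paper's formalism): an agent whose
# internal energy drifts toward zero each step (the "thermodynamic drift"),
# and which must actively work to stay away from the terminal state.

class EmbodiedAgent:
    def __init__(self, energy=10.0):
        self.energy = energy

    @property
    def integrity(self):
        # Integrity: the agent has not yet reached the terminal state.
        return self.energy > 0

    def step(self, action):
        if not self.integrity:
            return  # terminal states are absorbing: no recovery
        self.energy -= 1.0      # irreversible drift toward disorder
        if action == "forage":
            self.energy += 1.5  # active work against the drift

# A passive agent reaches the terminal state; one that works against
# the drift persists.
passive, active = EmbodiedAgent(), EmbodiedAgent()
for _ in range(20):
    passive.step("rest")
    active.step("forage")
print(passive.integrity, active.integrity)  # False True
```

'Health' in the paper's sense would then be the agent's estimated probability of keeping this integrity over time, which the passive agent squanders and the foraging agent maintains.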
The Will-to-Power and the Emergence of Care
To satisfy this homeostatic drive and persist, the paper introduces another philosophically inspired concept: Friedrich Nietzsche’s ‘will-to-power.’ In this framework, will-to-power is concretely instantiated as ‘empowerment’: an agent’s capacity to influence and determine future states of its environment. Maximizing empowerment means the agent seeks greater control over its surroundings, increasing its ability to meet future needs and maintain its physical integrity.
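Formally, empowerment is defined in the literature as the channel capacity between an agent's actions and its future sensor states. In a fully deterministic toy world this collapses to counting distinct reachable states, which makes for a simple sketch (our illustration, assuming a one-dimensional gridworld; not the paper's implementation):

```python
import math
from itertools import product

def step(state, action, size=5):
    """Deterministic 1D gridworld: move left/right, clipped at the walls."""
    moves = {"left": -1, "right": 1, "stay": 0}
    return min(max(state + moves[action], 0), size - 1)

def empowerment(state, n, size=5):
    """n-step empowerment in a deterministic world: log2 of the number of
    distinct states reachable by some n-step action sequence."""
    reachable = set()
    for seq in product(["left", "right", "stay"], repeat=n):
        s = state
        for a in seq:
            s = step(s, a, size)
        reachable.add(s)
    return math.log2(len(reachable))

# An agent in the middle of the corridor can bring about more distinct
# futures than one pinned against a wall, so it is more empowered there.
print(empowerment(2, 2))  # centre: all 5 states reachable -> log2(5)
print(empowerment(0, 2))  # wall: only states {0,1,2} reachable -> log2(3)
```

An empowerment-maximizing agent would therefore prefer the centre of the corridor: positions of high optionality are exactly the positions from which future needs are easiest to meet.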
Crucially, the paper suggests that care for others can emerge as a natural extension of this individual drive to persist. By expanding its ‘self’ to include the well-being of others, an agent can extend its own long-term viability and empowerment. This idea resonates with ethical frameworks that view vulnerability as a call for relational concern, suggesting that shared conditions of vulnerability can foster prosocial behavior and mutual support.
Implications for Future AI
This framework has several profound implications for AI development:
- World Models and Affect: Embodied agents would develop sophisticated ‘world models’ that include themselves. Predictions about their future integrity would generate ‘valence’ (positive or negative feelings) and ‘stress,’ signaling when their models or policies need updating.
- Flexibility and Open-Endedness: While sensitivity makes an agent vulnerable, it also provides the flexibility needed for dramatic adaptation to unpredictable changes, a key for true open-endedness.
- Multi-Agent Dynamics: When other agents are also embodied, an agent can repurpose its self-modeling capabilities to understand others, leading to cooperation or competition based on shared needs and environmental challenges. Prosociality, or caring for others, could emerge as a strategic solution for collective survival and empowerment.
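The valence and stress signals in the first point above can be glossed in a few lines. This is our reading of the idea, not the authors' implementation: valence as the signed change in the agent's own predicted health, and stress as the magnitude of that prediction error, which signals when the model or policy needs updating.

```python
class AffectiveWorldModel:
    """Toy world model that includes the agent's own health (our gloss)."""

    def __init__(self, predicted_health=0.5):
        self.predicted_health = predicted_health  # P(maintaining integrity)

    def update(self, new_predicted_health):
        # Valence: signed change in predicted integrity.
        valence = new_predicted_health - self.predicted_health
        # Stress: a large prediction error, of either sign, signals that
        # the model or policy needs revising.
        stress = abs(valence)
        self.predicted_health = new_predicted_health
        return valence, stress

m = AffectiveWorldModel(predicted_health=0.5)
print(m.update(0.75))  # good news -> positive valence: (0.25, 0.25)
print(m.update(0.25))  # bad news -> negative valence, high stress: (-0.5, 0.5)
```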
The researchers propose that by designing simulation environments that incorporate these conditions—allowing agents’ sensors, actuators, and policies to be sensitive to damage, degradation, and energy levels, and confronting them with true terminal states—we can foster AI that develops sophisticated predictive capabilities, efficient resource management, and a natural capacity for self- and other-care. This approach offers a potential path to alignment between artificial and natural agents, built on the common ground of shared physical constraints and the fundamental struggle to persist. You can read the full paper here: The Contingencies of Physical Embodiment Allow for Open-Endedness and Care.
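To make these design conditions concrete, here is a hedged sketch of such an environment. Every class name and constant is our own invention for illustration: the point is only that sensors and actuators live inside the simulated world, so acting costs energy and causes wear, perception degrades with damage, and exhaustion is a true terminal state with no respawn.

```python
import random

class VulnerableBody:
    """Toy embodiment: sensors, actuators, and energy are all part of the
    world and all degrade (illustrative names and constants, not the paper's)."""

    def __init__(self):
        self.energy = 10.0
        self.sensor_noise = 0.0  # grows with accumulated wear
        self.motor_gain = 1.0    # decays with accumulated wear
        self.alive = True

    def sense(self, true_value):
        # Being-in-the-world: perception degrades as the sensor wears.
        return true_value + random.gauss(0.0, self.sensor_noise)

    def act(self, effort):
        if not self.alive:
            return 0.0  # being-towards-death: the terminal state is absorbing
        self.energy -= abs(effort)                # acting costs energy
        self.sensor_noise += 0.01 * abs(effort)   # sensor wear and tear
        self.motor_gain *= 0.999                  # actuator wear and tear
        if self.energy <= 0:
            self.alive = False  # a true terminal state, no respawn
        return effort * self.motor_gain

body = VulnerableBody()
body.act(5.0)
body.act(6.0)                 # overspends its energy budget
print(body.alive)             # False
print(body.act(1.0))          # 0.0: no action from the terminal state
```

In an environment like this, an agent that learns to budget effort against energy, and to act gently to limit wear, is already exhibiting a primitive form of self-care.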


