TLDR: Researchers developed a multi-agent framework called the Psychodynamic Model to simulate human consciousness in large language models (LLMs), drawing on psychoanalytic theory. It features layered consciousness agents (self-awareness, preconsciousness, unconsciousness) that interact through “Interconscious Reasoning,” and a Personalization Module with fixed and flexible states. Through fine-tuning and integrating dynamic needs, the model showed improved emotional depth, reduced output variance, and better personalization, as evaluated by an “LLM as a Judge” framework.
A new research paper introduces a novel approach to modeling artificial consciousness in large language models (LLMs), drawing inspiration from psychoanalytic theory. Titled “Modeling Layered Consciousness with Multi-Agent Large Language Models,” the study proposes a “Psychodynamic Model” that simulates self-awareness, preconsciousness, and unconsciousness through the interaction of specialized AI agents. This framework aims to move beyond the current limitations of LLMs, which, despite their impressive language capabilities, lack internal motivational states, unconscious drives, or enduring personality structures.
The research, conducted by Sang Hun Kim, Jongmin Lee, Dongkyu Park, So Young Lee, and Yosep Chong, addresses fundamental questions about how human consciousness can be effectively modeled, what factors are essential for personalization, and how such a model can be evaluated. Their work makes three key contributions: a layered architecture of consciousness, a Personalization Module, and an innovative LLM-as-a-Judge evaluation framework.
The Layered Consciousness Module
At the heart of the Psychodynamic Model is the Consciousness Module, which implements Freud’s topographical model of the mind. Each layer—self-awareness, preconsciousness, and unconsciousness—is represented by an independent LLM agent. Self-awareness handles immediate thought and intentional reasoning, preconsciousness manages social norms and expectations, and unconsciousness acts as a repository for repressed impulses and emotions. OpenAI’s GPT-4o was selected as the foundational model for these agents due to its robust performance.
These three agents engage in what the researchers call “Interconscious Reasoning,” a multi-turn, chat-based exchange where each agent articulates its perspective on a given situation. This internal negotiation process leads to a “Final Action,” which can be a verbal response or a behavioral directive, often including an explicit emotional state.
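The exchange described above can be sketched as a simple round-robin loop. This is an illustrative stub, not the paper’s implementation: the three agent functions stand in for real LLM calls, and the rule for extracting the Final Action and emotional state is an assumption.

```python
# Minimal sketch of "Interconscious Reasoning": three stubbed agents take
# turns commenting on a situation over multiple rounds, and the transcript
# is reduced to a Final Action with an explicit emotional state.
# All agent logic here is a placeholder for real LLM calls.

def self_awareness(situation, transcript):
    # Immediate thought and intentional reasoning.
    return f"I notice: {situation}. I intend to respond calmly."

def preconsciousness(situation, transcript):
    # Social norms and expectations filter the emerging response.
    return "A polite, socially acceptable reply is expected here."

def unconsciousness(situation, transcript):
    # Repressed impulses and emotions surface as raw affect.
    return "Underneath, there is frustration that wants to be voiced."

AGENTS = [self_awareness, preconsciousness, unconsciousness]

def interconscious_reasoning(situation, turns=2):
    transcript = []
    for _ in range(turns):                       # multi-turn, chat-based exchange
        for agent in AGENTS:
            transcript.append((agent.__name__, agent(situation, transcript)))
    # Final Action: here taken from self-awareness's last turn (an assumption),
    # paired with an explicit emotional state.
    return {"action": transcript[-3][1],
            "emotion": "restrained frustration",
            "transcript": transcript}

result = interconscious_reasoning("a colleague dismisses my idea")
print(result["emotion"])  # → restrained frustration
```

Swapping the stub functions for prompted GPT-4o calls (each agent seeing the shared transcript) recovers the negotiation structure the paper describes.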
The Personalization Module
Recognizing that human responses are highly variable and context-dependent, the researchers developed a Personalization Module. This module consists of two components: a Fixed State Component and a Flexible State Component. The Fixed State captures stable, long-term characteristics like personal traits (e.g., birthplace, age, gender) and long-term memories (key life events). The Flexible State, on the other hand, incorporates dynamic, short-term factors such as recent experiences from short-term memory, fluctuating needs (inspired by Maslow’s hierarchy), and current emotional states. This allows the model to produce responses that are both context-sensitive and uniquely personalized.
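The two components above can be pictured as plain data structures that get rendered into natural-language context for the agents. The field names and the rendering function below are illustrative assumptions, not the paper’s schema.

```python
# Sketch of the Personalization Module: a stable Fixed State and a
# dynamic Flexible State, rendered into a persona prompt. Field names
# are hypothetical.
from dataclasses import dataclass

@dataclass
class FixedState:
    # Stable, long-term characteristics.
    birthplace: str
    age: int
    gender: str
    long_term_memories: list  # key life events

@dataclass
class FlexibleState:
    # Dynamic, short-term factors.
    short_term_memory: list   # recent experiences
    needs: dict               # Maslow-style need -> current intensity
    emotional_state: str

def persona_prompt(fixed, flexible):
    """Render both components into natural-language context for the agents."""
    dominant_need = max(flexible.needs, key=flexible.needs.get)
    return (f"{fixed.age}-year-old {fixed.gender} from {fixed.birthplace}; "
            f"currently feeling {flexible.emotional_state}; "
            f"dominant need: {dominant_need}")

fixed = FixedState("Seoul", 29, "woman", ["moved abroad at 18"])
flexible = FlexibleState(["missed lunch"],
                         {"esteem": 0.4, "physiological": 0.9},
                         "irritable")
print(persona_prompt(fixed, flexible))
```

Because the Flexible State changes between interactions while the Fixed State does not, the same character can produce different but consistent responses to the same situation.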
Training and Evaluation
To enhance the model’s capabilities, particularly the unconsciousness agent’s ability to express deep inner emotions, parameter-efficient fine-tuning (PEFT) was applied. The LLaMA 3.1 8B model was fine-tuned using a curated dataset of emotionally rich dialogues, leading to improved emotional expressiveness and more human-like inner dialogue. The Personalization Module was further refined by implementing “Needs and States” in natural language, allowing the system to simulate dynamic internal states more accurately.
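One way to picture the curated dataset is as chat-format training records that pair a situation with the unconsciousness agent’s inner monologue. The system prompt and record shape below are assumptions for illustration; the paper’s actual data format is not specified here.

```python
# Hypothetical shape of one fine-tuning record for the unconsciousness
# agent: an emotionally rich dialogue turned into chat-format messages
# suitable for supervised (e.g. PEFT/LoRA) fine-tuning.
import json

def to_training_record(situation, inner_monologue):
    return {
        "messages": [
            {"role": "system",
             "content": "You are the unconsciousness: voice repressed "
                        "impulses and deep inner emotions."},
            {"role": "user", "content": situation},
            {"role": "assistant", "content": inner_monologue},
        ]
    }

record = to_training_record(
    "She said goodbye as if nothing had happened.",
    "A dull ache of abandonment; anger I am not allowed to show.")
print(json.dumps(record, indent=2))
```

In a real pipeline, many such records would be serialized (e.g. as JSONL) and passed to a parameter-efficient fine-tuning run on LLaMA 3.1 8B.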
The evaluation involved a two-phase process. First, the fine-tuned model was compared against a baseline model (GPT-4o without additional training) across various assessment dimensions. Second, the model’s performance was tested under eight distinct personalized conditions, reflecting different internal needs and states. A unique “LLM as a Judge” framework, utilizing GPT-4o as an automated evaluator, assessed the models based on consciousness fidelity, personalization, and clarity of reasoning. This approach was chosen due to the complex, latent cognitive processes involved, which are difficult for non-specialists to assess reliably.
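The pairwise comparison described above can be sketched as a small evaluation loop. The judge call is stubbed here (in the paper, GPT-4o fills that role); the criteria names follow the article, while the voting scheme is an assumption.

```python
# Minimal sketch of the "LLM as a Judge" pairwise comparison. The judge
# function is a placeholder for a GPT-4o call that returns "A" or "B"
# per criterion; the length heuristic below is purely for illustration.

CRITERIA = ["consciousness fidelity", "personalization", "clarity of reasoning"]

def judge(criterion, response_a, response_b):
    # Placeholder for an automated LLM evaluator's verdict.
    return "A" if len(response_a) >= len(response_b) else "B"

def pairwise_evaluate(response_a, response_b):
    votes = {c: judge(c, response_a, response_b) for c in CRITERIA}
    wins_a = sum(v == "A" for v in votes.values())
    preference_a = wins_a / len(CRITERIA)  # fraction of criteria won by A
    return votes, preference_a

votes, pref = pairwise_evaluate(
    "A long, emotionally detailed, personalized reply...",
    "A short generic reply.")
print(pref)  # → 1.0
```

Aggregating such per-item preferences across many test situations yields headline numbers like the 71.4% preference reported for the fine-tuned model.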
Promising Results and Future Directions
The evaluation results showed that the fine-tuned Psychodynamic Model significantly outperformed the baseline, with a 71.4% preference across evaluation items. It demonstrated strong performance in modeling and reasoning, and notably improved in personalization, especially in expressing emotional depth. The integration of structured condition inputs further enhanced the precision and reliability of individualized response generation, reducing performance variability.
This research represents a significant step towards cognitively grounded AI, focusing on internal mental simulation rather than surface-level imitation of human behavior. The authors emphasize transparency and ethical considerations, ensuring that agent roles and interaction mechanisms are clearly defined to avoid anthropomorphizing AI in misleading ways.
Future work will involve extending the model to a broader range of character types, exploring how short-term memory and internal needs interact over time, and investigating multi-character interactions with more advanced architectures.


