TLDR: This research explores integrating Large Language Models (LLMs) into the Clarion cognitive architecture to overcome limitations in knowledge, natural language communication, and implicit-explicit interaction. By leveraging Clarion’s dual-process theory, LLMs are used to model human intuition and instinct, enhancing the architecture’s ability to handle real-world complexity and psychological realism. The integration allows for seamless linguistic communication between explicit and implicit processes, leading to more capable and human-like AI systems.
Computational cognitive architectures are comprehensive models designed to mimic the human mind, integrating various psychological functions like action, reasoning, memory, and learning into a single framework. These models aim for psychological realism and have been validated against empirical data. However, their computational capabilities have historically been limited by the available tools and techniques.
The Rise of LLMs and a New Opportunity
Recently, Large Language Models (LLMs) have emerged as exceptionally powerful computational tools, surpassing previous methods. This development presents a significant opportunity to enhance cognitive architectures, allowing them to better handle real-world complexity while maintaining psychological realism. This research paper explores a synergistic combination of the Clarion cognitive architecture and LLMs as a detailed case study.
Clarion: A Foundation in Dual-Process Theory
Clarion, a cognitive architecture developed since the 1990s by Ron Sun, is grounded in dual-process theories from psychology and philosophy. This means it distinguishes between implicit (System 1) and explicit (System 2) processes. Explicit processes are conscious, symbolic, and rule-based, while implicit processes are less accessible to consciousness, more associative, and holistic. Clarion’s original structure reflects this duality, with a top level for explicit processes (symbolic representations like chunk nodes and rules) and a bottom level for implicit processes (neural networks, specifically backpropagation networks).
Key Subsystems of Clarion
Clarion is composed of four main subsystems:
- Action-Centered Subsystem (ACS): Manages actions and procedural knowledge.
- Non-Action-Centered Subsystem (NACS): Handles memory and reasoning, involving declarative knowledge.
- Motivational Subsystem (MS): Provides intrinsic motivation (drives) for other subsystems.
- Metacognitive Subsystem (MCS): Regulates other subsystems based on motivational states.
Each of these subsystems has both explicit and implicit levels that interact dynamically through mechanisms like top-down activation (explicit guiding implicit) and bottom-up activation (implicit influencing explicit).
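The two activation flows above can be made concrete with a toy model. The sketch below is illustrative only, not Clarion's actual implementation: explicit chunk nodes are tied to implicit microfeature nodes, top-down activation primes a chunk's features, and bottom-up activation pools feature activity back into chunks. All class and feature names here are invented for illustration.

```python
class DualLevelSubsystem:
    """Toy model of one Clarion subsystem with explicit and implicit levels."""

    def __init__(self, chunk_features):
        # Explicit level: chunk nodes; implicit level: microfeature nodes.
        self.chunk_features = chunk_features
        self.chunk_act = {c: 0.0 for c in chunk_features}
        all_feats = {f for feats in chunk_features.values() for f in feats}
        self.feature_act = {f: 0.0 for f in all_feats}

    def top_down(self, chunk, strength=1.0):
        # Top-down activation: an explicit chunk primes its implicit features.
        self.chunk_act[chunk] = strength
        for f in self.chunk_features[chunk]:
            self.feature_act[f] = max(self.feature_act[f], strength)

    def bottom_up(self):
        # Bottom-up activation: pooled feature activity influences the explicit level.
        for chunk, feats in self.chunk_features.items():
            pooled = sum(self.feature_act[f] for f in feats) / len(feats)
            self.chunk_act[chunk] = max(self.chunk_act[chunk], pooled)

nacs = DualLevelSubsystem({
    "knife": {"sharp", "metal", "handle"},
    "spoon": {"metal", "handle", "rounded"},
})
nacs.top_down("knife")   # activating "knife" explicitly primes its features
nacs.bottom_up()         # shared features give "spoon" partial activation
```

Because "spoon" shares the "metal" and "handle" microfeatures with "knife", bottom-up activation gives it a partial activation of about 0.67, a toy analogue of implicit associative spread.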
Current Limitations of Cognitive Architectures
Despite its strengths, the original Clarion, like many other cognitive architectures, faces several shortcomings:
- Limited Knowledge: In particular, a lack of the vast implicit knowledge, such as human intuition and instinct, that simple neural networks struggle to capture.
- Lack of Natural Language Communication: A crucial capability for interacting with humans or human-like agents.
- Limited Implicit-Explicit Interaction: While some interaction algorithms exist, more are needed to fully capture the richness of human “inner dialogue” and other forms of interplay.
LLMs as a Solution: Complementary Strengths
LLMs offer strengths that directly address Clarion’s limitations. They excel at natural language communication, possess an enormous amount of implicit knowledge gleaned from vast text data, and can handle various input/output forms beyond just text (e.g., code, structured data). This makes them ideal candidates for integration.
Understanding LLMs’ Role: Intuition and Instinct
The paper argues that LLMs can roughly correspond to human intuition and instinct. Their training on massive linguistic data allows them to capture statistical patterns and structures of the world, yielding a breadth of “intuition” comparable to that of humans. While LLMs also show some symbolic processing capabilities, these are often limited, much as human implicit processes have limited symbolic capabilities.
Integrating LLMs into Clarion: A Seamless Approach
The most comprehensive approach to integrating LLMs into Clarion involves using multiple fine-tuned LLMs to capture all forms of implicit processes, including natural language processing, intuition, instinctual action selection, and implicit memory stores. This approach leverages Clarion’s fundamental implicit-explicit dichotomy for a natural and seamless integration.
In this enhanced Clarion, LLMs take on roles such as:
- Implicit natural language processing.
- Intuitive reasoning and metacognitive reflection.
- Implicit action selection.
- Implicit memory stores (semantic, procedural, episodic) and mental models of the world.
- Learning from and helping explicit processes learn.
Meanwhile, explicit processes continue to direct the LLMs, perform precise rule-based reasoning, carry out explicit action selection and planning, manage explicit memory, and engage in explicit metacognitive reflection.
Cross-Level Interaction and Learning
Communication between the explicit and implicit levels in the new Clarion primarily uses linguistic representation. Top-down activation becomes prompts from the explicit level to LLMs, and bottom-up activation involves LLM outputs being incorporated into the explicit level. This interaction resembles an “inner dialogue” where the explicit mind converses with the implicit mind. Learning mechanisms like top-down learning (assimilating explicit knowledge into LLMs) and bottom-up learning (extracting symbolic rules from LLMs) are also adapted.
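As a concrete sketch of this inner dialogue, the loop below treats top-down activation as prompting and bottom-up activation as incorporating the LLM's reply at the explicit level, filtered through symbolic rules. The `llm()` function is a stub standing in for any real model API, and the rule format is an illustrative assumption, not the paper's specification.

```python
def llm(prompt: str) -> str:
    # Stub standing in for a fine-tuned implicit-level LLM (swap in a real API call).
    canned = {
        "Where might a knife be kept?": "Probably in a kitchen drawer near the stove.",
    }
    return canned.get(prompt, "No strong intuition.")

def inner_dialogue(question: str, explicit_rules: dict) -> str:
    # Top-down activation: the explicit level poses a linguistic prompt to the LLM.
    intuition = llm(question)
    # Bottom-up activation: the LLM's reply is incorporated at the explicit level,
    # here by matching it against precise symbolic rules.
    for keyword, conclusion in explicit_rules.items():
        if keyword in intuition.lower():
            return conclusion
    return intuition  # no rule fired: keep the raw intuitive answer

plan = inner_dialogue(
    "Where might a knife be kept?",
    {"kitchen": "Plan: go to the kitchen and search the drawers."},
)
```

Here the explicit level refines the LLM's free-form intuition into a definite plan, mirroring the top-down-prompt, bottom-up-incorporation cycle described above.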
Practical Examples and Benefits
The paper illustrates this integration with examples like an agent navigating an unfamiliar house to find a knife or solving a complex problem. In these scenarios, LLMs provide intuitive suggestions and reasoning, while explicit processes guide and refine these suggestions. The enhanced Clarion also incorporates various memory stores (episodic, working memory) and maintains its motivational and metacognitive subsystems, now potentially augmented by LLMs for more realistic drive activations and metacognitive reflections.
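The bottom-up learning mentioned earlier can also be sketched in the knife scenario: when an LLM-suggested action succeeds, the explicit level extracts a symbolic state-action rule so the same situation can later be handled without consulting the LLM. The state and rule representations below are simplifying assumptions for illustration.

```python
def extract_rule(rule_store: dict, state: str, action: str, succeeded: bool) -> None:
    # Bottom-up learning: promote a successful implicit suggestion to an explicit rule.
    if succeeded:
        rule_store[state] = action

def select_action(rule_store: dict, state: str, llm_suggest) -> str:
    # Explicit rules take precedence; otherwise defer to LLM intuition.
    if state in rule_store:
        return rule_store[state]
    return llm_suggest(state)

rules = {}
state = "goal=find-knife, location=hallway"
first = select_action(rules, state, lambda s: "go to kitchen")  # intuition consulted
extract_rule(rules, state, first, succeeded=True)               # rule extracted
second = select_action(rules, state, lambda s: "wander")        # explicit rule fires
```

After one successful episode, the agent acts from the extracted explicit rule rather than re-querying the implicit level, a minimal analogue of Clarion-style bottom-up rule extraction.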
This integration promises several advantages:
- A unified framework with more capable LLMs in key roles for action, reasoning, memory, motivation, and metacognition.
- Better capabilities through combining symbolic, neural, and LLM computational methods.
- A psychologically plausible and validated structure, leveraging Clarion’s existing foundation.
For LLMs themselves, benefits include specialization through modular integration, better handling of action-reasoning cycles, the addition of symbolic processes, and the incorporation of intrinsic motivation and metacognition. This research highlights a principled and beneficial integration of LLMs into cognitive architectures, paving the way for more capable and human-like AI systems. The full paper provides further technical details and specifications.


