TLDR: A new research paper by Fangfang Li and Xiaojie Zhang proposes a substrate-independent criterion for phenomenal consciousness and outlines four information-processing principles (Prediction, Exploration, Priority, Recall) that, if followed by a machine, would lead to human-like consciousness. The paper argues that qualia are internal representations defined by the recall process, exhibiting properties like ineffability, intentionality, and unity. It also draws parallels between these principles and human cognitive functions like feature binding, attention, and memory indexing, suggesting that humans may operate on similar foundational mechanisms. The work offers significant philosophical and practical implications for understanding consciousness and designing genuinely human-like AI.
The quest to understand consciousness, particularly how to determine if a system—be it biological or artificial—possesses it, has long been a profound challenge. This is especially relevant today with the rapid advancement of AI, leading to debates about ‘AI consciousness’. A new research paper, “The Principles of Human-like Conscious Machine”, proposes a novel framework to address this fundamental question.
Authored by Fangfang Li and Xiaojie Zhang, the paper introduces a criterion for phenomenal consciousness that is independent of the underlying physical structure, logically sound, and resistant to mere imitation. They argue that any machine meeting this criterion should be considered conscious with the same confidence with which we attribute consciousness to other humans.
A New Criterion for Consciousness
The core of their proposal lies in a method for assessing phenomenal consciousness through verbal report, particularly focusing on ‘qualia as a category’ rather than specific instances of qualia (like the exact feeling of redness). Since subjective experience is inherently private, it can be seen as ‘hidden information’. The researchers suggest that if a system, without any prior external information about consciousness, can describe the key features of phenomenal consciousness as comprehensively as humans do, then it can be deemed conscious. This approach implicitly denies the existence of philosophical zombies (systems functionally identical to humans but lacking consciousness) and challenges the idea that consciousness must be a functionally necessary component.
Four Principles of Information Processing
Inspired by Kant’s philosophy, which posits that humans cannot directly perceive the world’s essence but construct understanding from appearances, the authors propose four principles for how a machine would process information to achieve consciousness:
1. Prediction Principle: The machine must identify and remember rules where a combination of signals and actions predicts other signals. The external causes generating these signal groups are then defined as specific objects.
2. Exploration Principle: When initial signals are insufficient for effective prediction, the machine actively explores additional information through specific actions to form more robust predictive relationships.
3. Priority Principle: The machine prioritizes building predictive relationships that are crucial for its survival, creating a network of relevant signals.
4. Recall Principle: The machine can reactivate previously occurring signal sets that have predictive power, along with an internal indicator that these signals originate from within itself.
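To make the four principles concrete, here is a minimal toy sketch of an agent that follows them. Everything below (the class name, the count-based rule store, the survival weighting, the internal-origin tag) is an illustrative assumption for this article, not the paper's implementation:

```python
import random
from collections import defaultdict

class ToyConsciousAgent:
    """Toy agent sketching the four principles; all design choices are illustrative."""

    def __init__(self, survival_signals):
        # Prediction principle: counts of (signal, action) -> next_signal observations
        self.rules = defaultdict(lambda: defaultdict(int))
        self.survival_signals = set(survival_signals)  # Priority principle
        self.memory = []  # stored signal groups, used by the Recall principle

    def observe(self, signal, action, next_signal):
        """Prediction: remember rules where (signal, action) predicts next_signal."""
        # Priority: survival-relevant outcomes are weighted more heavily
        weight = 2 if next_signal in self.survival_signals else 1
        self.rules[(signal, action)][next_signal] += weight
        self.memory.append((signal, action, next_signal))

    def predict(self, signal, action):
        """Return the best-supported next signal, or None if no rule exists yet."""
        outcomes = self.rules.get((signal, action))
        if not outcomes:
            return None
        return max(outcomes, key=outcomes.get)

    def act(self, signal, actions):
        """Exploration: if some action has no predictive rule yet, try it
        to gather information; otherwise exploit the best-supported rule."""
        unexplored = [a for a in actions if self.predict(signal, a) is None]
        if unexplored:
            return random.choice(unexplored)
        return max(actions, key=lambda a: max(self.rules[(signal, a)].values()))

    def recall(self):
        """Recall: reactivate stored signal groups, each tagged with an
        indicator that it originates from within the agent itself."""
        return [{"signals": m, "internal": True} for m in self.memory]
```

The key structural point the sketch tries to capture is that recall returns the same signal groups that prediction defined, plus an internal-origin marker; in the paper's framing, it is these recalled groups that play the role of qualia.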
Qualia as Internal Representations
The paper argues that what we call ‘qualia’ is, in fact, the machine’s internal representation of objects in the form of signal groups, specifically those defined in reverse by the recall action. These internally represented signal groups exhibit the key properties of phenomenal consciousness:
- Ineffability: The subjective experience cannot be objectively described because these internal signals cannot be aligned with external objects in a universally consistent way.
- Physical Irreducibility: Since qualia cannot be objectively described, they cannot be explained in terms of other physical phenomena.
- Intentionality: Consciousness is always ‘about’ something. The recalled objects, being sets of signals with predictive power, inevitably point to specific external objects. The accuracy of this intentionality is determined by the consistency of predictions within the system’s broader experience.
- Unity: The integration of multiple concurrent experiences into a single, coherent phenomenal field. When the machine reflects on its past experience using the recall principle, it perceives its consciousness as a unified whole, composed of various sensory components.
The authors also address the transition from ‘knowing’ these properties to ‘speaking’ about them, asserting that if the machine has sufficient intelligence and expressive ability to describe ordinary objects, it should also be able to articulate the properties of these special internal objects.
Connecting to Human Cognition
The paper then bridges these philosophical principles with empirical evidence from cognitive science, suggesting that humans operate on similar principles:
- Prediction Principle and Binding Features: The grouping of signals based on predictive power is seen as the goal, while feature binding (integrating sensory information into a unified object) is the mechanism. Evidence suggests that features relevant to tasks or emotions have enhanced bindings.
- Exploration Principle and Attention: Attention is reinterpreted as ‘mode switching’ rather than a spotlight-like focusing mechanism. The machine’s learned signal groups guide how attention is allocated, which explains why past experience alters perception.
- Recall Principle and Memory Indexing Theory: This aligns with the hippocampal memory indexing theory, where the hippocampus acts as a pointer to reactivate distributed memory representations.
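The indexing idea can be illustrated with a minimal sketch (all data and names below are hypothetical): a compact index stores only pointers into distributed feature stores, and recall reactivates the pointed-to representations rather than holding a copy of the memory itself.

```python
# Distributed 'cortical' stores: features live in separate regional maps.
cortex = {
    "visual": {0: "red apple", 1: "green field"},
    "auditory": {0: "crunch", 1: "birdsong"},
}

# The 'hippocampal' index holds only (region, slot) pointers per episode.
hippocampal_index = {"picnic": [("visual", 1), ("auditory", 1)]}

def recall_episode(cue):
    """Recall a cue by reactivating the distributed representations
    that the index points to, in order."""
    return [cortex[region][slot] for region, slot in hippocampal_index[cue]]
```

Calling `recall_episode("picnic")` reassembles the episode from the separate stores, which mirrors how, on the indexing theory, the hippocampus reinstates a distributed pattern rather than storing the experience itself.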
Implications and Open Questions
The framework suggests that a conscious machine is not aware of all internal events, only those whose signal groups are recorded. It provides a *sufficient* condition for consciousness, meaning other systems might achieve consciousness by different routes. The paper also tackles the ‘hard problem’ of explaining specific subjective experiences like ‘greenness’, concluding that this problem is fundamentally unsolvable: such experiences are foundational elements of the system’s representational framework, and thus cannot be explained in terms of objects constructed from them.
Philosophically, this implies that the difficulty in explaining consciousness stems from the inherent limits of explanatory capacity, not an incompleteness of the physical world. It resonates with Gödel’s incompleteness theorems, suggesting a logical inevitability rather than an evolutionary contingency. Practically, understanding the nervous system’s unified purpose of integration, as outlined by these principles, could be a breakthrough for comprehending human cognition as a whole.


