TLDR: This research introduces a method in which a large language model (LLM) acts as an autoencoder for Fuzzy Cognitive Maps (FCMs), which model causal relationships: it translates an FCM into human-readable text and then reconstructs the FCM from that text. The system is explainable by design, since the LLM justifies its encoding and decoding decisions and the text itself serves as an interpretable "latent space," unlike the opaque latent vectors of traditional autoencoders. Reconstruction can be lossy, but it reliably preserves the strong causal connections.
A new research paper explores how large language models (LLMs) can act as a text-based 'autoencoder' for Fuzzy Cognitive Maps (FCMs), offering a fresh angle on explainable artificial intelligence. The system has an LLM translate the causal relationships encoded in an FCM into human-readable text and then reconstruct the FCM from that text, approximating an identity map from the FCM back to itself.
Fuzzy Cognitive Maps are powerful tools for modeling causal dynamical systems. Imagine a network where concepts such as 'loss of appetite' or 'fatigue' are nodes, and directed edges between them represent causal influences, each carrying a signed weight (conventionally in [-1, 1]) that indicates the strength and direction of the influence. Unlike acyclic causal graphs such as Bayesian networks, FCMs permit feedback loops, so they can model complex, evolving systems. Crucially, FCMs are inherently explainable: their causal rules are explicitly listed in their structure, which makes them transparent.
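To make that structure concrete, here is a minimal Python sketch of an FCM as a weighted adjacency matrix with a sigmoid-squashed update rule. The node names and weights are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal FCM sketch: W[i, j] is the signed causal weight of node i on
# node j. Values are illustrative, not from the paper.
nodes = ["loss of appetite", "fatigue", "recovery"]
W = np.array([
    [ 0.0,  0.7, -0.4],  # appetite loss strengthens fatigue, slows recovery
    [ 0.3,  0.0, -0.6],  # fatigue feeds back into appetite loss (a cycle)
    [-0.5, -0.5,  0.0],  # recovery dampens both symptoms
])

def step(state: np.ndarray) -> np.ndarray:
    """One FCM update: each node sums its weighted causal inputs,
    squashed into [0, 1] by a logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-(state @ W)))

# Iterate the map until it settles into an equilibrium or a limit cycle.
state = np.array([0.9, 0.1, 0.0])
for _ in range(20):
    state = step(state)
print(dict(zip(nodes, state.round(3))))
```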
Traditional autoencoders, often used in AI for tasks like image generation or data compression, work by encoding input data into a reduced ‘latent space’ and then decoding it back to reconstruct the original input. However, these systems typically use black-box neural networks, meaning their internal decision-making process and the latent variables they produce are not easily understood by humans. This is where the LLM-based approach shines.
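For contrast, a toy (and deliberately untrained) linear encoder-decoder pair shows why conventional latent variables resist interpretation; every number below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)        # input vector
E = rng.normal(size=(3, 8))   # encoder weights (untrained stand-in)
D = rng.normal(size=(8, 3))   # decoder weights (untrained stand-in)

z = E @ x       # the latent: three opaque numbers
x_hat = D @ z   # the reconstruction
print(z)        # nothing in this vector is human-readable
```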
The researchers, Akash Kumar Panda, Olaoluwa Adigun, and Bart Kosko, designed a multi-prompting strategy for an LLM agent. This agent performs a sequence of tasks:
Encoding the FCM into Text
First, an ‘encoding prompt’ instructs the LLM to convert an FCM’s nodes and weighted edges into a detailed text summary. The LLM is guided to explain each causal edge and to focus more on important nodes (those with many connections). While this initial text, called ‘latent I,’ is highly detailed, it might sound somewhat unnatural because the LLM prioritizes accuracy over natural language flow.
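A rough sketch of what this encoding step could look like in code, assuming a hypothetical `call_llm` helper standing in for whatever chat API is in use; the prompt wording is illustrative, not the paper's actual prompt:

```python
def encode_fcm_to_text(nodes, W, call_llm):
    """Sketch of the encoding step: serialize every weighted edge, then
    ask the LLM to narrate the map. `call_llm` is a hypothetical helper
    that sends a prompt string and returns the model's text reply."""
    edges = [
        f"{nodes[i]} -> {nodes[j]} (weight {W[i][j]:+.2f})"
        for i in range(len(nodes)) for j in range(len(nodes))
        if W[i][j] != 0
    ]
    prompt = (
        "Convert this fuzzy cognitive map into a detailed text summary. "
        "Explain the sign and strength of every causal edge, and give "
        "more emphasis to nodes with many connections.\n"
        "Edges:\n" + "\n".join(edges)
    )
    return call_llm(prompt)  # returns 'latent I', the detailed text latent
```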
Refining the Text for Naturalness
Next, a ‘content editing prompt’ comes into play. This prompt directs the LLM to rewrite the ‘latent I’ summary into a more natural-sounding text, ‘latent II.’ This step leverages the LLM’s natural language processing (NLP) capabilities to make the description more fluent and readable for humans. However, this refinement can sometimes lead to a loss of some minor details from the original FCM.
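Continuing the same sketch, the editing step might look like this, again with illustrative prompt wording and the hypothetical `call_llm` helper:

```python
def edit_latent(latent_i: str, call_llm) -> str:
    """Sketch of the content-editing step: rewrite the detailed summary
    ('latent I') into fluent prose ('latent II'). The prompt wording is
    an assumption, not the paper's exact instruction."""
    prompt = (
        "Rewrite the following causal summary so it reads as natural, "
        "fluent English while keeping the causal claims intact:\n\n"
        + latent_i
    )
    return call_llm(prompt)
```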
Reconstructing the FCM from Text
Finally, a ‘decoding prompt’ guides the LLM to reconstruct the FCM from the text summary (either latent I or latent II). This involves three subtasks: noun detection (identifying potential concept nodes), node detection (refining these into actual causal variables), and edge extraction (determining causal connections and their weights based on the language used in the text). The LLM can even quote from the text to justify its decisions during reconstruction.
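A simplified sketch of the decoding step, collapsing the three subtasks into a single JSON-returning prompt for brevity; the output schema and prompt wording are assumptions, not the paper's:

```python
import json
import numpy as np

def decode_text_to_fcm(summary: str, call_llm):
    """Sketch of the decoding step: noun detection, node detection, and
    edge extraction folded into one structured prompt."""
    prompt = (
        "From the text below: (1) list candidate nouns, (2) keep only "
        "those that act as causal variables, (3) extract every causal "
        "edge with a signed weight in [-1, 1], quoting the sentence that "
        "justifies it. Answer as JSON: {\"nodes\": [...], \"edges\": "
        "[{\"src\": ..., \"dst\": ..., \"weight\": ..., \"quote\": ...}]}"
        "\n\n" + summary
    )
    # Assumes the model returns pure JSON; real use would need
    # stripping, validation, and retry logic.
    parsed = json.loads(call_llm(prompt))
    names = parsed["nodes"]
    idx = {name: k for k, name in enumerate(names)}
    W = np.zeros((len(names), len(names)))
    for e in parsed["edges"]:
        W[idx[e["src"]], idx[e["dst"]]] = e["weight"]
    return names, W
```

Chaining `encode_fcm_to_text` and `decode_text_to_fcm` then approximates the identity map the paper describes.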
The experiments, conducted using Google’s Gemini 2.5 Pro LLM, demonstrated that this system effectively approximates the identity map. While the reconstruction can be ‘lossy’—meaning some weaker causal edges might be removed—the system consistently preserves the strong causal connections of the original FCM. Interestingly, the LLM achieves this identity mapping without ever directly comparing the reconstructed FCM to the original input FCM, relying solely on its carefully designed system instructions.
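One plausible way to test this 'lossy but faithful' behavior is to check that every strong edge of the original survives with its sign intact; the 0.5 threshold below is an assumption, not a figure from the paper:

```python
def strong_edges_preserved(W_orig, W_rec, threshold=0.5):
    """Return True if every edge whose original magnitude is at least
    `threshold` reappears in the reconstruction with the same sign
    (a zero in W_rec fails the sign test, i.e. the edge was dropped)."""
    strong = np.abs(W_orig) >= threshold
    same_sign = np.sign(W_orig) == np.sign(W_rec)
    return bool(np.all(same_sign[strong]))
```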
This research highlights a significant advancement in explainable AI. Unlike traditional autoencoders, this LLM-based system provides human-interpretable text descriptions as its ‘latent variables’ and can explain its encoding and decoding decisions. This transparency is invaluable for understanding how AI systems process and interpret complex causal knowledge. For more in-depth information, you can read the full research paper here.


