TLDR: The research paper introduces a novel framework for classifying hallucinations in large language models (LLMs) using geometric cluster analysis in a low-dimensional embedding space. Unlike previous methods focused on binary detection, this approach demonstrates that correct and hallucinated responses, along with different types of hallucinations, form distinct, separable clusters. By measuring distances between these clusters’ centroids, the framework offers a robust and computationally efficient method for identifying and categorizing LLM inaccuracies, thereby laying a foundation for more precise hallucination detection and mitigation.
Large Language Models (LLMs) have brought about significant advancements in natural language processing, transforming how we acquire information. However, a major challenge with these powerful AI systems is their tendency to ‘hallucinate’ – producing responses that are misleading or entirely made up. These hallucinations are particularly difficult to detect because LLMs generate very sophisticated, human-like text, even when it’s factually incorrect. This can have serious consequences, especially in critical fields like medicine, law, and finance, leading to misinformation, financial losses, and a loss of trust.
Current research has explored several strategies to reduce hallucinations. One popular method is Retrieval-Augmented Generation (RAG), which provides LLMs with external, verified knowledge sources to ensure their outputs are grounded in authoritative information. Other approaches include post-generation verification, where secondary models or rule-based checkers assess factual accuracy, and real-time detection methods like the MIND framework or Oxford’s semantic-entropy algorithm, which identify inaccuracies during the generation process itself. More advanced methods involve multi-agent pipelines, where multiple AI agents review and fact-check each other’s outputs.
Despite these efforts, a significant limitation has been the lack of a systematic way to classify hallucinations. Most existing systems focus on simply detecting whether a hallucination has occurred, without categorizing its type, cause, or severity. This absence of classification makes it difficult to handle errors appropriately, especially in high-risk applications where a severe factual fabrication might be mistaken for a minor error. A structured classification system could enable better risk management, targeted corrections, and improved transparency for users.
A Novel Approach to Hallucination Classification
A recent research paper, titled “A novel hallucination classification framework”, introduces a method for classifying LLM hallucinations. Authored by Maksym Zavhorodnii and Dmytro Dehtiarov of Instituto Superior Técnico, Universidade de Lisboa, and Anna Konovalenko of Molde University College, Norway, the study is the first to explore generating diverse types of hallucinations and projecting them into a low-dimensional representation space for classification.
The paper highlights that hallucinations can broadly be categorized into two fundamental types: factuality hallucinations, where content contradicts real-world knowledge (e.g., factual contradiction, fabrication), and faithfulness hallucinations, where outputs deviate from source materials or instructions (e.g., misinterpretation, context inconsistency, structural hallucinations).
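For illustration, this two-level taxonomy can be written down as a small label scheme. The Python enum below is our own rendering of the categories named above, not an artifact from the paper:

```python
from enum import Enum

class HallucinationType(Enum):
    """Two-level hallucination taxonomy (labels paraphrased from the paper)."""
    # Factuality hallucinations: content contradicts real-world knowledge.
    FACTUAL_CONTRADICTION = "factuality/contradiction"
    FABRICATION = "factuality/fabrication"
    # Faithfulness hallucinations: output deviates from sources or instructions.
    MISINTERPRETATION = "faithfulness/misinterpretation"
    CONTEXT_INCONSISTENCY = "faithfulness/context-inconsistency"
    STRUCTURAL = "faithfulness/structural"
```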
The proposed method applies geometric cluster analysis in an embedding space. The core hypothesis is that correct and hallucinated responses exhibit distinct patterns once converted into high-dimensional vector representations. The process comprises the following steps (a code sketch follows the list):
- Data Preparation: Selecting question-answer datasets and generating both correct and hallucinated responses using LLMs like Llama3.1, Gemma2, and Phi3.
- Embedding Conversion: Transforming all responses into high-dimensional vector embeddings using models like all-MiniLM-L6-v2.
- Dimensionality Reduction and Visualization: Applying techniques like UMAP (Uniform Manifold Approximation and Projection) to reduce the embeddings to a lower-dimensional space, making them easier to visualize and analyze.
- Clustering: Constructing separate clusters for correct and hallucinated responses and computing their centroids.
- Quantitative Analysis: Measuring the distances between these cluster centroids to quantify the geometric separation between different response types.
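A minimal end-to-end sketch of these steps, assuming the sentence-transformers and umap-learn packages are installed; the toy responses and variable names below are our own illustrations, not the paper’s data:

```python
# Sketch of the embed -> reduce -> cluster -> measure pipeline (toy data).
import numpy as np
from sentence_transformers import SentenceTransformer
import umap

# 1. Toy stand-ins for ground-truth answers, correct LLM responses,
#    and hallucinated LLM responses.
ground_truth = [
    "The Eiffel Tower is in Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
]
correct = [
    "The Eiffel Tower is located in Paris, France.",
    "At sea level, water boils at 100 degrees Celsius.",
]
hallucinated = [
    "The Eiffel Tower was relocated to Berlin in 1999.",
    "Water boils at 250 degrees Celsius at sea level.",
]

# 2. Convert all responses into dense vector embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(ground_truth + correct + hallucinated)

# 3. Reduce the embeddings to a low-dimensional space with UMAP.
#    (n_neighbors must be below the sample count; tuned for this toy set.)
reducer = umap.UMAP(n_components=2, n_neighbors=3, random_state=42)
reduced = reducer.fit_transform(embeddings)

# 4. Split the reduced points back into groups and compute centroids.
n = len(ground_truth)
gt_centroid = reduced[:n].mean(axis=0)
correct_centroid = reduced[n:2 * n].mean(axis=0)
halluc_centroid = reduced[2 * n:].mean(axis=0)

# 5. Quantify geometric separation via centroid-to-centroid distances.
print("correct vs ground truth:     ", np.linalg.norm(correct_centroid - gt_centroid))
print("hallucinated vs ground truth:", np.linalg.norm(halluc_centroid - gt_centroid))
```

Note that UMAP is fitted once and can later project new responses via `reducer.transform`, which is what makes this kind of pipeline usable at inference time.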
The experiments, run on a MacBook Pro across datasets of varying sizes, consistently showed that the distance between the centroid of correct LLM responses and the ground-truth answers was significantly smaller than the distance between hallucinated responses and the ground truth. This indicates a clear semantic separation in the embedding space, where hallucinated responses form a distinct cluster.
These findings suggest that a new response can be classified as a hallucination or a correct answer with high probability based on these centroid distances. Furthermore, the method opens the door to more fine-grained classification of hallucination types (e.g., fabrication, distortion, omission) through their distinct clustering patterns. This embedding-driven approach offers a computationally efficient way to detect hallucinations in real time, moving beyond reactive post-generation validation toward a proactive and scalable solution.
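Building on that, a nearest-centroid decision rule is one simple way such a classifier could look. This sketch reuses the centroids from the pipeline above; the function name and tie-breaking choice are our own simplification, not the paper’s exact procedure:

```python
import numpy as np

def classify_response(point: np.ndarray,
                      correct_centroid: np.ndarray,
                      halluc_centroid: np.ndarray) -> str:
    """Label a new response by its nearest cluster centroid.

    `point` must first be embedded and projected into the same
    low-dimensional space, e.g. reducer.transform(encoder.encode([text]))[0].
    """
    d_correct = np.linalg.norm(point - correct_centroid)
    d_halluc = np.linalg.norm(point - halluc_centroid)
    return "correct" if d_correct <= d_halluc else "hallucination"
```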
In conclusion, this research provides a novel and promising foundation for training dedicated machine learning models to detect and even classify the types of hallucinations produced by LLMs, paving the way for more reliable and trustworthy AI systems in various industrial applications.


