TL;DR: This research introduces a Knowledge-Graph (KG) based framework for evaluating Retrieval Augmented Generation (RAG) systems, improving on existing methods such as RAGAS. By constructing KGs from the input and the retrieved context, and applying multi-hop semantic matching and community-based overlap algorithms, the framework delivers more sensitive, fine-grained assessments of factual correctness, faithfulness, and answer relevancy, and shows strong correlation with human judgments, especially in distinguishing highly relevant from irrelevant content.
Large Language Models (LLMs) have become a cornerstone of artificial intelligence, powering everything from text generation to advanced dialogue systems. A particularly impactful application is Retrieval Augmented Generation (RAG), which significantly boosts the reliability and relevance of LLM-generated content by connecting models to external information sources. However, accurately evaluating these sophisticated RAG systems has remained a considerable challenge.
Traditional evaluation methods, often relying on simple word-overlap metrics, struggle to grasp the nuanced semantic richness and naturalness of modern LLM outputs. While frameworks like RAGAS have made strides by using techniques such as breaking down sentences into “atomic facts” – the smallest units of information that can stand alone – they still face difficulties with complex or lengthy documents where finer-grained evaluation is needed.
Addressing this gap, researchers Sicheng Dong, Vahid Zolfaghari, Nenad Petrović, and Alois Knoll from the Technical University of Munich have introduced a novel “Knowledge-Graph Based RAG System Evaluation Framework.” This new approach extends the principles of existing frameworks like RAGAS by integrating Knowledge Graphs (KGs) to enable more comprehensive multi-hop reasoning and semantic analysis. The goal is to provide a deeper understanding of RAG system performance and a more precise evaluation, especially for intricate, multi-fact relationships.
How the KG-Based Evaluation Works
The framework operates in three main stages, starting with the construction of a unified Knowledge Graph:
First, an LLM extracts atomic factual triplets (subject, relation, object) from both the input query and the retrieved context. These triplets form the foundation of two separate, initial KGs. Each subject, relation, and object becomes a distinct node, connected by “structural edges” that represent the relationships within the triplets. Unique suffixes are added to relations to prevent incorrect merging of similar-sounding but unrelated facts.
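To make this construction step concrete, here is a minimal sketch of how the two structural KGs might be assembled with networkx, assuming triplets have already been extracted by the LLM. The node naming, the suffix scheme, and the zero-cost structural edges are illustrative assumptions, not the authors’ exact implementation:

```python
import networkx as nx

def build_kg(triplets, source):
    """Build a KG in which subjects, relations, and objects are all nodes."""
    g = nx.Graph()
    for i, (subj, rel, obj) in enumerate(triplets):
        # A unique suffix keeps identical relation labels from different
        # triplets as distinct nodes, preventing incorrect merging.
        rel_node = f"{rel}#{source}-{i}"
        g.add_node(subj, kind="entity", source=source)
        g.add_node(obj, kind="entity", source=source)
        g.add_node(rel_node, kind="relation", source=source)
        # Structural edges encode the triplet's internal relationships.
        # Zero traversal cost is an assumption: hops inside a triplet are
        # free, so only the semantic jumps added later are penalized.
        g.add_edge(subj, rel_node, kind="structural", weight=1.0, cost=0.0)
        g.add_edge(rel_node, obj, kind="structural", weight=1.0, cost=0.0)
    return g

input_kg = build_kg([("RAG", "retrieves", "external documents")], source="input")
context_kg = build_kg([("the RAG system", "fetches", "external documents")],
                      source="context")
```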
Next, “semantic edges” are established between the two initially separate graphs by identifying conceptual overlaps between entity nodes (subjects and objects) from the input and the context. A pre-trained sentence embedding model encodes the labels of these entities into high-dimensional vectors. If the cosine similarity between an input entity and a context entity exceeds a threshold, a “SIMILAR” edge is added between them, with a weight equal to the similarity score and a traversal cost inversely proportional to it. The result is a unified graph that represents both explicit structural relationships and implicit semantic connections.
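A sketch of this semantic-edge step might look as follows, assuming the sentence-transformers library; the model name, the 0.7 threshold, and the 1/similarity cost mapping are illustrative choices rather than the paper’s reported settings:

```python
import networkx as nx
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

def add_semantic_edges(input_kg, context_kg, threshold=0.7):
    # Merge the two graphs; identically labeled nodes are unified.
    unified = nx.compose(input_kg, context_kg)
    in_ents = [n for n, d in input_kg.nodes(data=True) if d["kind"] == "entity"]
    ctx_ents = [n for n, d in context_kg.nodes(data=True) if d["kind"] == "entity"]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    sims = cos_sim(model.encode(in_ents), model.encode(ctx_ents))

    for i, a in enumerate(in_ents):
        for j, b in enumerate(ctx_ents):
            s = float(sims[i, j])
            if a != b and s > threshold:
                # Edge weight mirrors the similarity; traversal cost is
                # inversely proportional, so stronger matches are cheaper.
                unified.add_edge(a, b, kind="SIMILAR", weight=s, cost=1.0 / s)
    return unified

unified_kg = add_semantic_edges(input_kg, context_kg)
```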
Once the Knowledge Graph is built, two primary algorithms are employed for evaluation:
1. Multi-Hop Semantic Matching: This method uses a weighted version of Dijkstra’s algorithm to search for paths between input and context entity nodes within the unified KG. The existence of a path whose total cost stays below a defined threshold indicates semantic relatedness. The score is the proportion of input nodes that can reach at least one context node, effectively measuring how well the input’s key entities are covered by the context (a sketch follows this list).
2. Community-Based Semantic Overlap: This approach applies the Louvain community detection algorithm to partition the unified KG into communities. The intuition is that if the input and context are semantically similar, their nodes will tend to be grouped into the same communities. The score is the proportion of communities containing entities from both the input and the context, indicating a broader semantic alignment (also sketched below).
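Continuing the running example, here is a minimal sketch of the multi-hop score over the unified graph, assuming the per-edge cost attributes defined above; the cost cutoff of 1.5 is an illustrative hyperparameter, not a value from the paper:

```python
import networkx as nx

def multi_hop_score(unified, input_entities, context_entities, max_cost=1.5):
    """Fraction of input entities reaching any context entity within budget."""
    ctx = set(context_entities)
    reached = 0
    for src in input_entities:
        # Cheapest cumulative cost from src to every node within the cutoff.
        costs = nx.single_source_dijkstra_path_length(
            unified, src, cutoff=max_cost, weight="cost")
        if ctx & costs.keys():
            reached += 1
    return reached / len(input_entities) if input_entities else 0.0

input_entities = [n for n, d in input_kg.nodes(data=True) if d["kind"] == "entity"]
context_entities = [n for n, d in context_kg.nodes(data=True) if d["kind"] == "entity"]
print(multi_hop_score(unified_kg, input_entities, context_entities))
```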
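And a sketch of the community-based score, using networkx’s built-in Louvain implementation; treating a community as “shared” when it intersects both entity sets is my reading of the description above:

```python
import networkx as nx

def community_overlap_score(unified, input_entities, context_entities):
    # Louvain partitions the graph into densely connected node sets.
    comms = nx.community.louvain_communities(unified, weight="weight", seed=42)
    shared = sum(1 for c in comms
                 if c & set(input_entities) and c & set(context_entities))
    return shared / len(comms) if comms else 0.0

print(community_overlap_score(unified_kg, input_entities, context_entities))
```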
Key Findings and Contributions
The research aimed to answer two main questions: Can KG-based metrics improve factuality/faithfulness evaluation over RAGAS, and how well do these metrics correlate with human judgment?
Empirical evaluations showed that the KG-based metrics exhibit moderate to high correlations with RAGAS scores across most metrics, except for context relevancy. When compared against human annotations, both Multi-Hop Semantic Matching and Community-Based Semantic Overlap demonstrated moderate to high correlations for factual correctness, faithfulness, and answer relevancy, further validating their effectiveness.
A significant finding from sensitivity analysis was that the Multi-Hop Semantic Matching method proved particularly responsive to extreme situations – assigning scores consistently close to 1 for correct answers and nearly 0 for incorrect ones. This indicates a higher sensitivity to the strong presence or absence of semantic relevance compared to RAGAS, which showed a less distinct shift in scores.
The study concludes that the two KG-based methods offer complementary strengths: the multi-hop method excels when entities are closely related, while the community-based method handles complex entity relationships better. Together they provide more sensitive insights into semantic consistency, making them an ideal complement to existing frameworks like RAGAS.
Limitations and Future Directions
A primary limitation identified is the scalability of the system, particularly the high computational cost associated with graph construction, which can hinder efficiency when dealing with very large input contexts.
Future research directions include extending similarity measurements to the triplet level, finding optimal hyperparameters for fine-grained evaluation, and exploring other metrics such as negative rejection and long-context accuracy. For more details, you can read the full paper here.


