TLDR: This research introduces the problem of interactive query answering on Knowledge Graphs (KGs) with “soft entity constraints,” which are vague, context-dependent preferences that cannot be expressed by traditional logical queries. The paper proposes a Neural Query Reranker (NQR) that interactively refines KG query answers by adjusting entity scores based on incremental examples of preferred and non-preferred entities. NQR is designed to capture these soft constraints while preserving the original query’s answer quality. Experiments on extended benchmarks demonstrate NQR’s effectiveness in balancing preference satisfaction and overall ranking performance.
Knowledge Graphs (KGs) are powerful databases that store information in a structured way, as connections between different pieces of information. Think of them as vast networks where entities (like people, movies, or diseases) are connected by relationships (like ‘stars in’ or ‘causes’). Traditionally, when you query a Knowledge Graph, you use very precise, logical rules – for example, ‘Find all movies starring Leonardo DiCaprio that won an Oscar for Best Picture’. These are called ‘hard constraints’.
However, real-world questions often aren’t so clear-cut. What if you’re looking for ‘awards received by movies starring Leonardo DiCaprio’ but are specifically interested in nominations ‘related to the audio-visual design of the movie’, like Best Costume Design or Best Sound, rather than artistic vision awards? This kind of preference is vague and context-dependent; it can’t be easily translated into strict logical rules. This is where the concept of ‘soft constraints’ comes in.
A recent research paper, ‘Interactive Query Answering on Knowledge Graphs with Soft Entity Constraints’, introduces and tackles exactly this problem. The authors highlight that existing methods for querying KGs focus primarily on strict, logical conditions. They propose a new approach that lets users incorporate ‘soft constraints’ into their queries, making interaction with Knowledge Graphs much more intuitive and flexible.
The core idea is to refine query answers based on examples. If you’re looking for audio-visual awards, you might provide ‘Best Costume Design’ and ‘Best Sound’ as examples of what you prefer, and perhaps ‘Best Actor’ as something you don’t. The system then learns from these ‘preferences’ to adjust the ranking of potential answers.
To achieve this, the researchers developed a model called the Neural Query Reranker (NQR). NQR works by taking the initial list of answers and their scores from a standard Knowledge Graph query system and then subtly adjusting these scores based on the user’s soft preferences. It’s designed to do this without completely overturning the original, logically derived answers. The process is interactive: as a user provides more examples of preferred or non-preferred entities, NQR continuously refines the answer list.
NQR has two main components. One part learns to understand the overall ‘meaning’ or ‘direction’ of the user’s preferences from the examples provided. The other part then uses this understanding to make small, additive adjustments to the scores of all potential answers. This ensures that entities similar to your preferred examples get a boost, while those similar to non-preferred examples get a reduction, all while maintaining the integrity of the initial query results.
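The two-component idea above can be sketched in a few lines. This is a deliberately simplified, non-neural stand-in for NQR (the function name, the use of a mean-difference ‘preference direction’, and the cosine-similarity adjustment are all illustrative assumptions, not the paper’s actual architecture): one step distills a direction from the preferred and non-preferred examples, and a second step makes small additive score adjustments.

```python
import numpy as np

def rerank(scores, embeddings, preferred_idx, non_preferred_idx, alpha=0.5):
    """Additively adjust base query scores toward user-preferred entities.

    Simplified sketch: the 'preference direction' is the difference between
    the mean embeddings of preferred and non-preferred examples; each
    candidate's score is nudged by its cosine similarity to that direction.
    """
    direction = (embeddings[preferred_idx].mean(axis=0)
                 - embeddings[non_preferred_idx].mean(axis=0))
    direction /= np.linalg.norm(direction) + 1e-9
    # Cosine similarity of every candidate entity to the preference direction.
    norms = np.linalg.norm(embeddings, axis=1) + 1e-9
    sims = embeddings @ direction / norms
    # Small additive adjustment, so the original ranking is not overturned.
    return scores + alpha * sims
```

Because the adjustment is additive and scaled by a small `alpha`, entities resembling ‘Best Costume Design’ or ‘Best Sound’ drift upward in the ranking without discarding the logically derived base scores.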
A significant challenge in developing such a system is creating the necessary training data. The researchers addressed this by extending existing Knowledge Graph benchmarks (like FB15k237 and Hetionet, a biomedical KG) with automatically generated ‘preference data’ by clustering entities based on their textual descriptions. This allowed them to train NQR effectively.
In their experiments, they evaluated NQR on two key aspects: ‘pairwise accuracy’ (how well preferred entities are ranked above non-preferred ones) and ‘general ranking quality’ (how well the overall correct answers are still ranked). They found that incorporating soft constraints involves a trade-off: achieving very high pairwise accuracy can sometimes reduce the overall ranking quality. However, NQR demonstrated a superior ability to balance this trade-off compared to other methods. Its unique training objective, which includes a ‘KL divergence’ term, was crucial in preventing large deviations from the original, accurate rankings while still effectively capturing user preferences.
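The trade-off described above can be made concrete with a sketch of such a combined objective. The exact form below (a hinge-style pairwise term plus a KL-divergence penalty with weight `beta`) is an assumption for illustration, not the paper’s precise loss: the pairwise term pushes preferred entities above non-preferred ones, while the KL term penalizes the adjusted score distribution for drifting away from the original one.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def nqr_loss(adjusted, original, preferred_idx, non_preferred_idx,
             margin=1.0, beta=0.1):
    """Illustrative balancing objective: preference satisfaction + fidelity.

    pair_loss rewards ranking preferred entities above non-preferred ones by
    at least `margin`; the KL term keeps the adjusted score distribution
    close to the original ranking's distribution.
    """
    pair_loss = np.mean([max(0.0, margin - (adjusted[p] - adjusted[n]))
                         for p in preferred_idx for n in non_preferred_idx])
    p_adj, p_orig = softmax(adjusted), softmax(original)
    kl = np.sum(p_adj * np.log(p_adj / p_orig))
    return pair_loss + beta * kl
```

Raising `beta` trades pairwise accuracy for fidelity to the original ranking; lowering it does the reverse, which mirrors the trade-off the experiments measure.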
This research opens up a new, more flexible way for users to interact with Knowledge Graph databases. By allowing for vague, example-based preferences, NQR makes query answering systems more adaptable and responsive to nuanced user feedback, moving beyond the limitations of purely logical queries.