TLDR: REx is a new AI method that generates scientific explanations for drug repurposing predictions using knowledge graphs. It employs reinforcement learning with a reward system that prioritizes not only predictive accuracy but also the relevance, simplicity, completeness, and coherence of the explanations it produces. Evaluated on biomedical benchmarks, REx outperforms existing methods in predictive performance and generates more insightful, expert-validated explanations, making AI-driven drug discovery more transparent and trustworthy.
Artificial intelligence is rapidly transforming scientific discovery, particularly in complex fields like drug repurposing. While AI models can make powerful predictions, their widespread adoption as credible scientific tools hinges on their ability to not only be accurate but also to provide clear, meaningful scientific explanations for their outputs. This is where a new approach called REx comes into play, offering a novel way to generate scientifically grounded explanations for drug repurposing hypotheses using knowledge graphs.
Drug repurposing, the process of finding new therapeutic uses for existing drugs, is a critical area where AI can accelerate discovery. Knowledge graphs (KGs) are powerful tools for modeling the vast, intricate web of multi-relational data in biomedicine, connecting drugs, diseases, genes, and other entities. They are excellent for generating new hypotheses, such as identifying a drug that could treat a previously unassociated disease.
The Challenge of Explainability in AI
Traditional AI models often act as ‘black boxes,’ making predictions without revealing the underlying reasoning. For scientific applications, especially in medicine, understanding *why* a prediction is made is as important as the prediction itself. Scientific explanations need to be causally adequate, relevant, complete, coherent with existing knowledge, and ideally, simple. Existing methods for explaining AI predictions in KGs often fall short, focusing on identifying relevant data points rather than constructing a full scientific narrative.
Introducing REx: Explanations Guided by Scientific Principles
REx, which stands for Rewarding Explainability, is a novel method designed to generate scientific explanations for AI-driven hypotheses in knowledge graphs. It focuses on identifying ‘explanatory paths’ within a KG that connect a drug to a disease, providing a step-by-step rationale for a potential drug repurposing. What makes REx unique is its integration of desirable properties of scientific explanation directly into its learning process.
At the core of REx is a reinforcement learning (RL) agent that navigates the knowledge graph, guided by a reward system with two main components. The system doesn’t just reward the agent for finding any path between a drug and a disease (termed ‘fidelity’); it also rewards ‘relevance,’ measured by the ‘information content’ of the entities along the path – essentially, how specific and insightful the connections are. Paths through less common, more specific entities are considered more relevant, as they are more likely to reveal meaningful biological mechanisms rather than generic associations.
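A minimal sketch of how such a reward could work, where an entity's information content is the negative log of its relative frequency in the KG (rarer entities score higher). The additive combination and the `alpha` weight are illustrative assumptions, not the paper's exact formulation:

```python
import math

def information_content(entity, entity_counts, total):
    """IC of an entity: -log of its relative frequency in the KG.
    Rarer (more specific) entities carry higher information content."""
    return -math.log(entity_counts[entity] / total)

def path_reward(path, reaches_target, entity_counts, total, alpha=0.5):
    """Toy reward combining fidelity (did the path reach the target
    disease?) with relevance (mean IC of the entities along the path)."""
    fidelity = 1.0 if reaches_target else 0.0
    relevance = sum(information_content(e, entity_counts, total)
                    for e in path) / len(path)
    return fidelity + alpha * relevance
```

Under this scheme a path through a highly specific gene outscores one through a generic hub entity, even when both connect the same drug-disease pair.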
To ensure ‘simplicity,’ REx incorporates an early stopping mechanism, encouraging the agent to find concise explanations without unnecessary detours or loops. Furthermore, to achieve ‘completeness’ and ‘coherence,’ REx enriches these explanatory paths with domain-specific ontologies. These ontologies provide a richer context and ensure that the explanations are grounded in established biomedical knowledge, making them more understandable and scientifically sound.
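The early-stopping idea can be illustrated with a toy rollout in which the agent has an explicit STOP action and pays a small penalty per extra hop, nudging it toward concise paths. The action space, penalty scheme, and random action selection here are stand-in assumptions (a trained policy would choose actions), not REx's actual mechanism:

```python
import random

STOP = "STOP"

def rollout(start, neighbors, max_steps=5, step_penalty=0.05):
    """Walk the graph from `start`, stopping early when the agent
    picks STOP. Each extra hop accrues a penalty, so shorter
    explanatory paths are favored. Action choice is random here
    purely for illustration."""
    path = [start]
    penalty = 0.0
    for _ in range(max_steps):
        actions = neighbors.get(path[-1], []) + [STOP]
        action = random.choice(actions)
        if action == STOP:
            break
        path.append(action)
        penalty += step_penalty
    return path, penalty
```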
Demonstrated Effectiveness in Drug Repurposing
REx was rigorously evaluated using three popular biomedical knowledge graph benchmarks: Hetionet, PrimeKG, and OREGANO. The results were compelling. REx consistently outperformed state-of-the-art methods like MINERVA and PoLo in predictive performance, measured by metrics like MRR (Mean Reciprocal Rank) and Hits@k. This indicates that the explanations generated by REx are not only interpretable but also highly accurate in validating drug repurposing hypotheses.
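MRR and Hits@k are the standard link-prediction metrics here: given the rank of the correct disease among all candidates for each query (rank 1 = top), MRR averages the reciprocal ranks and Hits@k is the fraction of queries answered within the top k. A small helper (illustrative, not the paper's evaluation code):

```python
def mrr_and_hits(ranks, k=10):
    """Mean Reciprocal Rank and Hits@k over a list of ranks of the
    correct answer, where rank 1 means the correct entity came first."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(1 for r in ranks if r <= k) / len(ranks)
    return mrr, hits
```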
Beyond predictive accuracy, REx’s ability to generate relevant explanations was a key highlight. Analysis showed that REx’s paths had a significantly higher information content compared to other methods, meaning they offered more specific and insightful biological details. An ablation study, where components of REx were systematically removed, confirmed the crucial roles of both the early stopping mechanism (for simplicity) and the relevance reward in achieving its superior performance.
Crucially, REx’s explanations were validated against known scientific mechanisms. In a ground-truth evaluation, a significant number of REx’s identified path types were consistent with established drug repurposing mechanisms. Moreover, domain experts, including life sciences graduates, rated REx-generated explanations as being of higher quality and more satisfactory than those from other methods, further underscoring their scientific validity and utility.
Advancing AI-Driven Scientific Discovery
REx represents a significant step forward in making AI predictions in drug discovery more transparent and trustworthy. By explicitly incorporating scientific explainability properties into its design, REx provides a powerful tool for researchers to not only identify potential drug repurposing candidates but also to understand the underlying biological rationale. This capability is vital for accelerating the translation of AI insights into real-world medical applications.