TLDR: EASE is a novel framework for real-time fake news detection that dynamically adapts its strategy based on the sufficiency of available evidence. It employs a sequential evaluation mechanism with three perspectives: evidence-based, reasoning-based (leveraging LLM world knowledge), and sentiment-based as a fallback. The framework introduces instruction tuning with pseudo labels to enhance evaluator accuracy and interpretability. Additionally, the paper presents RealTimeNews-25, a new benchmark dataset of recent news for evaluating model generalization under evidence scarcity. EASE achieves state-of-the-art performance on historical news and significantly improves generalization to real-time news.
In today’s fast-paced digital world, misinformation spreads rapidly, making it incredibly difficult to distinguish between real and fake news, especially when events are unfolding in real-time. This challenge is amplified by the scarcity of reliable evidence for new and emerging stories. Traditional methods of fake news detection often fall short because they rely heavily on readily available external evidence, which is frequently absent or unreliable for breaking news.
A new research paper, “Towards Real-Time Fake News Detection under Evidence Scarcity,” introduces an innovative solution called Evaluation-Aware Selection of Experts (EASE). Developed by a team of researchers including Guangyu Wei, Ke Han, Yueming Lyu, Yu Luo, Yue Jiang, Caifeng Shan, and Nicu Sebe, EASE is a framework designed to tackle the problem of real-time fake news detection even when supporting evidence is scarce.
Understanding the EASE Framework
EASE operates through a clever sequential evaluation process that adapts its decision-making based on how much reliable evidence is available. It employs three distinct “experts” or perspectives:
- Evidence-based evaluation: This is the primary approach. EASE first attempts to find and assess external evidence from the web. If the evidence is strong and sufficient, an “evidence expert” uses it to determine the news’s authenticity. This involves an “evidence agent” that iteratively searches and summarizes web pages, and an “evidence evaluator” that checks the consistency, sufficiency, and credibility of the retrieved information.
- Reasoning-based evaluation: If external evidence is insufficient or unreliable, EASE turns to the vast “world knowledge” embedded within large language models (LLMs). A “reasoning agent” generates logical inferences, and a “reasoning evaluator” then scrutinizes the reliability of this internal reasoning before a “reasoning expert” makes a decision. This helps when direct facts are hard to find but common sense or logical deduction can be applied.
- Sentiment-based fallback: As a last resort, if neither external evidence nor internal reasoning proves reliable, EASE activates a “sentiment expert.” This expert analyzes the emotional tone, subjectivity, and stylistic cues within the news content itself. Often, fake news uses exaggerated, provocative, or biased language to manipulate readers, and this expert is trained to identify such patterns.
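The sequential fallback described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual implementation: the function names (`retrieve_evidence`, `evidence_reliable`, and so on) are hypothetical stand-ins for EASE's agents, evaluators, and experts, passed in as callables so the control flow is the only thing the sketch commits to.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    label: str        # "real" or "fake"
    perspective: str  # which expert produced the verdict
    rationale: str    # interpretable justification

def select_expert(
    news: str,
    retrieve_evidence: Callable[[str], str],
    evidence_reliable: Callable[[str, str], bool],
    evidence_expert: Callable[[str, str], Verdict],
    generate_reasoning: Callable[[str], str],
    reasoning_reliable: Callable[[str, str], bool],
    reasoning_expert: Callable[[str, str], Verdict],
    sentiment_expert: Callable[[str], Verdict],
) -> Verdict:
    """Fall back sequentially: evidence -> reasoning -> sentiment."""
    # 1. Prefer external evidence when the evaluator deems it
    #    consistent, sufficient, and credible.
    evidence = retrieve_evidence(news)
    if evidence_reliable(news, evidence):
        return evidence_expert(news, evidence)

    # 2. Otherwise, try LLM world knowledge: generate inferences and
    #    check their reliability before trusting them.
    reasoning = generate_reasoning(news)
    if reasoning_reliable(news, reasoning):
        return reasoning_expert(news, reasoning)

    # 3. Last resort: judge emotional tone and stylistic cues of the
    #    news content itself.
    return sentiment_expert(news)
```

The point of the structure is that each knowledge source is vetted by its own evaluator before its expert is allowed to decide, so an unreliable source never silently drives the verdict.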
A key aspect of EASE is its use of instruction tuning with pseudo labels. This means that the evaluators are guided by high-quality, machine-generated assessments to learn how to justify their decisions with interpretable reasoning, making the system more transparent and reliable.
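One way to picture instruction tuning with pseudo labels is as a data-construction step: a strong model's assessments (the pseudo labels, paired with rationales) become the supervision targets that teach the evaluator to justify its judgments. The sketch below is a plausible record format under that assumption; the field names and prompt wording are illustrative, not taken from the paper.

```python
def build_tuning_record(news: str, evidence: str,
                        pseudo_label: str, rationale: str) -> dict:
    """Format one instruction-tuning example for an evidence evaluator.

    The target output pairs a reliability judgment (the pseudo label)
    with an interpretable rationale, so the tuned evaluator learns to
    explain *why* evidence is or is not sufficient.
    """
    return {
        "instruction": (
            "Assess whether the retrieved evidence is consistent, "
            "sufficient, and credible for verifying the news item. "
            "Give a judgment and a brief rationale."
        ),
        "input": f"News: {news}\nEvidence: {evidence}",
        "output": f"Judgment: {pseudo_label}\nRationale: {rationale}",
    }
```

Training the evaluator on such records is what makes its decisions transparent: the rationale is part of the supervised output, not a post-hoc explanation.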
A New Benchmark for Real-Time News
To properly test and advance research in this challenging area, the authors also introduced RealTimeNews-25. This is a new benchmark dataset comprising 3,487 recent news articles collected between June 2024 and September 2025. Unlike older datasets, RealTimeNews-25 focuses on emerging events where evidence is often limited, providing a more realistic scenario for evaluating real-time fake news detection models. The dataset even masks the true source of news during evidence retrieval to simulate real-world conditions where authoritative sources might not be immediately available.
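The source-masking idea can be illustrated with a small helper: before evidence retrieval, mentions of the originating outlet are scrubbed from the query so the retriever cannot shortcut verification by simply finding the authoritative source. This is a hypothetical sketch of the idea, not the dataset's actual preprocessing code.

```python
import re

def mask_source(query: str, source_domains: list[str]) -> str:
    """Replace mentions of the originating outlet with a placeholder,
    simulating retrieval conditions where the authoritative source is
    not yet available."""
    masked = query
    for domain in source_domains:
        masked = re.sub(re.escape(domain), "[SOURCE]", masked,
                        flags=re.IGNORECASE)
    return masked
```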
Impressive Results and Real-World Application
Extensive experiments showed that EASE not only achieved state-of-the-art performance on existing historical news benchmarks but also significantly improved its ability to generalize to real-time news with limited evidence. This demonstrates its effectiveness in practical, time-sensitive situations.
An open-world case study further highlighted EASE’s robustness. For example, it correctly identified a fake news story about Venezuela’s capital by leveraging reasoning knowledge to spot a factual inconsistency (Alaskan vs. Caracas). In another instance, it classified a highly emotional post about a power outage as fake by recognizing exaggerated and biased language through its sentiment expert, even when evidence and reasoning were inconclusive.
While the research acknowledges limitations, such as the inherent difficulty in collecting real-time news with high evidence scarcity and potential challenges with objectively written news lacking strong emotional cues, EASE represents a significant step forward. By dynamically evaluating evidence quality and adaptively selecting the most trustworthy knowledge, EASE offers a robust and interpretable framework for combating misinformation in real-time scenarios.