
AI Framework for Smarter Pre-Consultation in Healthcare

TLDR: A new multi-stage AI framework uses Large Language Models (LLMs) to automatically generate comprehensive pre-consultation questionnaires from Electronic Medical Records (EMRs). It breaks down EMRs into atomic facts, builds causal networks, and synthesizes disease knowledge to create tailored personal and standardized disease-specific questionnaires, significantly improving information coverage, diagnostic relevance, and generation efficiency compared to traditional methods.

Pre-consultation is a vital step in healthcare, involving the collection of essential patient information before a clinical visit. Traditionally, this process relies heavily on manual effort from healthcare professionals, which is time-consuming and often struggles to cover all necessary information comprehensively, especially given the complex and dynamic nature of individual patient conditions and medical histories.

Previous attempts to automate this process, such as rule-based systems or methods using knowledge graphs, have faced limitations. Rule-based systems struggle with the non-standardized descriptions from patients, leading to low matching rates. Knowledge graph-based systems find it challenging to construct comprehensive and up-to-date graphs for the complex medical domain, and they often struggle with the individualized nature of patient data.

The rapid advancements in Large Language Models (LLMs) offer new opportunities for processing complex, unstructured Electronic Medical Records (EMRs) and generating comprehensive, context-aware pre-consultation questionnaires. However, directly applying LLMs to this task can be difficult due to challenges in ensuring information completeness, logical order, and synthesizing disease-level knowledge from vast amounts of data.

A Novel Multi-Stage Framework

To address these challenges, researchers have proposed a novel multi-stage LLM-driven framework designed to automate the generation of comprehensive and clinically relevant pre-consultation questionnaires directly from EMRs. This framework moves beyond direct text-to-questionnaire approaches by integrating structured knowledge representation.

The framework consists of three interconnected stages:

Stage 1: Atomic Assertion Extraction

This initial stage processes raw, unstructured EMR text to identify and extract discrete, factual medical statements along with their associated timings. These are called ‘atomic assertions’ – the smallest, indivisible semantic units representing a single, concrete medical fact or observation. This process breaks down complex narratives into manageable, verifiable units, preserving semantic relationships and ensuring accuracy.
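The paper does not publish its extraction code, but the idea of decomposing EMR text into fact-plus-timing units can be sketched as follows. This is a minimal illustration assuming a hypothetical LLM prompt that returns one `fact | timing` pair per line; the `AtomicAssertion` class, the `parse_assertions` helper, and the sample response are all invented for demonstration:

```python
from dataclasses import dataclass

@dataclass
class AtomicAssertion:
    """Smallest indivisible semantic unit: one concrete medical fact plus its timing."""
    fact: str    # a single, verifiable observation
    timing: str  # when the fact holds ("on admission", "3 days before admission", ...)

def parse_assertions(llm_output: str) -> list[AtomicAssertion]:
    """Parse one 'fact | timing' pair per line from a (hypothetical) LLM response."""
    assertions = []
    for line in llm_output.strip().splitlines():
        fact, _, timing = line.partition("|")
        assertions.append(AtomicAssertion(fact.strip(), timing.strip() or "unspecified"))
    return assertions

# Illustrative response such a prompt might yield (not real patient data):
raw = """patient reports chest pain | 3 days before admission
blood pressure 150/95 mmHg | on admission
history of type 2 diabetes | unspecified"""

assertions = parse_assertions(raw)
```

Keeping each assertion this small is what makes the later stages tractable: every unit can be verified against the source text and linked into a causal network independently.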

Stage 2: Causal Network Construction

Building on the extracted atomic assertions, this stage first constructs a ‘personal causal network’ for each individual EMR. This network captures patient-specific health event dependencies, revealing how different medical conditions and events influence each other. Subsequently, by analyzing and clustering these personal causal networks from a large collection of EMRs for a specific disease, the framework synthesizes a collective ‘disease knowledge representation’. This representation reflects common clinical pathways and their prevalence for a given disease, filtering out individual peculiarities to reveal prevalent causal patterns.
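The aggregation step described above, going from many patient-specific networks to prevalent disease-level patterns, can be sketched with simple edge counting. This is an assumption about one plausible realization, not the paper's actual clustering method; the edge data and the prevalence threshold are invented for illustration:

```python
from collections import Counter

def synthesize_disease_knowledge(personal_networks, min_prevalence=0.5):
    """Aggregate patient-specific causal edges into prevalent disease-level patterns.

    personal_networks: list of sets of (cause, effect) edges, one set per EMR.
    Returns edges appearing in at least min_prevalence of patients, with their frequency,
    filtering out individual peculiarities that occur in only a few records.
    """
    n = len(personal_networks)
    counts = Counter(edge for net in personal_networks for edge in set(net))
    return {edge: c / n for edge, c in counts.items() if c / n >= min_prevalence}

# Three toy personal causal networks (hypothetical facts, for illustration only):
networks = [
    {("hyperglycemia", "polyuria"), ("hyperglycemia", "fatigue")},
    {("hyperglycemia", "polyuria"), ("neuropathy", "foot numbness")},
    {("hyperglycemia", "polyuria"), ("hyperglycemia", "fatigue")},
]
prevalent = synthesize_disease_knowledge(networks, min_prevalence=0.6)
```

Here the edge seen in all three records survives with frequency 1.0, the edge seen in two of three survives at roughly 0.67, and the one-off edge is filtered out, mirroring how the framework retains common clinical pathways while discarding individual noise.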

Stage 3: Pre-consultation Questionnaire Generation

Leveraging the structured representations from the preceding stages, this final stage generates the actual questionnaires. It produces two distinct types: ‘personal questionnaires’ for individual patients, using their specific medical facts and relationships, and ‘disease-specific questionnaires’ which are standardized tools based on the collective disease knowledge synthesized in Stage 2. The LLM translates this structured information into clear, patient-friendly questions, ensuring comprehensive coverage and logical flow.
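One way the "logical flow" requirement can be realized is by ordering questions topologically over the causal network, so that causes are asked about before their effects. The following sketch assumes hypothetical question text and causal edges; it is an illustration of the ordering idea, not the paper's generation pipeline:

```python
from graphlib import TopologicalSorter

def order_questions(causal_edges, question_for):
    """Order questions so causes are asked before their effects.

    causal_edges: iterable of (cause, effect) pairs from a causal network.
    question_for: dict mapping each node to a patient-friendly question (hypothetical).
    """
    deps = {}  # node -> set of predecessor (cause) nodes
    for cause, effect in causal_edges:
        deps.setdefault(effect, set()).add(cause)
        deps.setdefault(cause, set())
    return [question_for[node] for node in TopologicalSorter(deps).static_order()]

# Toy causal edges and patient-friendly phrasings (invented for illustration):
edges = [("chest pain", "shortness of breath"), ("hypertension", "chest pain")]
questions = {
    "hypertension": "Have you been diagnosed with high blood pressure?",
    "chest pain": "Have you experienced chest pain recently?",
    "shortness of breath": "Do you become short of breath during activity?",
}
ordered = order_questions(edges, questions)
```

Walking the network in dependency order yields a questionnaire that reads naturally: background conditions first, then symptoms that follow from them.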

Demonstrated Effectiveness and Efficiency

The framework was evaluated on a real-world EMR dataset from a general hospital in Shanghai, China. For personal questionnaire generation, the proposed framework significantly outperformed direct LLM approaches in terms of key fact coverage and diagnostic relevance. For instance, in a sample case, it covered 32 out of 38 key facts, compared to only 16 facts covered by a direct LLM approach. Quantitatively, it achieved 84.2% personal key fact coverage and a diagnostic relevance score of 8.5, far exceeding the direct LLM's 42.1% coverage and 3.2 relevance score.

When generating disease-specific questionnaires, the framework achieved quality comparable to questionnaires manually crafted by clinical experts, with a disease-specific key fact coverage of 92.2% (compared to 90.8% for manual generation) and high scores for diagnostic relevance (9.2) and understandability (9.1). Crucially, the automated framework reduced the generation time to 10.4 minutes, compared to 33.8 minutes for manual generation, a significant improvement in efficiency at comparable quality.

This innovative multi-stage LLM-driven framework demonstrates substantial practical potential for enhancing patient information collection by providing a robust, comprehensive, and efficient solution for automated pre-consultation questionnaire generation. For more details, you can refer to the research paper.

Meera Iyer (https://blogs.edgentiq.com)
Meera Iyer is an AI news editor who blends journalistic rigor with storytelling elegance. Formerly a content strategist at a leading tech firm, Meera now tracks the pulse of India's Generative AI scene, from policy updates to academic breakthroughs. She is particularly focused on bringing nuanced, balanced perspectives to the fast-evolving world of AI-powered tools and media. You can reach her at: [email protected]
