
AI Model Predicts Neonatal Brain Age with Clear Explanations

TLDR: A new research paper introduces an AI model that accurately predicts postmenstrual age (PMA) from neonatal brain MRI scans and provides clear, clinically relevant explanations for its predictions. By adapting a multimodal large language model (Qwen2.5-VL-7B) with a unique instruction tuning strategy, the model learns to perform precise age estimation while also generating interpretable insights into the developmental features informing its decisions, addressing the ‘black box’ problem in medical AI.

Accurately determining a newborn’s postmenstrual age (PMA) at the time of a scan is vital for evaluating their brain development and overall health. PMA, which combines gestational age and postnatal age, serves as a key indicator of brain maturity. While advanced deep learning models have shown great promise in predicting PMA from brain MRI scans with high accuracy, they often operate as ‘black boxes,’ meaning their decision-making process is not transparent, which can limit their adoption in clinical settings.

A recent research paper, titled Accurate and Interpretable Postmenstrual Age Prediction via Multimodal Large Language Model, addresses this challenge by introducing a novel approach that combines high prediction accuracy with clear, clinically relevant explanations. The work, led by Qifan Chen, Jin Cui, Cindy Duan, Yushuo Han, and Yifei Shi from King’s College London, Imperial College London, and Columbia University, adapts a multimodal large language model (MLLM) to achieve this dual goal.

The Innovative Approach

The researchers adapted the Qwen2.5-VL-7B model, a powerful MLLM known for its strong vision and language capabilities. Instead of training the entire model, they used a technique called Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA). This method updates only a small fraction of the model’s parameters, sharply reducing the compute and memory cost of training while mitigating ‘catastrophic forgetting’ of the model’s pre-trained knowledge.
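The low-rank update at the heart of LoRA can be sketched in a few lines. The NumPy example below is purely illustrative, not the authors’ training code: the layer sizes, rank `r`, and scaling `alpha` are arbitrary assumptions chosen to show why so few parameters need training.

```python
import numpy as np

# Frozen pretrained weight matrix (toy size; real model layers are far larger).
d_out, d_in = 8, 8
W = np.random.randn(d_out, d_in)

# LoRA trains only two small matrices A and B of rank r << d.
r, alpha = 2, 4                        # rank and scaling (illustrative choices)
A = np.random.randn(r, d_in) * 0.01    # small random init
B = np.zeros((d_out, r))               # zero init => no change before training

def lora_forward(x):
    # Frozen path plus the low-rank update, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = np.random.randn(d_in)
# With B still zero, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)

# Only r * (d_in + d_out) parameters are trainable, vs d_in * d_out frozen ones.
trainable, frozen = r * (d_in + d_out), d_in * d_out
```

During fine-tuning only `A` and `B` receive gradients, which is what keeps the adaptation cheap and the pre-trained weights `W` intact.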

A key innovation lies in their ‘instruction tuning’ strategy, which uses different prompts for training and inference. During training, the model is given four 2D cortical surface projection maps derived from neonatal MRI scans (representing cortical thickness, curvature, myelination, and sulcal depth) and asked to predict a numerical PMA. This focuses the model purely on the regression task.

However, during validation and real-world use, a different, more complex prompt is used. This prompt asks the model not only to predict the PMA but also to provide a primary reason for its prediction. Because the Qwen model is adept at following instructions, it can generalize to this new format, accurately performing the prediction while leveraging its vast pre-trained knowledge to generate meaningful explanations based on the visual features it has learned.
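The two-prompt strategy can be illustrated with a small sketch. The prompt wording and message format below are hypothetical stand-ins, not the paper’s actual instructions; they only show how a terse regression-only prompt at training time differs from an inference-time prompt that additionally requests a rationale.

```python
# Hypothetical prompt templates for the two phases (illustrative wording only).
MODALITIES = ["cortical thickness", "curvature", "myelination", "sulcal depth"]

def build_prompt(phase):
    context = (
        "You are given four 2D cortical surface projection maps from a "
        f"neonatal brain MRI: {', '.join(MODALITIES)}."
    )
    if phase == "train":
        # Training: pure regression -- ask only for the number.
        task = "Predict the postmenstrual age (PMA) in weeks."
    else:
        # Inference: same prediction, plus a clinically readable rationale.
        task = ("Predict the postmenstrual age (PMA) in weeks and state the "
                "primary developmental feature supporting your estimate.")
    return {"role": "user", "content": f"{context} {task}"}

train_msg = build_prompt("train")
infer_msg = build_prompt("infer")
assert "primary" not in train_msg["content"]   # training asks only for the value
assert "primary" in infer_msg["content"]       # inference also asks for a reason
```

Because the instruction-following model was never trained on the longer prompt, its ability to answer it correctly at inference time is what the paper relies on for generalization.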

Promising Results and Clinical Impact

The fine-tuned model achieved a low prediction error, with a 95% confidence interval of 0.78–1.52 weeks. While its average performance was competitive with existing specialized models, the ability to provide interpretable outputs is its standout feature. For instance, in successful cases, the model’s explanations accurately identified developmental features like ‘advanced cortical thickness, curvature, and sulcal depth’ and ‘deep, well-defined sulci’ as reasons for its age prediction, aligning with expert observations.

A user study involving clinical practitioners further validated the model’s utility. Clinicians rated the model’s explanations highly for interpretability, acceptance, and completeness, with no instances of ‘hallucination’ (generating factually incorrect information) noted. All participating clinicians deemed the prediction errors clinically acceptable, highlighting the model’s practical value.

Looking Ahead

The current work uses 2D projections, which may lose some information compared to full 3D analysis, and the explanations still require broader clinical validation. Even so, this research marks a significant step towards transparent and trustworthy AI systems in perinatal neuroscience. Future work could incorporate textual features from patient medical records alongside visual data, and handle temporal sequences of scans for more accurate predictions and deeper insight into developmental trends.

This framework represents a robust step towards building the next generation of interactive AI tools for clinical medicine, fostering greater trust and potentially aiding in training and diagnostics by moving beyond ‘black box’ AI.

Ananya Rao
Ananya Rao is a tech journalist with a passion for dissecting the fast-moving world of Generative AI. With a background in computer science and a sharp editorial eye, she connects the dots between policy, innovation, and business. Ananya excels in real-time reporting and specializes in uncovering how startups and enterprises in India are navigating the GenAI boom. She brings urgency and clarity to every breaking news piece she writes. You can reach out to her at: [email protected]
