TLDR: A study fine-tuned Llama 3.2 large language models to accurately extract vaccine mentions from emergency department triage notes. The approach outperformed prompt-only and rule-based methods, and model quantization lets it run even in resource-constrained environments, enabling efficient, near real-time vaccine safety surveillance and earlier detection of adverse events.
A recent study explores how advanced artificial intelligence can significantly improve the way we monitor vaccine safety. Researchers from the Murdoch Children’s Research Institute, the Department of Health, Victoria, and The University of Melbourne have developed a new method to quickly identify vaccine-related information from emergency department (ED) triage notes. This innovative approach aims to enhance near real-time surveillance of adverse events following immunization (AEFI).
Traditionally, monitoring vaccine safety relies on passive surveillance, which can be slow. This study introduces a more proactive method by leveraging Natural Language Processing (NLP) and Large Language Models (LLMs) to extract crucial details from unstructured clinical notes. These ED triage notes, often written quickly by nurses, contain valuable information about patients’ presenting complaints and medical history, including potential reactions to vaccines.
The research focused on fine-tuning Llama 3.2 models, specifically the 1 billion and 3 billion parameter versions, to extract vaccine names from these notes. The process involved an initial phase of prompt engineering to create a labeled dataset, which was then carefully reviewed and confirmed by human experts. This human validation step was crucial to ensure the accuracy of the data used for training the AI models.
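The labeling phase described above amounts to wrapping each triage note in an extraction prompt, running the LLM over it, and having human experts confirm the output. A minimal sketch of such a prompt template follows; the instruction wording and the `build_labeling_prompt` helper are illustrative assumptions, not the study's actual prompt.

```python
def build_labeling_prompt(triage_note: str) -> str:
    """Wrap a triage note in an extraction instruction for an LLM
    labeling pass. The wording here is hypothetical; the study's
    actual prompts are not given in this summary."""
    return (
        "You are extracting vaccine names from an emergency department "
        "triage note. List every vaccine mentioned, or NONE if there is "
        "no vaccine mention.\n\n"
        f"Triage note: {triage_note}\n"
        "Vaccines:"
    )

prompt = build_labeling_prompt(
    "2yo M, fever and irritability, received MMR vaccine yesterday"
)
print(prompt)
```

The model's answers to prompts like this would then be reviewed by human annotators before being used as fine-tuning data.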
The study compared the performance of these fine-tuned Llama models against models that only used prompt engineering and a traditional rule-based approach. The results were promising. The fine-tuned Llama 3 billion parameter model showed superior accuracy in identifying vaccine names. Both fine-tuned Llama models (1B and 3B) demonstrated better performance than the rule-based method, especially in reducing false negatives, meaning they were better at catching actual vaccine mentions.
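The false-negative comparison can be made concrete with a small entity-level evaluation sketch. The gold labels and predictions below are invented for illustration; only the metric definitions (true/false positives and false negatives per note, then precision and recall) reflect standard practice.

```python
def evaluate(gold: list[set[str]], predicted: list[set[str]]) -> dict:
    """Count entity-level true positives, false positives and false
    negatives across a batch of notes, then derive precision/recall."""
    tp = fp = fn = 0
    for g, p in zip(gold, predicted):
        tp += len(g & p)   # vaccine mentions correctly extracted
        fp += len(p - g)   # spurious extractions
        fn += len(g - p)   # missed vaccine mentions
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall}

# Invented examples: gold annotations vs. one model's extractions.
gold = [{"influenza"}, {"mmr"}, set()]
pred = [{"influenza"}, set(), set()]   # this model missed the MMR mention
metrics = evaluate(gold, pred)
print(metrics)
```

A missed mention shows up here as a false negative, which drags recall down; this is the failure mode the fine-tuned models reduced relative to the rule-based baseline.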
One key finding was the effectiveness of model quantization. This technique allows the large language models to be deployed efficiently even in environments with limited computing resources, such as on standard consumer-grade GPUs. This means the technology can be more widely adopted without requiring expensive, high-end hardware.
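The idea behind quantization can be shown in miniature: mapping 32-bit float weights onto 8-bit integers plus a scale factor cuts memory roughly fourfold at the cost of a small rounding error. This is a toy sketch of symmetric int8 quantization, not the specific scheme the study used.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: store each weight as round(w / scale),
    where the scale maps the largest-magnitude weight to 127."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.45, 0.003]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, max_err)
```

Each weight now needs one byte instead of four, and the worst-case rounding error is bounded by half the scale step, which is why quantized LLMs fit on consumer-grade GPUs with little accuracy loss.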
The researchers highlighted that while the rule-based method also performed well, it demanded significantly more development effort and ongoing maintenance compared to the AI models. The fine-tuned LLMs, on the other hand, showed greater adaptability to changes in vaccine-related ED data and are also suitable for other tasks like identifying symptoms.
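For contrast, a rule-based extractor is typically a hand-curated pattern list like the sketch below: every new vaccine brand name, abbreviation, or common misspelling means another pattern to write and maintain, which is the ongoing effort the researchers note. The pattern list here is illustrative, not the study's actual ruleset.

```python
import re

# Hand-curated patterns: each new brand name, abbreviation or
# misspelling must be added manually, which is the maintenance cost.
VACCINE_PATTERNS = {
    "influenza": r"\b(flu|influenza)\s*(vax|vaccine|shot)?\b",
    "mmr": r"\bmmr\b",
    "covid-19": r"\b(covid(-19)?|pfizer|moderna)\s*(vax|vaccine|booster)?\b",
}

def extract_vaccines(note: str) -> set[str]:
    """Return the canonical vaccine names whose patterns match the note."""
    text = note.lower()
    return {name for name, pattern in VACCINE_PATTERNS.items()
            if re.search(pattern, text)}

found = extract_vaccines("Fever onset 6h after flu shot; also had MMR last month")
print(found)
```

A fine-tuned LLM sidesteps this pattern upkeep: phrasing it has never seen verbatim can still be recognized from context, which is the adaptability advantage described above.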
In conclusion, this study demonstrates the significant potential of large language models to automate data extraction from emergency department notes. Such automation can support efficient vaccine safety surveillance and enable earlier detection of emerging adverse events following immunization. The findings point to a growing role for AI in public health monitoring, making vaccine safety surveillance more robust and responsive. You can read the full research paper for more details.


