TLDR: SerialBEHRT is a new AI foundation model designed to better understand Electronic Health Records (EHRs) by converting them into a serialized text format. It combines pre-training on scientific literature and these serialized EHRs, allowing it to capture temporal and contextual relationships in patient data. Evaluated on antibiotic susceptibility prediction, SerialBEHRT demonstrated superior and more consistent performance compared to existing models and representation strategies, highlighting the importance of domain-aligned pre-training for healthcare AI.
Foundation models, a significant advancement in artificial intelligence, are increasingly being applied across various fields, including healthcare. These powerful models are adept at learning complex representations from vast datasets, typically through self-supervised pre-training. In healthcare, they hold immense promise for tasks like decision-making, summarization, and automated coding, leveraging large-scale clinical data.
However, a key challenge in applying these models to Electronic Health Records (EHRs) is the inherent structural mismatch between the tabular, event-based nature of EHRs and the sequential processing capabilities of natural language models. This discrepancy often hinders their ability to capture the crucial longitudinal dependencies and temporal relationships within a patient’s health journey.
Addressing this challenge, researchers have introduced SerialBEHRT, a novel domain-aligned foundation model. SerialBEHRT extends the existing SciBERT model by incorporating an additional pre-training phase on structured EHR sequences. The core idea behind SerialBEHRT is to convert complex tabular EHR data into a serialized textual format. This transformation allows the model to process EHRs as sequences, aligning them with the sequential priors of natural language models.
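To make the serialization idea concrete, here is a minimal sketch of turning timestamped tabular EHR events into one text sequence. The event schema and field names (`time`, `type`, `code`, `value`) are illustrative assumptions, not the paper's actual serialization format:

```python
# Minimal sketch of serializing tabular EHR events into text. The column
# names and [SEP]-joined layout are hypothetical; the paper's exact
# serialization scheme may differ.
def serialize_ehr(events):
    """Convert timestamped EHR event dicts into one text sequence,
    ordered by time so the model sees the patient's trajectory."""
    ordered = sorted(events, key=lambda e: e["time"])
    parts = []
    for e in ordered:
        parts.append(f"[{e['type']}] {e['code']} {e.get('value', '')}".strip())
    return " [SEP] ".join(parts)

# Toy patient: a culture result followed by an antibiotic order.
events = [
    {"time": 2, "type": "RX", "code": "vancomycin"},
    {"time": 1, "type": "LAB", "code": "blood_culture", "value": "positive"},
]
print(serialize_ehr(events))
# -> [LAB] blood_culture positive [SEP] [RX] vancomycin
```

The key design point is the time-ordering step: once events are linearized chronologically, a sequence model's positional machinery can pick up the longitudinal dependencies the article describes.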
The pre-training corpus for SerialBEHRT is unique, combining scientific texts from Semantic Scholar with serialized EHRs sourced from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database. This dual-source approach is hypothesized to provide a more comprehensive understanding, as scientific texts offer domain-specific knowledge, while serialized EHRs provide granular, patient-specific clinical information often absent from standard medical literature. To accommodate the new EHR-specific terminology, SerialBEHRT’s tokenizer was expanded, integrating a domain-specific vocabulary of 10,000 tokens with SciBERT’s original vocabulary, resulting in a total of 42,000 unique tokens.
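The vocabulary expansion can be sketched as a simple merge that adds only genuinely new domain tokens while preserving the base tokenizer's existing ids. The token strings below are made up for illustration, not drawn from SciBERT's actual vocabulary:

```python
# Sketch of extending a base tokenizer vocabulary with domain-specific EHR
# tokens. Vocabulary contents are illustrative only.
def extend_vocab(base_vocab, domain_tokens):
    """Append domain tokens not already present, keeping base token ids stable."""
    vocab = dict(base_vocab)  # token -> id
    for tok in domain_tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)  # assign the next free id
    return vocab

base = {"the": 0, "patient": 1, "fever": 2}
domain = ["icd_A41.9", "fever", "rx_vancomycin"]  # "fever" already exists
merged = extend_vocab(base, domain)
print(len(merged))  # -> 5: only the two genuinely new tokens were added
```

In practice, libraries such as Hugging Face Transformers do this via `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(len(tokenizer))`, so the new EHR tokens get fresh embedding rows while pre-trained embeddings stay intact.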
The model utilizes Masked Language Modeling (MLM) as its primary pre-training objective. In this process, a portion of the tokens from both scientific texts and serialized EHRs is randomly masked, and the model is tasked with predicting the masked tokens from their surrounding context. This self-supervised learning technique helps SerialBEHRT capture both scientific and EHR-specific language, including clinical concepts such as ICD codes.
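The masking step of the MLM objective can be sketched as follows. This is a simplified version: full BERT-style MLM also replaces some selected tokens with random tokens or leaves them unchanged, which is omitted here:

```python
import random

# Simplified sketch of MLM masking: hide ~15% of tokens and record the
# targets the model must reconstruct from context. Real BERT-style MLM
# also uses random-token and keep-original variants for masked positions.
def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok  # ground truth the model must predict
        else:
            masked.append(tok)
    return masked, targets

tokens = "patient blood_culture positive rx vancomycin".split()
masked, targets = mask_tokens(tokens)
```

Because the same masking procedure is applied to serialized EHR sequences as to scientific text, the model learns to reconstruct clinical events (a masked drug name, a masked lab value) from the rest of the patient's trajectory.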
SerialBEHRT’s effectiveness was rigorously evaluated on the clinically significant task of antibiotic susceptibility prediction: predicting which antibiotics are suitable for an individual patient, a critical aspect of antibiotic stewardship. The model was benchmarked against several state-of-the-art EHR representation strategies and prominent medical foundation models, including Bio_ClinicalBERT, BioMegatron, MedBERT, and SciBERT, as well as baseline representations such as raw tabular data, Word2Vec embeddings, and DistilBERT.
The results demonstrated that SerialBEHRT consistently achieved superior and more stable performance across various metrics, such as F1 score, ROC-AUC, and PRC-AUC. Its strong performance, particularly in PRC-AUC, highlights its ability to handle class imbalance and identify rare resistant cases, which is vital in real-world clinical settings. This stability underscores the benefits of its temporally ordered EHR pre-training, which more effectively encodes patient trajectories compared to static text embeddings.
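For reference, the F1 score named above balances precision and recall, which matters when resistant cases are rare. A minimal sketch with made-up labels (not the paper's evaluation data):

```python
# Illustrative F1 computation; the labels and predictions are toy values,
# not results from the paper.
def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Imbalanced toy labels: 2 positive (resistant) cases out of 8.
y_true = [1, 0, 0, 0, 1, 0, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 0]
print(f1_score(y_true, y_pred))  # -> 0.5 (precision 0.5, recall 0.5)
```

PRC-AUC extends this idea across all decision thresholds: unlike ROC-AUC, it does not reward correct predictions on the abundant negative class, which is why it is the more informative metric when positives are rare.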
This research suggests that optimizing the representation strategy for the unique structure of healthcare data yields greater gains than merely expanding architectural size. SerialBEHRT represents a step towards modality-aligned foundation modeling in clinical machine learning, where inductive biases rooted in the temporal and compositional dynamics of EHRs are central to effective generalization and interpretability. For more detailed information, refer to the full research paper.


