TLDR: UNIPHY+Lab is a new framework that uses continuous, non-invasive photoplethysmogram (PPG) signals to estimate clinical lab test results like potassium, glucose, and lactate. It combines a PPG foundation model for local signal encoding with a patient-aware Mamba state-space model for long-term temporal tracking and personalized predictions. Evaluated on ICU datasets, UNIPHY+Lab significantly outperforms traditional methods, demonstrating the potential for continuous, personalized biochemical monitoring in critical care.
Clinical laboratory tests are vital for diagnosing and treating patients, providing crucial biochemical measurements. However, these tests are often invasive and only offer intermittent snapshots of a patient’s health, making it difficult to track rapid changes in physiological parameters. Imagine if we could continuously monitor these critical lab values without the need for frequent blood draws.
A new research paper introduces UNIPHY+Lab, an innovative framework designed to estimate clinical lab test result trajectories directly from continuous photoplethysmogram (PPG) signals. PPG is a non-invasive signal, routinely collected in intensive care units (ICUs) through bedside sensors, that reflects cardiovascular dynamics. This approach offers a pathway toward continuous, personalized biochemical surveillance in critical care settings.
The Challenge of Continuous Lab Monitoring
Current lab tests, such as those for electrolytes, lactate, and blood glucose, are essential but come with limitations. They require blood samples, which can be painful and technically challenging, especially for arterial draws. Even in ICUs, where tests are more frequent, their temporal sparsity means rapid physiological shifts related to blood flow, metabolism, or acid-base balance can be missed. While deep learning models have shown promise in using PPG for disease detection and estimating parameters like heart rate, extending these capabilities to formal clinical lab test estimation, especially for long-term trends, has been a significant hurdle.
Introducing UNIPHY+Lab
The UNIPHY+Lab framework addresses this gap by combining several advanced techniques. It uses a large-scale PPG foundation model for encoding local waveform information, capturing instantaneous cardiovascular features. This is then integrated with a patient-aware Mamba model, a type of state-space model (SSM), for long-range temporal modeling. The architecture tackles three main challenges:
- Capturing extended temporal trends in laboratory values.
- Accounting for patient-specific baseline variations using FiLM-modulated initial states.
- Performing multi-task estimation for interrelated biomarkers.
How UNIPHY+Lab Works
The framework comprises three core components. First, a pre-trained PPG foundation model (specifically, PPG-GPT) acts as a local feature encoder. It segments continuous PPG waveforms into short windows (e.g., 30 seconds) and extracts representation embeddings, capturing local morphology and short-range waveform dynamics. These localized features provide immediate cardiovascular context.
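The windowing-and-encoding step can be sketched as follows. This is a minimal illustration, not the paper's implementation: `segment_ppg` and `encode_windows` are hypothetical names, the 125 Hz sampling rate and 64-dimensional embeddings are assumptions, and the random projection stands in for an actual PPG-GPT forward pass.

```python
import numpy as np

def segment_ppg(signal: np.ndarray, fs: int = 125, window_s: int = 30) -> np.ndarray:
    """Split a 1-D PPG waveform into non-overlapping fixed-length windows.

    Trailing samples that do not fill a complete window are dropped.
    """
    win = fs * window_s
    n_windows = len(signal) // win
    return signal[: n_windows * win].reshape(n_windows, win)

def encode_windows(windows: np.ndarray, embed_dim: int = 64) -> np.ndarray:
    """Stand-in for the foundation-model encoder: one embedding per window.

    A real PPG-GPT forward pass would replace this fixed random projection.
    """
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((windows.shape[1], embed_dim)) / np.sqrt(windows.shape[1])
    return windows @ proj  # shape: (n_windows, embed_dim)

# Ten minutes of synthetic PPG at 125 Hz -> twenty 30-second windows.
ppg = np.sin(np.linspace(0, 2 * np.pi * 600, 125 * 600))
emb = encode_windows(segment_ppg(ppg))
print(emb.shape)  # (20, 64)
```

The resulting sequence of per-window embeddings is what the downstream temporal model consumes.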
Next, a stack of Mamba-based state-space model (SSM) blocks handles long-range continuous modeling. These SSM blocks maintain internal hidden states that evolve over time, allowing the model to accumulate long-range dependencies crucial for predicting lab values that change over minutes to hours.
A key innovation is the Patient Conditioning Initial State (PCS) module. Physiological waveforms are highly individual, so UNIPHY+Lab incorporates a learnable initial state for the SSM, modulated by patient-specific information. This initial state is derived from embeddings of the patient's historical lab averages together with mean-pooled waveform embeddings randomly sampled from the same patient's history. This personalization allows the model to adapt its trajectory dynamics to the patient's unique context.
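FiLM (feature-wise linear modulation) conditions a feature vector with a learned per-dimension scale and shift. A minimal sketch of how a patient embedding might modulate the SSM initial state is below; the function name, shapes, and projection matrices are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def film_initial_state(base_state: np.ndarray,
                       patient_embedding: np.ndarray,
                       W_gamma: np.ndarray,
                       W_beta: np.ndarray) -> np.ndarray:
    """FiLM modulation of a learnable SSM initial state.

    gamma and beta are linear projections of a patient-context embedding
    (in the paper: historical lab averages plus pooled waveform
    embeddings). The conditioned state h0 then seeds the SSM recurrence.
    """
    gamma = patient_embedding @ W_gamma   # per-dimension scale, shape (d,)
    beta = patient_embedding @ W_beta     # per-dimension shift, shape (d,)
    return gamma * base_state + beta
```

Because the modulation is only a scale and shift, two patients share the same recurrence weights but start the trajectory from different, personalized states.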
Finally, for multiple lab estimation, UNIPHY+Lab uses a multi-task prediction strategy. Each target biomarker (like Potassium, Calcium, Sodium, Glucose, and Lactate) is modeled with a distinct task layer, enabling specialization in both the statistical distribution and temporal dynamics of each biomarker. This multi-task learning also leverages inter-lab correlations, improving robustness.
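Structurally, multi-task prediction means a shared temporal backbone feeding one output head per biomarker. The sketch below uses simple linear heads; the paper's task layers may be deeper, and the class name and dimensions here are assumptions for illustration.

```python
import numpy as np

BIOMARKERS = ["potassium", "calcium", "sodium", "glucose", "lactate"]

class MultiTaskHeads:
    """One prediction head per biomarker on top of shared temporal features.

    Each head can specialize to its biomarker's value distribution and
    dynamics, while the shared backbone lets correlated labs inform
    each other during training.
    """

    def __init__(self, feat_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # weight vector and scalar bias per biomarker
        self.heads = {
            name: (rng.standard_normal(feat_dim) / np.sqrt(feat_dim), 0.0)
            for name in BIOMARKERS
        }

    def predict(self, features: np.ndarray) -> dict[str, np.ndarray]:
        """features: (T, feat_dim) -> one length-T trajectory per biomarker."""
        return {name: features @ w + b for name, (w, b) in self.heads.items()}
```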
Evaluation and Results
The researchers evaluated UNIPHY+Lab on two large-scale ICU datasets: one from an academic medical institute (3,796 patients) and the MIMIC-III Waveform Database Matched Subset (4,146 patients). They focused on five key biomarkers: Potassium, Calcium, Sodium, Glucose, and Lactate.
The results showed substantial improvements over traditional baselines, including a Long Short-Term Memory (LSTM) model and a "last observation carried forward" (LOCF) method. UNIPHY+Lab, especially when combining the Patient Conditioning Initial State (PCS) and multi-task learning, consistently achieved the highest R² scores and lowest error metrics (mean absolute error, MAE; root-mean-square error, RMSE). This indicates its ability to preserve long-term temporal coherence while accounting for individual-specific waveform characteristics.
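For reference, the three metrics reported have standard definitions, computed here in a small helper:

```python
import numpy as np

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MAE, RMSE, and R² for a predicted lab-value trajectory.

    R² = 1 - SS_res / SS_tot; it is 1 for perfect predictions and 0 for
    a model no better than predicting the mean.
    """
    err = y_true - y_pred
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return {"MAE": mae, "RMSE": rmse, "R2": 1.0 - ss_res / ss_tot}
```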
Interestingly, for sodium, which tends to be more temporally stable in ICU patients, LOCF already performed very well. While UNIPHY+Lab with PCS narrowed the gap, it didn’t substantially surpass LOCF, suggesting less added value for biomarkers with low short-term variability.
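LOCF is as simple as a baseline gets: predict, at any query time, the most recent measured value. That is exactly why it is hard to beat for a slow-moving biomarker like sodium. A minimal implementation (function name and signature are my own):

```python
import numpy as np

def locf(values, times, query_times):
    """Last-observation-carried-forward prediction.

    values[i] was measured at times[i]; both arrays sorted by time.
    Query times before the first measurement carry the first value.
    """
    # index of the latest measurement at or before each query time
    idx = np.searchsorted(times, query_times, side="right") - 1
    idx = np.clip(idx, 0, len(values) - 1)
    return np.asarray(values)[idx]
```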
Future Implications
This work demonstrates the feasibility of continuous, personalized lab value estimation from routine PPG monitoring. It offers a significant step towards non-invasive biochemical surveillance in critical care, potentially allowing clinicians to track patient status more dynamically and intervene earlier. Future work includes analyzing the impact of foundation model scale and incorporating additional physiological signals like ECG for an even richer multi-modal representation.


