TLDR: The ERDES dataset is the first open-access collection of ocular ultrasound video clips specifically labeled for retinal detachment (RD) and macular status (macula-intact vs. macula-detached). This dataset, along with benchmark models and a two-stage diagnostic pipeline, aims to facilitate the development of machine learning algorithms for automated, accurate, and timely diagnosis of this vision-threatening condition, addressing both the shortage of expertise in ultrasound interpretation and the absence of public datasets.
Retinal detachment (RD) is a serious eye condition where the light-sensitive tissue at the back of the eye separates from its supporting layers. If not treated quickly, it can lead to permanent vision loss. A critical factor in determining both the urgency of treatment and the potential for preserving vision is the status of the macula—the central part of the retina responsible for sharp, detailed vision. Whether the macula is still attached (macula-intact) or has also detached (macula-detached) significantly impacts a patient’s visual outcome.
Point-of-care ultrasound (POCUS) is a fast, non-invasive, and accessible imaging method widely used to detect retinal detachment. However, interpreting these ultrasound images requires specialized expertise, which isn’t always readily available, especially in areas with limited resources. While deep learning offers a promising path to automate this interpretation, there haven’t been any machine learning algorithms approved for clinical use to detect RD, nor has there been prior research specifically on assessing macular status using ultrasound in RD cases. Crucially, no public dataset existed to support the development of machine learning models for macular-based RD classification using ultrasound video clips.
To address this significant gap, researchers have introduced the Eye Retinal DEtachment ultraSound, or ERDES, dataset. This groundbreaking open-access dataset is the first of its kind, offering ocular ultrasound clips meticulously labeled for two key aspects: the presence or absence of retinal detachment, and, for cases with RD, whether the macula is intact or detached. The primary goal of ERDES is to accelerate the development and evaluation of machine learning models designed to detect retinal detachment and assess macular status.
The ERDES dataset comprises 5,381 ultrasound clips, totaling approximately 5 hours and 10 minutes of video data, collected from unique participants. These clips are organized into a clear hierarchical structure, categorizing eyes as without retinal detachment (Non-RD), posterior vitreous detachment (PVD), or retinal detachment (RD). Within the RD category, clips are further subdivided into Macula_Detached and Macula_Intact, reflecting the critical clinical distinction. The data was acquired using various ultrasound devices by physicians trained in POCUS at the University of Arizona, following a standardized scanning protocol. All patient information was thoroughly de-identified to ensure privacy and compliance with HIPAA guidelines.
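The hierarchy described above (Non-RD / PVD at the top level, with RD split into macular subcategories) lends itself to deriving labels directly from file paths. The sketch below assumes folder names matching the category names in the text; the dataset's actual on-disk layout may differ.

```python
from pathlib import PurePosixPath

def labels_from_path(clip_path: str) -> dict:
    """Derive (RD, macular-status) labels from a clip's location in the
    assumed ERDES-style folder hierarchy. Folder names are illustrative."""
    parts = PurePosixPath(clip_path).parts
    if "RD" not in parts:
        # Non-RD and PVD clips carry no macular label.
        return {"rd": False, "macula": None}
    macula = "detached" if "Macula_Detached" in parts else "intact"
    return {"rd": True, "macula": macula}
```

For example, `labels_from_path("ERDES/RD/Macula_Intact/clip001.mp4")` yields `{"rd": True, "macula": "intact"}`, while a clip under `Non-RD` or `PVD` yields `{"rd": False, "macula": None}`.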
Before being released, all ultrasound clips underwent a crucial preprocessing step. Device-generated text overlays, which are irrelevant for automated classification, were removed. A YOLOv8-based approach was used to automatically localize the region of interest—the ocular globe—in each video frame, ensuring that classification models focus solely on the relevant anatomical structures.
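The cropping step of this preprocessing can be sketched as a small helper. In the actual pipeline the bounding box would come from the YOLOv8 globe detector; here the box is passed in directly so the cropping logic stands alone, with a padding margin and clamping to the frame borders (both assumptions, not details from the paper).

```python
import numpy as np

def crop_globe(frame: np.ndarray, box: tuple, pad: int = 8) -> np.ndarray:
    """Crop a video frame to a detector bounding box (x1, y1, x2, y2),
    expanded by `pad` pixels and clamped to the frame boundaries."""
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = box
    x1, y1 = max(0, x1 - pad), max(0, y1 - pad)
    x2, y2 = min(w, x2 + pad), min(h, y2 + pad)
    return frame[y1:y2, x1:x2]
```

Applied per frame, this keeps only the ocular globe so downstream classifiers never see device text overlays or surrounding background.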
The researchers also provided baseline benchmarks using multiple spatiotemporal convolutional neural network (CNN) architectures. They trained and evaluated eight different deep learning models for two binary classification tasks: distinguishing Non-RD from RD, and classifying Macula-intact from Macula-detached cases. The tested architectures included 3D ResNet, 3D U-Net, and several other U-Net variants. Performance was measured using standard metrics: accuracy, precision, sensitivity, specificity, and F1-score. Among the evaluated models, the 3D U-Net architecture demonstrated superior sensitivity for both tasks, making it a strong candidate for clinical application.
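The evaluation metrics listed above all follow from the binary confusion matrix. A minimal reference implementation (not the authors' evaluation code) looks like this:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the standard binary-classification metrics used in the
    ERDES benchmarks from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # a.k.a. recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity,
            "f1": f1}
```

Sensitivity is the clinically critical metric here: a missed detachment (false negative) risks irreversible vision loss, which is why the 3D U-Net's higher sensitivity matters more than raw accuracy.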
A Two-Stage Diagnostic Approach
Based on their benchmarking results, the team designed a cascaded two-stage classification pipeline that mirrors real-world clinical decision-making. The first stage focuses on detecting the presence or absence of retinal detachment. If RD is detected, the pipeline proceeds to a second stage to assess macular involvement, determining if the macula is intact or detached. This sequential diagnosis is crucial because macula-intact detachments often require urgent intervention (within 24 hours) to preserve central vision, while macula-detached cases typically have a different, less urgent surgical timeline.
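The cascade described above can be sketched as a small control-flow wrapper around two classifiers. The model callables and the 0.5 decision threshold are placeholders for illustration, not details from the paper:

```python
from typing import Callable, Any

def two_stage_diagnosis(clip: Any,
                        rd_model: Callable[[Any], float],
                        macula_model: Callable[[Any], float],
                        threshold: float = 0.5) -> str:
    """Cascaded pipeline sketch: stage 1 screens for retinal detachment;
    only RD-positive clips reach stage 2, which grades macular involvement.
    Each model is assumed to return a probability of its positive class."""
    if rd_model(clip) < threshold:
        return "no_rd"
    if macula_model(clip) >= threshold:
        return "rd_macula_detached"
    return "rd_macula_intact"
```

The key design point is that the macular classifier is never consulted for clips the first stage rules out, mirroring how a clinician would only grade macular involvement after confirming a detachment.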
This research represents a significant step forward in applying deep learning to ocular ultrasound for diagnosing vision-threatening conditions. By providing the first open-access dataset for macular-based RD classification using ultrasound video clips, along with comprehensive benchmarks and an intuitive two-stage diagnostic pipeline, the ERDES project aims to accelerate further research and development in this critical area of medical imaging. All clips, labels, and training code are publicly available to foster collaborative advancements in automated ophthalmic ultrasound interpretation. You can find more details about the ERDES dataset and its resources at the project’s official page.


