TLDR: A new AI system, Attention-Augmented Wavelet YOLO (AAW-YOLO), has been developed for real-time segmentation of brain vessels using Transcranial Color-coded Doppler (TCCD). This system significantly improves accuracy, especially for hard-to-see contralateral arteries, and operates efficiently enough for clinical use (70.4 FPS), reducing the need for highly experienced operators in stroke risk assessment.
Stroke is a devastating global disease, and the Circle of Willis (CoW), a network of arteries at the base of the brain, is crucial for maintaining consistent blood flow. Assessing the CoW accurately is vital for identifying individuals at risk of stroke and guiding their treatment. Transcranial Color-coded Doppler (TCCD) is a promising imaging method for CoW assessment because it is radiation-free, affordable, and accessible. However, its widespread use has been limited because reliable TCCD assessments heavily depend on the operator’s expertise to identify anatomical landmarks and perform accurate angle correction. This operator dependency often leads to inconsistent results and restricts TCCD’s use to specialized facilities.
To address these challenges, researchers have proposed an AI-powered real-time CoW auto-segmentation system. This study introduces a novel system called Attention-Augmented Wavelet YOLO (AAW-YOLO), specifically designed for TCCD data. The goal of AAW-YOLO is to provide real-time guidance for brain vessel segmentation in the CoW, sharply reducing the reliance on operator experience in TCCD cerebrovascular screening.
Developing AAW-YOLO
The researchers collected TCCD data from 15 subjects, resulting in 738 annotated frames and 3,419 labeled artery instances. This high-quality dataset was used for training and evaluating the model. The YOLO-11 framework was chosen as the baseline due to its efficiency, real-time processing capabilities, and ability to segment multiple targets simultaneously, which is ideal for the multi-object nature of CoW analysis in TCCD.
To enhance performance, especially for small, low-contrast contralateral arteries, two key modifications were introduced. First, the Attention-Augmented YOLO (AA-YOLO) was developed by integrating lightweight attention mechanisms. Specifically, bottleneck layers in the YOLO-11 backbone were replaced with an Attention-C2F bottleneck block. This change helps the model focus on relevant arterial regions and prioritize small arteries over background noise, which is crucial for detecting challenging contralateral vessels.
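The paper summarized here does not spell out the internals of the Attention-C2F block, but the idea of "lightweight channel attention inside a bottleneck" can be illustrated with a minimal squeeze-and-excitation-style sketch in NumPy. The function name and weight shapes below are assumptions for illustration, not the authors' implementation: the network pools each channel to a single statistic, passes it through a small bottleneck, and uses a sigmoid gate to up-weight channels that respond to arteries and down-weight background noise.

```python
import numpy as np

def channel_attention(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """SE-style channel attention over a (C, H, W) feature map (illustrative sketch).

    w1: (C//r, C) squeeze projection, w2: (C, C//r) excite projection,
    where r is the bottleneck reduction ratio.
    """
    s = feat.mean(axis=(1, 2))               # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)              # excite: bottleneck + ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))     # sigmoid gate, one weight per channel
    return feat * a[:, None, None]           # reweight channels in place

# Toy feature map: 8 channels of 4x4 activations, reduction ratio r = 4
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
```

Because the gate lies in (0, 1), attention can only attenuate channels relative to each other; in a trained network the gates for vessel-bearing channels stay near 1 while background channels are suppressed.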
Second, building on AA-YOLO, the Attention-Augmented Wavelet YOLO (AAW-YOLO) was created by incorporating Wavelet Convolution (WTConv). WTConv expands the model’s receptive field, allowing it to capture multi-scale vascular structures more effectively while maintaining computational efficiency. WTConv was integrated into a new Wavelet C2F bottleneck block, replacing the convolution head of the original C2F block. This ensures that fine vascular structures are captured with greater clarity, particularly in small arterial regions.
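To make the multi-scale intuition behind WTConv concrete, here is a one-level 2-D Haar wavelet decomposition in NumPy. This is a simplified sketch, not the WTConv operator itself: it only shows how a wavelet split exposes a coarse approximation (LL) plus horizontal, vertical, and diagonal detail bands at half resolution, which is what lets wavelet-based convolutions reach a larger effective receptive field cheaply.

```python
import numpy as np

def haar2d(x: np.ndarray):
    """One-level 2-D Haar wavelet transform of an even-sized image.

    Returns the four half-resolution sub-bands (LL, LH, HL, HH).
    """
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-pass: coarse approximation
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d(img)
```

With this normalization the transform is orthonormal, so no image energy is lost across the four sub-bands; a WTConv-style layer can then apply small convolutions per band and invert the transform.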
Performance and Efficiency
The proposed AAW-YOLO system demonstrated superior performance in segmenting both ipsilateral (same side) and contralateral (opposite side) CoW vessels. It achieved impressive average scores: Dice coefficient of 0.901, Intersection over Union (IoU) of 0.823, Precision of 0.882, Recall of 0.926, and mean Average Precision (mAP) of 0.953. These metrics indicate high accuracy in identifying and outlining the brain vessels.
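For readers unfamiliar with these metrics, the Dice coefficient and IoU both measure the overlap between a predicted mask and the ground-truth mask, with Dice weighting the intersection twice. A minimal NumPy computation on two toy "vessel" masks (the function name and example masks are illustrative, not from the paper):

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray):
    """Dice coefficient and IoU for a pair of binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2.0 * inter / total if total else 1.0  # 2|A∩B| / (|A| + |B|)
    iou = inter / union if union else 1.0         # |A∩B| / |A∪B|
    return dice, iou

# Two overlapping 4x4 squares on an 8x8 grid: 16 px each, 9 px overlap
pred = np.zeros((8, 8)); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8)); gt[3:7, 3:7] = 1
d, i = dice_iou(pred, gt)   # d = 0.5625, i = 9/23
```

Note that Dice is always at least as large as IoU for the same masks, which is why the reported Dice of 0.901 sits above the IoU of 0.823.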
A significant finding was AAW-YOLO’s ability to segment contralateral arteries, which are typically harder to visualize due to lower contrast and anatomical complexity. The model achieved the smallest performance difference between ipsilateral and contralateral segmentation, with a Dice drop of only 0.026 and a recall difference of just 0.010. This shows its improved generalization for complex and small anatomical structures.
Crucially, AAW-YOLO also proved to be computationally efficient, making it suitable for real-time clinical applications. It achieved an inference speed of 14.199 milliseconds per frame, which translates to 70.427 frames per second (FPS). This speed is well above the typical 20-30 FPS required for clinical TCCD scanning, validating its practical usability in real-world settings. While slightly slower than the baseline YOLO-11, the enhanced accuracy justifies this trade-off.
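The reported latency and frame rate are two views of the same number, related by FPS = 1000 / latency in milliseconds:

```python
latency_ms = 14.199            # reported per-frame inference time
fps = 1000.0 / latency_ms      # ~70.4 frames per second, above the 20-30 FPS clinical target
```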
Future Outlook
This study represents the first application of deep learning for segmenting the Circle of Willis in TCCD imaging, offering a new path for automating cerebrovascular assessment. The AAW-YOLO model significantly reduces operator dependency, potentially making TCCD technology more widely accessible for early detection and monitoring of stroke-related conditions, especially in resource-limited environments.
Despite its strengths, the study acknowledges some limitations. The current system performs frame-wise segmentation without using information from sequential frames, and it is limited to unilateral TCCD analysis. Future research will explore integrating video-tracking techniques to improve consistency across frames, developing pipelines for bilateral CoW analysis to leverage mirror-view information, and incorporating contrast-enhanced TCCD to visualize challenging vessels like the anterior and posterior communicating arteries. Additionally, multi-center studies with diverse patient populations are needed to confirm the model’s generalizability and readiness for widespread clinical adoption. For more details, you can refer to the full research paper.