
Automated Spinal Imaging: A Robotic Ultrasound and AI-Powered Reconstruction System

TL;DR: A new integrated system combines robotic ultrasound with real-time AI-based shape completion to improve spinal visualization. It autonomously acquires ultrasound scans, extracts vertebral surfaces, and reconstructs the complete spinal anatomy, overcoming limitations of traditional methods such as shadowing artifacts and complex CT-to-US registration. The system provides interactive, real-time visualization, enhances procedural consistency, and can enable navigation to target locations, validated through phantom and volunteer studies.

Ultrasound imaging is becoming an increasingly vital tool in spinal procedures, offering real-time guidance without the radiation exposure associated with other imaging methods. This capability significantly enhances clinical precision and patient safety in various interventions, from spinal injections to tumor resections and lumbar punctures. However, a major challenge in using ultrasound for spine procedures is the presence of shadowing artifacts caused by bone surfaces, which can obscure deeper tissues and make accurate target localization difficult.

Traditionally, this lack of complete anatomical information has been addressed through CT-to-ultrasound registration. This method involves aligning pre-operative CT scans with real-time ultrasound images to provide a more comprehensive view of the anatomy. However, this approach is complex, requires specific anatomical knowledge, and is limited by potential differences in spine curvature due to patient positioning changes between scans. It also necessitates a recent CT scan, which may not always be available.

A novel integrated system has been introduced that combines robotic ultrasound with real-time shape completion to significantly enhance spinal visualization. This innovative framework aims to overcome the limitations of traditional methods by autonomously acquiring ultrasound data and reconstructing the complete spinal anatomy using advanced deep learning techniques. The system is designed to provide interactive, real-time visualization, allowing clinicians to see augmented views of the spine. It also has the capability to autonomously repeat scans and potentially enable navigation to specific target locations, contributing to better consistency, reproducibility, and a deeper understanding of the underlying anatomy.

How the System Works

The proposed system operates in three main phases: Robotic Ultrasound Sweep, 3D Vertebral Shape Completion, and Real-time Visualization.

The first phase, Robotic Ultrasound Sweep, involves a robotic arm equipped with an ultrasound probe. The system is calibrated to precisely map ultrasound image coordinates to the robot’s 3D space. Three distinct robotic path planning strategies—Linear, U-shape, and Zig-Zag scans—were designed to acquire comprehensive ultrasound data of the lumbar spine. While an initial manual input is required to define start and end points, the subsequent acquisition process is fully automated. During these scans, the robot operates in a force-controlled mode to ensure consistent probe-skin contact, which is crucial for optimal image quality. As the robot scans, a 2D U-Net architecture is used to automatically segment the surface of the vertebrae in the B-mode ultrasound images, identifying visible bone structures.
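As a rough illustration, the linear sweep can be thought of as interpolating evenly spaced probe positions between the operator-chosen start and end points, which the robot then visits under force control. The sketch below is a minimal simplification with hypothetical names and step size, not the authors' code:

```python
import math

def plan_linear_sweep(start, end, step_mm=2.0):
    """Interpolate evenly spaced probe positions (in mm) along a
    straight line between manually chosen start and end points."""
    dist = math.dist(start, end)
    n = max(int(dist // step_mm), 1)  # number of steps along the line
    return [
        tuple(s + (e - s) * i / n for s, e in zip(start, end))
        for i in range(n + 1)
    ]

# Example: a 120 mm sweep along the spine at 10 mm spacing
waypoints = plan_linear_sweep((0.0, 0.0, 50.0), (0.0, 120.0, 50.0), step_mm=10.0)
```

In the real system each waypoint would also carry a probe orientation, and contact force along the skin is regulated by the robot's force controller rather than fixed in the plan.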

The second phase focuses on 3D Vertebral Shape Completion. The segmented ultrasound masks are compounded into a 3D volume using the robot’s tracked coordinates. This 3D segmentation is then converted into a point cloud representing the spine surface. To identify individual vertebral levels (L1-L5), a PointNet model, trained on a large dataset of CT scans, classifies each point. Once individual vertebral levels are identified, a deep learning-based shape completion model reconstructs the missing regions of the spine. This model, trained on pairs of incomplete and complete vertebral shapes derived from CT data, learns to recover full vertebral morphology from partial ultrasound observations. It uses a combination of coarse shape completion and refinement networks to generate accurate and detailed 3D models.
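The compounding step described above amounts to mapping every segmented pixel through the ultrasound calibration transform and the tracked robot pose into a common 3D frame. A minimal sketch under that assumption (transform names and pixel spacings are illustrative, not taken from the paper):

```python
def mat_vec(m, v):
    """Multiply a 4x4 homogeneous transform by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def pixel_to_world(u, v, sx, sy, T_probe_image, T_world_probe):
    """Map a segmented pixel (u, v) into robot/world coordinates.
    sx, sy: pixel spacing in mm; T_probe_image comes from ultrasound
    calibration, T_world_probe from the tracked robot pose."""
    p_image = [u * sx, v * sy, 0.0, 1.0]        # pixel -> image-plane mm
    p_probe = mat_vec(T_probe_image, p_image)   # image -> probe frame
    return mat_vec(T_world_probe, p_probe)[:3]  # probe -> world frame

# Example with identity transforms and 0.5 mm pixel spacing
I4 = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
p_world = pixel_to_world(100, 50, 0.5, 0.5, I4, I4)
```

Accumulating these world-frame points over the whole sweep yields the partial spine point cloud that the PointNet classifier and the shape completion network then operate on.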

The final phase is Real-time Visualization. After an initial complete scan and 3D reconstruction, the robot can execute the same trajectory again. During this repeated scan, the completed 3D vertebral structures are overlaid onto the live ultrasound images, providing the clinician with an augmented view of the anatomy in real-time. This integrated visualization can significantly reduce the cognitive burden on clinicians by eliminating the need for mental reconstruction of 3D anatomy from 2D images.
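Conceptually, the overlay can be produced by intersecting the completed 3D vertebral model with the current image plane and drawing the resulting 2D contour on the live frame. A simplified stdlib sketch, with hypothetical names and a tolerance band standing in for the slice thickness:

```python
def overlay_slice(points_world, plane_origin, plane_axes, tol_mm=1.0):
    """Intersect a completed vertebra point cloud with the live image
    plane and return in-plane 2D coordinates (mm) for overlay.
    plane_axes = (x_axis, y_axis, normal), all unit vectors."""
    ax, ay, n = plane_axes

    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))

    overlay = []
    for p in points_world:
        d = [p[i] - plane_origin[i] for i in range(3)]
        if abs(dot(d, n)) <= tol_mm:          # point lies near the image plane
            overlay.append((dot(d, ax), dot(d, ay)))
    return overlay

# Example: only the point near the z=0 plane is kept
points = [(5.0, 10.0, 0.5), (5.0, 10.0, 5.0)]
axes = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
pts2d = overlay_slice(points, (0.0, 0.0, 0.0), axes)
```

Because the robot repeats a known trajectory, the image-plane pose at each frame is available from the same calibration and tracking chain used during acquisition.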

Validation and Results

The system’s performance was rigorously evaluated through several experiments. A quantitative assessment compared the three robotic acquisition methods (Linear, U-shape, and Zig-Zag) on a phantom setup, using a CT scan as ground truth. The linear scan generally demonstrated superior geometric accuracy in reconstructing the spine, particularly in preserving local anatomical structures, which is critical for precise surgical guidance.
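The exact evaluation metrics are not listed here, but geometric accuracy of this kind is commonly scored with the symmetric Chamfer distance between the reconstructed surface and the CT ground-truth point cloud. A minimal illustrative version (a brute-force sketch, not the study's evaluation code):

```python
import math

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two 3D point sets:
    the average nearest-neighbour distance, taken in both directions."""
    def nn_mean(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)

    return 0.5 * (nn_mean(a, b) + nn_mean(b, a))

# Identical clouds score 0; a 1 mm offset scores 1
d_same = chamfer_distance([(0.0, 0.0, 0.0)], [(0.0, 0.0, 0.0)])
d_shift = chamfer_distance([(0.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)])
```

Lower values indicate that the reconstruction preserves local anatomical structure more faithfully, which is the property highlighted for the linear scan.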

Qualitative evaluations were also performed on a volunteer using the linear scanning approach. While no ground truth was available for human data, the results showed realistic shape completion, with lateral processes aligning well with the input and vertebral bodies matching expected sizes. This indicates the pipeline’s potential for real-world clinical application.

Furthermore, the researchers investigated patient-specific refinement by fine-tuning the pre-trained shape completion network with pre-operative CT scans of individual patients. This approach consistently improved reconstruction accuracy across all evaluation metrics, highlighting the significant benefit of incorporating patient-specific anatomical priors to reduce shape ambiguity and achieve more precise vertebral reconstructions. This refinement process occurs pre-operatively, ensuring the system remains computationally efficient for real-time use during procedures.


Conclusion and Future Outlook

This integrated system represents a significant step forward in spinal imaging, combining the benefits of robotic ultrasound with advanced shape completion techniques. By autonomously acquiring data and reconstructing complete spinal anatomy in real-time, it addresses key limitations of current methods and holds strong potential to advance spinal imaging and enable more autonomous and reliable interventions. Future work will focus on optimizing robotic acquisition trajectories based on evolving shape estimates and improving reconstruction accuracy by leveraging relative pose information across adjacent vertebrae, aiming to complete the lumbar spine as a whole rather than one vertebra at a time. For more detailed information, you can refer to the full research paper here.

Karthik Mehta
Karthik Mehta is a data journalist known for his data-rich, insightful coverage of AI news and developments. Armed with a degree in Data Science from IIT Bombay and years of newsroom experience, Karthik merges storytelling with metrics to surface deeper narratives in AI-related events. His writing cuts through hype, revealing the real-world impact of Generative AI on industries, policy, and society. You can reach him at: [email protected]
