
Video vs. Wearable Sensors: Benchmarking Technologies for Remote Movement Analysis

TLDR: This preclinical study compares state-of-the-art deep-learning-based 3D human pose estimation (HPE) models using monocular video against inertial measurement units (IMUs) for kinematic assessment in daily living activities. Using the VIDIMU dataset of healthy adults, researchers evaluated MotionAGFormer, MotionBERT, MMPose, and NVIDIA BodyTrack. MotionAGFormer demonstrated superior overall accuracy, achieving the lowest RMSE and MAE and highest correlation. The study concludes that both technologies are viable for out-of-the-lab kinematic assessment, but highlights trade-offs in cost, accessibility, precision, and inference time, suggesting model selection should be activity-dependent.

Assessing human movement accurately outside of traditional lab settings is becoming increasingly important for fields like telemedicine, sports science, and rehabilitation. Imagine being able to monitor a patient’s recovery or an athlete’s performance from their home, without the need for expensive, bulky lab equipment. This is the promise of new technologies like monocular video-based 3D human pose estimation (HPE) and wearable inertial measurement units (IMUs).

A recent preclinical study, detailed in the paper “Paving the Way Towards Kinematic Assessment Using Monocular Video: A Preclinical Benchmark of State-of-the-Art Deep-Learning-Based 3D Human Pose Estimators Against Inertial Sensors in Daily Living Activities”, dives deep into this very challenge. Researchers Mario Medrano-Paredes, Carmen Fernández-González, Francisco-Javier Díaz-Pernas, Hichem Saoudi, Javier González-Alonso, and Mario Martínez-Zarzuela conducted a comprehensive comparison of leading deep-learning-based video models against IMU sensors, using a dataset of healthy adults performing everyday activities.

The Challenge of Movement Assessment

Traditional motion capture systems, often found in specialized laboratories, offer high precision but come with significant costs and complex setup requirements. While IMUs provide a more portable solution, they still involve wearing multiple sensors, which can be uncomfortable and require careful placement and calibration. This study explores video-based methods as a potentially more accessible and cost-effective alternative, leveraging ubiquitous devices like smartphone cameras.

How the Study Was Conducted

The researchers utilized the VIDIMU dataset, which includes video recordings and IMU data from 54 healthy adults performing 13 clinically relevant daily activities. These activities ranged from simple movements, such as walking and sit-to-stand transitions, to more complex actions, such as bottle manipulation and LEGO assembly. The study focused on comparing joint angles derived from four state-of-the-art deep learning frameworks (MotionAGFormer, MotionBERT, MMPose 2D-to-3D pose lifting, and NVIDIA BodyTrack) against joint angles calculated from IMU data using OpenSim inverse kinematics.
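To give a feel for what "joint angles derived from pose estimates" means, here is a minimal sketch of computing a single joint angle from three 3D keypoints. This is purely illustrative: the function name and coordinates are made up, and the study's actual pipeline uses full biomechanical models via OpenSim inverse kinematics, not isolated keypoint triplets.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle in degrees at `joint`, formed by three 3D keypoints.

    Simplified illustration only: production pipelines (e.g. OpenSim
    inverse kinematics) fit a whole-body model rather than computing
    angles from isolated triplets.
    """
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point drift outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Elbow angle from shoulder, elbow, wrist keypoints (made-up coordinates):
shoulder, elbow, wrist = [0.0, 1.4, 0.0], [0.0, 1.1, 0.0], [0.3, 1.1, 0.0]
print(round(joint_angle(shoulder, elbow, wrist), 1))  # 90.0
```

Each video frame yields one such angle per joint, producing the time series that are then compared against the IMU-derived curves.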

Each video-based model processes video to detect key points on the human body, which are then used to calculate 3D joint angles. The IMU data, considered the reference in this out-of-the-lab context, undergoes a similar processing pipeline to ensure a fair comparison. The performance was evaluated using several metrics, including Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Pearson correlation, and the coefficient of determination (R²), which help quantify accuracy and temporal agreement.
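The four metrics above can be sketched in a few lines. The helper below is a generic illustration (function and variable names are ours, not the paper's): `reference` stands for the IMU-derived angle series and `estimate` for the video-based one, both as 1-D arrays in degrees.

```python
import numpy as np

def agreement_metrics(reference, estimate):
    """RMSE, MAE, Pearson r, and R² between two joint-angle time series."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    err = est - ref
    rmse = float(np.sqrt(np.mean(err ** 2)))        # penalizes large errors
    mae = float(np.mean(np.abs(err)))               # average absolute error
    r = float(np.corrcoef(ref, est)[0, 1])          # temporal agreement
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((ref - ref.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                      # variance explained
    return {"rmse": rmse, "mae": mae, "pearson_r": r, "r2": r2}

# Synthetic check: an estimate that tracks the reference with a 2° offset.
t = np.linspace(0, 2 * np.pi, 200)
ref = 45 + 30 * np.sin(t)   # toy "IMU" knee-flexion curve
est = ref + 2.0             # toy video estimate with constant bias
m = agreement_metrics(ref, est)
print(m["rmse"], m["mae"])  # 2.0 2.0 (a constant offset shows up directly)
```

Note how a pure offset leaves Pearson r at 1.0 while RMSE and MAE equal the bias, which is why the study reports error magnitudes and correlation side by side.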

Key Findings: MotionAGFormer Leads the Pack

Overall, MotionAGFormer demonstrated superior performance across all activities. It achieved the lowest RMSE (9.27° ± 4.80°) and MAE (7.86° ± 4.18°), along with the highest Pearson correlation (0.86 ± 0.15) and R² (0.67 ± 0.28). This suggests that MotionAGFormer is consistently better at approximating IMU measurements.

MotionBERT, while promising due to its self-supervised pretraining, showed higher overall errors. MMPose and NVIDIA BodyTrack delivered intermediate results. However, the study also revealed that performance varied significantly depending on the specific activity. For instance, MMPose performed best in “walk_forward,” NVIDIA BodyTrack in “walk_backward,” and MotionBERT excelled in “sit_to_stand.” MotionAGFormer also showed strong results in activities like “walk_along” and “tear_both_arms.”

Trade-offs and Future Directions

The results confirm that both video-based human pose estimation and IMUs are viable for kinematic assessment outside the lab. However, they also highlight important trade-offs. MotionAGFormer, while most accurate, had the longest inference time, which could be a factor in real-time applications. MotionBERT offered faster inference but with reduced accuracy in some complex movements. MMPose provided a balanced solution for certain tasks, especially lower-limb movements, while NVIDIA BodyTrack, a proprietary solution, offered speed and simplicity but lacked transparency.

The study emphasizes that the choice of model should align with specific clinical needs, considering factors like required precision, computational resources, and the type of movement being analyzed. Future research will focus on developing hybrid systems that combine the strengths of both video and sensor modalities, expanding evaluations to more diverse real-world conditions, and fine-tuning models for specific clinical populations and tasks. The ultimate goal is to integrate these objective kinematic measures with established clinical outcome scales to support better decision-making in remote patient care.

Meera Iyer
Meera Iyer is an AI news editor who blends journalistic rigor with storytelling elegance. Formerly a content strategist at a leading tech firm, Meera now tracks the pulse of India's Generative AI scene, from policy updates to academic breakthroughs. She is particularly focused on bringing nuanced, balanced perspectives to the fast-evolving world of AI-powered tools and media. You can reach her at: [email protected]
