
BiND: A New Neural Model for Precise Bimanual Movement Prediction in Brain-Computer Interfaces

TLDR: BiND (Bimanual Neural Discriminator–Decoder) is a novel two-stage neural model designed to accurately predict bimanual hand movements in Brain-Computer Interfaces (BCIs). It first classifies the motion type (unimanual left, unimanual right, or bimanual) and then uses specialized GRU-based decoders, augmented with a trial-relative time index, to predict continuous 2D hand velocities. Benchmarked against six state-of-the-art models on a publicly available intracortical dataset from a tetraplegic patient, BiND achieved a mean R² of 0.76 for unimanual and 0.69 for bimanual trajectory prediction, outperforming the next-best model (GRU) by roughly 2% in both tasks and demonstrating greater robustness to session variability.

Brain-Computer Interfaces (BCIs) hold immense promise for individuals with motor impairments, such as those resulting from stroke or spinal cord injuries, by translating brain signals into control commands for prosthetic devices or robotic limbs. These advanced systems aim to restore essential motor functions, particularly hand movements, which are crucial for daily independence.

However, a significant challenge in BCI development has been accurately decoding bimanual hand movements – tasks that require the coordinated use of both hands, like eating or dressing. The complexity arises from overlapping neural representations of movements from both hands and intricate non-linear interactions between limbs, often leading to reduced decoding accuracy, especially for the non-dominant hand.

Addressing this critical challenge, researchers have introduced a novel two-stage model called BiND (Bimanual Neural Discriminator–Decoder). BiND is designed to enhance the precision of bimanual trajectory prediction in BCIs. The core idea behind BiND is to first identify the type of motion being intended – whether it’s a unimanual movement of the left hand, a unimanual movement of the right hand, or a bimanual movement involving both hands. Once the motion type is classified, the system then employs specialized decoders to predict the continuous 2D velocities of the hands.

The architecture of BiND is quite innovative. It begins with a ‘Discriminator’ stage, which uses a Long Short-Term Memory (LSTM) layer and a dense layer to classify the movement type. This classification is crucial because it allows the model to route the neural signals to one of three specialized decoders: an ‘L-Decoder’ for unimanual left-hand movements, an ‘R-Decoder’ for unimanual right-hand movements, and a ‘Bi-Decoder’ for bimanual movements. Interestingly, the Bi-Decoder is trained on all movement types, suggesting that even unimanual data can help uncover patterns relevant for bimanual coordination.
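To make the routing concrete, here is a minimal PyTorch sketch of the two-stage idea. The paper specifies an LSTM-plus-dense discriminator and three specialized GRU decoders; everything else here (layer sizes, class names, the hard argmax routing, and the two-velocities-per-hand output dimensions) is our illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Classifies a window of neural features into three motion types:
    unimanual left, unimanual right, or bimanual. Layer sizes are
    illustrative; the paper specifies only an LSTM plus a dense layer."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 3)   # three motion-type logits

    def forward(self, x):                # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)         # h: (num_layers, batch, hidden)
        return self.fc(h[-1])            # (batch, 3)

class GRUDecoder(nn.Module):
    """Maps the same neural window to continuous 2D hand velocities.
    out_dim = 2 for a unimanual decoder (x/y velocity of one hand),
    out_dim = 4 for the bimanual decoder (both hands)."""
    def __init__(self, n_features: int, out_dim: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, out_dim)

    def forward(self, x):
        out, _ = self.gru(x)             # (batch, time, hidden)
        return self.fc(out)              # velocity at every time step

class BiNDSketch(nn.Module):
    """Two-stage pipeline: classify the motion type, then route each
    trial to the matching specialized decoder."""
    def __init__(self, n_features: int):
        super().__init__()
        self.discriminator = Discriminator(n_features)
        self.decoders = nn.ModuleDict({
            "left": GRUDecoder(n_features, out_dim=2),
            "right": GRUDecoder(n_features, out_dim=2),
            "bimanual": GRUDecoder(n_features, out_dim=4),
        })
        self.labels = ["left", "right", "bimanual"]

    def forward(self, x):
        logits = self.discriminator(x)
        choice = logits.argmax(dim=-1)   # predicted motion type per trial
        outputs = []
        for i, trial in enumerate(x):
            decoder = self.decoders[self.labels[choice[i]]]
            outputs.append(decoder(trial.unsqueeze(0)))
        return logits, outputs
```

In training, each unimanual decoder would see only its own trial type, while the Bi-Decoder sees all trials, mirroring the description above.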

All three decoders are built upon Gated Recurrent Unit (GRU) layers, which are particularly effective at capturing temporal dependencies in neural signals. A key feature integrated into BiND is an ‘onset counter’ or time index. This auxiliary feature provides a relative time position within a trial, helping the model to account for long-range temporal dependencies that might otherwise be lost, thereby improving the overall temporal awareness of the system.
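The article does not spell out exactly how the onset counter is encoded. One simple possibility, sketched below in NumPy, is to append a trial-relative time index as one extra feature channel; the normalization to [0, 1] and the channel counts in the example are assumptions for illustration only.

```python
import numpy as np

def add_onset_counter(trial_features: np.ndarray) -> np.ndarray:
    """Append a trial-relative time index as one extra input channel.

    trial_features: (n_bins, n_features) array of binned neural
    features for a single trial. Normalizing the counter to [0, 1]
    is our assumption about the exact encoding."""
    n_bins = trial_features.shape[0]
    time_index = np.linspace(0.0, 1.0, n_bins).reshape(-1, 1)
    return np.hstack([trial_features, time_index])

# Example: a 100-bin trial with 192 neural channels gains a 193rd
# column that ramps from 0 at trial onset to 1 at the end.
trial = np.random.randn(100, 192)
augmented = add_onset_counter(trial)
assert augmented.shape == (100, 193)
```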

To evaluate BiND’s effectiveness, it was rigorously benchmarked against six state-of-the-art models: Support Vector Regression (SVR), XGBoost, a Feedforward Neural Network (FNN), a Convolutional Neural Network (CNN), a Transformer, and a standard GRU model. The evaluation used a publicly available dataset from a tetraplegic patient who performed a bimanual cursor-control task through imagined joystick movements across 13 sessions, with neural data recorded from microelectrode arrays implanted in motor-related brain regions.

The results were compelling. BiND consistently outperformed all benchmarked models. It achieved a mean R² score of 0.76 for unimanual and 0.69 for bimanual trajectory prediction, surpassing the next-best model (GRU) by approximately 2% in both tasks. Furthermore, BiND demonstrated superior robustness to session variability, showing accuracy improvements of up to 4% compared to GRU in cross-session analyses. This highlights the significant benefits of its task-aware discrimination and sophisticated temporal modeling.
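For readers who want to reproduce the headline metric, a mean trajectory R² can be computed along these lines with scikit-learn. How the authors aggregate R² (per trial, per axis, per session) is not stated here, so the averaging scheme below is an assumption:

```python
import numpy as np
from sklearn.metrics import r2_score

def mean_trajectory_r2(true_vels, pred_vels) -> float:
    """Average R² across trials for decoded 2D velocity traces.

    true_vels / pred_vels: lists of (n_bins, 2) arrays, one per
    trial. This variant scores each trial jointly across both axes
    and then averages over trials (an assumed aggregation)."""
    scores = [
        r2_score(t, p, multioutput="uniform_average")
        for t, p in zip(true_vels, pred_vels)
    ]
    return float(np.mean(scores))
```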

While decoding the non-dominant hand in bimanual tasks still presented a greater challenge, BiND’s architecture, which explicitly separates decoding pathways and accounts for asymmetric neural encoding, proved highly effective. The model’s ability to accurately track the temporal dynamics and directional trends of imagined motor outputs on previously unseen data was clearly demonstrated.


In conclusion, BiND represents a significant advancement in neural decoding for BCIs. By integrating a motion-type discriminator with specialized GRU-based decoders and a time-index feature, it offers a principled approach to accurately decode both unimanual and bimanual motor intentions. This causal decoding pipeline, validated under inter-session fine-tuning, ensures robustness and direct applicability to real-time BCI implementations. Future work aims to extend BiND to adaptive online settings, explore lightweight implementations for embedded BCI hardware, and investigate its integration with sensory feedback channels, ultimately bringing BCIs closer to restoring naturalistic hand function for paralyzed patients. You can read the full research paper here.

Ananya Rao
Ananya Rao is a tech journalist with a passion for dissecting the fast-moving world of Generative AI. With a background in computer science and a sharp editorial eye, she connects the dots between policy, innovation, and business. Ananya excels in real-time reporting and specializes in uncovering how startups and enterprises in India are navigating the GenAI boom. She brings urgency and clarity to every breaking news piece she writes. You can reach her at: [email protected]
