TLDR: This research compares two distinct approaches for classifying motor imagery EEG signals in Brain-Computer Interfaces (BCIs): ANFIS-FBCSP-PSO, an interpretable fuzzy-logic model, and EEGNet, a deep learning model. The study found that ANFIS-FBCSP-PSO excelled in personalized, within-subject scenarios, offering clear explanations of its decisions. In contrast, EEGNet demonstrated superior generalization across different users in cross-subject tests, albeit as a “black box.” The findings provide guidance for selecting BCI systems based on whether the primary goal is interpretability for individual users or robust performance across a wider user base.
Brain-Computer Interfaces (BCIs) are rapidly advancing, offering new ways for individuals with motor impairments to interact with external devices. A key challenge in this field, particularly with Motor Imagery (MI) Electroencephalography (EEG) classification, is finding the right balance between achieving high accuracy and ensuring the system’s decisions are understandable, or ‘interpretable’. A recent study delves into this very trade-off, comparing two distinct approaches: a transparent fuzzy-reasoning model called ANFIS–FBCSP–PSO and a well-known deep-learning benchmark known as EEGNet.
Motor Imagery involves mentally simulating a movement without physically performing it, which creates detectable changes in brain activity. Traditionally, classifying these EEG signals has relied on complex feature engineering, where raw signals are preprocessed and specific features are extracted. While effective, these methods often lack clarity on how neural features directly relate to motor tasks. Deep learning models, like EEGNet, have emerged to address this by learning features and classifiers simultaneously, often achieving impressive performance. However, these models frequently operate as ‘black boxes,’ making their decision-making process opaque.
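To make the traditional pipeline concrete, here is a minimal sketch of one classic hand-engineered MI feature: band-pass filtering each trial to the mu/beta band (roughly 8–30 Hz) and taking the log-variance of each channel as a band-power feature. The sampling rate, filter order, and array shapes below are illustrative assumptions, not settings from the paper.

```python
# A minimal sketch of classic MI-EEG feature engineering (illustrative, not the
# paper's exact pipeline): band-pass to the mu/beta band, then log-variance.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpower_features(trials, fs=250.0, band=(8.0, 30.0)):
    """trials: array of shape (n_trials, n_channels, n_samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)   # zero-phase band-pass per channel
    return np.log(np.var(filtered, axis=-1))     # log band power: (n_trials, n_channels)

X = bandpower_features(np.random.randn(10, 22, 1000))  # 10 toy trials, 22 channels
print(X.shape)  # (10, 22)
```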
The research paper highlights the importance of Explainable Artificial Intelligence (XAI) in addressing the interpretability challenge. While some methods attempt to explain black-box models after the fact, inherently interpretable models, such as Adaptive Neuro-Fuzzy Inference Systems (ANFIS), provide clear, human-readable explanations through IF–THEN rules. The ANFIS–FBCSP–PSO pipeline combines Filter-Bank Common Spatial Pattern (FBCSP) feature extraction with fuzzy IF–THEN rules whose parameters are tuned by Particle Swarm Optimization (PSO), aiming for both accuracy and transparency.
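The appeal of fuzzy IF–THEN rules is easiest to see in miniature. The toy fuzzy rule-based classifier below, with made-up membership centers, widths, rules, and class labels, shows how rule firing strengths double as a human-readable explanation of the decision; it is a sketch in the spirit of ANFIS, not the paper's trained system. In the actual pipeline, PSO would tune membership parameters like the centers and widths shown here.

```python
# Toy fuzzy rule-based classification: two rules over two features.
import numpy as np

def gauss(x, c, s):
    """Gaussian membership: how well x fits a fuzzy set centered at c."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def infer(f1, f2):
    # Rule 1: IF f1 is LOW  AND f2 is HIGH THEN "left hand"
    # Rule 2: IF f1 is HIGH AND f2 is LOW  THEN "right hand"
    w1 = gauss(f1, c=-1.0, s=0.8) * gauss(f2, c=1.0, s=0.8)   # rule 1 firing strength
    w2 = gauss(f1, c=1.0, s=0.8) * gauss(f2, c=-1.0, s=0.8)   # rule 2 firing strength
    scores = np.array([w1, w2]) / (w1 + w2 + 1e-12)           # normalized strengths
    return ("left hand", "right hand")[int(np.argmax(scores))], scores

label, scores = infer(f1=-0.7, f2=0.9)
print(label, scores)  # the firing strengths themselves explain the decision
```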
The study systematically compared ANFIS–FBCSP–PSO with EEGNet using the publicly available BCI Competition IV-2a dataset. This dataset includes EEG recordings from nine participants performing four different motor imagery tasks. The researchers evaluated both models using two main strategies: ‘within-subject’ experiments, where models were trained and tested on data from the same individual, and ‘cross-subject’ (Leave-One-Subject-Out, or LOSO) tests, where models were trained on data from all participants except one and then tested on the held-out participant.
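The cross-subject protocol maps directly onto scikit-learn's LeaveOneGroupOut splitter, which implements exactly this train-on-eight, test-on-one scheme. The sketch below uses placeholder features, labels, and a generic classifier rather than the paper's models; a within-subject evaluation would instead cross-validate inside each subject's own trials.

```python
# Sketch of the LOSO (cross-subject) evaluation protocol with placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((9 * 288, 22))     # toy features: 288 trials per subject
y = rng.integers(0, 4, size=len(X))        # four motor imagery classes
subjects = np.repeat(np.arange(9), 288)    # subject ID for every trial

# LOSO: each fold trains on eight subjects and tests on the held-out ninth.
loso = LeaveOneGroupOut()
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=subjects, cv=loso)
print(scores.round(2))  # one accuracy per held-out subject
```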
Key Findings and Trade-offs
The results revealed a clear distinction in performance based on the evaluation strategy. In within-subject experiments, the ANFIS–FBCSP–PSO model demonstrated superior performance, achieving an average accuracy of 68.58% ± 13.76% and a Kappa score of 58.04% ± 18.43%, indicating substantial agreement between predicted and actual class labels beyond chance. This suggests that the fuzzy-neural model is highly effective at learning personalized, subject-specific patterns. Because the model is interpretable, the rules it generates can provide physiologically meaningful insights into how specific brain-activity patterns correspond to motor imagery tasks.
Conversely, in cross-subject evaluations, EEGNet exhibited stronger generalization capabilities. It achieved an average accuracy of 68.20% ± 12.13% and a Kappa value of 57.33% ± 16.22%. This indicates that EEGNet, with its deep convolutional structure, is better at capturing invariant spatial–temporal features that are consistent across different users, making it more robust when applied to unseen individuals.
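Both accuracy and Cohen's kappa are reported because kappa corrects for chance agreement: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed accuracy and p_e the accuracy expected by chance. With four balanced classes, p_e is 0.25, so an accuracy of 68.58% works out to roughly (0.6858 - 0.25) / 0.75 ≈ 0.58, consistent with the reported Kappa scores. A quick illustration with made-up labels:

```python
# Cohen's kappa, the chance-corrected agreement score reported in the results.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]   # made-up four-class labels
y_pred = [0, 1, 2, 0, 0, 1, 2, 3, 3, 1]   # hypothetical predictions
# kappa = (p_o - p_e) / (1 - p_e): 1.0 = perfect, 0.0 = chance-level agreement
print(cohen_kappa_score(y_true, y_pred))
```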
Practical Implications
The study provides valuable guidance for designing MI-BCI systems. If the primary goal is a personalized, explainable BCI where understanding the system’s reasoning is crucial (e.g., for clinical applications or user trust), then an interpretable model like ANFIS–FBCSP–PSO might be the better choice. Its ability to provide explicit, human-readable explanations can enhance trust and facilitate system refinement for individual users.
However, if the objective is a generalizable, high-throughput system that needs to perform robustly across a wide range of users without extensive individual calibration, then a deep learning model like EEGNet would be more suitable. Its strength lies in its ability to learn complex patterns directly from raw data and generalize them effectively to new subjects, albeit with less transparency.
In conclusion, this research underscores that the choice of model architecture for MI-BCI systems should align with the specific application goals. Future work will explore hybrid and Transformer-based neuro-symbolic models that aim to combine the best of both worlds: interpretability with scalability for real-world BCI deployment.


