TLDR: A new research paper introduces REFS, a robust EEG feature selection method for multi-dimensional emotion recognition that works even when some emotional labels are missing. It reconstructs the missing labels with adaptive orthogonal non-negative matrix factorization and selects informative features while minimizing redundancy, outperforming thirteen existing methods on three public datasets.
Affective Brain-Computer Interfaces (a-BCI) are becoming increasingly important for understanding and interacting with human emotions. These systems often rely on Electroencephalography (EEG), a non-invasive technology that records brain activity with high temporal resolution, making it possible to track emotional states as they change. While discrete emotional models sort emotions into distinct categories, multi-dimensional models, which describe emotion along continuous axes such as valence and arousal, offer a richer way to capture how emotional states evolve.
However, using multi-type EEG features for emotion recognition comes with significant challenges. One major issue is the high dimensionality of these features combined with a relatively small number of high-quality EEG samples. This can lead to problems like classifier overfitting and slower real-time performance. Furthermore, in practical applications, it’s common to encounter partially missing multi-dimensional emotional labels. This can happen because of the open nature of data acquisition environments, or simply due to the ambiguity and variability in how individuals perceive and label emotions.
To tackle these challenges, a new study proposes a novel EEG feature selection method called REFS, which stands for Robust EEG feature selection with missing multi-dimensional annotation for emotion recognition. This method is designed to effectively select important EEG features and reconstruct missing emotional labels, even when the data is incomplete.
REFS works by integrating several techniques. It uses an adaptive orthogonal non-negative matrix factorization to reconstruct the multi-dimensional emotional label space. This process leverages second-order and higher-order correlations among the emotional labels, which reduces the negative impact of missing values and outliers during label reconstruction. Simultaneously, REFS fits a least squares regression with two regularizers: graph-based manifold learning, which preserves the local geometry of the data, and global feature redundancy minimization, which discourages selecting highly correlated features. Together, these let the method pick a robust subset of EEG features despite the missing information, ultimately leading to more reliable EEG-based multi-dimensional emotion recognition.
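To make the selection step concrete, here is a minimal sketch of graph-regularized least-squares feature selection, the family that REFS's selection module belongs to. It is not the paper's implementation: the adaptive orthogonal NMF label reconstruction and the exact redundancy regularizer are omitted, the row-sparse l2,1 penalty is a common assumption for this kind of method rather than a detail confirmed by the paper, and every function name and parameter value here is illustrative.

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian of a kNN similarity graph over samples."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]                  # k nearest neighbors, skip self
        W[i, idx] = np.exp(-d2[i, idx] / (d2[i, idx].mean() + 1e-12))
    W = np.maximum(W, W.T)                                # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W

def select_features(X, Y, alpha=0.1, beta=1.0, n_select=10, n_iter=50):
    """Rank features by solving
         min_W ||XW - Y||_F^2 + alpha * tr(W^T X^T L X W) + beta * ||W||_{2,1}
       via iteratively reweighted least squares (IRLS)."""
    d = X.shape[1]
    L = knn_laplacian(X)
    D = np.eye(d)                                         # IRLS surrogate for the l2,1 term
    for _ in range(n_iter):
        A = X.T @ X + alpha * X.T @ L @ X + beta * D
        W = np.linalg.solve(A, X.T @ Y)                   # closed-form update for fixed D
        row_norms = np.linalg.norm(W, axis=1) + 1e-8
        D = np.diag(1.0 / (2.0 * row_norms))              # reweight toward row sparsity
    return np.argsort(np.linalg.norm(W, axis=1))[::-1][:n_select]
```

Given an EEG feature matrix X (samples × features) and a label matrix Y (samples × emotion dimensions), select_features returns the indices of the highest-scoring features. The manifold term pulls the regression toward solutions that respect the local geometry of the data, which is part of what gives such methods their robustness to noisy or incomplete supervision.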
The researchers evaluated REFS in extensive simulation experiments on three widely used multi-dimensional emotional datasets: DREAMER, DEAP, and HDED. Its performance was compared against thirteen other advanced feature selection methods, and REFS consistently proved the most robust choice for EEG emotional feature selection across all six reported metrics: macro-F1, micro-F1, ranking loss, average precision, coverage, and Hamming loss.
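All six of these are standard multi-label metrics, so reproducing this style of evaluation is straightforward, for example with scikit-learn. The snippet below computes them on a tiny synthetic example; the numbers are made up, not taken from the paper.

```python
import numpy as np
from sklearn.metrics import (f1_score, hamming_loss, label_ranking_loss,
                             label_ranking_average_precision_score, coverage_error)

# Toy multi-label ground truth, predicted scores, and thresholded predictions.
Y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
scores = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.4], [0.6, 0.7, 0.3]])
Y_pred = (scores >= 0.5).astype(int)

print("macro-F1 :", f1_score(Y_true, Y_pred, average="macro"))
print("micro-F1 :", f1_score(Y_true, Y_pred, average="micro"))
print("Hamming  :", hamming_loss(Y_true, Y_pred))
print("rank loss:", label_ranking_loss(Y_true, scores))    # lower is better
print("avg prec :", label_ranking_average_precision_score(Y_true, scores))
print("coverage :", coverage_error(Y_true, scores))        # lower is better
```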
A key finding was REFS’s ability to recover missing labels effectively, successfully reconstructing over 60% of the missing emotional labels. Ablation experiments, which tested the contribution of each component of REFS, confirmed that all modules play a crucial role in its superior performance. Additionally, the study found that REFS is not highly sensitive to its various parameters, indicating its stability. Despite its advanced capabilities, REFS demonstrated a relatively low computational cost and achieved quick convergence during its optimization process.
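The paper's exact recovery-rate protocol is not described here, but a common way to measure label recovery is to hide a fraction of the label entries, run the reconstruction, and count how many hidden entries come back correct. The sketch below shows that bookkeeping, with a random placeholder standing in for the actual factorization-based reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary label matrix: samples x emotion dimensions.
Y = rng.integers(0, 2, size=(100, 4))

# Hide 30% of the entries to simulate missing annotations.
mask = rng.random(Y.shape) < 0.3

# Placeholder reconstruction; in practice Y_hat would come from the
# method's matrix-factorization-based label completion.
Y_hat = Y.copy()
Y_hat[mask] = rng.integers(0, 2, size=int(mask.sum()))

# Recovery rate: fraction of hidden labels reconstructed correctly.
print(f"recovered {(Y_hat[mask] == Y[mask]).mean():.1%} of the masked labels")
```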
In conclusion, the REFS method represents a significant step forward in EEG-based multi-dimensional emotion recognition, particularly for scenarios where emotional labels are partially missing. By intelligently combining label reconstruction with robust feature selection, REFS offers a powerful tool for advancing affective brain-computer interface applications. You can read the full research paper here.


