A Feature-Fused Convolutional Neural Network for Emotion Recognition From Multichannel EEG Signals

DOI: 10.1109/jsen.2022.3172133 Publication Date: 2022-05-03T20:04:07Z
ABSTRACT
Automatic emotion recognition based on multichannel electroencephalogram (EEG) data is a fundamental but challenging problem. Some previous studies ignore the correlation of brain activity across channels and frequency bands, which may provide information related to emotional states. In this work, we propose a 3-D feature construction method that combines spatial and spectral information. First, the power values of each channel are arranged into a 2-D spatial representation according to the positions of the electrodes. Then, the features from different frequency bands are integrated into a tensor to capture their complementary information. Simultaneously, a novel framework with feature-fusion modules and dilated bottleneck-based convolutional neural networks (DBCN) builds a more discriminative model for EEG-based emotion recognition. Both participant-dependent and participant-independent protocols are conducted to evaluate the performance of the proposed DBCN on the DEAP benchmark dataset. Mean 2-class classification accuracies of 89.67% / 90.93% (participant-dependent) and 79.45% / 83.98% (participant-independent) were achieved for arousal and valence, respectively. These results suggest that the spatial-spectral features could be extended to the assessment of mood disorders and to human-computer interaction (HCI) applications.
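The 3-D feature construction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 9x9 grid size, the chosen frequency bands, and the example electrode coordinates are assumptions; per-channel band powers are mapped onto a 2-D grid mirroring the scalp layout, and the per-band grids are stacked into one tensor.

```python
import numpy as np

# Assumed frequency bands and grid size (not taken from the paper).
BANDS = ["theta", "alpha", "beta", "gamma"]
GRID = 9  # assumed 9x9 spatial map of the scalp

# Assumed (row, col) grid positions for a few 10-20-system electrodes.
ELECTRODE_POS = {"Fp1": (0, 3), "Fp2": (0, 5), "C3": (4, 2),
                 "Cz": (4, 4), "C4": (4, 6), "O1": (8, 3), "O2": (8, 5)}

def build_feature_tensor(band_power):
    """band_power: dict mapping band name -> {electrode name -> power}.
    Returns a (n_bands, GRID, GRID) tensor; cells without an electrode
    stay zero."""
    tensor = np.zeros((len(BANDS), GRID, GRID), dtype=np.float32)
    for b, band in enumerate(BANDS):
        for name, (r, c) in ELECTRODE_POS.items():
            tensor[b, r, c] = band_power[band].get(name, 0.0)
    return tensor

# Toy usage: random band powers for each electrode.
rng = np.random.default_rng(0)
bp = {band: {e: float(rng.random()) for e in ELECTRODE_POS}
      for band in BANDS}
x = build_feature_tensor(bp)
print(x.shape)  # (4, 9, 9)
```

The resulting tensor preserves inter-channel spatial relationships in its last two axes and inter-band relationships in its first axis, which is what lets a 2-D CNN exploit both kinds of correlation at once.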