Continuous Capsule Network Method for Improving Electroencephalogram-Based Emotion Recognition

Keywords: Electroencephalogram; Emotion Recognition; Differential Entropy; Baseline Reduction; 3D Cube; Capsule Network; Continuous Convolution.

Authors

  • I Made Agus Wirawan 1) Department of Computer Science and Electronics, Faculty of Mathematics and Natural Science, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia. 2) Education of Informatics Engineering Department, Faculty of Engineering and Vocational, Universitas Pendidikan Ganesha, Singaraja 81116, Indonesia
  • Retantyo Wardoyo
    rw@ugm.ac.id
    Department of Computer Science and Electronics, Faculty of Mathematics and Natural Science, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia https://orcid.org/0000-0001-7604-2109
  • Danang Lelono Department of Computer Science and Electronics, Faculty of Mathematics and Natural Science, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia
  • Sri Kusrohmaniah Department of Psychology, Faculty of Psychology, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia


Although the Capsule Network method is able to characterize spatial information from Electroencephalogram signals, its convolution process can cause a loss of spatial data from those signals. Therefore, this study applied the Continuous Capsule Network method to overcome problems associated with emotion recognition based on Electroencephalogram signals, using an optimal architecture with (1) 64, 128, 256, and 64 feature maps for the 1st, 2nd, 3rd, and 4th Continuous Convolution layers, respectively, and (2) kernel sizes of 2×2×4, 2×2×64, and 2×2×128 for the 1st, 2nd, and 3rd Continuous Convolution layers, and 1×1×256 for the 4th. Several methods were also used to support the Continuous Capsule Network process, such as the Differential Entropy and 3D Cube methods for the feature extraction and representation processes. These methods were chosen for their ability to characterize spatial and low-frequency information from Electroencephalogram signals. In tests on the DEAP dataset, the proposed methods achieved accuracies of 91.35%, 93.67%, and 92.82% for the four categories of emotion, the two categories of arousal, and the two categories of valence, respectively. Furthermore, on the DREAMER dataset, they achieved accuracies of 94.23%, 96.66%, and 96.05% for the same three classification tasks. Finally, on the AMIGOS dataset, they achieved accuracies of 96.20%, 97.96%, and 97.32%, respectively.
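
For readers who want a concrete picture of the architecture, the sketch below wires up the layer widths (64, 128, 256, 64 feature maps) and kernel sizes (2×2×4, 2×2×64, 2×2×128, 1×1×256) reported in the abstract as a minimal PyTorch model. Everything else, including the 9×9×4 differential-entropy input cube, the 8-D primary capsules, the 16-D class capsules, and the single uniform-coupling routing pass, is an illustrative assumption rather than the authors' exact configuration.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


def differential_entropy(band_signal):
    """DE of a band-filtered EEG segment under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    var = band_signal.var(dim=-1, unbiased=True)
    return 0.5 * torch.log(2 * math.pi * math.e * var)


def squash(s, dim=-1, eps=1e-8):
    """Capsule non-linearity: preserves direction, bounds length to [0, 1)."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)


class ContinuousCapsNet(nn.Module):
    def __init__(self, num_classes=4, caps_dim=8, out_dim=16):
        super().__init__()
        # Four Continuous Convolution layers (stride 1, no pooling); with an
        # assumed 9x9 electrode grid the map shrinks only through the 2x2
        # kernels: 9x9 -> 8x8 -> 7x7 -> 6x6.
        self.conv1 = nn.Conv2d(4, 64, kernel_size=2)     # 2x2x4   -> 64 maps
        self.conv2 = nn.Conv2d(64, 128, kernel_size=2)   # 2x2x64  -> 128 maps
        self.conv3 = nn.Conv2d(128, 256, kernel_size=2)  # 2x2x128 -> 256 maps
        self.conv4 = nn.Conv2d(256, 64, kernel_size=1)   # 1x1x256 -> 64 maps
        self.caps_dim = caps_dim
        num_primary = 64 * 6 * 6 // caps_dim             # 288 primary capsules
        # One transformation matrix per (primary capsule, class capsule) pair.
        self.W = nn.Parameter(
            0.01 * torch.randn(num_primary, num_classes, out_dim, caps_dim)
        )

    def forward(self, x):
        # x: (batch, 4, 9, 9) 3D cube of differential-entropy features
        h = F.relu(self.conv1(x))
        h = F.relu(self.conv2(h))
        h = F.relu(self.conv3(h))
        h = F.relu(self.conv4(h))                             # (batch, 64, 6, 6)
        u = squash(h.reshape(h.size(0), -1, self.caps_dim))   # primary capsules
        # Prediction vectors u_hat[b, i, j] = W[i, j] @ u[b, i]
        u_hat = torch.einsum('ijdk,bik->bijd', self.W, u)
        # Single uniform-coupling pass for brevity; the full method would run
        # iterative dynamic routing-by-agreement here.
        v = squash(u_hat.mean(dim=1))                         # (batch, classes, out_dim)
        return v.norm(dim=-1)                                 # capsule lengths = class scores


if __name__ == "__main__":
    model = ContinuousCapsNet(num_classes=4)
    scores = model(torch.randn(2, 4, 9, 9))  # two dummy DE cubes
    print(scores.shape)                      # torch.Size([2, 4])
```

In this sketch, each capsule's output length is used directly as a class score; a full training setup would add the margin loss and the iterative routing procedure of the original Capsule Network.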


DOI: 10.28991/ESJ-2023-07-01-09

Full Text: PDF