Recognizing Emotions Evoked by Music Using CNN-LSTM Networks on EEG Signals
ABSTRACT Emotion is considered critical for the interpretation of actions and relationships. Recognizing emotions from EEG signals is also becoming an important computer-aided method for diagnosing emotional disorders in neurology and psychiatry. A further advantage of this approach is that it recognizes emotions without clinical or medical examination, which plays a major role in completing the Brain-Computer Interface (BCI) structure. The ability to recognize emotions without traditional assessment strategies, such as self-report tests, is therefore of paramount importance. EEG signals are considered the most reliable modality for emotion recognition because of their non-invasive nature. Manual analysis of EEG signals is impractical for emotion recognition, so an automatic method of analyzing EEG signals is required. One problem with automatic emotion recognition is the extraction and selection of discriminative features, which generally leads to high computational complexity. This paper presents a new approach to the automatic two-stage classification (negative and positive) and three-stage classification (negative, positive, and neutral) of emotions from EEG signals. In the proposed method, the raw EEG signal is applied directly to a fused convolutional neural network and long short-term memory network (CNN-LSTM), without any feature extraction/selection; this remains a challenging approach in the prior literature. The suggested deep neural network architecture includes 10 convolutional layers and 3 LSTM layers followed by 2 fully connected layers. The LSTM network, fused with the CNN network, is used to increase stability and reduce oscillation. In the present research, we also recorded the EEG signals of 14 subjects under music stimulation for processing. The simulation results of the proposed algorithm for two-stage classification (negative and positive) and three-stage classification (negative, neutral, and positive) of emotion over 12 active channels showed 97.42% and 96.78% accuracy and Kappa coefficients of 0.94 and 0.93, respectively. We also compared our proposed end-to-end CNN-LSTM network with hand-crafted methods based on MLP and DBM classifiers and achieved promising results relative to similar approaches. Given the high accuracy of the proposed method, it can be used to develop human-computer interface systems.
analysis [2]. Russell suggested one prevailing hypothesis, which states that emotion consists of two elements, arousal and valence [3]. Arousal denotes the level of emotional activation, whereas valence identifies positivity or negativity. This representation depicts emotions systematically and is commonly used as background knowledge in countless studies, including the current work.

A large body of research has been devoted to exploring the neural correlates of emotions in order to build devices that can interpret emotions. In these efforts, different pictorial [4], musical [5]–[7], and music video and video [8]–[10] triggers have been used to elicit emotions. Music listening encompasses a range of psychological processes, such as multimodal perception and integration, syntactic processing and encoding of semantic knowledge, sentiment, concentration, emotion, attention, and social cognition [11]. Music can also bring out strong emotions [12]. Emotional processing involves various structures of the human brain and changes their operation [11]. It also induces other physiological responses that are side effects of brain activity, such as changes in heart rate [13], skin conductance, and body temperature [14]. Researchers have used different modalities such as PET [15], fMRI [16], NIRS [17], and EEG [18] to study the neural correlates of emotion. High temporal resolution, non-invasiveness, availability, portability, and relatively low data processing costs have made EEG an effective candidate for studying the neural correlates of different cognitive functions such as emotion. Various computational methods based on EEG signals have been developed for the observation and analysis of automatic emotion recognition, which are discussed below.

Balconi et al. [19] employed the EEG frequency bands in combination with hemodynamic testing to analyze brain reactions. Sammler et al. [13] found that listening to enjoyable music increases the theta power of EEG signals in the frontal midline region. Balasubramanian et al. [7] also found a higher frontal midline theta-band energy for liked songs. They also studied the strength of EEG components obtained by wavelet packet decomposition when listening to liked and disliked music. Zheng [20] used EEG spectral power as a feature for emotion recognition and the identification of informative EEG channels via group sparse canonical correlation analysis. Ozel et al. [21] implemented the multivariate synchrosqueezing transform to extract EEG features for emotional state recognition; for one of the emotional states, they achieved 93% classification accuracy. Hasanzadeh et al. [22] used a nonlinear autoregressive model and a genetic algorithm to predict emotional states from EEG power while listening to music. Soleymani et al. [23] developed a multimodal database to investigate emotion recognition. Based on this study, they measured the correlation between the electrodes' EEG spectral power and valence scores and found that higher-frequency components over the frontal, parietal, and occipital lobes had a higher correlation with the self-reported valence response. They also fused the PSD and facial features to improve the classification performance for emotion recognition. Lin et al. [6] validated emotion-specific features based on EEG power spectral changes and assessed the relation between EEG dynamics and emotional states triggered by music. Koelstra et al. [24] introduced the Database for Emotion Analysis using Physiological signals (DEAP) for human affective states and extracted the spectral power of five frequency bands from 32 participants. They noticed that frontal and parietal lobe features can provide discriminative emotion-related information. Chanel et al. [25] employed a Naive Bayes classifier to classify three emotion classes from specific frequency bands at specific electrode locations. Zheng et al. [26] provided the SJTU emotion EEG dataset (SEED) for emotion recognition. During their study, the authors extracted six different spectral features to analyze the neural signatures of positive, negative, and neutral emotions, and found that the EEG patterns were relatively stable within and between sessions at sensitive frequency bands and brain regions. Therefore, changes in EEG spectral power can predict distinct emotional states of subjects in different brain regions. Thammasan et al. [27] used physiological signals to extract power spectral densities and fractal dimensions for emotion analysis and found that using less familiar music improved the recognition accuracy regardless of whether the classifier was a support vector machine (SVM) or a multilayer perceptron. Kumagai et al. [28] evaluated the relationship between cortical response and music familiarity. They found that when listening to scrambled or unfamiliar music, the two peaks of the cross-correlation values were significantly larger than for familiar music. Such results collectively indicate that the cortical response to unfamiliar music is greater than that to familiar music and is therefore of high value for BCI classification applications. Zhao et al. [29] analyzed volunteers' EEG signals while they were watching affective films; an SVM was used as the classifier after extracting EEG features to recognize human emotions. Lu et al. [30] selected nine musical passages as stimuli and divided them into three categories according to the two-dimensional model of emotions using the variance test and t-test. To evaluate the EEG signals, power spectral densities were derived from different bands, and principal component analysis (PCA) was used for dimensionality reduction and feature selection. They found that the SVM classifier achieved higher emotion recognition accuracy with average beta- and gamma-band power than with the other bands; therefore, the beta and gamma bands appear to be effective for emotional discrimination. Yoon and Chung [31] suggested a classifier based on Bayes' theorem that used supervised learning to classify human emotions from volunteers' EEG signals; in this work, Fast Fourier Transform (FFT) analysis was used for feature extraction. Li et al. [32] extracted 816 features from 16 electrodes and improved the recognition rate through correlation-based feature selection (CFS) dimensionality reduction and machine learning. Signals from electrode positions O2, Fp1, F3, T3, and Fp2 were also suggested to be most closely associated with mild depression. Hou et al. [34] used a correlation-based feature selection (CFS) model to extract features from EEG signals to classify different emotions (relaxation, happiness, sadness, and grief) in 8 subjects. They used BP, SVM, LDA, and C4.5 classifiers and concluded that the C4.5 classifier works better for emotion recognition than the other classifiers. Hasanzadeh et al. [33] used a fuzzy parallel cascade (FPC) model to predict the continuous subjective assessment of the emotional content of music from the EEG signals of 15 subjects. They also compared the FPC model with LSTM and linear regression (LR) models; the RMSE of their model was about 0.089, lower than that of the other models for estimating both valence and arousal. Keelawat et al. [35] used a CNN to extract features from EEG signals for arousal and valence classification in 12 subjects. Their network architecture consisted of six convolutional layers. They compared their proposed algorithm with an SVM and concluded that the CNN was better for emotion recognition. Yang et al. [36] used a hybrid neural network that combined a CNN and a recurrent neural network (RNN) for automatic emotion recognition from EEG signals. In their experiments, these researchers used the DEAP benchmark dataset, and in their method 1D EEG signals were converted to 2D EEG frames. The reported accuracies for the valence and arousal classes are 90.80% and 91%, respectively. Yang et al. [37] used a multi-column CNN model for emotion recognition; the DEAP database was also used, and the final accuracy reported by their multi-column model for the classification of both valence and arousal is 90%. Chen et al. [38] used parallel hybrid convolutional recurrent neural networks to classify binary emotions from EEG signals, again on the DEAP database; the final accuracies reported for the classification of valence and arousal are 93.64% and 93.26%, respectively. Wei et al. [39] used the dual-tree complex wavelet transform (DT-CWT) for feature extraction from EEG signals and the simple recurrent units (SRU) network to train emotion models. This method resulted in average accuracies of 85.65%, 85.45%, and 87.99% for the low/high arousal, valence, and liking classes, respectively.

The challenging step in emotion recognition is selecting discriminative features for the different stages of emotion. In most available works, statistical features are first extracted and the best discriminatory features are then selected manually or with common feature selection methods, which is time-consuming and computationally complex. Moreover, the best features for one case may not be optimal for another. Therefore, implementing an algorithm that learns the correct features for each case is essential, and this is the main advantage of the present research.

In the proposed method, pre-processing operations were performed on the data after it was recorded under auditory stimulation. Then, a fusion of a deep convolutional network and an LSTM network is used to train the two-stage (positive and negative) and three-stage (positive, neutral, and negative) classification of emotions. The suggested approach can be used as an end-to-end classifier, in which there is no need for a feature selection/extraction method and the correct features of each class are learned automatically by the deep neural network.

The remainder of the paper is structured as follows. The experimental database based on auditory stimulation and the related mathematical background of the CNN and LSTM networks are given in Section II. The proposed method is presented in Section III. The simulation results and a comparison of the proposed method with common methods are given in Section IV. Finally, Section V concludes the paper.
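To make the end-to-end idea concrete, the sketch below assembles a stack of the kind described above (10 convolutional layers, 3 LSTM layers, and 2 fully connected layers) in Keras. It is only an illustration of the approach: the filter counts, kernel sizes, pooling positions, window length, and optimizer settings are assumptions, not the architecture reported later in Table 3.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_cnn_lstm(n_timesteps, n_channels=12, n_classes=3):
    """End-to-end CNN-LSTM for raw EEG windows (hypothetical layer sizes)."""
    model = models.Sequential()
    model.add(layers.Input(shape=(n_timesteps, n_channels)))
    # 10 convolutional layers; pooling after every second layer keeps the
    # sequence short enough for the recurrent layers that follow.
    filters = [32, 32, 64, 64, 128, 128, 128, 256, 256, 256]
    for i, f in enumerate(filters):
        model.add(layers.Conv1D(f, kernel_size=5, padding="same", activation="relu"))
        if i % 2 == 1:
            model.add(layers.MaxPooling1D(pool_size=2))
    # 3 LSTM layers consume the learned feature sequence.
    model.add(layers.LSTM(64, return_sequences=True))
    model.add(layers.LSTM(64, return_sequences=True))
    model.add(layers.LSTM(32))
    # 2 fully connected layers end in a softmax over the emotion classes.
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Example: 5-second windows at 250 Hz over 12 active channels, 3 emotion classes.
model = build_cnn_lstm(n_timesteps=1250, n_channels=12, n_classes=3)
model.summary()
```

Because the raw window is fed directly to the network, the convolutional stack plays the role that hand-crafted feature extraction plays in the comparative methods discussed above, while the LSTM layers model the temporal evolution of the learned features.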
TABLE 1. Validation of subjects in the EEG signal recording process for emotions recognition.
II. MATERIALS AND METHODS
In this section, we first introduce the experiments for collecting EEG data based on auditory stimulation at the University of Tabriz. Then, the mathematical background of CNN and LSTM will be provided.

A. EEG COLLECTING
A database of three emotions (positive, neutral, and negative) was created to recognize emotions from the EEG signal. The nine-degree version of the Self-Assessment Manikin (SAM) test was also used in the testing process. In this test, a score below 3 is considered low and a score over 6 is considered high. Before recording the signal, all participants were asked to sign a consent form (no history of mental illness, no history of epilepsy, no use of psychiatric drugs, normal sleep at night, no fatty food, no pre-test caffeine, and no pre-test hair washing). Participants were asked to complete a Beck Depression Inventory (BDI). After completing the questionnaire, participants who had scored more than 21 on the test were excluded from the processing and conclusion process according to the psychological standards. To collect the database, 16 people (6 females and 10 males) between the ages of 20 and 28 were invited to participate in the experiment. Participants' EEG signals were recorded while listening to music. All EEG signals were recorded at 29°C between 9 and 11 a.m. to ensure that the participants were not tired. Also, to avoid EOG noise, all subjects were asked to keep their eyes closed during the signal recording process. The experiment was conducted using an Encephalan 21-channel EEG recorder and a MacBook Air 2017 (Core i5, 8 GB RAM). All channel data were referenced to the two reference electrodes A1 and A2 and digitized from an international 10-20 system-based 21-channel electrode cap at 250 Hz. Fig. 1 shows the EEG recording while a participant is listening to music. The descriptive results of the BDI and SAM tests are shown in Table 1. For example, according to Table 1, subject 3 was excluded from processing due to a mismatch in the SAM test. Subject 1 entered the process with a mean
FIGURE 4. Part of the EEG signal for positive, negative and neutral stages of C4 and F4 channels for subject 1.
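As a rough illustration of the recording pipeline described in Section II-A (21 channels referenced to A1 and A2, digitized at 250 Hz) and of the overlapping segmentation shown in Fig. 7, the NumPy snippet below re-references a recording and cuts it into overlapping windows. The array layout, window length, overlap fraction, and channel names are assumptions for illustration, not the authors' exact preprocessing code.

```python
import numpy as np

FS = 250  # sampling rate (Hz), as in the recording setup above


def rereference(eeg, ch_names, refs=("A1", "A2")):
    """Subtract the mean of the reference electrodes from every channel.
    eeg: array of shape (n_channels, n_samples)."""
    ref_idx = [ch_names.index(r) for r in refs]
    return eeg - eeg[ref_idx].mean(axis=0, keepdims=True)


def overlapping_windows(eeg, win_sec=5.0, overlap=0.5):
    """Cut the recording into windows of win_sec seconds with fractional overlap."""
    win = int(win_sec * FS)
    step = int(win * (1.0 - overlap))
    n_samples = eeg.shape[1]
    return np.stack([eeg[:, s:s + win]
                     for s in range(0, n_samples - win + 1, step)])


# Hypothetical example: one subject's 21-channel recording of 4 minutes.
ch_names = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2",
            "F7", "F8", "T3", "T4", "T5", "T6", "Fz", "Cz", "Pz", "A1", "A2"]
raw = np.random.randn(21, 4 * 60 * FS)          # placeholder for real data
segments = overlapping_windows(rereference(raw, ch_names))
print(segments.shape)  # (n_windows, 21, 1250)
```

Overlapping the windows, as in Fig. 7, increases the number of training examples per subject without changing the length of the recording session.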
FIGURE 7. The overlap operation for 2-stage and 3-stage classification of emotion.
FIGURE 11. The proposed network accuracy for (a) 2-stage and (b) 3-stage classification of emotion.
FIGURE 12. The confusion matrix for (a) 2-stage and (b) 3-stage classification of emotion.
FIGURE 14. The t-SNE chart of the proposed method for (a) 2-stage and (b) 3-stage classification of emotion.
FIGURE 15. The performance of the suggested method compared to the CNN, MLP, and DBM networks for 3-stage classification of emotion.
shows the bar-chart diagram of the sensitivity, specificity, and accuracy of the two-stage and three-stage classification. Furthermore, the precision, sensitivity, and specificity values of the two-stage and three-stage classification are very promising in this figure. Table 4 shows the F-measure obtained for the 2-stage (positive and negative) and 3-stage (positive, negative, and neutral) classification of emotion. According to Table 4, the F-measure for the 2- and 3-class stages is 97.42% and 95.24%, respectively. Fig. 14 also shows the t-SNE chart for the raw signal, Conv5, and Softmax layers for the two-stage and three-stage classification of emotion. As is clear from the last layer, almost all the samples are separated for the evaluation set, indicating the optimal efficiency of the suggested method for two-stage and three-stage classification.

In order to show the performance of the proposed CNN-LSTM method with different data types as input, the classification accuracy is also obtained using other common methods for 3-stage emotion recognition. In this regard, time data and several manual features from time data, along with DBM and MLP, are selected as comparative methods [43], [47], [48]. The number of hidden layers is set to 3 for DBM and MLP, and the learning rate is chosen as 0.001. Also, for CNN, the proposed architecture in Table 3 was selected without the LSTM layers. The parameters of the minimum, maximum, skewness, crest
TABLE 5. Comparison of the suggested network's computational complexity with CNN, MLP, and DBM for 3-stage classification of emotion over 180 iterations.
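For reference, the sketch below shows what the hand-crafted baseline described above might look like: simple time-domain features (minimum, maximum, skewness, and a crest factor) fed to a scikit-learn MLP with three hidden layers and a learning rate of 0.001. The hidden-layer widths, the exact feature definitions, and the data shapes are assumptions; only the layer count and learning rate come from the comparison setup stated in the text.

```python
import numpy as np
from scipy.stats import skew
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split


def handcrafted_features(segments):
    """Per-channel minimum, maximum, skewness, and crest factor for each window.
    segments: array of shape (n_windows, n_channels, n_samples)."""
    mins = segments.min(axis=2)
    maxs = segments.max(axis=2)
    skews = skew(segments, axis=2)
    rms = np.sqrt((segments ** 2).mean(axis=2))
    crest = np.abs(segments).max(axis=2) / (rms + 1e-12)  # peak-to-RMS ratio
    return np.concatenate([mins, maxs, skews, crest], axis=1)


# Placeholder data: 600 windows, 12 channels, 1250 samples, 3 emotion labels.
rng = np.random.default_rng(0)
X = handcrafted_features(rng.standard_normal((600, 12, 1250)))
y = rng.integers(0, 3, size=600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Three hidden layers and a 0.001 learning rate, matching the setup described above.
mlp = MLPClassifier(hidden_layer_sizes=(100, 100, 100), learning_rate_init=0.001,
                    max_iter=300, random_state=0)
mlp.fit(X_tr, y_tr)
print("hold-out accuracy:", mlp.score(X_te, y_te))
```

In contrast to the end-to-end CNN-LSTM, a baseline of this kind depends entirely on which statistics are chosen, which is exactly the manual feature-selection burden the proposed method avoids.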
on the experience or the trial-and-error method. Therefore, it would be better to develop a more systematic method for selecting the appropriate parameters. Second, the training time of the proposed algorithm is relatively high, which can be addressed using graphical processing unit (GPU) systems. In addition, deep learning is more dependent on powerful computation than traditional machine learning methods and spends much more time on training. To reduce computational complexity, in future research we could consider introducing pre-training models or combining transfer learning methods to accelerate model training. We also intend to use more emotional states for classification purposes in future studies. The proposed method is also expected to be used in BCI applications.

REFERENCES
[1] A. Goldenberg, D. Garcia, E. Halperin, and J. J. Gross, "Collective emotions," Current Directions Psychol. Sci., vol. 29, no. 2, pp. 154–160, 2020.
[2] T. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, "Electron spectroscopy studies on magneto-optical media and plastic substrate interface," IEEE Transl. J. Magn. Jpn., vol. 2, no. 8, pp. 740–741, Aug. 1987.
[3] J. A. Russell, "A circumplex model of affect," J. Personality Social Psychol., vol. 39, no. 6, p. 1161, Dec. 1980.
[4] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 2, pp. 186–197, Mar. 2010.
[5] A. M. Bhatti, M. Majid, S. M. Anwar, and B. Khan, "Human emotion recognition and analysis in response to audio music using brain signals," Comput. Hum. Behav., vol. 65, pp. 267–275, Dec. 2016.
[6] Y.-P. Lin, C.-H. Wang, T.-P. Jung, T.-L. Wu, S.-K. Jeng, J.-R. Duann, and J.-H. Chen, "EEG-based emotion recognition in music listening," IEEE Trans. Biomed. Eng., vol. 57, no. 7, pp. 1798–1806, Jul. 2010.
[7] G. Balasubramanian, A. Kanagasabai, J. Mohan, and N. P. G. Seshadri, "Music induced emotion using wavelet packet decomposition—An EEG study," Biomed. Signal Process. Control, vol. 42, pp. 115–128, Apr. 2018.
[8] Y. Ding, X. Hu, Z. Xia, Y.-J. Liu, and D. Zhang, "Inter-brain EEG feature extraction and analysis for continuous implicit emotion tagging during video watching," IEEE Trans. Affect. Comput., early access, Jun. 22, 2018, doi: 10.1109/TAFFC.2018.2849758.
[9] F. Noroozi, M. Marjanovic, A. Njegus, S. Escalera, and G. Anbarjafari, "Audio-visual emotion recognition in video clips," IEEE Trans. Affect. Comput., vol. 10, no. 1, pp. 60–75, Jan. 2019.
[10] M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic, "A multimodal database for affect recognition and implicit tagging," IEEE Trans. Affect. Comput., vol. 3, no. 1, pp. 42–55, Jan. 2012.
[11] S. Koelsch, Brain and Music. Hoboken, NJ, USA: Wiley, 2012.
[12] T. Eerola and J. K. Vuoskoski, "A comparison of the discrete and dimensional models of emotion in music," Psychol. Music, vol. 39, no. 1, pp. 18–49, Jan. 2011.
[13] D. Sammler, M. Grigutsch, T. Fritz, and S. Koelsch, "Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music," Psychophysiology, vol. 44, no. 2, pp. 293–304, Mar. 2007.
[14] L.-O. Lundqvist, F. Carlsson, P. Hilmersson, and P. N. Juslin, "Emotional responses to music: Experience, expression, and physiology," Psychol. Music, vol. 37, no. 1, pp. 61–90, Jan. 2009.
[15] A. J. Blood and R. J. Zatorre, "Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion," Proc. Nat. Acad. Sci. USA, vol. 98, no. 20, pp. 11818–11823, Sep. 2001.
[16] K. Mueller, T. Fritz, T. Mildner, M. Richter, K. Schulze, J. Lepsien, M. L. Schroeter, and H. E. Möller, "Investigating the dynamics of the brain response to music: A central role of the ventral striatum/nucleus accumbens," NeuroImage, vol. 116, pp. 68–79, Aug. 2015.
[17] S. Moghimi, A. Kushki, S. Power, A. M. Guerguerian, and T. Chau, "Automatic detection of a prefrontal cortical response to emotionally rated music using multi-channel near-infrared spectroscopy," J. Neural Eng., vol. 9, no. 2, Apr. 2012, Art. no. 026022.
[18] S. M. Alarcao and M. J. Fonseca, "Emotions recognition using EEG signals: A survey," IEEE Trans. Affect. Comput., vol. 10, no. 3, pp. 374–393, Jul. 2019.
[19] M. Balconi, E. Grippa, and M. E. Vanutelli, "What hemodynamic (fNIRS), electrophysiological (EEG) and autonomic integrated measures can tell us about emotional processing," Brain Cognition, vol. 95, pp. 67–76, Apr. 2015.
[20] W. Zheng, "Multichannel EEG-based emotion recognition via group sparse canonical correlation analysis," IEEE Trans. Cognit. Develop. Syst., vol. 9, no. 3, pp. 281–290, Sep. 2017.
[21] P. Ozel, A. Akan, and B. Yilmaz, "Synchrosqueezing transform based feature extraction from EEG signals for emotional state prediction," Biomed. Signal Process. Control, vol. 52, pp. 152–161, Jul. 2019.
[22] F. Hasanzadeh and S. Moghimi, "Emotion estimation during listening to music by EEG signal and applying NARX model and genetic algorithm," presented at the Nat. Conf. Technol., Energy Data Elect. Comput. Eng., 2015.
[23] M. Soleymani, S. Asghari-Esfeden, Y. Fu, and M. Pantic, "Analysis of EEG signals and facial expressions for continuous emotion detection," IEEE Trans. Affect. Comput., vol. 7, no. 1, pp. 17–28, Jan. 2016.
[24] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, "DEAP: A database for emotion analysis; Using physiological signals," IEEE Trans. Affect. Comput., vol. 3, no. 1, pp. 18–31, Jan. 2012.
[25] G. Chanel, J. Kronegg, D. Grandjean, and T. Pun, "Emotion assessment: Arousal evaluation using EEG's and peripheral physiological signals," in Proc. Int. Workshop Multimedia Content Represent., Classification Secur. Berlin, Germany: Springer, 2006, pp. 530–537.
[26] W.-L. Zheng, J.-Y. Zhu, and B.-L. Lu, "Identifying stable patterns over time for emotion recognition from EEG," IEEE Trans. Affect. Comput., vol. 10, no. 3, pp. 417–429, Jul. 2019.
[27] N. Thammasan, K. Moriyama, K.-I. Fukui, and M. Numao, "Familiarity effects in EEG-based emotion recognition," Brain Informat., vol. 4, no. 1, pp. 39–50, Mar. 2017.
[28] Y. Kumagai, M. Arvaneh, and T. Tanaka, "Familiarity affects entrainment of EEG in music listening," Frontiers Hum. Neurosci., vol. 11, p. 384, Jul. 2017.
[29] G. Zhao, Y. Zhang, and Y. Ge, "Frontal EEG asymmetry and middle line power difference in discrete emotions," Frontiers Behav. Neurosci., vol. 12, p. 225, Nov. 2018.
[30] J. Lu, D. Wu, H. Yang, C. Luo, C. Li, and D. Yao, "Scale-free brain-wave music from simultaneously EEG and fMRI recordings," PLoS ONE, vol. 7, no. 11, Nov. 2012, Art. no. e49773.
[31] H. J. Yoon and S. Y. Chung, "EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm," Comput. Biol. Med., vol. 43, no. 12, pp. 2230–2237, Dec. 2013.
[32] X. Li, B. Hu, S. Sun, and H. Cai, "EEG-based mild depressive detection using feature selection methods and classifiers," Comput. Methods Programs Biomed., vol. 136, pp. 151–161, Nov. 2016.
[33] F. Hasanzadeh, M. Annabestani, and S. Moghimi, "Continuous emotion recognition during music listening using EEG signals: A fuzzy parallel cascades model," 2019, arXiv:1910.10489. [Online]. Available: http://arxiv.org/abs/1910.10489
[34] Y. Hou and S. Chen, "Distinguishing different emotions evoked by music via electroencephalographic signals," Comput. Intell. Neurosci., vol. 2019, Mar. 2019, Art. no. 3191903.
[35] P. Keelawat, N. Thammasan, M. Numao, and B. Kijsirikul, "Spatiotemporal emotion recognition using deep CNN based on EEG during music listening," 2019, arXiv:1910.09719. [Online]. Available: http://arxiv.org/abs/1910.09719
[36] Y. Yang, Q. Wu, M. Qiu, Y. Wang, and X. Chen, "Emotion recognition from multi-channel EEG through parallel convolutional recurrent neural network," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2018, pp. 1–7.
[37] H. Yang, J. Han, and K. Min, "A multi-column CNN model for emotion recognition from EEG signals," Sensors, vol. 19, no. 21, p. 4736, Oct. 2019.
[38] J. Chen, D. Jiang, Y. Zhang, and P. Zhang, "Emotion recognition from spatiotemporal EEG representations with hybrid convolutional recurrent neural networks via wearable multi-channel headset," Comput. Commun., vol. 154, pp. 58–65, Mar. 2020.
[39] C. Wei, L.-L. Chen, Z.-Z. Song, X.-G. Lou, and D.-D. Li, "EEG-based emotion recognition using simple recurrent units network and ensemble learning," Biomed. Signal Process. Control, vol. 58, Apr. 2020, Art. no. 101756.
[40] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016. [Online]. Available: http://www.deeplearningbook.org
[41] S. L. Hung and H. Adeli, "Parallel backpropagation learning algorithms on CRAY Y-MP8/864 supercomputer," Neurocomputing, vol. 5, no. 6, pp. 287–302, Nov. 1993.
[42] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," 2012, arXiv:1207.0580. [Online]. Available: http://arxiv.org/abs/1207.0580
[43] Z. Mousavi, T. Yousefi Rezaii, S. Sheykhivand, A. Farzamnia, and S. N. Razavi, "Deep convolutional neural network for classification of sleep stages from single-channel EEG signals," J. Neurosci. Methods, vol. 324, Aug. 2019, Art. no. 108312.
[44] A. Graves, "Generating sequences with recurrent neural networks," 2013, arXiv:1308.0850. [Online]. Available: http://arxiv.org/abs/1308.0850
[45] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
[46] O. Wichrowska, N. Maheswaranathan, M. W. Hoffman, S. G. Colmenarejo, M. Denil, N. de Freitas, and J. Sohl-Dickstein, "Learned optimizers that scale and generalize," in Proc. 34th Int. Conf. Mach. Learn., vol. 70, 2017, pp. 3751–3760.
[47] R. Salakhutdinov and G. Hinton, "Deep Boltzmann machines," in Artificial Intelligence and Statistics. Clearwater Beach, FL, USA, Apr. 2009, pp. 448–455.
[48] Y.-L. Hsu, Y.-T. Yang, J.-S. Wang, and C.-Y. Hsu, "Automatic sleep stage recurrent neural classifier using energy features of EEG signals," Neurocomputing, vol. 104, pp. 105–114, Mar. 2013.

ZOHREH MOUSAVI received the B.Sc. degree in mechanical engineering from the University of Yasuj, Yasuj, Iran, in 2012, and the M.Sc. degree in mechanical engineering from the University of Vali Asr Rafsanjan, Kerman, Iran, in 2015. She is currently pursuing the Ph.D. degree in mechanical engineering with the University of Tabriz, Tabriz, Iran. Her current research interests include vibration and biomedical signal processing, compressed sensing, mechanical systems, structural health monitoring (SHM), machine learning, neural networks, and deep learning.

TOHID YOUSEFI REZAII received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering (communication) from the University of Tabriz, Tabriz, Iran, in 2006, 2008, and 2012, respectively. He is currently with the Faculty of Electrical and Computer Engineering, University of Tabriz. His current research interests include biomedical signal processing, data compression, compressed sensing, statistical signal processing, pattern recognition-statistical learning, and adaptive filters.