EEG Based Classification of Emotions With CNN and RNN
Volume 4 Issue 4, June 2020 Available Online: www.ijtsrd.com e-ISSN: 2456 – 6470
@ IJTSRD | Unique Paper ID – IJTSRD30374 | Volume – 4 | Issue – 4 | May-June 2020 Page 1289
International Journal of Trend in Scientific Research and Development (IJTSRD) @ www.ijtsrd.com eISSN: 2456-6470
learning demonstrated better performance in emotion detection from physiological signals, despite being conceived for object recognition in images.

For classification of emotion, the most popular method, the K-Nearest Neighbour algorithm [5], achieved 62.3% overall accuracy using wavelet energy and entropy features. The results showed 78.7±2.6% sensitivity, 82.8±6.3% specificity and 62.3±1.1% accuracy on the DEAP database.

In [6], the paper showed that the Hierarchical Structure-Adaptive RNN (HSA-RNN) for video summarization tasks can adaptively exploit the video structure and generate the summary simultaneously.

In [7], the results showed that the proposed framework improves the accuracy and efficiency of EEG signal classification compared with traditional methods, including the support vector machine (SVM), artificial neural network (ANN), and standard CNN, by 74.2%.

Proposed Work
A. Database Agglomeration and Description
The proposed method uses the SJTU Emotion EEG Dataset (SEED), a collection of EEG recordings provided by the Brain-like Computing and Machine Intelligence (BCMI) laboratory. The dataset contains the subjects' EEG signals recorded while they watched film clips. The film clips were prudently selected to induce different types of emotion: neutral, sad, fear and happy. 15 subjects (7 males and 8 females; mean age 23.27, SD 2.37) watched 15 video clips in the experiments. Each subject repeated the experiment three times within about one week. The down-sampled, pre-processed and segmented versions of the EEG data in Matlab (.mat) format were obtained [2]. The detailed order of the total of 62 channels is included in the dataset. The EEG cap layout according to the international 10-20 system for 62 channels is shown in figure 1.

The differences and ratios between the differential entropy (DE) features of 27 pairs of hemispheric-asymmetry electrodes give the differential asymmetry (DASM) and rational asymmetry (RASM) features. All the features are further smoothed using the conventional moving-average and linear dynamic systems (LDS) approaches.

B. Methodology
The project performs emotion recognition on EEG signals to detect four emotions, namely happiness, sadness, neutral and fear, using the SEED dataset. Two different neural models are used as classifiers: a simple Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Both models are augmented with contemporary deep learning techniques such as Dropout and Rectified Linear Units (ReLU) in order to introduce non-linearity. Both models are implemented in Python using the Keras [1] library.
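The Dropout and ReLU techniques used to augment both classifiers can be illustrated in isolation. The following is a minimal NumPy sketch of the two operations; the 0.5 drop rate and the sample values are illustrative assumptions, not the paper's actual hyperparameters:

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit: passes positive values through and zeroes
    # out negatives, introducing non-linearity into the network.
    return np.maximum(0.0, x)

def dropout(x, rate=0.5, rng=None):
    # Training-time (inverted) dropout: randomly zero a fraction `rate`
    # of activations and rescale the survivors so the expected value of
    # the layer output is unchanged.
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

activations = np.array([-1.0, 0.5, 2.0, -0.3])
print(relu(activations))           # negatives are clipped to zero
print(dropout(relu(activations)))  # some units zeroed, the rest rescaled
```

Keras applies the same inverted-dropout rescaling during training, so dropout layers become identity functions at inference time.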
h_t = σ(W_in x_t + W_rec h_(t−1))

where W_in is the input weight matrix, W_rec is the recurrent weight matrix and σ is the hidden activation function. Each timestep also computes an output based on the current hidden state:

y_t = f(W_out h_t)

where W_out is the output weight matrix and f is the output activation function.

We use a simple RNN with Rectified Linear hidden Units (ReLUs). We train the RNN to classify the emotions induced by the videos agglomerated in the SEED dataset. Then, using the confusion matrix, we measure the performance of each classifier in terms of accuracy, specificity, sensitivity, and precision.

True Positive (TP): Predicted is positive and it is true.
True Negative (TN): Predicted is negative and it is true.
False Positive (FP) – Type 1 Error: Predicted is positive and it is false.
False Negative (FN) – Type 2 Error: Predicted is negative and it is false.

Accuracy: (TP + TN) / (TP + TN + FP + FN)

Specificity: TN / (TN + FP)

Sensitivity: TP / (TP + FN)
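The simple RNN recurrence described above can be sketched as a NumPy forward pass. The layer sizes below (62 input channels, 32 hidden units, 4 emotion classes) are illustrative assumptions, the hidden activation is the ReLU the paper names, and a softmax stands in for the output activation f:

```python
import numpy as np

def rnn_forward(x_seq, W_in, W_rec, W_out,
                sigma=lambda z: np.maximum(0.0, z)):
    # x_seq: (T, input_dim) sequence of per-timestep feature vectors.
    # Hidden state: h_t = sigma(W_in @ x_t + W_rec @ h_{t-1}).
    # Output: y_t = softmax(W_out @ h_t), a distribution over classes.
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for x_t in x_seq:
        h = sigma(W_in @ x_t + W_rec @ h)
        logits = W_out @ h
        e = np.exp(logits - logits.max())  # stable softmax
        outputs.append(e / e.sum())
    return np.array(outputs)

# Illustrative dimensions: 62 EEG channels in, 32 hidden units,
# 4 emotion classes out; weights are random for the sketch.
rng = np.random.default_rng(0)
probs = rnn_forward(rng.normal(size=(10, 62)),
                    W_in=rng.normal(size=(32, 62)) * 0.1,
                    W_rec=rng.normal(size=(32, 32)) * 0.1,
                    W_out=rng.normal(size=(4, 32)) * 0.1)
print(probs.shape)  # (10, 4): one class distribution per timestep
```

In practice the trained Keras model performs this same recurrence with learned weight matrices rather than random ones.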
With these features of the preprocessed EEG data, the existing CNN and the proposed RNN models are trained, and classification is then performed using the trained models. The experimental results show that the chosen pool of features achieves relatively good classification performance across all the experiments of different subjects.

From table (2) we see that the RNN achieves the highest accuracies of 96.50%, 89.88%, 90.85% and 89.61% for Neutral, Sad, Fear and Happy, while the CNN-based classification gives 95.53%, 88.13%, 88.01% and 87.89% for the respective emotions.
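The accuracy, specificity, sensitivity and precision figures used to evaluate both classifiers can be derived per class from a multi-class confusion matrix. The sketch below uses made-up counts, not the paper's actual results:

```python
import numpy as np

def per_class_metrics(cm):
    # cm[i, j] = number of samples of true class i predicted as class j.
    # Each class is treated in turn as "positive", all others as "negative".
    metrics = []
    total = cm.sum()
    for c in range(cm.shape[0]):
        tp = cm[c, c]
        fn = cm[c, :].sum() - tp          # true class c, predicted otherwise
        fp = cm[:, c].sum() - tp          # predicted class c, actually other
        tn = total - tp - fn - fp
        metrics.append({
            "accuracy":    (tp + tn) / total,
            "sensitivity": tp / (tp + fn),  # recall / true positive rate
            "specificity": tn / (tn + fp),
            "precision":   tp / (tp + fp),
        })
    return metrics

# Hypothetical 4-class confusion matrix (Neutral, Sad, Fear, Happy),
# 50 samples per true class.
cm = np.array([[48, 1, 0, 1],
               [2, 45, 2, 1],
               [1, 3, 44, 2],
               [0, 2, 1, 47]])
m = per_class_metrics(cm)
print(round(m[0]["sensitivity"], 3))  # 0.96
```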
A. CNN
For the CNN, figure 5 shows the behaviour of the accuracy during training and testing over the 20 epochs. Similarly, figure 6 displays the loss values during training and testing, which decrease with each epoch. The confusion matrix shows the prediction results for the four classes (figure 7).

Fig.5 Accuracy result for CNN
Fig.6 Loss result for CNN

B. RNN
For the RNN, figure 8 shows the behaviour of the accuracy during training and testing over the 20 epochs. Similarly, figure 9 displays the loss values during training and testing, which decrease with each epoch. The confusion matrix shows the prediction results for the four classes (figure 10).

Fig.8 Accuracy result for RNN
Fig.9 Loss result for RNN
Hence, this paper shows that the RNN (figure 10) achieves the highest prediction accuracy for the four classes compared to the CNN (figure 7).

Conclusion:
Based on the existing research in the field of emotion recognition, this study explores the effectiveness of the recurrent neural network. The experimental results show that the RNN model classifies emotions with higher precision than the CNN-based approach. The reliability of the classification performance suggests that specific emotional states can be identified from brain activity. The learning by the RNN suggests that neural signatures associated with the Neutral, Sad, Fear and Happy emotions do exist and that they share commonality across individuals. Thus, the RNN provides an appealing framework for propagating information over a sequence using a continuous-valued hidden layer representation.

References
[1] Guolu Cao, Yuliang Ma, Xiaofei Meng, Yunyuan Gao, Ming Meng, "Emotion Recognition Based on CNN", Proceedings of the 38th Chinese Control Conference, 2019
[2] Wei-Long Zheng, Jia-Yi Zhu, and Bao-Liang Lu, "Identifying Stable Patterns over Time for Emotion Recognition from EEG", IEEE Transactions on Affective Computing, 2017
[3] Samarth Tripathi, Shrinivas Acharya, Ranti Dev Sharma, "Using Deep and Convolutional Neural Networks for Accurate Emotion Classification on DEAP Dataset", Proceedings of the Twenty-Ninth AAAI Conference on Innovative Applications (IAAI-17), 2017
[4] Luz Santamaria-Granados, Mario Munoz-Organero, Gustavo Ramirez-Gonzalez, Enas Abdulhay, and N. Arunkumar, "Using Deep Convolutional Neural Network for Emotion Detection on a Physiological Signals Dataset (AMIGOS)", IEEE Access (Volume 7), 2018
[5] Md. Rabiul Islam, Mohiuddin Ahmad, "Wavelet Analysis Based Classification of Emotion from EEG Signal", International Conference on Electrical, Computer and Communication Engineering, 2019
[6] Bin Zhao, Xuelong Li, Xiaoqiang Lu, "HSA-RNN: Hierarchical Structure-Adaptive RNN for Video Summarization", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018
[7] Gaowei Xu, Xiaoang Shen, Sirui Chen, Yongshuo Zong, Canyang Zhang, Hongyang Yue, Min Liu, Fei Chen, and Wenliang Che, "A Deep Transfer Convolutional Neural Network Framework for EEG Signal Classification", IEEE Access (Data-Enabled Intelligence for Digital Health), 2019
[8] Long Wen, X. Li, Xinyu Li, Liang Gao, "A New Transfer Learning Based on VGG-19 Network for Fault Diagnosis", Proceedings of the 2019 IEEE 23rd International Conference on Computer Supported Cooperative Work in Design, 2019
[9] Hussam Qassim, Abhishek Verma, David Feinzimer, "Compressed Residual-VGG16 CNN Model for Big Data Places Image Recognition", 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), 2018
[10] Nur Yusuf Oktavia, Adhi Dharma Wibawa, Evi Septiana Pane, Mauridhi Hery Purnomo, "Human Emotion Classification Based on EEG Signals Using Naive Bayes Method", International Seminar on Application for Technology of Information and Communication, 2019