A Review of EEG Emotion Recognition
Abstract — Emotion recognition is an important aspect of the HMI (Human Machine Interface) field, and EEG (Electroencephalography) allows a simple and effective elicitation of those emotions. Increasing the accuracy of EEG-based recognition is the focus of many researchers across the globe: some intend to improve the signals by focusing on signal processing techniques, while others focus on statistical or machine learning techniques. In this paper, we discuss the most common techniques, especially the studies that yield the best results, but we also highlight novel ways of classifying emotions even where the results were not the best. We also review the common steps of setting up an emotion elicitation experiment, discussing the different techniques for collecting the signals, extracting the features, and then selecting the features, as well as some standing problems in the field and future growth areas.

Keywords — EEG, Emotion Recognition, Emotion Detection, HMI, BCI
I. INTRODUCTION

If the machine is aware of the current user emotion, it can make more informed decisions that will be appreciated by the user and reduce user frustration, enhancing user-machine communication and hence driving the HMI (Human Machine Interaction) field forward. Human emotion detection can be approached from a number of angles. Researchers have tried to detect emotions from different signals such as text, speech, facial images or videos, facial depth images, skin conductivity, temperature, EOG (Electrooculogram), heart rate, eye blinking, Heart Rate Variability (HRV), and now EEG. These biometrics show a high correlation between the readings and human emotions, especially the brain signals.

Brain signals can be measured by several invasive and non-invasive techniques, such as EEG (Electroencephalogram), fMRI (Functional Magnetic Resonance Imaging), MEG (Magnetoencephalography), NIRS (Near-infrared Spectroscopy), PET (Positron Emission Tomography), and EROS (Event-related Optical Signal). In this work, we focus on EEG emotion recognition. EEG signals are the voltage fluctuations measured by placing sensitive electrodes on the scalp, recording the voltage in microvolts (µV) at a certain sampling frequency, e.g. 100 Hz. EEG recordings can be monopolar or bipolar: monopolar recording measures the raw voltage at each electrode, while bipolar recording measures the voltage difference between two electrodes, with different techniques for specifying which two; monopolar recording is the more popular of the two. The best-known placement of the electrodes is the 10-20 positioning system, proposed by the International Federation of Societies for Electroencephalography and Clinical Neurophysiology [25], [30].
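As a minimal illustration of the two recording schemes, the Python sketch below derives a bipolar channel from two monopolar ones; the electrode names (F3, F4) and the simulated data are ours, purely for illustration.

```python
import numpy as np

fs = 100                      # sampling frequency in Hz
t = np.arange(0, 10, 1 / fs)  # 10 s of samples

rng = np.random.default_rng(0)
# Simulated monopolar potentials in microvolts (placeholder data, not real EEG).
f3 = 10 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)        # electrode F3
f4 = 10 * np.sin(2 * np.pi * 10 * t + 0.5) + rng.standard_normal(t.size)  # electrode F4

# A bipolar channel is the voltage difference between two electrodes;
# noise common to both recording sites cancels out in the subtraction.
f3_f4 = f3 - f4
print(f3_f4[:5])
```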
II. REVIEW EXPERIMENT SETUP

EEG emotion recognition is a long-standing open problem. There are many studies in this area, and most of them share a similar experimental setup for eliciting an emotion from the user and then classifying it. The steps look like this:

1- Emotion elicitation: an external stimulus, for example images, videos, audio, or a game, is used to trigger a studied range of emotions; studies can develop their own stimulation material or choose from a ready-made dataset.
2- Collecting SAM (Self-Assessment Manikin) ratings: the same image could trigger a happy feeling in one subject but a sad feeling in another, which is why we need to ask the subjects after the experiment which feeling they experienced. The depth or magnitude of the emotion may differ as well, which is why most studies use variations of SAM to rate pleasure, arousal, and dominance.
3- Feature extraction and feature selection on the EEG data.
4- Machine learning classification: the most common classifier is the SVM, but depending on the study, researchers use other classifiers suited to their data (a pipeline sketch for steps 3 and 4 follows this list).
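As a rough sketch of steps 3 and 4, the following combines band-power feature extraction with an SVM classifier. The helper name band_power_features, the band definitions, the 128 Hz one-second epochs, and the placeholder data are our assumptions, not taken from any particular cited study.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def band_power_features(epochs, fs=128):
    """Mean PSD power per channel in the classic EEG bands.

    epochs has shape (n_epochs, n_channels, n_samples).
    """
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands.values()]
    # -> (n_epochs, n_channels * n_bands)
    return np.stack(feats, axis=-1).reshape(len(epochs), -1)

rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 32, 128))  # 200 one-second epochs, 32 channels, 128 Hz
y = rng.integers(0, 2, 200)                  # placeholder high/low-valence labels

X = band_power_features(X_raw)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())  # near chance on random data
```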
It is not clear in this field (EEG emotion recognition) which range of emotions can be detected or how to detect certain emotions, so each study tries to focus on one or two emotions and describe the best approach for them; there are no well-proven standards. Examples of study targets:

1- Distinct emotions (happy, sad, …): in this type of study, researchers focus on a single emotion or a handful of emotions to recognize.
2- Positive vs. negative emotion detection.
3- Arousal/valence state: high arousal high valence (HAHV), low arousal high valence (LAHV), and high arousal low valence (HALV) [27], [28] (a mapping sketch follows this list).
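For instance, SAM ratings can be mapped onto these arousal/valence quadrants with a simple rule. The 1-9 scale and the midpoint threshold of 5 below follow a common convention (e.g. for DEAP-style ratings), but they are an illustrative choice, not a fixed standard.

```python
def av_quadrant(valence: float, arousal: float, threshold: float = 5.0) -> str:
    """Map SAM valence/arousal ratings (1-9 scale) to a quadrant label."""
    v = "HV" if valence >= threshold else "LV"   # high vs. low valence
    a = "HA" if arousal >= threshold else "LA"   # high vs. low arousal
    return a + v

print(av_quadrant(7.2, 6.1))  # -> "HAHV"
print(av_quadrant(3.0, 8.0))  # -> "HALV"
```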
Examples of reviewed studies (reference, year, target emotions, subjects/setup, and findings):

[11] (2015) Emotions: regret, rejoice, other. Channels: 64. Feature extraction: approximate entropy (ApEn). Classifier: Fisher Linear Discriminant (FLD). Setup: 25 subjects, gambling paradigm. Note: extracting the regret emotion from the EEG signal.

[12] Emotions: anger, contempt, disgust, fear, sad, surprise, happy. Channels: 14. Feature extraction: wavelet, Mel-frequency cepstral coefficients (MFCC). Classifier: multilayer perceptron (MLP). Setup: IAPS stimuli. Note: a human can have more than one emotion at a time.

[13] (2018) Emotions: positive/negative and approach/withdrawal. Channels: F8, FC2, FC6, C4, T8, CP2, CP6, PO4, F7, FC1, FC5, C3, T7, CP1, CP5, PO3. Feature extraction: Deep Physiological Affect Network (deep learning model). Classifier: multi-layer convolutional neural network (CNN). Setup: 1,280 videos, along with the 64 combinations of physiological signals per video. Note: deep neural network used for feature extraction.

[14] (2016) Emotions: normal, abnormal. Channels: 64. Feature extraction: wavelet, discrete wavelet transform (DWT). Classifier: feed-forward back-propagation network. Setup: 10 subjects. Result: 75% for normal and 65% for abnormal.

[15] (2014) Emotions: valence, arousal, and liking, with positive/negative for each. Channels: 32 and 10. Feature extraction: band power and PSD by wavelet transform. Classifier: Support Vector Machine (SVM). Setup: 32 participants, DEAP. Note: the best combination is one-minute EEG data using band power from 10-channel probes.

[16] (2005) Emotions: arousal, valence, dominance, and liking. Channels: 32. Feature extraction: Gaussian Mixture Model and wavelet. Classifier: linear ridge regression and support vector regression (SVR). Setup: 40 one-minute music videos from the DEAP dataset, scored for dominance (scale 1 to 9) and familiarity (scale 1 to 5). Note: unsupervised training gave better results than traditional classification.

[31] (2014) Emotions: happy, sad, and neutral. Channels: twelve (AF3, F7, F3, FC5, P7, O1, O2, P8, FC6, F4, F8, and AF4). Feature extraction: short-time Fourier transform (STFT) with a 1 s window, differential laterality (DLAT), and differential causality (DCAU). Classifier: Gaussian Naïve Bayes (GNB). Setup: music listening (24 trials per day). Note: comparing feature selection techniques.
3) Learning Vector Quantization (LVQ): related to kNN, it applies a winner-takes-all approach; it is a network that uses supervised learning [1].
4) Artificial Neural Networks (ANN): a nonlinear classifier; the most common version of ANN is the Multi-Layer Perceptron Neural Network (MLPNN), or Multi-Layer Perceptron (MLP).
5) Restricted Boltzmann Machines (RBMs): the study by Gao et al. [8] uses 3 layers of RBMs to recognize 4 different distinct emotions.
6) Others, such as Linear Discriminant Analysis (LDA), which assumes the features are Gaussian distributed and fails if the discriminative information lies not in the mean but in the variance of the data [23] (see the sketch after this list); also the Naive Bayes Classifier (NBC), Hidden Markov Models (HMM), and Gaussian Mixture Models (GMM).
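To make the LDA caveat concrete, the toy example below (ours, not from [23]) builds two classes with equal means but different variances: LDA scores near chance, while quadratic discriminant analysis, which models per-class covariance, does much better.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(500, 2))  # class 0: tight cluster around zero
X1 = rng.normal(0.0, 4.0, size=(500, 2))  # class 1: same mean, larger variance
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

# LDA separates classes by their means, so it is near chance here (~0.5);
# QDA models per-class covariance and recovers the structure (~0.9).
print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
print(cross_val_score(QuadraticDiscriminantAnalysis(), X, y, cv=5).mean())
```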
VII. CONCLUSION AND DISCUSSION

After collecting the SAM results, they have to be mapped to the EEG data, which is a lot of work requiring precision and carefulness, especially when tagging the EEG data with the SAM feedback. In the experiments made by other studies, a lot of the collected data had to be dropped due to low quality. In addition, every subject had to tag his own data, as emotions tend to differ from subject to subject. Cleansing the data and tagging it for classification is a tedious manual task, and hence a hindrance to progress.
Another issue is that there is no closed feedback loop to enhance the accuracy of emotion detection. By a closed feedback loop we mean a method that shows the subject the classified data and allows him to judge it and adjust his brain waves next time to get a more accurate result, letting the human mind train alongside the model; it is a way for the classifier and the mind to learn together in real time.
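A closed loop of this kind might look like the sketch below; the functions acquire_epoch and ask_subject are stubs standing in for the acquisition hardware and feedback UI, and scikit-learn's SGDClassifier is our choice of a model that supports incremental updates.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Stubs standing in for the acquisition hardware and the feedback UI.
def acquire_epoch():
    return rng.standard_normal(64)            # one feature vector from live EEG

def ask_subject(prediction):
    print("classifier guessed:", prediction)  # subject sees the guess...
    return int(rng.integers(0, 2))            # ...and reports the true feeling (stubbed)

model = SGDClassifier()                                     # supports online updates
model.partial_fit([acquire_epoch()], [0], classes=[0, 1])   # bootstrap the model

for _ in range(20):
    epoch = acquire_epoch()
    predicted = model.predict([epoch])[0]     # show the subject the classification
    true_label = ask_subject(predicted)       # let the subject judge/correct it
    model.partial_fit([epoch], [true_label])  # the model adapts each trial, while
                                              # (in a real session) the mind adapts too
```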
Another phenomenon that affects the results is that people's brain signals differ from one another [9], which means that the EEG data is unique per person and the training of the classifier has to be done per person: the model is specific to each individual. This can be a problem for applications that require recognition immediately, without the possibility of running a training session first (see the sketch below).
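This subject-dependence can be quantified with leave-one-subject-out cross-validation, where each fold holds out one whole subject to mimic a brand-new user; a large drop versus within-subject accuracy indicates a subject-specific model is needed. The sketch below uses placeholder data and is our illustration rather than a protocol from [9].

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder features/labels for 8 subjects, 50 epochs each.
X = rng.standard_normal((400, 128))
y = rng.integers(0, 2, 400)
groups = np.repeat(np.arange(8), 50)  # which subject produced each epoch

# Each fold trains on 7 subjects and tests on the held-out one.
scores = cross_val_score(SVC(), X, y, groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())
```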
Moreover, there is no general agreement on which feature best describes which emotion, in other words an emotion-to-features mapping. Furthermore, humans can have more than one emotion at the same time [13], yet there is currently no way to classify more than one emotion; the state of the art still struggles with classifying one.
REFERENCES
[1] Esmeralda C. Djamal, Poppi Lodaya, "EEG based emotion monitoring using wavelet and learning vector quantization," 4th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), 2017.
[2] P. Ackermann, C. Kohlschein, J. Á. Bitsch, K. Wehrle and S. Jeschke, "EEG-based automatic emotion recognition: Feature extraction, selection and classification methods," 18th International Conference on e-Health Networking, Applications and Services (Healthcom), 2016.
[3] A. Samara, M. L. R. Menezes and L. Galway, "Feature Extraction for Emotion Recognition and Modelling Using Neurophysiological Data," 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS), 2016.
[4] A. Patil, C. Deshmukh and A. R. Panat, "Feature extraction of EEG for emotion recognition using Hjorth features and higher order crossings," Conference on Advances in Signal Processing (CASP), 2016.
[5] S. W. Byun, S. P. Lee and H. S. Han, "Feature Selection and Comparison for the Emotion Recognition According to Music Listening," International Conference on Robotics and Automation Sciences (ICRAS), 2017.
[6] K. Yano and T. Suyama, "Fixed low-rank EEG spatial filter estimation for emotion recognition induced by movies," International Workshop on Pattern Recognition in Neuroimaging (PRNI), 2016.
[7] P. C. Petrantonakis and L. J. Hadjileontiadis, "Adaptive Emotional Information Retrieval From EEG Signals in the Time-Frequency Domain," IEEE Transactions on Signal Processing, 2012.
[8] Y. Gao, H. J. Lee and R. M. Mehmood, "Deep learning of EEG signals for emotion recognition," IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2015.
[9] Y. H. Liu, W. T. Cheng, Y. T. Hsiao, C. T. Wu and M. D. Jeng, "EEG-based emotion recognition based on kernel Fisher's discriminant analysis and spectral powers," IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2014.
[10] D. Huang, C. Guan, Kai Keng Ang, Haihong Zhang and Yaozhang Pan, "Asymmetric Spatial Pattern for EEG-based emotion detection," International Joint Conference on Neural Networks (IJCNN), 2012.
[11] Ou Lin, Guang-Yuan Liu, Jie-Min Yang and Yang-Ze Du, "Neurophysiological markers of identifying regret by 64 channels EEG signal," 12th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), 2015.
[12] D. Handayani, H. Yaacob, A. Wahab and I. F. T. Alshaikli, "Statistical Approach for a Complex Emotion Recognition Based on EEG Features," 4th International Conference on Advanced Computer Science Applications and Technologies (ACSAT), 2015.
[13] B. H. Kim and S. Jo, "Deep Physiological Affect Network for the Recognition of Human Emotions," IEEE Transactions on Affective Computing, 2018.
[14] S. G. Mangalagowri and P. C. P. Raj, "EEG feature extraction and classification using feed forward backpropogation algorithm for emotion detection," International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT), 2016.
[15] I. Wichakam and P. Vateekul, "An evaluation of feature extraction in EEG-based emotion prediction with support vector machines," 11th International Joint Conference on Computer Science and Software Engineering (JCSSE), 2014.
[16] Joseph A. Mikels, Barbara L. Fredrickson, Gregory R. Larkin, Casey M. Lindberg, Sam J. Maglio, Patricia A. Reuter-Lorenz, "Emotional category data on images from the International Affective Picture System," Behavior Research Methods, 2005.
[17] Elise S. Dan-Glauser, Klaus R. Scherer, "The Geneva affective picture database (GAPED): a new 730-picture database focusing on valence and normative significance," Behavior Research Methods, 2011.
[18] Artur Marchewka, Łukasz Żurawski, Katarzyna Jednoróg, Anna Grabowska, "The Nencki Affective Picture System (NAPS): Introduction to a novel, standardized, wide-range, high-quality, realistic picture database," Behavior Research Methods, 2014.
[19] Benedek Kurdi, Shayn Lozano, Mahzarin R. Banaji, "Introducing the Open Affective Standardized Image Set (OASIS)," Behavior Research Methods, 2017.
[20] R. Jenke, A. Peer and M. Buss, "Feature Extraction and Selection for Emotion Recognition from EEG," IEEE Transactions on Affective Computing, 2014.
[21] J. Kaur and A. Kaur, "A review on analysis of EEG signals," International Conference on Advances in Computer Engineering and Applications, 2015.
[22] Margaret M. Bradley, Peter J. Lang, "Measuring emotion: The self-assessment manikin and the semantic differential," Journal of Behavior Therapy and Experimental Psychiatry, 2002.
[23] M. Rajya Lakshmi, T. V. Prasad, V. Chandra Prakash, "Survey on EEG Signal Processing Methods," International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), 2014.
[24] Sunil Kalagi, José Machado, Vitor Carvalho, Filomena Soares, Demétrio Matos, "Brain computer interface systems using non-invasive electroencephalogram signal: A literature review," International Conference on Engineering, Technology and Innovation (ICE/ITMC), 2017.
[25] S. Vaid, P. Singh and C. Kaur, "EEG Signal Analysis for BCI Interface: A Review," Fifth International Conference on Advanced Computing & Communication Technologies, 2015.
[26] S. H. Kim and N. A. N. Thi, "Feature extraction of emotional states for EEG-based rage control," 39th International Conference on Telecommunications and Signal Processing (TSP), 2016.
[27] A. F. Rabbi, K. Ivanca, A. V. Putnam, A. Musa, C. B. Thaden and R. Fazel-Rezai, "Human performance evaluation based on EEG signal analysis: A prospective review," Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2009.
[28] A. Saidatul, M. P. Paulraj, S. Yaacob and N. F. Mohamad Nasir, "Automated System for Stress Evaluation Based on EEG Signal: A Prospective Review," IEEE 7th International Colloquium on Signal Processing and its Applications, 2011.
[29] X. Zhuang, V. Rozgić and M. Crystal, "Compact unsupervised EEG response representation for emotion recognition," International Conference on Biomedical and Health Informatics (BHI), 2014.
[30] M. A. B. S. Akhanda, S. M. F. Islam and M. M. Rahman, "Detection of Cognitive State for Brain-Computer Interfaces," International Conference on Electrical Information and Communication Technology (EICT), 2014.
[31] Y. P. Lin and T. P. Jung, "Exploring day-to-day variability in EEG-based emotion classification," IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2014.