


International Journal of Computer Trends and Technology (IJCTT) – Volume 67 Issue 6 - June 2019

A Review of EEG Emotion Recognition


Mohamed Ahmed Abdullah¹, Lars Rune Christensen²
¹Sudan University of Science and Technology, ²IT University of Copenhagen

DOI: 10.14445/22312803/IJCTT-V67I6P106

Abstract — Emotion recognition is an important aspect of the HMI (Human Machine Interface) field, and EEG (Electroencephalography) allows a simple and effective elicitation of emotions. Increasing the accuracy of EEG-based recognition is the focus of many researchers across the globe: some aim to improve the signals through signal processing techniques, others focus on statistical or machine learning techniques. In this paper we discuss the most common techniques, especially the studies that yield the best results, but we also highlight novel ways of classifying emotions even where the results were not the best. We also review the common steps of setting up an emotion elicitation experiment, discuss the different techniques for collecting the signals, extracting the features and then selecting the features, and examine some standing problems in the field and future growth areas.

Keywords — EEG, Emotion Recognition, Emotion Detection, HMI, BCI
studied range of emotions
I. INTRODUCTION

If the machine is conscious of the current user emotion, it can take more informed decisions that will be appreciated by the user and reduce user frustration, resulting in better user-machine communication and driving the HMI (Human Machine Interaction) field forward. Human emotion detection can be approached from a number of angles, with a number of rationales. Researchers have tried to detect emotions from different modalities such as text, speech, facial images or videos, facial depth images, skin conductivity, temperature, EOG (Electrooculogram), heart rate, eye blinking, Heart Rate Variability (HRV), and now EEG. These biometrics show a high correlation between the readings and human emotions, especially the brain signals.

Brain signals can be measured by several invasive and non-invasive techniques, such as EEG (Electroencephalogram), fMRI (Functional Magnetic Resonance Imaging), MEG (Magnetoencephalography), NIRS (Near-Infrared Spectroscopy), PET (Positron Emission Tomography), and EROS (Event-Related Optical Signal). In this work we focus on EEG emotion recognition. EEG signals are the voltage fluctuations measured by placing sensitive electrodes on the scalp, recording the voltage in microvolts (µV) at a certain sampling frequency, e.g. 100 Hz. The EEG signals can be monopolar or bipolar: monopolar recording measures the pure voltage readings, while bipolar recording measures the voltage difference between two electrodes, and there are different techniques for specifying which two. The monopolar recording is the more popular. The most famous placement of the electrodes is the 10-20 position system, which was proposed by the International Federation of Societies for Electroencephalography and Clinical Neurophysiology. [25], [30]

II. REVIEW EXPERIMENT SETUP

EEG emotion recognition is a still-standing problem and has been for a long time. There are a lot of studies in this area, and most of them follow a similar experimental setup to elicit the emotion from the user and then classify it. The steps look like this (a data-handling sketch follows the lists below):

1- Emotion elicitation: using an external stimulus, for example images, videos, audio, or a game. Studies can develop their own stimulation material or choose from a ready-made dataset to trigger a studied range of emotions.
2- Collecting SAM (Self-Assessment Manikin) ratings: the same image could trigger a happy feeling in one subject but a sad feeling in another, which is why we need to ask the subjects after the experiment what feeling they experienced. The depth or magnitude of the emotion may differ as well, which is why most studies use variations of SAM to rate pleasure, arousal, and dominance.
3- Feature extraction and feature selection on the EEG data.
4- Machine learning classification: the most common classifier is the SVM, but depending on the study, researchers tend to use other classifiers based on their data.

It is not clear in this field (EEG emotion recognition) what range of emotions can be detected or how to detect certain emotions, so each study tries to focus on one or two emotions and describe the best approach for them; there are no well-proven standards. Examples of study targets:

1- Distinct emotions (happy, sad, ...): in this type of study, researchers focus on a single emotion or a handful of emotions to recognize.
2- Positive vs. negative emotion detection.
3- Arousal/valence states: high arousal high valence (HAHV), low arousal high valence (LAHV), high arousal low valence (HALV). [27], [28]
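To make steps 1-4 concrete, here is a minimal sketch in Python (NumPy assumed) of the data handling between elicitation and classification: one stimulus trial is segmented into fixed windows and paired with a post-hoc SAM rating. The channel count, window length, and valence threshold are illustrative assumptions, and the random array stands in for a real recording.

```python
import numpy as np

def epoch_eeg(eeg, fs, win_sec):
    """Split a (channels, samples) recording into fixed-length windows."""
    win = int(fs * win_sec)
    n = eeg.shape[1] // win
    return np.stack([eeg[:, i * win:(i + 1) * win] for i in range(n)])

fs = 128                                    # sampling rate in Hz
trial = np.random.randn(14, 60 * fs)        # stand-in for one 60 s stimulus trial
windows = epoch_eeg(trial, fs, win_sec=10)  # shape (6, 14, 1280)
sam = {"valence": 7, "arousal": 3, "dominance": 5}   # 1-9 SAM ratings
labels = np.full(len(windows), sam["valence"] > 5)   # e.g. binarised valence
```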


Study | Purpose of the study | Technique | Observations and result

[1] 2017 | Detect the emotions Excited, Relax, Sad, Average (neutral) | Channels: AF3, T7, T8, AF4 at 128 Hz; feature extraction: wavelet; classifiers: Support Vector Machine (SVM) and Learning Vector Quantization (LVQ) | Run on 10 subjects using their own emotion database (morning, noon, and night), 10 s window; accuracy: Excited 88%, Relax 90%, Sad 84%, Average 87%.

[2] 2016 | Anger, Surprise, Other | Channels: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8 and AF4 at 128 Hz; feature extraction: Short-Time Fourier Transform (STFT) and mRMR, 1 s window; classifiers: Random Forests (RF) and SVM; the DEAP dataset was used to trigger the emotions | RF better than SVM.

[3] 2016 | Valence, Arousal | Channels: Fp1, Fp2, F3 and F4; feature extraction: wavelet + basic statistics and Higher Order Crossings (HOC); classifier: SVM; 32 participants, DEAP | Valence 82%, Arousal 76%.

[4] 2016 | Happy, angry, sad, neutral | Channels: 10-20 system; feature extraction: Principal Component Analysis (PCA), Fourier Transform (FT), Short-Time Fourier Transform (STFT), slow cortical potential (SCP), Wavelet Packet Transform (WPT) and Discrete Wavelet Transform (DWT); classifiers: radial basis function neural network (RBFNN) and SVM; 5 subjects | HOC is the best.

[5] 2017 | High/low arousal x high/low valence states | Channels: 32; feature extraction: statistical features, Hjorth features, Non-Stationary Index, Higher Order Crossings; feature selection: Relief algorithm, Bhattacharyya distance; DEAP, 32 subjects, 40 music videos | The statistical features are the most powerful.

[6] 2016 | Positive or negative | Channels: 32 at 128 Hz; feature extraction: spatial filter of common spatial pattern (CSP); classifiers: ElasticNet, LDA, QDA, SVM; a set of movie clips, 23 subjects | Spatial filters (CSP) better than conventional methods.

[7] 2012 | Low arousal low valence (LALV), low arousal high valence (LAHV), high arousal high valence (HAHV), and high arousal low valence (HALV) | Channels: 3; feature extraction: adaptive AsI-based algorithms; classifiers: quadratic discriminant analysis (QDA), Mahalanobis distance (MD), k-nearest neighbour (k-NN), support vector machine (SVM); 16 subjects, IAPS | The introduced AsI-based algorithms perform better than a simple SVM.

[8] 2015 | Happy, calm, sad, and scared | Channels: Fp1, Fp2, C3, C4, F3, and F4; feature extraction: none (raw signal); classifier: three layers of restricted Boltzmann machines (RBMs); 21 subjects | Better recognition accuracy than KNN, SVM, ANN.

[9] 2014 | LALV, LAHV, HAHV, and HALV | Channels: 64; feature extraction: wavelet; classifier: kernel Fisher's discriminant analysis (KFDA) with kernel eigen-emotion pattern (KEEP); 10 volunteers | The novel feature extraction method (KEEP) is better than normal classification.

[10] 2012 | Arousal detection (strong vs. calm) and valence detection (positive vs. negative) | Channels: FP1/FP2, F7/F8, F3/F4, FT7/FT8, and FC3/FC4; feature extraction: asymmetric features with filter bank common spatial pattern (FBCSP) as a benchmark, proposing Recursive Fisher Linear Discriminant (RFLD); classifiers: k-nearest neighbour (KNN), Naive Bayes (NB), and support vector machine (SVM); video clips lasting less than 20 minutes, 4 subjects | Novel feature extraction method.
[11] 2015 | Regret, rejoice, other emotion | Channels: 64; feature extraction: approximate entropy (ApEn); classifier: Fisher Linear Discriminant (FLD); 25 subjects using a gambling paradigm | Extracting the regret emotion from the signal.

[12] | Anger, contempt, disgust, fear, sad, surprise, happy | Channels: 14; feature extraction: wavelet and Mel-frequency cepstral coefficients (MFCC); classifier: multilayer perceptron (MLP); IAPS | A human can have more than one emotion at a time.

[13] 2018 | Positive/negative and approach/withdrawal | Channels: F8, FC2, FC6, C4, T8, CP2, CP6, PO4, F7, FC1, FC5, C3, T7, CP1, CP5, PO3; feature extraction: Deep Physiological Affect Network (a deep learning model); classifier: multi-layer convolutional neural network (CNN); 1,280 videos, along with the 64 combinations of physiological signals per video | Deep neural network for feature extraction.

[14] 2016 | Normal, Abnormal | Channels: 64; feature extraction: wavelet, Discrete Wavelet Transform (DWT); classifier: feed-forward back-propagation; 10 subjects | 75% for normal and 65% for abnormal.

[15] 2014 | Valence, Arousal, Liking, with positive/negative for each | Channels: 32 and 10; feature extraction: band power and PSD by wavelet transform; classifier: Support Vector Machine (SVM); 32 participants, DEAP | The best combination is one-minute EEG data using band power from 10-channel probes.

[16] 2005 | Arousal, valence, dominance and liking | Channels: 32; feature extraction: Gaussian Mixture Model and wavelet; classifiers: linear ridge regression and support vector regression (SVR); 40 one-minute music videos, scored for dominance (scale 1 to 9) and familiarity (scale 1 to 5); from the DEAP dataset | Unsupervised training has better results than traditional classification.

[31] 2014 | Happy, sad and neutral | Channels: twelve (AF3, F7, F3, FC5, P7, O1, O2, P8, FC6, F4, F8, and AF4); feature extraction: short-time Fourier transform (STFT) with a 1 s window, differential laterality (DLAT) and differential causality (DCAU); classifier: Gaussian Naïve Bayes (GNB); music listening (24 trials per day) | Comparing feature selection techniques.

Table 1. Studies on emotion detection using EEG.


III. EMOTIONS STIMULI DATASETS

There is a number of datasets that provide studied emotion stimuli and metadata around each item in the dataset:
A. International Affective Picture System (IAPS): IAPS consists of a set of pictures used to cause a wide range of emotional stimulations in the subject; every picture comes with the expected dimensions of valence, arousal, and dominance, to be used as a reference. The dataset consists of a diverse collection of pictures (snakes, accidents, contamination, insects, illness, attack scenes, loss, pollution, babies, puppies, and landscape scenes, among others) with metadata describing the dimensional aspects of the emotion each picture is expected to trigger. For example, heart rate and facial electromyographic activity differentiate negative from positive valence, whereas skin conductance tracks arousal. The dataset also tries to attach a distinct emotion to each picture (sadness, disgust, fear, happiness and nurturance) alongside the valence and arousal ratings, distinctions that can be supported by facial electromyography, heart rate, and electrodermal activity. There is a related dataset called the International Affective Digital Sounds (IADS), which contains sound stimuli instead of pictures. [16]
remove the artefacts, so after the PCA reduce the data to
B. Genevaaffective picture database: GAPEDusually the components then ICA will work on separatingthose
studies have multiple tries to get the measurement done, so components. Sothe distinction of the raw EEG data and the
if the subject saw the image they can’t show it again as it artefacts become clear. then those artefacts can be removed.
will be well known by the subject and it will not trigger the But the problem is that the number of factors that are affecting
stimuli again, that’s why the GAPED has being introduced the signal is not identified, so the assumption of EEG data and
to offer an alternative to those images.This dataset also artefacts is specially fixed is not always right.
contains anumber of measures to give a view on the 3) Fractional Dimension:fitting a minimum number of
dataset images like facial expressions and physiological circles in an original value will help us represent the EEG
reactions. The downfallof this dataset is the limited data, by doing that we will be reducing the complexity of the
number of Positive emotions compared to the negative signal.
ones. [17]
C. Nencki Affective Picture System (NAPS): this dataset consists of 1,356 high-quality photographs in five different categories (faces, people, animals, objects, and landscapes). The dataset contains picture metadata: valence, arousal, and approach-avoidance dimensions rated on bipolar semantic slider scales using the Self-Assessment Manikin (SAM). All the images in the dataset are 1,600 by 1,200 pixels in size. [18]

D. Open Affective Standardized Image Set: OASISits consists E. Statistical


of 900 pictures that are distributed into four categories 1) (AR) Autoregressive: it’s a time series modelling and
(people, animals, objects and scenes). It’s an open-access itrepresents the EEG signal and it’s widely used. There are
dataset, doesn’t have copyright restrictions like IAPS Also other modelsto calculate the randomness of a signallike a
the spread of the range of positive and negative images is weighted moving average filter.
reasonable.[19] 2) ARMA and MVAM: Autoregressive Moving Average
Images, videos and sounds are not the only way to trigger emotions. Other studies [11] conducted a gambling game to trigger two types of emotions, regret and rejoice, to isolate the emotion and increase the ability to identify it. Another study focused on rage [26]; it did not conduct its own data collection sessions but used DEAP, a premade dataset, to identify that particular emotion. There are other studies [5, 6, 10] that used music to trigger the emotions and collected the ratings using SAM. Still other studies used different kinds of datasets [29], which is less common and comes with its own benefits and hazards.


IV. FEATURES EXTRACTION

In the EEG emotion recognition field there is no clear understanding or agreement on which feature describes which emotion [20]; different studies use different feature extraction techniques depending on their application, and sometimes on trial and error [22] based on the best result. Below is a list of common ways to pre-process the signal and remove artefacts such as eye blinks (a re-referencing sketch follows the list).

1) Principal Component Analysis (PCA): a technique for dimension reduction (noise reduction) of data without loss of information. The data is linearly transformed in such a way that only orthogonal components are retained. [21]

2) Independent Component Analysis (ICA): the goal is to remove the artefacts. After PCA reduces the data to components, ICA works on separating those components, so the distinction between the raw EEG data and the artefacts becomes clear; those artefacts can then be removed. The problem is that the number of factors affecting the signal is not identified, so the assumption that the EEG data and the artefacts are spatially fixed is not always right.

3) Fractional Dimension: fitting a minimum number of circles over the original values helps represent the EEG data; by doing that, we reduce the complexity of the signal.

4) Other techniques: Common Average Referencing (CAR) measures the potential of an electrode with respect to the average of all the other electrodes; this reduces noise by subtracting the common brain activity from the position of interest. There are also methods like Surface Laplacian (SL) and Common Spatial Patterns (CSP). [23]
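As a sketch of CAR, with a PCA reduction on top, here is a minimal Python example assuming NumPy and scikit-learn; the channel count, duration, and component count are illustrative, and the random array stands in for a real multi-channel recording.

```python
import numpy as np
from sklearn.decomposition import PCA

def common_average_reference(eeg):
    """Re-reference each electrode against the mean of all electrodes.

    eeg has shape (n_channels, n_samples); subtracting the common average
    removes the activity shared by every channel at each time point.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

eeg = np.random.randn(32, 128 * 60)   # stand-in: 32 channels, 60 s at 128 Hz
car = common_average_reference(eeg)

# Optional PCA step: keep only the strongest orthogonal components.
components = PCA(n_components=10).fit_transform(car.T)   # (samples, 10)
```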
Once the first step has cleaned the data from the artefacts, we can continue with feature extraction:

E. Statistical

1) Autoregressive (AR): a time series model that represents the EEG signal and is widely used. There are other models to calculate the randomness of a signal, like a weighted moving average filter.

2) ARMA and MVAR: Autoregressive Moving Average and Multi-Variate Autoregressive; these are also time series models that can be used to analyze the signals.

3) GARCH: Generalized Autoregressive Conditional Heteroskedasticity; being autoregressive makes it a time series model, and it is used for time-varying volatility, where the volatility is the standard deviation.

4) Others, like the Burg method, Durbin recursion and Yule-Walker (a Yule-Walker sketch follows).
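As one concrete example from this family, here is a minimal Yule-Walker estimate of AR coefficients in Python (NumPy and SciPy assumed); the model order is an illustrative choice, and the random signal stands in for one channel of one window.

```python
import numpy as np
from scipy.linalg import toeplitz

def ar_coefficients(x, order):
    """Estimate AR(order) coefficients by solving the Yule-Walker equations."""
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates at lags 0..order.
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    R = toeplitz(r[:order])            # autocorrelation matrix in R a = r
    return np.linalg.solve(R, r[1:])   # a_1 .. a_p, usable as features

x = np.random.randn(1280)              # stand-in: 10 s window at 128 Hz
features = ar_coefficients(x, order=6)
```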
F. Time domain

1) Event-Related Potential (ERP): it is not trivial to detect ERPs linked to emotion.

2) Hjorth features: Activity, Mobility and Complexity are the three parameters (features) this model provides; the Activity represents the squared standard deviation, which gives the signal power.

3) Non-Stationary Index (NSI): the signal is divided into smaller segments and the average of each segment is calculated; the NSI is the standard deviation of those averages. By doing so we analyze the variation of the local average over time, which results in a measurement of the complexity.

4) Fractal Dimension (FD): represents a measure of the complexity, and there are multiple ways to calculate it.

5) Higher Order Crossings (HOC): this method is one of the most solid; it is also used in the pre-processing step as a noise reduction technique. [4] (A sketch of the Hjorth and NSI features follows.)
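The Hjorth parameters and the NSI are simple enough to compute directly; below is a minimal NumPy sketch under one common formulation (the segment count for the NSI is an illustrative choice, and the random signal is a stand-in).

```python
import numpy as np

def hjorth(x):
    """Hjorth parameters: Activity, Mobility, Complexity."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)                           # squared std = signal power
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def nsi(x, n_segments=20):
    """Non-Stationary Index: spread of the local segment means."""
    means = np.array([s.mean() for s in np.array_split(x, n_segments)])
    return means.std()

x = np.random.randn(1280)                          # stand-in window
print(hjorth(x), nsi(x))
```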
G. Frequency domain

1) Band power: a very common technique. The frequency bands can be as follows: delta (0-4 Hz), theta (4-8 Hz), alpha (8-16 Hz), beta (16-32 Hz), and gamma (32-64 Hz), as used in study [16], and the Hamming window is usually 1 second. It is widely used with the Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT) or Power Spectral Density (PSD); STFT is the most common approach. (A band-power sketch follows this subsection.)

2) Higher Order Spectra (HOS): second-order measures assume that the signal has a Gaussian form (normal distribution), and HOS is an extension of second-order measures. Any Gaussian signal is fully characterized by its mean and variance, so the HOS of Gaussian signals are either zero or contain redundant information.
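A minimal band-power sketch in Python, assuming NumPy and SciPy; it uses a Welch PSD with a 1 s Hamming window and the band edges quoted above (other band definitions are common in the literature).

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 16),
         "beta": (16, 32), "gamma": (32, 64)}      # edges as quoted above

def band_powers(x, fs):
    """Mean PSD per band, Welch estimate with 1 s Hamming segments."""
    freqs, psd = welch(x, fs=fs, window="hamming", nperseg=fs)
    return {band: psd[(freqs >= lo) & (freqs < hi)].mean()
            for band, (lo, hi) in BANDS.items()}

x = np.random.randn(128 * 10)                      # stand-in: 10 s at 128 Hz
print(band_powers(x, fs=128))
```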
H. Time-frequency domain

1) Hilbert-Huang Transform (HHT): it works by breaking the signal down into intrinsic mode functions (IMFs) along with a trend; an IMF is a function representing one part of the signal. HHT works well with data that is nonstationary and nonlinear. It is more like an algorithm than a model: a set of steps that need to be done sequentially.

2) Short-Time Fourier Transform (STFT): this method can be considered a bridge between the Fourier transform and the wavelet transform. The FT does not provide time-frequency analysis, so the signal is broken down into parts and each part is assumed to be stationary.

3) Wavelet Transform: we can use the DWT or CWT (Discrete or Continuous). We can perform multi-resolution analysis (MRA), also known as multi-scale approximation (MSA), to balance time resolution and frequency resolution (a DWT sketch follows).
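A minimal DWT sketch, assuming the PyWavelets package (pywt) is available; the mother wavelet (db4), the decomposition level, and sub-band energies as features are common but illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed: pip install PyWavelets

x = np.random.randn(128 * 10)           # stand-in: 10 s window at 128 Hz

# 4-level multi-resolution analysis; coeffs = [cA4, cD4, cD3, cD2, cD1].
# At fs = 128 Hz the detail levels roughly cover 32-64, 16-32, 8-16, 4-8 Hz.
coeffs = pywt.wavedec(x, wavelet="db4", level=4)

# A common feature vector: the energy of each sub-band.
energies = [float(np.sum(c ** 2)) for c in coeffs]
```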
I. Multi-layered neural network (deep learning): in this research [8] they ran deep learning on the raw signal, without hand-crafted feature extraction or feature selection techniques, relying on the layers of the deep network to provide abstraction layers and to play the role of the feature extraction and feature selection in a traditional model. Three layers of Restricted Boltzmann Machines (RBMs) are introduced, and the results are rather acceptable; it is easy and straightforward to implement, but the accuracy varies a lot and the model takes much more data to train compared to traditional models. (A single-layer sketch follows.)
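The sketch below is a simplified, single-layer version of that idea in scikit-learn (the study stacks three RBM layers; one is shown here to keep it short). The data, layer size, and training settings are illustrative assumptions; note that BernoulliRBM expects inputs scaled to [0, 1].

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X = np.random.randn(200, 320)             # stand-in: 200 raw-signal windows
y = np.random.randint(0, 4, size=200)     # 4 emotion classes, as in [8]

model = Pipeline([
    ("scale", MinMaxScaler()),            # RBM inputs must lie in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)                           # RBM learns features, clf labels them
```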
V. FEATURES SELECTION

The importance of this step is to determine which subset of the features actually matters the most, to get the best out of the classification. There is no general agreement on which features are better for identifying which emotion, but we can use some techniques to find the best features for ourselves; the best features will vary depending on the application and the data shape. The most effective techniques in the EEG emotion recognition field are:

1- Min-Redundancy-Max-Relevance (mRMR): this method tries to identify the features that correlate with the result, maximizing the relevance while reducing the redundancy; a feature can relate directly to the result yet still be redundant.

2- Relief: this algorithm works by drawing an instance at random, calculating its nearest neighbours, and then changing the feature weighting to give more weight to the features that discriminate the instance from neighbours of different classes.

3- Bhattacharyya distance: this method measures the similarity of two discrete or continuous probability distributions. It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples. [5] (A sketch follows this list.)

4- Comparison studies: more or less trial and error. For example, study [5] tries to recognize the emotions from music listening; the best channels are three specific channels, and the best wave bands are beta and alpha. By minimizing the number of features, they select the most relevant features for the emotions that they are targeting to classify.
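Under a Gaussian assumption the Bhattacharyya distance has a closed form, which makes it a cheap per-feature separability score; here is a minimal NumPy sketch (the feature matrices are random stand-ins).

```python
import numpy as np

def bhattacharyya_gauss(a, b):
    """Bhattacharyya distance between two 1-D samples, assuming each is
    roughly Gaussian; larger values mean the classes overlap less."""
    m1, v1, m2, v2 = a.mean(), a.var(), b.mean(), b.var()
    v = (v1 + v2) / 2
    return 0.125 * (m1 - m2) ** 2 / v + 0.5 * np.log(v / np.sqrt(v1 * v2))

pos = np.random.randn(100, 8) + 0.5       # stand-in features, class 1
neg = np.random.randn(100, 8)             # stand-in features, class 2
scores = [bhattacharyya_gauss(pos[:, j], neg[:, j]) for j in range(8)]
ranked = np.argsort(scores)[::-1]         # most separable features first
```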
VI. CLASSIFICATION

1) Support Vector Machine (SVM): the most frequently used classifier, due to its high-quality classification results. [24]

2) k-nearest neighbours (k-NN): a simple algorithm that is easy to apply but has poor runtime performance; the output is a class membership probability. [24]


3) Learning Vector Quantization (LVQ): related to k-NN, it applies a winner-takes-all approach; it is a network that uses supervised learning. [1]

4) Artificial Neural Networks (ANN): a nonlinear classifier; the most common version of ANN is the Multi-Layer Perceptron Neural Network (MLPNN), or Multi-Layer Perceptron (MLP).

5) Restricted Boltzmann Machines (RBMs): the study [8] uses three layers of RBMs to recognize four different distinct emotions.

6) Others: Linear Discriminant Analysis (LDA) assumes the features are Gaussian distributed and fails if the discriminatory information is not in the mean but in the variance of the data [23]; also the Naive Bayes classifier (NBC), Hidden Markov Model (HMM), and Gaussian Mixture Models (GMM). (A comparison sketch of several classifiers follows.)
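To tie the classifier list together, here is a minimal scikit-learn sketch comparing an SVM, k-NN, and LDA with cross-validation on stand-in features; the data and hyperparameters are illustrative assumptions, not values from the reviewed studies.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(160, 24)               # stand-in: windows x selected features
y = np.random.randint(0, 2, size=160)      # e.g. positive vs negative valence

for clf in (SVC(kernel="rbf", C=1.0),
            KNeighborsClassifier(n_neighbors=5),
            LinearDiscriminantAnalysis()):
    pipe = make_pipeline(StandardScaler(), clf)   # scale, then classify
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(type(clf).__name__, round(acc, 2))
```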
VII. CONCLUSION AND DISCUSSION

After collecting the SAM results, they have to be mapped to the EEG data, which is a lot of work requiring precision and carefulness, especially when tagging the EEG data with the SAM feedback. In the experiments made by other studies, a lot of the collected data had to be dropped due to low quality. On top of that, every subject has to tag his own data, as the emotion tends to differ from subject to subject. Cleansing the data and tagging it for classification is a laborious task, and hence a progress hindrance.

Another issue is that there is no closed feedback loop to enhance the accuracy of emotion detection. By a closed feedback loop we mean a method that shows the subject the classified data and allows him to judge it and adjust his brain waves next time to get a more accurate result, letting the human mind train along with the model; it is a way to make the classifier and the mind learn in real time.

Another phenomenon that affects the results is that people's brain signals differ from each other [9], which means that the EEG data is unique per person and the training of the classifier has to be per person: the model is specific to each person, which can be a problem for applications that require recognition directly, without the possibility of running a training session first.

Moreover, there is no general agreement on which feature best describes which emotion, in other words an emotion-to-features mapping. Furthermore, humans can have more than one emotion at the same time [13], and currently there is no way to classify more than one emotion; the state of the art struggles with classifying even one.
REFERENCES

[1] E. C. Djamal and P. Lodaya, "EEG based emotion monitoring using wavelet and learning vector quantization," 4th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), 2017.
[2] P. Ackermann, C. Kohlschein, J. Á. Bitsch, K. Wehrle and S. Jeschke, "EEG-based automatic emotion recognition: Feature extraction, selection and classification methods," 18th International Conference on e-Health Networking, Applications and Services (Healthcom), 2016.
[3] A. Samara, M. L. R. Menezes and L. Galway, "Feature Extraction for Emotion Recognition and Modelling Using Neurophysiological Data," 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS), 2016.
[4] A. Patil, C. Deshmukh and A. R. Panat, "Feature extraction of EEG for emotion recognition using Hjorth features and higher order crossings," Conference on Advances in Signal Processing (CASP), 2016.
[5] S. W. Byun, S. P. Lee and H. S. Han, "Feature Selection and Comparison for the Emotion Recognition According to Music Listening," International Conference on Robotics and Automation Sciences (ICRAS), 2017.
[6] K. Yano and T. Suyama, "Fixed low-rank EEG spatial filter estimation for emotion recognition induced by movies," International Workshop on Pattern Recognition in Neuroimaging (PRNI), 2016.
[7] P. C. Petrantonakis and L. J. Hadjileontiadis, "Adaptive Emotional Information Retrieval From EEG Signals in the Time-Frequency Domain," IEEE Transactions on Signal Processing, 2012.
[8] Y. Gao, H. J. Lee and R. M. Mehmood, "Deep learning of EEG signals for emotion recognition," IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2015.
[9] Y. H. Liu, W. T. Cheng, Y. T. Hsiao, C. T. Wu and M. D. Jeng, "EEG-based emotion recognition based on kernel Fisher's discriminant analysis and spectral powers," IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2014.
[10] D. Huang, C. Guan, K. K. Ang, H. Zhang and Y. Pan, "Asymmetric Spatial Pattern for EEG-based emotion detection," International Joint Conference on Neural Networks (IJCNN), 2012.
[11] O. Lin, G.-Y. Liu, J.-M. Yang and Y.-Z. Du, "Neurophysiological markers of identifying regret by 64 channels EEG signal," 12th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), 2015.
[12] D. Handayani, H. Yaacob, A. Wahab and I. F. T. Alshaikli, "Statistical Approach for a Complex Emotion Recognition Based on EEG Features," 4th International Conference on Advanced Computer Science Applications and Technologies (ACSAT), 2015.
[13] B. H. Kim and S. Jo, "Deep Physiological Affect Network for the Recognition of Human Emotions," IEEE Transactions on Affective Computing, 2018.
[14] S. G. Mangalagowri and P. C. P. Raj, "EEG feature extraction and classification using feed forward backpropagation algorithm for emotion detection," International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT), 2016.
[15] I. Wichakam and P. Vateekul, "An evaluation of feature extraction in EEG-based emotion prediction with support vector machines," 11th International Joint Conference on Computer Science and Software Engineering (JCSSE), 2014.
[16] J. A. Mikels, B. L. Fredrickson, G. R. Larkin, C. M. Lindberg, S. J. Maglio and P. A. Reuter-Lorenz, "Emotional category data on images from the International Affective Picture System," Behavior Research Methods, 2005.
[17] E. S. Dan-Glauser and K. R. Scherer, "The Geneva affective picture database (GAPED): a new 730-picture database focusing on valence and normative significance," Behavior Research Methods, 2011.
[18] A. Marchewka, Ł. Żurawski, K. Jednoróg and A. Grabowska, "The Nencki Affective Picture System (NAPS): Introduction to a novel, standardized, wide-range, high-quality, realistic picture database," Behavior Research Methods, 2014.
[19] B. Kurdi, S. Lozano and M. R. Banaji, "Introducing the Open Affective Standardized Image Set (OASIS)," Behavior Research Methods, 2017.
[20] R. Jenke, A. Peer and M. Buss, "Feature Extraction and Selection for Emotion Recognition from EEG," IEEE Transactions on Affective Computing, 2014.
[21] J. Kaur and A. Kaur, "A review on analysis of EEG signals," International Conference on Advances in Computer Engineering and Applications, 2015.
[22] M. M. Bradley and P. J. Lang, "Measuring emotion: The self-assessment manikin and the semantic differential," Journal of Behavior Therapy and Experimental Psychiatry, 2002.
[23] M. Rajya Lakshmi, T. V. Prasad and V. Chandra Prakash, "Survey on EEG Signal Processing Methods," International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), 2014.
[24] S. Kalagi, J. Machado, V. Carvalho, F. Soares and D. Matos, "Brain computer interface systems using non-invasive electroencephalogram signal: A literature review," International Conference on Engineering, Technology and Innovation (ICE/ITMC), 2017.
[25] S. Vaid, P. Singh and C. Kaur, "EEG Signal Analysis for BCI Interface: A Review," Fifth International Conference on Advanced Computing & Communication Technologies, 2015.
[26] S. H. Kim and N. A. N. Thi, "Feature extraction of emotional states for EEG-based rage control," 39th International Conference on Telecommunications and Signal Processing (TSP), 2016.
[27] A. F. Rabbi, K. Ivanca, A. V. Putnam, A. Musa, C. B. Thaden and R. Fazel-Rezai, "Human performance evaluation based on EEG signal analysis: A prospective review," Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2009.
[28] A. Saidatul, M. P. Paulraj, S. Yaacob and N. F. Mohamad Nasir, "Automated System for Stress Evaluation Based on EEG Signal: A Prospective Review," IEEE 7th International Colloquium on Signal Processing and its Applications, 2011.
[29] X. Zhuang, V. Rozgić and M. Crystal, "Compact unsupervised EEG response representation for emotion recognition," International Conference on Biomedical and Health Informatics (BHI), 2014.
[30] M. A. B. S. Akhanda, S. M. F. Islam and M. M. Rahman, "Detection of Cognitive State for Brain-Computer Interfaces," International Conference on Electrical Information and Communication Technology (EICT), 2014.
[31] Y. P. Lin and T. P. Jung, "Exploring day-to-day variability in EEG-based emotion classification," IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2014.
