
2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (DOI 10.1109/ACII.2013.160)

Emotion Recognition from EEG During Self-Paced Emotional Imagery

Christian Andreas Kothe and Scott Makeig
Swartz Center for Computational Neuroscience
University of California San Diego
La Jolla, CA, USA
{christian,scott}@sccn.ucsd.edu

Julie Anne Onton
Naval Health Research Center
San Diego, CA, USA
julie.onton@med.navy.mil

Abstract—Here we present an analysis of a 12-subject electroencephalographic (EEG) data set in which participants were asked to engage in prolonged, self-paced episodes of guided emotion imagination with eyes closed. Our goal is to correctly predict, given a short EEG segment, whether the participant was imagining a positive- or negative-valence emotional scenario during that segment, using a predictive model learned via machine learning. The challenge lies in generalizing to novel (i.e., previously unseen) emotion episodes from a wide variety of scenarios including love, awe, frustration, anger, etc., based purely on spontaneous oscillatory EEG activity without stimulus event-locked responses. Using a variant of the Filter-Bank Common Spatial Pattern algorithm, we achieve an average accuracy of 71.3% correct classification of binary valence rating across 12 different emotional imagery scenarios under rigorous block-wise cross-validation.

Keywords—emotion; valence; brain-computer interface; EEG; machine learning; guided imagery

This work was supported by a gift from The Swartz Foundation (Old Field, NY) as well as by grants from the National Institute for Mental Health USA (R01 NS074293) and the National Science Foundation USA (IIS-0613595).

I. INTRODUCTION

Emotion recognition is a central topic in affective computing [1] that has received increasing attention over the past decade. Nearly all recent studies of emotion recognition have adopted a machine learning approach in which a predictive model, typically a classifier, is trained on features of a set of biophysiological data and then tested on a separate data set (usually obtained from the same participant and/or session). Perhaps the most prolific branch of the field deals with emotion recognition from multimodal data including heart rate, galvanic skin response (GSR), electromyography (EMG), electrooculography (EOG), face video, and/or pupil dilation, and is therefore concerned with the extraction and fusion of multimodal features into a single predictor [2,3]. In contrast, we are here concerned with emotion recognition purely from EEG signals, which has itself spurred considerable interest in recent years [4].

Most EEG-based emotion recognition studies employ external stimulus presentations to elicit emotional states in their participants using emotion-laden pictures (e.g., utilizing the International Affective Picture System [5,6]), sounds [7], movie clips [8,9], or music [10]. In contrast, we are here concerned with inwardly imagined and felt emotions elicited by the participants' own imagination or recall of emotionally loaded memories, an experiment design and data set reported in [11]. While a small group of studies have applied a recall paradigm to elicit emotions in the context of machine learning [12], the recall duration has been short (up to a few seconds) and/or did not allow the subject to proceed at his or her own pace, factors that may limit the attainable depth and focus of the emotional experience. To bypass these limitations, this experiment used a guided imagery paradigm [13] in which participants are first invited to enter into a deeply relaxed state via a pre-recorded narrative, and are then invited to imagine experiencing a series of emotion-laden scenarios, separated by guided relaxations. Participants sat in a comfortable chair with eyes closed and were encouraged to exercise their emotional imagination for as long as they could, pressing a handheld button first when they began to somatically experience the suggested emotion, and again when this experience waned or they were otherwise ready to continue (in practice, after 1-5 min).

The unique characteristics of this experiment design pose several analysis challenges. In particular, there were no condition repetitions; each of the 15 imagined scenarios was unique. This forces us to adopt a conservative cross-validation approach, leaving out complete blocks rather than the more usual leave-one-trial-out or randomized cross-validation approaches. Furthermore, in our study we test our classifiers on previously unseen conditions (for example, testing an emotion valence classifier trained during guided imagery of love on data from guided imagery of awe). As a side effect, this analysis may yield some of the strongest evidence presented so far for or against the practicality of general-purpose recognition of emotional valence from EEG.

II. EXPERIMENTAL TASK

The experiment was performed in a dimly-lit room equipped with a comfortable chair; the experimenter was seated in a separate control room. The recording session (ca. 80 min on average) was performed with eyes closed.

Instructions were delivered in the form of pre-recorded verbal narratives via earbud speakers. The recording session began with a 2-min resting period followed by a verbal explanation of the task and a verbally guided relaxation exercise of about 5 min to promote a relaxed, inwardly-focused state. The subsequent main task was a sequence of 15 blocks, each beginning with a (15-30 sec) guided imagination narrative describing a particular emotion, followed by an imagination period that lasted until the participant pressed the response button a second time (on average, after 218 ± s.d. 94 sec), and each ending with a 15-sec relaxation narrative to restore a neutral state. Each induction narrative began with a short description of the emotion, followed by suggestions of one or more circumstances in which the target emotion might be vividly experienced. Participants were instructed to use whatever imagery they deemed suitable for stimulating a vivid and embodied experience of the suggested emotion, and were encouraged to pay attention to somatic sensations associated with the target emotion. They were asked to take as much time as they needed to recall or imagine a scenario that would induce a realistic experience of the described emotion. They were encouraged to experience each emotion for 3-5 min, pressing a left hand-held button when their emotional experience began and then again when it subsided or they were ready to move on. To minimize participant fatigue, the emotion sequence was chosen to alternate pseudo-randomly between 8 selected positive-valence emotions (love, joy, happiness, relief, compassion, contentment, excitement, awe) and 7 negative-valence emotions (anger, jealousy, disgust, frustration, fear, sadness, grief). The experiment ended after another 2-min silent resting period. After the experiment, all participants stated they felt that they had experienced realistic emotional states using the verbal narratives and their own imagination.

III. DATA ACQUISITION

Sixteen young adult volunteers (mixed gender, 25.5 ± s.d. 5 years) participated under informed consent in accordance with University of California San Diego institutional review board requirements. We here restrict ourselves to 12 subjects since four recordings had partially missing marker information. The study included 16 further participants, not analyzed here, in a modified experiment protocol (see [11]).

EEG data were collected from 250 gel-based scalp electrodes, plus four infraocular and two electrocardiographic (ECG) placements, using a BioSemi ActiveTwo (Biosemi, NL) amplifier with 24-bit resolution. Caps with a custom whole-head montage that covered most of the skull, forehead, and lateral face surface were used, omitting chin and fleshy cheek areas. Locations of the electrodes and skull landmarks for each participant were digitized (Polhemus). To expedite computations in the subsequent analysis, the data were here resampled to 128 Hz and reduced to a subset of 124 evenly-distributed EEG channels. Since here we attempt to classify positive versus negative valence, we excluded three emotion conditions (excitement, disgust, and compassion) that did not fall clearly into the positive or negative valence categories. Further, we discarded the first two remaining blocks (during which we assumed the participant was still adapting to the task), leaving 10 blocks per participant, on average 5 labeled as positive and 5 as negative valence, both by participants and in separate group ratings.

IV. METHOD

To predict the valence of the emotion experienced by the participant from single EEG segments, we employ a predictive model that estimates, from a given short EEG segment (here 6 sec), the probability that the subject experienced a positive or negative valence emotion during that period (a binary classification). Our approach relies on changes in the power spectrum of short-time stationary oscillatory EEG processes within standard EEG frequency bands (delta, theta, alpha, beta, low gamma) at unknown source locations. We employ a variant of the Filter-Bank Common Spatial Pattern method (FBCSP [14]) that finds optimal spatial filters in whose outputs power spectral differences are maximally discriminative between conditions. Here, we use a sparse feature-selecting classifier (logistic regression with elastic-net regularization [15]) instead of performing feature selection in a separate step. The method, implemented using the open-source BCILAB toolbox [16], can be trained in under 5 min on a recent PC and can be used to perform real-time classification.

A. Data Pre-Processing and Segmentation

The continuous EEG data were first high-pass filtered using a Butterworth IIR filter with a 0.1-1 Hz transition band. Then, N = 40 equally spaced segments Xt, each 6 sec in length, were extracted from each block (giving on average about 50% overlap between successive segments). We excluded the first 60 sec and last 10 sec of each emotion imagination block to focus on the period of maximum engagement. Each extracted segment was then associated with a label yt ∈ {+1, -1} corresponding to the positive or negative valence of the block label.
B. Single-Trial Classification

Given a short high-pass filtered EEG segment X ∈ R^(C×T), the data are first band-pass filtered into nf = 5 separate pre-defined EEG frequency bands (delta, 0.5-3 Hz; theta, 4-7 Hz; alpha, 8-12 Hz; beta, 13-30 Hz; gamma, 31-42 Hz). For simplicity, filtering is applied using temporal filters Bf comprising FFT, spectral weighting, and inverse FFT transforms. Each resulting segment of band-pass filtered EEG is further linearly spatially filtered by a matrix Wf = [w1,f, w2,f, ..., wk,f]ᵀ of k = 8 band-specific spatial filters, which gives an 8-channel signal whose per-channel log-variances are taken as the feature vector xf for frequency band f as

    xf = log(var(Wf Bf X)).

The resulting feature vectors xf, concatenated into a single feature vector for the trial, x = [x1, x2, ..., xnf], are then mapped onto a trial probability value using a generalized linear model (GLM) with a logistic link function

    p(x) = 1 / (1 + exp(-(θᵀ x + b))).

This value represents the probability that the given EEG segment is of positive (resp. negative) class, here indexing the valence of the emotion being experienced by the participant.
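As an illustration of the feature extraction just described, the sketch below band-pass filters a segment by weighting its FFT (a simple hard cutoff stands in here for the paper's spectral weighting), applies the k = 8 spatial filters of each band, and takes per-channel log-variances, xf = log(var(Wf Bf X)). This is a hypothetical NumPy rendering rather than the BCILAB code; spatial_filters is assumed to map each band name to an 8 x C filter matrix learned as described in Section IV.C.

import numpy as np

BANDS = {"delta": (0.5, 3), "theta": (4, 7), "alpha": (8, 12),
         "beta": (13, 30), "gamma": (31, 42)}

def bandpass_fft(segment, band, fs=128):
    """Band-pass filter a (channels x samples) segment by zeroing FFT bins
    outside the band (a crude stand-in for the B_f operator of Section IV.B)."""
    n = segment.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(segment, axis=1)
    lo, hi = band
    spectrum[:, (freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=n, axis=1)

def fbcsp_features(segment, spatial_filters, fs=128):
    """Concatenate per-band log-variance features x_f = log(var(W_f B_f X))."""
    feats = []
    for name, band in BANDS.items():
        filtered = bandpass_fft(segment, band, fs)       # B_f X
        projected = spatial_filters[name] @ filtered     # W_f B_f X, shape (8, T)
        feats.append(np.log(projected.var(axis=1)))      # 8 log-variances per band
    return np.concatenate(feats)                         # 5 bands x 8 = 40 features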

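The concatenated feature vector is then mapped to a valence probability with the logistic link given above; a one-line sketch, where theta and b stand for the weights and bias learned in Section IV.C:

import numpy as np

def predict_valence_probability(x, theta, b):
    """p(positive valence | segment) = 1 / (1 + exp(-(theta . x + b)))."""
    return 1.0 / (1.0 + np.exp(-(theta @ x + b)))

# a segment is labeled +1 (positive) if this probability exceeds 0.5, else -1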
C. Model Calibration

To calculate the spatial filters Wf for a given frequency band f from a collection of high-pass filtered training trial segments Xt and associated labels yt, we first apply the filter Bf to each segment and concatenate all filtered segments with positive label into matrix X(+) and all filtered negative-labeled segments into matrix X(-), to calculate the per-condition covariance matrices C(+) = X(+)X(+)ᵀ and C(-) = X(-)X(-)ᵀ, respectively. We then solve the generalized eigenvalue problem

    C(+) V = C(-) V λ

as in the Common Spatial Pattern (CSP) [17] algorithm, giving a matrix of eigenvalues λ and eigenvectors V, of which we retain the (k = 4) components at the upper and lower ends of the eigenvalue spectrum. These vectors are concatenated into the filter matrix Wf.
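The spatial-filter calibration just described reduces to a generalized eigenvalue problem between the two class covariance matrices. A minimal sketch using scipy.linalg.eigh is given below; it is illustrative rather than the authors' BCILAB code, and the inputs are assumed to be lists of segments already band-pass filtered with Bf.

import numpy as np
from scipy.linalg import eigh

def csp_filters(filtered_pos, filtered_neg, k=4):
    """Compute 2k = 8 CSP spatial filters W_f for one frequency band.
    Inputs are lists of (channels x samples) segments already filtered with B_f."""
    X_pos = np.concatenate(filtered_pos, axis=1)    # X(+)
    X_neg = np.concatenate(filtered_neg, axis=1)    # X(-)
    C_pos = X_pos @ X_pos.T                         # C(+) = X(+) X(+)'
    C_neg = X_neg @ X_neg.T                         # C(-) = X(-) X(-)'
    # generalized eigenvalue problem C(+) v = lambda C(-) v;
    # C(-) is assumed full rank; eigh returns eigenvalues in ascending order
    _, V = eigh(C_pos, C_neg)
    # retain the k components at the upper and lower ends of the spectrum;
    # the forward scalp patterns shown in Figs. 2-3 are the matching rows of np.linalg.inv(V)
    return np.concatenate([V[:, :k], V[:, -k:]], axis=1).T   # W_f, shape (2k, channels)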
To learn the parameters (θ, b) of the generalized linear prediction function, we employ logistic regression with elastic-net regularization (elastic mixing parameter α fixed at 1/4). This amounts to solving the convex optimization problem

    minimize over (θ, b):   Σt log(1 + exp(-yt (θᵀ xt + b))) + μ ( (1-α)/2 ||θ||₂² + α ||θ||₁ )

for centered and standardized feature vectors xt and their associated labels yt extracted from a set of training blocks as described in Section IV(B). To determine the free parameter μ, the problem is solved repeatedly for a series of candidate values of μ and the best value is selected using a leave-one-block-out cross-validation on the training set. This problem can be solved efficiently using the glmnet package for MATLAB (The MathWorks, Natick, MA).

The elastic-net penalty was chosen under the assumption that not all frequency bands or spatial filters are relevant (implying sparsity), while at the same time features for neighboring frequencies are likely correlated, suggesting the use of the additional l2 term to encourage equal weighting for similarly-relevant features.
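The paper solves this problem with glmnet for MATLAB; purely as an illustration, the scikit-learn sketch below reproduces the same ingredients: elastic-net-penalized logistic regression with the mixing parameter fixed at 0.25 and the penalty weight chosen by leave-one-block-out cross-validation on the training set. The candidate grid, the approximate C = 1/μ mapping, and all names are assumptions for the example; the exact penalty scaling differs from glmnet's.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.preprocessing import StandardScaler

def fit_elastic_net_logreg(X, y, blocks, mu_grid=np.logspace(-3, 1, 20)):
    """Fit elastic-net-regularized logistic regression on (n_segments x n_features)
    CSP log-variance features X with +1/-1 labels y; blocks gives the block index
    of each segment, used for leave-one-block-out selection of mu."""
    X = StandardScaler().fit_transform(X)            # center and standardize features
    logo = LeaveOneGroupOut()
    cv_accuracy = []
    for mu in mu_grid:
        clf = LogisticRegression(penalty="elasticnet", solver="saga",
                                 l1_ratio=0.25, C=1.0 / mu, max_iter=5000)
        scores = [clf.fit(X[tr], y[tr]).score(X[te], y[te])
                  for tr, te in logo.split(X, y, groups=blocks)]
        cv_accuracy.append(np.mean(scores))
    best_mu = mu_grid[int(np.argmax(cv_accuracy))]
    final = LogisticRegression(penalty="elasticnet", solver="saga",
                               l1_ratio=0.25, C=1.0 / best_mu, max_iter=5000)
    return final.fit(X, y), best_mu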

D. Performance Evaluation

To assess whether emotional valence can be predicted from single EEG trials, we evaluated the test-set accuracy of the method using a 5-fold block-wise cross-validation on the 10 blocks from each participant. Thus, in each fold of the cross-validation, two successive blocks were declared the test set and model calibration was performed on the 8 remaining training blocks. The resulting predictive model was then tested on the data segments of the two test-set blocks. The classification accuracy of the method was quantified as percent correct given the class labels of the test blocks.
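This evaluation scheme can be sketched as follows; train_fn and predict_fn stand for the calibration (Sections IV.B-IV.C) and prediction steps, and the commented lines at the end indicate how the group-level comparison against the 50% chance level reported in Section V might be computed. All names are illustrative.

import numpy as np

def blockwise_cv_accuracy(segments, labels, blocks, train_fn, predict_fn, n_folds=5):
    """5-fold block-wise cross-validation: each fold holds out two successive
    blocks as the test set and calibrates on the remaining eight.
    segments: (n, channels, samples) array; labels: +1/-1 per segment;
    blocks: integer block index per segment (10 blocks, in temporal order)."""
    block_ids = np.unique(blocks)
    folds = np.array_split(block_ids, n_folds)        # pairs of successive blocks
    fold_acc = []
    for held_out in folds:
        test = np.isin(blocks, held_out)
        model = train_fn(segments[~test], labels[~test], blocks[~test])
        predictions = predict_fn(model, segments[test])
        fold_acc.append(np.mean(predictions == labels[test]))
    return float(np.mean(fold_acc))

# Group-level significance as in Section V (per_subject_acc: one value per participant):
#   from scipy.stats import ttest_1samp
#   t, p = ttest_1samp(per_subject_acc, 0.5)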

V. RESULTS

The mean accuracy of our method, across the 12 participants, was 71.3% +/- 14.9%, which is highly significant given the chance level of 50% (p < 0.01 in a standard t-test). Accuracies for individual subjects are depicted in Fig. 1, although the within-subject sample size is low due to the block design. The relevant spatial patterns (spatial filter inverses, obtained from the k upper and lower rows of V^-1) of the FBCSP models for two typical participants are shown in Figs. 2 and 3, respectively, where the classifier in Fig. 2 (#11, 72% accuracy) appears to involve high-frequency scalp muscle activity, while that in Fig. 3 (#9, 69% accuracy) appears to primarily involve lower frequency activity from cortical sources. The displayed patterns quantify the forward projection patterns of the latent sources extracted by the respective spatial filters. Note, in particular, the occurrence of peri-auricular muscle sources with virtually no far-field projection across the scalp (Fig. 2), and dual-symmetric scalp projection patterns in the alpha band, compatible with occipital and parietal brain generators.

Figure 1. Cross-validated valence classification accuracy across all participants (top) and emotional scenarios (bottom). Chance level is 50%.

VI. DISCUSSION

The key result of this analysis is, first, that emotion valence can be classified in this task at better-than-chance level, and second, that the level of accuracy may be considered almost practical for real-time neurofeedback or emotion-based interface control. These results hold up under block-wise evaluation with clear separation of training and test data, an approach considerably more rigorous than a randomized cross-validation over segments. Furthermore, since the emotional scenarios used in the test blocks were distinct from those in the training blocks (for example, awe vs. love), our results quantify to what extent the learned classifiers could generalize to unseen conditions. This implies that the observed EEG responses share some commonalities across different emotional scenarios of a given valence level (while exhibiting some differences between levels).

Several of the learned spatial filters seem clearly focused on neck and temporal scalp muscle (or EMG) activity in higher frequency bands. Eye movement cues, when relevant, were relevant at low frequencies, while, as expected, near 10-Hz alpha frequency band variance was relevant for occipital and parietal brain sources. For some participants, other brain sources were relevant at theta and beta bands, although their locations did not appear to be consistent.

Figure 2. Forward scalp projections of relevant source mixtures selected by the classifier for a participant (#11) for whom the classifier is dominated by scalp muscle activity. The relevant frequency bands are indicated.

Figure 3. Forward scalp projections of relevant source mixtures selected by the respective classifier for another participant (#9), including both cortical and scalp muscle sources. Other details as in Fig. 2.

VII. CONCLUSION

We have presented a classifier for experienced emotion valence in an experiment designed to elicit strong emotional experiences. A rigorous analysis of these data produced strong evidence that a key aspect of emotional state, emotional valence, can be classified from a few seconds of spontaneous EEG data. Because of the inclusion of informative muscular activity in the EEG data for most subjects (possibly related to "tensing up" during some emotion states), we cannot conclusively determine from these results to what extent brain-source EEG activity alone would allow for emotion classification (though this should be possible with further analysis of these results). We can state that the extracted features do generalize for these data to novel emotional scenarios, the key requirement for practical usability of emotion recognition systems. At the core of our analysis is an easy-to-implement method to learn and classify informative spectral power changes across standard frequency bands that may be useful for further investigation of emotion recognition from ongoing EEG data.

REFERENCES

[1] R. W. Picard, Affective Computing, The MIT Press, 1997.
[2] S. Koelstra et al., "Single trial classification of EEG and peripheral physiological signals for recognition of emotions induced by music videos," Brain Informatics, Springer, 2010, pp. 89-100.
[3] H. Hamdi, P. Richard, A. Suteau, and P. Allain, "Emotion assessment for affective computing based on physiological responses," in IEEE World Congress on Computational Intelligence, 2012, pp. 1-8.
[4] M.-N. Kim, M. Kim, E. Oh, and S.-P. Kim, "A review on the computational methods for emotional state estimation from the human EEG," Computational and Mathematical Methods in Medicine, vol. 2013, 2013.
[5] R. Horlings, D. Datcu, and L. J. M. Rothkrantz, "Emotion recognition using brain activity," in CompSysTech'08, 2008, p. 6.
[6] P. D. Bamidis et al., "An integrated approach to emotion recognition for advanced emotional intelligence," in Human-Computer Interaction, Springer, Heidelberg, 2009, pp. 565-574.
[7] Y. Liu, O. Sourina, and M. K. Nguyen, "Real-time EEG-based human emotion recognition and visualization," in Intl. Conf. on Cyberworlds, 2010, pp. 262-269.
[8] D. Nie, X.-W. Wang, L.-C. Shi, and B.-L. Lu, "EEG-based emotion recognition during watching movies," in 5th Intl. IEEE/EMBS Conf. on Neural Engineering, 2011, pp. 667-670.
[9] M. Murugappan, M. Rizon, R. Nagarajan, and S. Yacoob, "Inferring of human emotional states using multichannel EEG," European J. of Scientific Research, vol. 48(2), 2010, pp. 281-299.
[10] Y. P. Lin et al., "EEG-based emotion recognition in music listening," IEEE Trans. Biomed. Eng., vol. 57(7), 2010, pp. 1798-1806.
[11] J. Onton and S. Makeig, "High-frequency broadband modulations of electroencephalographic spectra," Frontiers in Human Neuroscience, vol. 3, 2009.
[12] G. Chanel, J. Kierkels, M. Soleymani, and T. Pun, "Short-term emotion assessment in a recall paradigm," Intl. J. Human-Computer Studies, vol. 67(8), 2009, pp. 607-627.
[13] H. L. Bonny, Music & Consciousness: The Evolution of Guided Imagery and Music, Barcelona Publishers, 2002.
[14] K. K. Ang, Z. Y. Chin, H. Zhang, and C. Guan, "Filter bank common spatial pattern (FBCSP) in brain-computer interface," in IEEE Intl. Joint Conf. on Neural Networks, 2008, pp. 2390-2397.
[15] H. Zou and T. Hastie, "Regularization and variable selection via the elastic net," J. of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67(2), 2005, pp. 301-320.
[16] A. Delorme et al., "EEGLAB, SIFT, NFT, BCILAB, and ERICA: new tools for advanced EEG processing," Computational Intelligence and Neuroscience, vol. 2011, 2011.
[17] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Trans. Rehab. Eng., vol. 8(4), 2000, pp. 441-446.

