
IEEE SENSORS JOURNAL, VOL. 22, NO. 11, JUNE 1, 2022 10751

An EEG Data Processing Approach for Emotion Recognition

Guofa Li, Member, IEEE, Delin Ouyang, Yufei Yuan, Wenbo Li, Zizheng Guo, Xingda Qu, and Paul Green

Abstract—As the most direct way to measure the true emotional states of humans, EEG-based emotion recognition has been widely used in affective computing applications. In this paper, we aim to propose a novel emotion recognition approach that relies on a reduced number of EEG electrode channels and, at the same time, overcomes the negative impact of individual differences to achieve a high recognition accuracy. According to the statistical significance results of EEG power spectral density (PSD) features obtained from the SJTU Emotion EEG Dataset (SEED), six candidate sets of EEG electrode channels are determined. An experiment-level batch normalization (BN) is proposed and applied to the features from the candidate sets, and the normalized features are then used for emotion recognition across individuals. Eleven well-accepted classifiers are used for emotion recognition. The experimental results show that the recognition accuracy when using a small portion of the available electrodes is almost the same as, or even better than, that when using all the channels. Based on the reduced number of electrode channels, the application of the experiment-level BN can further improve the recognition accuracy, specifically from 73.33% to 89.63% when using the LR classifier. These results demonstrate that better and easier emotion recognition performance can be achieved based on batch-normalized features from fewer channels, indicating promising uses of our proposed method for real-time emotion recognition in intelligent systems.

Index Terms—Electroencephalogram (EEG), emotion recognition, electrode channel selection, batch normalization, individual differences.

Manuscript received March 17, 2022; accepted April 15, 2022. Date of publication April 21, 2022; date of current version May 31, 2022. This work was supported in part by the NSF China under Grant 51805332 and Grant 52072320 and in part by the Shenzhen Fundamental Research Fund under Grant JCYJ20190808142613246 and Grant 20200803015912001. The associate editor coordinating the review of this article and approving it for publication was Prof. Rosario Morello. (Corresponding author: Xingda Qu.)
Guofa Li, Delin Ouyang, Yufei Yuan, and Xingda Qu are with the Institute of Human Factors and Ergonomics, College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen 518060, China (e-mail: hanshan198@gmail.com; ouyangdelin2021@email.szu.edu.cn; yuanyufei2021@email.szu.edu.cn; quxd@szu.edu.cn).
Wenbo Li is with the State Key Laboratory of Automotive Safety and Energy, School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China (e-mail: liwenbocqu@foxmail.com).
Zizheng Guo is with the School of Transportation and Logistics, Southwest Jiaotong University, Chengdu 611756, China (e-mail: guozizheng@swjtu.edu.cn).
Paul Green is with the University of Michigan Transportation Research Institute (UMTRI) and the Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109 USA (e-mail: pagreen@umich.edu).
Digital Object Identifier 10.1109/JSEN.2022.3168572

1558-1748 © 2022 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.

Authorized licensed use limited to: University of Michigan Library. Downloaded on February 10,2023 at 15:39:24 UTC from IEEE Xplore. Restrictions apply.

I. INTRODUCTION

Emotion recognition has become an important research topic in the field of affective computing and has attracted much attention from researchers. Many emotion recognition methods are based on facial expression data [1]–[5], which are limited when the actual personal emotion is hidden behind a facial expression, consciously or unconsciously, or in situations with poor illumination or rapidly changing light distributions on human faces (e.g., in nighttime driving) [6]. Therefore, using a more direct means (e.g., physiological signals) to recognize human emotions without being affected by the environment or faked facial expressions is essential. As electroencephalogram (EEG) signals have been frequently reported to be closely and directly related to human emotional states [7], recognizing human emotion from EEG data has recently been developed as an alternative solution for emotion recognition.

With the rapid development of machine learning and deep learning technologies, various approaches have been proposed for EEG-based emotion recognition [8]–[17]. Song et al. [18] proposed an EEG-based emotion recognition method with dynamical graph convolutional neural networks (DGCNN). Yin et al. [19] proposed a fusion model of a graph convolutional neural network (GCNN) and long short-term memory (LSTM) networks, in which GCNN and LSTM were used to extract graph-domain features and memorize the relationship changes among EEG channels, respectively.
Maheshwari et al. [20] recognized emotions using multi-
channel EEG signals based on a rhythm-specific multi-channel
deep convolutional neural network (CNN). Li et al. [21]
constructed spatial and temporal neural network models to
recognize emotions from EEG signals. Liang et al. [22]
identified emotions via EEG using an unsupervised decoding
system, while Wang et al. [23] developed a semi-supervised
random vector functional link (RVFL) network for emotion recognition. Liu et al. [24] proposed a subject-independent emotion recognition algorithm based on a dynamic empirical convolutional neural network (DECNN). Topic and Russo [25] utilized deep learning to extract features from feature maps and performed emotion recognition based on topographic and holographic representations of EEG signals. Another example was given by Cui et al. [26], who proposed an end-to-end regional-asymmetric convolutional neural network (RACNN) with an asymmetric differential layer (ADL) that exploits spatial information from adjacent and symmetric channels for emotion recognition. Islam et al. [27] studied EEG emotion recognition based on two categories of methods, deep learning-based and shallow machine learning-based. The results suggested that deep learning algorithms had better emotion recognition performance than shallow learning-based algorithms. However, most of the previous studies depend on EEG signals from a relatively large number of electrode channels, which include considerable noise and redundant data and impose high requirements on hardware devices, especially for deep learning-based methods [28]. Therefore, it would be desirable if similar emotion recognition performance could be achieved based on EEG signals from fewer channels.

Following this motivation, Wang et al. [29] proposed a channel selection method to find an optimal subset of EEG channels by using normalized mutual information. Although a smaller subset was selected, mutual information-based methods always lead to redundant information among the selected features [30]. Similarly, a smaller subset of channels was also selected for emotion recognition in Zheng and Lu [31]; however, how the channels were selected and the corresponding recognition accuracies were not reported, which makes it difficult to evaluate the effectiveness of the method. Given the limitations of the existing channel selection methods, further investigation of the optimal channels for emotion recognition is still needed.

Additionally, individual differences can lead to diversified EEG response patterns, which in turn affect the generalization capability of classifiers across subjects. Therefore, another significant problem in the development of emotion recognition with EEG data is the negative influence of individual differences [32]. To address this problem, Li et al. [33] proposed a normalization method in which the EEG signals in each electrode channel of each person were normalized into the range [0, 1]. Yao et al. [34] applied a cross-subject emotion training method based on complex networks using the visibility graph method to overcome individual differences. Tan et al. [35] proposed a framework based on a spiking neural network (SNN) for subject-independent short-term emotion recognition using EEG signals. Ning et al. [36] used a single-source domain adaptive few-shot learning network (SDA-FSL) to perform cross-subject EEG emotion recognition. Another study [32] presented a domain adaptation method in which task-invariant and task-specific features were integrated in a unified framework to eliminate individual differences. Lu et al. [37] also developed a dynamic entropy-based pattern for subject-independent emotion recognition. However, most of these previous studies took EEG data from different test conditions or experiments without considering the effect of the differences between test conditions/experiments on emotion recognition [24].

Therefore, this paper aims to develop an improved EEG-based emotion recognition method by using channel selection and an improved batch normalization (BN). The main contributions of this paper can be summarized as follows: (1) A smaller set of electrode channels sensitive to human emotions is determined. Using the selected set of electrode channels can improve emotion recognition accuracy at much lower computational cost. (2) The data are preprocessed using an experiment-level BN that normalizes the EEG features of each electrode channel to increase the recognition accuracy across individuals. To the best of our knowledge, this has never been used for emotion recognition before. The obtained results show the effectiveness of this preprocessing method in improving subject-independent emotion recognition.

II. DATASET AND EXPERIMENTS

The publicly available SJTU Emotion EEG Dataset (SEED) [31] was used for model development and evaluation. The data were collected from 62 electrode channels placed according to the international 10-20 system [31] using the ESI NeuroScan System at a sampling rate of 1000 Hz. During data collection, emotional film clips were presented to each subject in 15 separate emotion elicitation trials (see Fig. 1). In each trial, a starting hint was given 5 seconds before the start of each clip. Each film clip was approximately 4 minutes long. After each clip, subjects had 45 seconds to complete a questionnaire [38] reporting their immediate emotional reactions to the film clip. Subsequently, another 15 seconds were provided for rest before the start of the next trial. The order of emotions presented in the selected clips was 1, 0, −1, −1, 0, 1, −1, 0, 1, 1, 0, −1, 0, 1, −1, where 1 stands for positive emotion, 0 for neutral, and −1 for negative.

Fig. 1. The protocol of experiments in the SEED dataset.

The collected EEG data were downsampled to 200 Hz and filtered to the 0-75 Hz frequency band to remove noise


and artifacts. Fifteen young subjects (7 males and 8 females; age: 23.27±2.37 years) participated in the SEED data collection. Each subject repeated the abovementioned data collection procedure three times with an interval of one week or longer. Thus, the SEED dataset includes a total of 45 experimental sessions and 675 trials (225 trials for each emotion category).

III. PROPOSED METHOD AND RESULTS

A. Feature Extraction

To extract features from the candidate electrode channels, MNE-Python [39] is used with a 64-channel Biosemi ActiveTwo montage [40], in which 58 electrode channels of the SEED dataset are included (Fig. 2) and 4 electrode channels (PO5, PO6, CB1, CB2) are excluded.

Fig. 2. The EEG topo map of the 58 electrode channels.

Power spectral density (PSD) is a commonly used and well-accepted feature in the analysis of human EEG activity. PSD features show the distribution of signal power as a function of frequency [41], [42]. The frequency bands are delta (1-4 Hz), theta (4-8 Hz), alpha (8-14 Hz), beta (14-30 Hz), and gamma (30-50 Hz). Within each frequency band, the values of the PSD features are averaged over time for each channel. Thus, each channel has five frequency features corresponding to the five frequency bands, and there are 290 frequency features in total for the 58 channels in each trial. Given that there are 675 trials in the SEED dataset, the final data format of the PSD features is a matrix of size 675 × 290, where 675 is the number of emotion elicitation trials and 290 is the number of PSD features in each trial.

Differential entropy (DE) is also an effective measure of signal complexity in EEG analysis [43]. It has been reported in [44] that DE performs well in discriminating EEG responses between low and high frequency energies. After loading and processing the DE features from the SEED dataset, their data format in each trial is (58, N, 5), where 58 is the number of channels, N is the number of DE features extracted evenly over the film clips, and 5 is the number of frequency bands. To give the DE features the same format as the PSD features, the DE values in each trial are averaged, so that the format is transformed to (58, 5). The transformed data of each trial are then flattened to 1 dimension and stacked together to form a set of DE features with the format (675, 290).

B. Electrode Channel Selection

Previous studies reported that not all electrode channels contribute to emotion recognition [29], [31]. Therefore, selecting the electrode channels that are most sensitive to human emotions would probably improve EEG-based emotion recognition performance [45]. In this study, significance analysis of the PSD features among the three emotions is applied separately on the five frequency bands for the 58 electrode channels to select the optimal electrode channels. Because the PSD features are not normally distributed, a nonparametric Kruskal-Wallis test is used to examine the difference of the PSD features among the different emotions (Table I). Six candidate sets of channels are formed according to the number of frequency bands with statistically significant differences, i.e., Set 1: all 58 channels; Set 2: channels with ≥1 significant band; Set 3: ≥2 significant bands; Set 4: ≥3 significant bands; Set 5: ≥4 significant bands; Set 6: channels with 5 significant bands. The positions of the electrode channels in each set are shown in Fig. 3. The results show that the channels in Set 6 (the one having the most significant bands) are mainly located in the occipital and temporal lobes.

Three new subjects are randomly selected from the SEED IV dataset [46] (i.e., subjects 4, 7 and 15) to validate the significance analysis results on SEED. Following the same steps as above, significance analysis is performed on the PSD features extracted from the nine electrode channels of these three subjects. As shown in Table II, most of the examined features still show statistical significance, indicating that the significance analysis results on SEED are credible and applicable to other datasets.

To examine the effectiveness of the different channel sets for emotion recognition, the PSD and DE features of each set are delivered to eleven classical classifiers: K Nearest Neighbors (KNN), Logistic Regression (LR), Support Vector Machine (SVM), Gaussian Naive Bayes (GNB), Decision Tree (DT), Random Forest (RF), eXtreme Gradient Boosting (XGB), Multilayer Perceptron (MLP), Bootstrap Aggregating (BA), CNN, and LSTM. These classifiers have been well proved to have high accuracies and strong adaptability to different classification tasks [47]–[49]. The 15 subjects in the SEED dataset are randomly divided into a training set (80%, i.e., 12 subjects) and a test set (20%, i.e., 3 subjects). In total, there are C(15, 3) = 455 possible combinations of the training and test sets, and all the combinations are examined. The emotion recognition accuracies of all the possible combinations are averaged to evaluate the recognition performance of the selected classifiers for each channel set (Table III). The results indicate that the highest mean recognition accuracy is achieved when using the smallest number of channels (i.e., Set 6 with only 9 channels). The emotion recognition accuracy increases as the number of channels is reduced and becomes relatively stable after the number of channels reaches 36. The same pattern occurs for both PSD and DE features. In general, as highlighted in Table III, Set 6 with its 9 channels can serve as a promising solution for emotion recognition.
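The band-averaged PSD pipeline of Section III-A can be sketched as follows. This is a minimal illustration, not the authors' code: the paper uses MNE-Python, whereas this sketch substitutes SciPy's Welch estimator, and `psd_band_features` is a hypothetical helper operating on one trial of synthetic data.

```python
import numpy as np
from scipy.signal import welch

# The five frequency bands used in the paper (Hz).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 30), "gamma": (30, 50)}
FS = 200  # SEED EEG is downsampled to 200 Hz

def psd_band_features(eeg):
    """eeg: (n_channels, n_samples) array for one trial.
    Returns a flat vector of n_channels * 5 band-averaged PSD values."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))  # mean band power per channel
    # Stack to (n_channels, 5), then flatten to one feature row per trial.
    return np.stack(feats, axis=-1).reshape(-1)

trial = np.random.randn(58, FS * 60)   # one synthetic 1-minute, 58-channel trial
features = psd_band_features(trial)    # shape (290,); 675 trials -> (675, 290)
```

Stacking one such row per trial yields the 675 × 290 PSD matrix described above.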


TABLE I
STATISTICAL ANALYSIS RESULTS OF PSD FEATURES (SEED DATASET) ACROSS 58 ELECTRODE CHANNELS IN THE FIVE FREQUENCY BANDS. THE RED COLOR HIGHLIGHTS THE RESULTS WITH STATISTICAL SIGNIFICANCE (P ≤ 0.05)

Fig. 3. The positions of the six selected sets of electrode channels. (The orange circles in each subgraph represent the selected channels.) (a) Set 1: 58 electrode channels. (b) Set 2: 55 electrode channels. (c) Set 3: 48 electrode channels. (d) Set 4: 37 electrode channels. (e) Set 5: 26 electrode channels. (f) Set 6: 9 electrode channels.
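The selection rule behind the six sets (counting, per channel, the frequency bands whose PSD differs significantly across the three emotions under a Kruskal-Wallis test) can be sketched as follows. `psd` and `labels` are placeholder arrays standing in for the real SEED features, not the dataset loader's actual output.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
psd = rng.random((675, 58, 5))               # (trials, channels, bands), synthetic
labels = np.array([1, 0, -1] * 225)          # one emotion label per trial

def significant_band_counts(psd, labels, alpha=0.05):
    """Count, per channel, how many of the 5 bands differ significantly
    across the three emotions (Kruskal-Wallis, p <= alpha)."""
    groups = [psd[labels == v] for v in (-1, 0, 1)]
    counts = np.zeros(psd.shape[1], dtype=int)
    for ch in range(psd.shape[1]):
        for band in range(psd.shape[2]):
            _, p = kruskal(*(g[:, ch, band] for g in groups))
            counts[ch] += p <= alpha
    return counts

counts = significant_band_counts(psd, labels)
# Set 6 = channels significant in all five bands; Set k thresholds on counts.
set6 = np.where(counts == 5)[0]
```

On the synthetic data above almost no channel survives all five bands; on the real PSD features the thresholds 1-5 reproduce Sets 2-6.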

C. Activation Status of Brain Regions in Different Emotions for Different Subjects

To further investigate the relationship between the nine selected channels and human emotions, topo maps of the PSD features across the three emotions from different subjects are created, which demonstrate the power distribution of the EEG [41], [42]. The PSD features from the same subject in the same emotional state are averaged over time for each

Fig. 4. PSD feature topo maps of subjects in the five frequency bands for different emotions. The subjects are evenly selected.

emotional state. Thus, for each subject, three sets of emotion data (positive, neutral, and negative) are obtained for each frequency band (i.e., delta, theta, alpha, beta, and gamma). The topo maps of each emotion are then visualized and compared across different subjects. The corresponding results of the PSD features from three evenly selected subjects are shown in Fig. 4. The illustrated results show that: (1) The channels activated by human emotions are mainly distributed in the temporal lobe brain regions, especially the left temporal lobe. As shown in Fig. 3(f), the nine EEG channels selected in this study are mainly distributed in the temporal lobe regions, explaining why the nine selected channels can effectively help recognize human emotions with even higher accuracies. (2) Large individual differences can be observed across subjects for each emotion. Therefore, normalization across subjects would diminish the sensitivity of the features in recognizing emotions for a specific subject. Thus, within-subject normalization methods are needed to improve subject-independent emotion recognition performance.

D. Batch Normalization

Previous studies have reported that there exist individual differences in human behavior and physiological responses [32]. Fig. 5 illustrates the PSD feature topo maps from different experimental sessions of the same subject, which shows the existence of individual differences in SEED. To reduce the impact of individual differences on emotion recognition, an experiment-level batch normalization (BN) method is proposed. Specifically, the features in each frequency band of the 9 selected channels are normalized within the experimental


TABLE II
STATISTICAL ANALYSIS RESULTS OF PSD FEATURES (SEED IV DATASET) ACROSS THE NINE SELECTED CHANNELS IN THE FIVE FREQUENCY BANDS. THE RED COLOR HIGHLIGHTS THE RESULTS WITH STATISTICAL SIGNIFICANCE (P ≤ 0.05)

session by using the following equation:

F_BN_i = (F_i − F_min) / (F_max − F_min)

where F_i and F_BN_i are the original value of a specific feature and its value after applying this BN method, respectively, and F_min and F_max are the minimum and maximum values of the corresponding feature in each experimental session.

Fig. 5. PSD feature topo maps from subjects in different experiments. The subjects are evenly selected.
The protocol of BN is shown in Fig. 6. To evaluate the effectiveness of BN for emotion recognition, the PSD and DE features are used to examine cross-subject emotion recognition performance with the experiment-level BN. The PSD and DE features are extracted from the five frequency bands of the nine selected electrode channels. If M subjects are used for model training and the data from the remaining subjects are used as the test set, there are C(15, M) possible combinations of randomly selected training and test sets. All these combinations are examined and tested for emotion recognition, and the corresponding results based on the PSD and DE features are shown in Table IV and Table V, respectively.

Fig. 6. The protocol of BN processing in one experiment.

The results based on the PSD features with the proposed BN in Table IV show that the recognition accuracy generally increases with the size of the training set (i.e., the number of subjects in the training set). The mean accuracy of the CNN classifier when using the PSD features from 14 subjects for model training is 61.93%, 24.84 percentage points higher than the accuracy when using the data from only one subject. Similar trends can be found in the results based on the DE features with the proposed BN in Table V. The mean accuracy of the CNN classifier when using the DE features from 14 subjects for model training is 67.70%, while the accuracy when using the DE features from only one subject is only 38.13%.

The comparison between the recognition accuracies before and after applying the proposed BN to the PSD or DE features is shown in Fig. 7. For a fair comparison, the model training and testing based on the features without BN follow the same procedure. The results in Fig. 7 indicate that using


TABLE III
E MOTION RECOGNITION R ESULTS ACROSS I NDIVIDUALS B ASED ON D IFFERENT S ETS O F S ELECTED E LECTRODE C HANNELS . T HE M EAN
ACCURACY I S T HE AVERAGE OF A LL THE C LASSIFIERS . T HE B EST ACCURACY OF E ACH C LASSIFIER W HEN U SING PSD OR DE
F EATURES I S H IGHLIGHTED IN B OLD R ED, AND THE S ECOND -B EST IN B OLD B LUE , AND THE T HIRD -B EST IN B OLD

TABLE IV
T HE R ECOGNITION R ESULTS OF PSD F EATURE AND DE F EATURE F ROM THE F IVE F REQUENCY B ANDS IN 9 E LECTRODE C HANNELS W ITH 80%
F EATURE D ATA IN E ACH E XPERIMENT F ROM 15 S UBJECTS AS T RAINING S ET AND THE R EST 20% F EATURE D ATA AS T ESTING S ET
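The exhaustive subject-level cross-validation used throughout (C(15, 3) = 455 splits for the channel-set comparison, and C(15, M) splits for the training-size sweep behind Tables IV and V) can be enumerated directly. A sketch with hypothetical names:

```python
from itertools import combinations
from math import comb

subjects = range(1, 16)  # the 15 SEED subjects

# Every way of holding out 3 subjects for testing (12 train / 3 test).
splits = [(set(subjects) - set(test), test)
          for test in combinations(subjects, 3)]

print(len(splits), comb(15, 3))  # both are 455

# For the training-size sweep, C(15, M) splits exist for each M.
sweep_sizes = {m: comb(15, m) for m in range(1, 15)}
```

Averaging a classifier's accuracy over all 455 (or C(15, M)) splits gives the mean accuracies reported in the tables.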

features with the proposed BN for emotion recognition can achieve obviously better accuracies than using features without BN, no matter which classifier is used. When comparing the performance of the different classifiers, SVM and LR are superior to the others, which is consistent with the results shown in Tables III, IV, and V.

All the above results are obtained from each individual separately. To further examine the effectiveness of our proposed method for emotion recognition, the data from all 15 subjects are combined, with 80% of the combined samples used for training and 20% for testing. Almost all the recognition accuracies when using the proposed BN are


TABLE V
EMOTION RECOGNITION RESULTS ACROSS INDIVIDUALS BASED ON DE FEATURES (WITH BN) FROM THE SELECTED 9 ELECTRODE CHANNELS. THE BEST ACCURACY OF EACH CLASSIFIER IS HIGHLIGHTED IN BOLD RED, THE SECOND-BEST IN BOLD BLUE, AND THE THIRD-BEST IN BOLD

greater than the accuracies without it (see Table VI). The greatest recognition accuracy is achieved with the SVM and LR classifiers when using the DE features with the proposed BN. This highest value is 11.85% and 16.30% greater than the values without BN for the SVM and LR classifiers, respectively. These results indicate that our proposed BN also performs well when using the combined data from all the subjects.

In addition, as shown in Table V, the recognition accuracies of SVM and LR when using 80% of the data for training are 77.85% and 77.89%, respectively. The corresponding values in Table VI are about 12% higher. This is probably because the general characteristics of a person's emotional EEG responses can be learned from the data of that person in the training set. Therefore, the emotion recognition models can better recognize the emotion samples from the same person in the test set.

IV. DISCUSSION

A. The Relationship Between the Nine Selected Channels and Human Emotion

Fig. 3 and Fig. 4 show that the nine selected channels are mainly in the temporal lobe brain regions, indicating their connection with emotion. Similarly, Liu et al. [50] found that the temporal lobes were sensitive to emotion activities in the human brain; [51] and [52] reported similar findings. An earlier study suggested that the temporal region affected visceral emotional responses to evocative stimuli based on anatomical connectivity experiments [53], indicating that the temporal region, where the nine selected channels are located, is closely related to human emotional responses. Furthermore, other researchers proposed that perception-emotion linkages could be stored in the temporal lobe, which would react when emotions were perceived or imagined [54]. They speculated that the dorsal portions of the temporal pole were responsible for coupling visceral emotional responses with representations of complex auditory stimuli, while the ventral portions coupled visceral emotional responses to complex visual stimuli. This may explain why the nine selected channels in the temporal region are sensitive to human emotions elicited by film clips (i.e., auditory and visual stimuli). However, the researchers in [50] and [51] also reported that the frontal lobe was sensitive to human emotions as well. A previous study also proposed that patients with frontal lobe brain damage might change their emotional behavior [55]. But our results in Table I show that the PSD features from the channels in the frontal lobe are not statistically significant among the examined emotions. This may be because the PSD feature, as a single index, cannot fully reflect all the changes of the brain under emotional stimuli. This indicates that the mechanism of brain EEG responses to human emotions should be further investigated [56].

B. The Potential of Using Only Nine Channels for Emotion Recognition

The results in Table VI show that the recognition accuracy can reach 89.63% when using only the nine selected channels. The corresponding accuracies in [57] and [31] are 82.87% and 83.99%


TABLE VI
EMOTION RECOGNITION RESULTS ACROSS INDIVIDUALS BASED ON PSD FEATURES (WITH BN) FROM THE SELECTED 9 ELECTRODE CHANNELS. THE BEST ACCURACY OF EACH CLASSIFIER IS HIGHLIGHTED IN BOLD RED, THE SECOND-BEST IN BOLD BLUE, AND THE THIRD-BEST IN BOLD

TABLE VII
STATISTICAL ANALYSIS RESULTS OF PSD FEATURES IN THE FIVE FREQUENCY BANDS AFTER APPLYING BN. THE RED COLOR HIGHLIGHTS THE RESULTS WITH STATISTICAL SIGNIFICANCE (P ≤ 0.05)
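The kind of effect summarized in Table VII can be illustrated in miniature on synthetic data: a per-session offset masks a weak emotion effect in a pooled Kruskal-Wallis test, and per-session min-max BN removes that offset. All names and the toy data below are illustrative, not drawn from SEED.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

# One synthetic feature over 45 sessions x 15 trials, using the SEED
# per-session label order; a large session offset hides a small emotion effect.
labels = np.tile([1, 0, -1, -1, 0, 1, -1, 0, 1, 1, 0, -1, 0, 1, -1], 45)
sessions = np.repeat(np.arange(45), 15)
feature = (rng.normal(size=675)            # trial noise
           + 0.3 * labels                  # weak emotion effect
           + 5.0 * rng.normal(size=45)[sessions])  # per-session offset

def kw_p(x):
    """Pooled Kruskal-Wallis p-value across the three emotion groups."""
    return kruskal(*(x[labels == v] for v in (-1, 0, 1)))[1]

# Experiment-level BN: min-max rescale within each session.
bn = np.empty_like(feature)
for s in range(45):
    f = feature[sessions == s]
    bn[sessions == s] = (f - f.min()) / (f.max() - f.min())

p_raw, p_bn = kw_p(feature), kw_p(bn)  # compare significance with/without BN
```

With the session offsets removed, the pooled rank test is typically far more sensitive to the emotion effect, mirroring the increase in significant channels and bands between Tables I and VII.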

when using all the 62 electrode channels, respectively. Based on the EEG signals from the 62 channels in SEED, the authors of [58] developed a single-task DNN (deep neural network), a multi-task DNN, and an adversarial DNN for emotion recognition, with accuracies of 59.05%, 62.15%, and 75.31%, respectively. The accuracy when using a convolutional neural network (CNN) was 84.35% in [59]. Using a linear formulation of the DE feature extractor and a bidirectional long short-term memory (BiLSTM) network, the method proposed in [60] achieved a recognition accuracy of 80.64%. These results show that using only the nine channels selected in this study achieves recognition results competitive with previous studies that used all 62 channels.

Previous studies also used subsets of EEG channels for emotion recognition based on SEED. For example, 16 electrode channels were selected to classify human emotions using SVM in [61]; the reported maximal recognition accuracy was 74.06%. Researchers in [62] applied 13 electrode channels and achieved a recognition accuracy of 70.10%. When using features from 12 selected channels for emotion recognition in [31], the accuracy reached 86.65%. Compared with these previous studies, our proposed method based on only nine electrode channels achieves the best performance, with an accuracy of 89.63%, superior to those based on other subsets of EEG channels.

C. The Effect of Experiment-Level BN on Feature Sensitivity to Human Emotions

To further investigate how the experiment-level BN contributes to the recognition accuracy improvement, significance analysis is applied to the PSD features with and without the experiment-level BN in the five frequency bands across channels. Comparing the results in Tables I and VII, the number of channels with statistically significant differences is increased when involving the experiment-level BN, and the number of


Fig. 8. A framework of practical applications.

frequency bands with statistical significances is also increased for each channel. These results indicate that the feature sensitivities to human emotions are increased after applying the experiment-level BN, demonstrating its effectiveness in emotion recognition.

Fig. 7. Emotion recognition accuracies based on the nine selected features with and without BN across individuals. (The suffix “-BN” represents “with the proposed BN” while no suffix represents “without the proposed BN.”)

D. Practical Applications

EEG emotion recognition is of significance in the field of passive brain-computer interface (BCI), which aims to improve the communication between individuals and machines, and to develop applications that adapt to changing states of users [63]. The proposed method in this paper can be applied in EEG signal collection and processing with less computational cost and greater performance on emotion recognition (see Fig. 8). Motivated by the EEG emotion recognition results, computers can be designed to optimize interaction strategies with operators, leading to more natural and effective operation experiences. Another typical application is for drivers. Road rage has been identified as a typical cause of traffic accidents [64]. The proposed method can be used for driver anger recognition, based on which the driver's irritable characteristics can be evaluated and corresponding intervention strategies (e.g., feedback reports or music) can be developed to regulate the negative emotion [65].

E. Novelties and Advantages of Our Proposed Method

The novelties and advantages of our work can be summarized in two aspects. First, we propose a method with much fewer electrode channels for emotion recognition. Specifically, only nine electrode channels are used in our method, but it is able to achieve comparable or even better recognition accuracy than the conventional methods using all the channels. Using fewer electrode channels leads to lower computational cost and more feasible solutions for emotion recognition in practical applications. Second, an experiment-level BN is newly developed. Different from previous studies involving BN, the BN developed in this study is conducted within an experiment (not on a single trial or on all the experiments of a single participant) to avoid interference or noise from other factors (e.g., variations in mental status), which can diminish the inter- and intra-individual differences and baseline deviation problems in the collected data. Because of its simple operation, BN has great adaptability with different algorithms, including classical machine learning algorithms and deep learning algorithms. To the best of our knowledge, this experiment-level BN has never been used in previous EEG-based emotion recognition studies.

F. Limitations and Future Work

The main limitations and future work of this study can be summarized in the following aspects: (1) The features are averaged in the temporal dimension in the preprocessing procedure, which could result in the loss of useful information for emotion recognition [66], [67]. Future studies should include more temporal-dimension features without involving the averaging operation. (2) Only eleven classical classifiers (KNN, LR, SVM, GNB, DT, RF, XGB, MLP, BA, CNN and LSTM) are examined in this paper. Future work should also consider more advanced machine learning methods in artificial intelligence. (3) The nine selected channels may provide insufficient features for CNN and LSTM model training, especially for the feature extraction of CNN. The temporal information loss in the averaging operation also limits the performance of LSTM. In future studies, specific models should be developed based on deep learning theories and technologies to increase recognition accuracy. (4) The method proposed in this study is mainly for offline applications because the premise of using BN is that the maximum and minimum of the EEG signals in each experiment are available. Future work should focus more on the development of online technologies for real-time


applications. (5) The numbers of participants in the SEED and SEED IV datasets are limited. More experiment data should be collected in future studies.

V. CONCLUSION

Six sets of EEG electrode channels are selected from the source channels based on the public SEED dataset. The best set with only nine channels mainly from the temporal lobes is then determined based upon the emotion recognition accuracies. An experiment-level BN method is developed to reduce the effect of individual differences so as to improve feature sensitivity on emotion recognition. Our results show that the recognition performance achieved based on the experiment-level BN features from the nine selected channels is better than the results when using the signals from all the source channels. Our proposed method has the potential to facilitate the deployment of emotion recognition applications based on cost-effective devices with fewer EEG channels.

REFERENCES

[1] A. Toisoul, J. Kossaifi, A. Bulat, G. Tzimiropoulos, and M. Pantic, “Estimation of continuous valence and arousal levels from faces in naturalistic conditions,” Nature Mach. Intell., vol. 3, no. 1, pp. 42–50, Jan. 2021.
[2] B. Ko, “A brief review of facial emotion recognition based on visual information,” Sensors, vol. 18, no. 2, p. 401, Jan. 2018.
[3] D. Y. Liliana, “Emotion recognition from facial expression using deep convolutional neural network,” J. Phys., Conf., vol. 1193, Apr. 2019, Art. no. 012004.
[4] N. Mehendale, “Facial emotion recognition using convolutional neural networks (FERC),” Social Netw. Appl. Sci., vol. 2, no. 3, pp. 1–8, Mar. 2020.
[5] M. Mohammadpour, H. Khaliliardali, S. M. R. Hashemi, and M. M. AlyanNezhadi, “Facial emotion recognition using deep convolutional networks,” in Proc. IEEE 4th Int. Conf. Knowl. Eng. Innov. (KBEI), Dec. 2017, pp. 17–21.
[6] G. Li, Y. Yang, X. Qu, D. Cao, and K. Li, “A deep learning based image enhancement approach for autonomous driving at night,” Knowl. Syst., vol. 213, Feb. 2021, Art. no. 106617.
[7] M. S. Özerdem and H. Polat, “Emotion recognition based on EEG features in movie clips with channel selection,” Brain Inf., vol. 4, no. 4, pp. 241–252, 2017.
[8] J. X. Chen, P. W. Zhang, Z. J. Mao, Y. F. Huang, D. M. Jiang, and Y. N. Zhang, “Accurate EEG-based emotion recognition on combined features using deep convolutional neural networks,” IEEE Access, vol. 7, pp. 44317–44328, 2019.
[9] K. Giannakaki, G. Giannakakis, C. Farmaki, and V. Sakkalis, “Emotional state recognition using advanced machine learning techniques on EEG data,” in Proc. IEEE 30th Int. Symp. Comput. Med. Syst. (CBMS), Jun. 2017, pp. 337–342.
[10] R. K. Jeevan, V. M. R. S.P., P. S. Kumar, and M. Srivikas, “EEG-based emotion recognition using LSTM-RNN machine learning algorithm,” in Proc. 1st Int. Conf. Innov. Inf. Commun. Technol. (ICIICT), Apr. 2019, pp. 1–4.
[11] C. Qing, R. Qiao, X. Xu, and Y. Cheng, “Interpretable emotion recognition using EEG signals,” IEEE Access, vol. 7, pp. 94160–94170, 2019.
[12] M. Z. Soroush, K. Maghooli, S. K. Setarehdan, and A. M. Nasrabadi, “Emotion classification through nonlinear EEG analysis using machine learning methods,” Int. Clin. Neurosci. J., vol. 5, no. 4, pp. 135–149, Dec. 2018.
[13] A. Hassouneh, A. M. Mutawa, and M. Murugappan, “Development of a real-time emotion recognition system using facial expressions and EEG based on machine learning and deep neural network methods,” Informat. Med. Unlocked, vol. 20, 2020, Art. no. 100372.
[14] V. Doma and M. Pirouz, “A comparative analysis of machine learning methods for emotion recognition using EEG and peripheral physiological signals,” J. Big Data, vol. 7, no. 1, pp. 1–21, Dec. 2020.
[15] O. Bazgir, Z. Mohammadi, and S. A. H. Habibi, “Emotion recognition with machine learning using EEG signals,” in Proc. 25th Nat. 3rd Int. Iranian Conf. Biomed. Eng. (ICBME), Nov. 2018, pp. 1–5.
[16] H. Dabas, C. Sethi, C. Dua, M. Dalawat, and D. Sethia, “Emotion classification using EEG signals,” in Proc. 2nd Int. Conf. Comput. Sci. Artif. Intell. (CSAI), 2018, pp. 380–384.
[17] J. Li, S. Qiu, Y.-Y. Shen, C.-L. Liu, and H. He, “Multisource transfer learning for cross-subject EEG emotion recognition,” IEEE Trans. Cybern., vol. 50, no. 7, pp. 3281–3293, Jul. 2020.
[18] T. Song, W. Zheng, P. Song, and Z. Cui, “EEG emotion recognition using dynamical graph convolutional neural networks,” IEEE Trans. Affect. Comput., vol. 11, no. 3, pp. 532–541, Jul./Sep. 2020.
[19] Y. Yin, X. Zheng, B. Hu, Y. Zhang, and X. Cui, “EEG emotion recognition using fusion model of graph convolutional neural networks and LSTM,” Appl. Soft Comput., vol. 100, Mar. 2021, Art. no. 106954.
[20] D. Maheshwari, S. K. Ghosh, R. K. Tripathy, M. Sharma, and U. R. Acharya, “Automated accurate emotion recognition system using rhythm-specific deep convolutional neural network technique with multi-channel EEG signals,” Comput. Biol. Med., vol. 134, Jul. 2021, Art. no. 104428.
[21] Y. Li, W. Zheng, L. Wang, Y. Zong, and Z. Cui, “From regional to global brain: A novel hierarchical spatial-temporal neural network model for EEG emotion recognition,” IEEE Trans. Affect. Comput., early access, Jun. 14, 2019, doi: 10.1109/TAFFC.2019.2922912.
[22] Z. Liang, S. Oba, and S. Ishii, “An unsupervised EEG decoding system for human emotion recognition,” Neural Netw., vol. 116, pp. 257–268, Aug. 2019.
[23] W. Wang, Y. Peng, and W. Kong, “EEG-based emotion recognition via joint domain adaptation and semi-supervised RVFL network,” in Proc. Int. Conf. Intell. Automat. Soft Comput. Cham, Switzerland: Springer, pp. 413–422.
[24] S. Liu, X. Wang, L. Zhao, J. Zhao, Q. Xin, and S.-H. Wang, “Subject-independent emotion recognition of EEG signals based on dynamic empirical convolutional neural network,” IEEE/ACM Trans. Comput. Biol. Bioinf., vol. 18, no. 5, pp. 1710–1721, Sep. 2021.
[25] A. Topic and M. Russo, “Emotion recognition based on EEG feature maps through deep learning network,” Eng. Sci. Technol., Int. J., vol. 24, no. 6, pp. 1442–1454, Dec. 2021.
[26] H. Cui, A. Liu, X. Zhang, X. Chen, K. Wang, and X. Chen, “EEG-based emotion recognition using an end-to-end regional-asymmetric convolutional neural network,” Knowl. Syst., vol. 205, Oct. 2020, Art. no. 106243.
[27] M. R. Islam et al., “Emotion recognition from EEG signal focusing on deep learning and shallow learning techniques,” IEEE Access, vol. 9, pp. 94601–94624, 2021.
[28] Y. Li, W. Zheng, L. Wang, Y. Zong, and Z. Cui, “From regional to global brain: A novel hierarchical spatial-temporal neural network model for EEG emotion recognition,” IEEE Trans. Affect. Comput., early access, Jun. 14, 2019, doi: 10.1109/TAFFC.2019.2922912.
[29] Z. Wang, S. Hu, and H. Song, “Channel selection method for EEG emotion recognition using normalized mutual information,” IEEE Access, vol. 7, pp. 143303–143311, 2019.
[30] G. Brown, A. Pocock, M.-J. Zhao, and M. Luján, “Conditional likelihood maximisation: A unifying framework for information theoretic feature selection,” J. Mach. Learn. Res., vol. 13, no. 1, pp. 27–66, Jan. 2012.
[31] W.-L. Zheng and B.-L. Lu, “Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks,” IEEE Trans. Auton. Mental Develop., vol. 7, no. 3, pp. 162–175, Sep. 2015.
[32] J. Li, S. Qiu, C. Du, Y. Wang, and H. He, “Domain adaptation for EEG emotion recognition based on latent representation similarity,” IEEE Trans. Cognit. Develop. Syst., vol. 12, no. 2, pp. 344–353, Jun. 2020.
[33] M. Li, H. Xu, X. Liu, and S. Lu, “Emotion recognition from multi-channel EEG signals using K-nearest neighbor classification,” Technol. Health Care, vol. 26, no. S1, pp. 509–519, Jul. 2018.
[34] L. Yao, M. Wang, Y. Lu, H. Li, and X. Zhang, “EEG-based emotion recognition by exploiting fused network entropy measures of complex networks across subjects,” Entropy, vol. 23, no. 8, p. 984, Jul. 2021.
[35] C. Tan, M. Šarlija, and N. Kasabov, “NeuroSense: Short-term emotion recognition and understanding based on spiking neural network modelling of spatio-temporal EEG patterns,” Neurocomputing, vol. 434, pp. 137–148, Apr. 2021.
[36] R. Ning, C. L. P. Chen, and T. Zhang, “Cross-subject EEG emotion recognition using domain adaptive few-shot learning networks,” in Proc. IEEE Int. Conf. Bioinf. Biomed. (BIBM), Dec. 2021, pp. 1468–1472.


[37] Y. Lu, M. Wang, W. Wu, Y. Han, Q. Zhang, and S. Chen, “Dynamic entropy-based pattern learning to identify emotions from EEG signals across individuals,” Measurement, vol. 150, Jan. 2020, Art. no. 107003.
[38] P. Philippot, “Inducing and assessing differentiated emotion-feeling states in the laboratory,” Cognition Emotion, vol. 7, no. 2, pp. 171–193, Mar. 1993.
[39] A. Gramfort et al., “MEG and EEG data analysis with MNE-Python,” Frontiers Neurosci., vol. 7, p. 267, Dec. 2013.
[40] G. F. González, G. Žarić, J. Tijms, M. Bonte, L. Blomert, and M. W. van der Molen, “Brain-potential analysis of visual word recognition in dyslexics and typically reading children,” Frontiers Hum. Neurosci., vol. 8, p. 474, Jun. 2014.
[41] O. Dressler, G. Schneider, G. Stockmanns, and E. F. Kochs, “Awareness and the EEG power spectrum: Analysis of frequencies,” Brit. J. Anaesthesia, vol. 93, no. 6, pp. 806–809, Dec. 2004.
[42] S. A. Unde and R. Shriram, “PSD based coherence analysis of EEG signals for stroop task,” Int. J. Comput. Appl., vol. 95, no. 16, pp. 1–5, Jun. 2014.
[43] J. W. Gibbs, Elementary Principles in Statistical Mechanics: Developed With Especial Reference to the Rational Foundation of Thermodynamics. New Haven, CT, USA: Yale Univ. Press, 1914.
[44] R.-N. Duan, J.-Y. Zhu, and B.-L. Lu, “Differential entropy feature for EEG-based emotion classification,” in Proc. 6th Int. IEEE/EMBS Conf. Neural Eng. (NER), Nov. 2013, pp. 81–84.
[45] K. Yang, L. Tong, J. Shu, N. Zhuang, B. Yan, and Y. Zeng, “High gamma band EEG closely related to emotion: Evidence from functional network,” Frontiers Hum. Neurosci., vol. 14, p. 89, Mar. 2020.
[46] W.-L. Zheng, W. Liu, Y. Lu, B.-L. Lu, and A. Cichocki, “EmotionMeter: A multimodal framework for recognizing human emotions,” IEEE Trans. Cybern., vol. 49, no. 3, pp. 1110–1122, Mar. 2019.
[47] F. Pereira, T. Mitchell, and M. Botvinick, “Machine learning classifiers and fMRI: A tutorial overview,” NeuroImage, vol. 45, no. 1, pp. S199–S209, Mar. 2009.
[48] M. M. R. Mamun, O. Sharif, and M. M. Hoque, “Classification of textual sentiment using ensemble technique,” Social Netw. Comput. Sci., vol. 3, no. 1, p. 49, Jan. 2022.
[49] C. A. Ul Hassan, M. S. Khan, and M. A. Shah, “Comparison of machine learning algorithms in data classification,” in Proc. 24th Int. Conf. Autom. Comput. (ICAC), Sep. 2018, pp. 1–6.
[50] X. Liu et al., “Emotion recognition and dynamic functional connectivity analysis based on EEG,” IEEE Access, vol. 7, pp. 143293–143302, 2019.
[51] H. Jiang, Z. Wang, X. Gui, and G. Yang, “Correlation study of emotional brain areas induced by video,” in Proc. Int. Conf. Testbeds Res. Infrastruct. Cham, Switzerland: Springer, 2019, pp. 199–212.
[52] P. A. Kragel and K. S. LaBar, “Decoding the nature of emotion in the brain,” Trends Cogn. Sci., vol. 20, no. 6, pp. 444–455, 2016.
[53] H. Kondo, K. S. Saleem, and J. L. Price, “Differential connections of the temporal pole with the orbital and medial prefrontal networks in macaque monkeys,” J. Comparative Neurol., vol. 465, no. 4, pp. 499–523, Oct. 2003.
[54] I. R. Olson, A. Plotzker, and Y. Ezzyat, “The enigmatic temporal pole: A review of findings on social and emotional processing,” Brain, vol. 130, no. 7, pp. 1718–1731, May 2007.
[55] E. T. Rolls, J. Hornak, D. Wade, and J. McGrath, “Emotion-related learning in patients with social and emotional changes associated with frontal lobe damage,” J. Neurol., Neurosurgery Psychiatry, vol. 57, no. 12, pp. 1518–1524, Dec. 1994.
[56] J. Zhang, S. Zhao, W. Huang, and S. Hu, “Brain effective connectivity analysis from EEG for positive and negative emotion,” in Proc. Int. Conf. Neural Inf. Process. Cham, Switzerland: Springer, 2017, pp. 851–857.
[57] Z. Wang, R. Jiao, and H. Jiang, “Emotion recognition using WT-SVM in human-computer interaction,” J. New Media, vol. 2, no. 3, pp. 121–130, 2020.
[58] S. Hwang, M. Ki, K. Hong, and H. Byun, “Subject-independent EEG-based emotion recognition using adversarial learning,” in Proc. 8th Int. Winter Conf. Brain-Comput. Interface (BCI), Feb. 2020, pp. 1–4.
[59] Z. Wang, Y. Tong, and X. Heng, “Phase-locking value based graph convolutional neural networks for emotion recognition,” IEEE Access, vol. 7, pp. 93711–93722, 2019.
[60] V. M. Joshi and R. B. Ghongade, “EEG based emotion detection using fourth order spectral moment and deep learning,” Biomed. Signal Process. Control, vol. 68, Jul. 2021, Art. no. 102755.
[61] R. Nivedha, M. Brinda, D. Vasanth, M. Anvitha, and K. V. Suma, “EEG based emotion recognition using SVM and PSO,” in Proc. Int. Conf. Intell. Comput., Instrum. Control Technol. (ICICICT), Jul. 2017, pp. 1597–1600.
[62] L. Tong, J. Zhao, and W. Fu, “Emotion recognition and channel selection based on EEG signal,” in Proc. 11th Int. Conf. Intell. Comput. Technol. Autom. (ICICTA), Sep. 2018, pp. 101–105.
[63] A. Al-Nafjan, M. Hosny, Y. Al-Ohali, and A. Al-Wabil, “Review and classification of emotion recognition based on EEG brain-computer interface system research: A systematic review,” Appl. Sci., vol. 7, no. 12, p. 1239, Dec. 2017.
[64] G. Li et al., “Influence of traffic congestion on driver behavior in post-congestion driving,” Accident Anal. Prevention, vol. 141, Jun. 2020, Art. no. 105508.
[65] X. Hu et al., “SAfeDJ: A crowd-cloud codesign approach to situation-aware music delivery for drivers,” ACM Trans. Multimedia Comput., Commun., Appl., vol. 12, no. 1s, pp. 1–24, Oct. 2015.
[66] D. Nemrodov, M. Niemeier, A. Patel, and A. Nestor, “The neural dynamics of facial identity processing: Insights from EEG-based pattern analysis and image reconstruction,” Eneuro, vol. 5, no. 1, Jan. 2018, Art. no. e0358-17.
[67] G. Li, W. Yan, S. Li, X. Qu, W. Chu, and D. Cao, “A temporal-spatial deep learning approach for driver distraction detection based on EEG signals,” IEEE Trans. Autom. Sci. Eng., early access, Jun. 24, 2021, doi: 10.1109/TASE.2021.3088897.

Guofa Li (Member, IEEE) received the Ph.D. degree in mechanical engineering from Tsinghua University, Beijing, China, in 2016. He is currently an Associate Research Professor with the College of Mechatronics and Control Engineering, Shenzhen University, Guangdong, China. He has published more than 60 papers in his research areas. His research interests include environment perception, driver behavior analysis, human-like decision-making based on artificial intelligence technologies in autonomous vehicles, and intelligent transportation systems. He was a recipient of the Young Elite Scientists Sponsorship Program in China and the Best Paper Awards from the China Association for Science and Technology (CAST) and the Automotive Innovation Journal. He serves as an Associate Editor for IEEE SENSORS JOURNAL and a Lead Guest Editor for IEEE Intelligent Transportation Systems Magazine and Automotive Innovation.

Delin Ouyang received the bachelor’s degree from Shenzhen University, Shenzhen, China, in 2021, where he is currently pursuing the master’s degree in mechanical engineering with the College of Mechatronics and Control Engineering. His research interests include the application of brain–computer interface (BCI) and affective computing.

Yufei Yuan received the bachelor’s degree from the Hebei University of Technology, China, in 2021. He is currently pursuing the master’s degree in mechanical engineering with the College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, China. His research interests include the application of brain–computer interface (BCI) and distraction detection.


Wenbo Li received the B.S., M.Sc., and Ph.D. degrees in automotive engineering from Chongqing University, Chongqing, China, in 2014, 2017, and 2021, respectively. He is currently a Postdoctoral Fellow with Tsinghua University. He was also a visiting Ph.D. student with the Waterloo Cognitive Autonomous Driving (CogDrive) Laboratory, University of Waterloo, Canada, from 2018 to 2020. His research interests include smart cockpit, intelligent vehicle, human emotion, driver emotion detection, affective computing, emotion regulation, human–machine interaction, and brain–computer interface.

Zizheng Guo received the Ph.D. degree in traffic planning and management from Southwest Jiaotong University, Chengdu, China, in 2009. He was a Postdoctoral Research Fellow at the Institute of Psychology, Chinese Academy of Sciences, from 2011 to 2013, and a Visiting Scholar at the University of Michigan from 2013 to 2014. He is currently a Professor with the Department of Transportation and Logistics, Southwest Jiaotong University, and the Deputy Director of the Comprehensive Transportation Key Laboratory, Sichuan. His research interests include the mathematical modeling of human cognition and performance, machine learning, traffic safety, and biomedical signal processing.

Xingda Qu received the Ph.D. degree in human factors and ergonomics from Virginia Tech, Blacksburg, VA, USA, in 2008. He is currently a Professor with the Institute of Human Factors and Ergonomics, Shenzhen University, Shenzhen, China. His research interests include transportation safety, occupational safety and health, and human–computer interaction.

Paul Green received the bachelor’s degree in mechanical engineering from Drexel University in 1974, and the master’s degree in industrial and operations engineering, the master’s degree in psychology, and the joint Ph.D. degree in industrial and operations engineering and psychology from the University of Michigan in 1974, 1979, and 1979, respectively. He is currently a Research Professor with the University of Michigan Transportation Research Institute (UMTRI), Ann Arbor, MI, USA, where he is an Adjunct Professor with the Department of Industrial and Operations Engineering. He is a Leader with the UMTRI Human Factors Group. His research interests include driver interfaces, driver workload, and driver distraction. He was the President of the Human Factors and Ergonomics Society, where he is a member of the Executive Council. He is a Fellow of the Human Factors and Ergonomics Society and the Chartered Institute of Ergonomics and Human Factors.

