
Received May 21, 2020, accepted June 23, 2020, date of publication July 27, 2020, date of current version August 10, 2020.


Digital Object Identifier 10.1109/ACCESS.2020.3011882

Recognizing Emotions Evoked by Music Using CNN-LSTM Networks on EEG Signals

SOBHAN SHEYKHIVAND1, ZOHREH MOUSAVI2, TOHID YOUSEFI REZAII1, AND ALI FARZAMNIA3, (Senior Member, IEEE)
1 Biomedical Engineering Department, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 5166616471, Iran
2 Department of Mechanical Engineering, Faculty of Mechanical Engineering, University of Tabriz, Tabriz 5166616471, Iran
3 Faculty of Engineering, Universiti Malaysia Sabah, Kota Kinabalu 88400, Malaysia
Corresponding authors: Ali Farzamnia (ali-farzamnia@ieee.org) and Tohid Yousefi Rezaii (yousefi@tabrizu.ac.ir)
This work was supported by the Research and Innovation Management Center (PPPI) and the Faculty of Engineering, Universiti Malaysia
Sabah (UMS).

ABSTRACT Emotion is considered to be critical for the genuine interpretation of actions and relationships. Recognizing emotions from EEG signals is also becoming an important computer-aided method for diagnosing emotional disorders in neurology and psychiatry. Another advantage of this approach is that emotions can be recognized without clinical and medical examination, which plays a major role in completing the Brain-Computer Interface (BCI) structure. The ability to recognize emotions without traditional assessment strategies, such as self-assessment tests, is therefore of paramount importance. EEG signals are considered the most reliable technique for emotions recognition because of their non-invasive nature. Manual analysis of EEG signals for emotions recognition is impractical, so an automatic EEG-based method is required. One problem in automatic emotions recognition is the extraction and selection of discriminative features, which generally leads to high computational complexity. This paper presents a new approach to the automatic two-stage classification (negative and positive) and three-stage classification (negative, positive, and neutral) of emotions from EEG signals. In the proposed method, the raw EEG signal is applied directly to a combined convolutional neural network and long short-term memory network (CNN-LSTM), without any feature extraction/selection, which remains challenging in the prior literature. The suggested deep neural network architecture includes 10 convolutional layers and 3 LSTM layers followed by 2 fully connected layers. The LSTM network is fused with the CNN network to increase stability and reduce oscillation. In the present research, we also recorded the EEG signals of 14 subjects under music stimulation for this purpose. The simulation results of the proposed algorithm for the two-stage classification (negative and positive) and three-stage classification (negative, neutral and positive) of emotion over 12 active channels showed 97.42% and 96.78% accuracy and Kappa coefficients of 0.94 and 0.93, respectively. We also compared our proposed LSTM-CNN network (end-to-end) with hand-crafted feature methods based on MLP and DBM classifiers and achieved promising results in comparison with similar approaches. Given the high accuracy of the proposed method, it can be used to develop human-computer interface systems.

INDEX TERMS Emotions Recognition, CNN, LSTM, EEG.

I. INTRODUCTION
Emotion is a state of physiological excitement that one experiences in an emotional situation. This view supports other cognitive appraisal theories, in which six dimensions of cognitive evaluation of a situation are present in the experience of emotion, including commitment, control, certainty, attention, and expectancy of the situation and pleasure [1]. There are also several important definitions and theories about human emotions. According to the James-Lange theory, emotional experience is a response to physiological changes in the body: any emotion is an interpretation of its preceding physiological excitation. Therefore, knowledge of the physiological reaction associated with every emotion is important for emotion analysis [2].

The associate editor coordinating the review of this manuscript and approving it for publication was Haiyong Zheng.


Russell suggested one prevailing hypothesis, which states that emotion consists of two elements, arousal and valence [3]. Arousal denotes the emotional activation level, whereas valence identifies positivity or negativity. This representation depicts emotions systematically and is commonly used as background knowledge in countless studies, including the current work.

A large body of research has been devoted to exploring the neural correlates of emotions to build devices that can interpret emotions. In these efforts, different pictorial [4], musical [5]–[7], music video and video [8]–[10] triggers have been used to elicit emotions. Music listening encompasses a range of psychological processes, such as multimodal perception and integration, syntactic processing and encoding of semantic knowledge, sentiment, concentration, emotion, attention and social cognition [11]. Also, music can bring out strong emotions [12]. Emotional processing involves various structures of the human brain and changes their operation [11]. It also induces other physiological responses that are side effects of brain activity, such as changes in heart rate [13], skin conductance, and body temperature [14]. Researchers have used different modalities such as PET [15], fMRI [16], NIRS [17], and EEG [18] to study the neural correlates of emotion. High temporal resolution, non-invasive availability, portability, and relatively low data processing costs have made EEG an effective candidate for studying the neural correlates of different cognitive functions such as emotion. Various computational methods based on EEG signals have been developed for the observation and analysis of automatic emotions recognition, which are discussed below.

Balconi et al. [19] employed the EEG frequency bands in combination with hemodynamic testing to analyze brain reactions. Sammler et al. [13] found that listening to enjoyable music increases the theta power of EEG signals in the frontal midline region. Balasubramanian et al. [7] also found a higher frontal midline theta band energy for liked songs. They also studied the strength of EEG components obtained by wavelet packet decomposition when listening to liked and disliked music. Zheng et al. [20] used EEG spectral power as a feature for the discovery of emotion-related EEG channels and emotion identification by group sparse canonical correlation analysis. Ozel et al. [21] implemented the multivariate synchrosqueezing transform to extract EEG features for emotional state recognition. For one of the emotional states, they achieved 93% classification accuracy. Hasanzadeh et al. [22] used a nonlinear autoregressive model and a genetic algorithm to predict emotional states from the EEG power range when listening to music. Soleymani et al. [23] developed a multimodal database to investigate emotions recognition. Based on this study, they measured the correlation between the electrodes' EEG spectral power and valence scores and found that higher frequency components on the frontal, parietal, and occipital lobes had a higher correlation with the valence response based on self-evaluation. They also fused the PSD and facial features to improve the classification performance for emotions recognition. Lin et al. [6] validated emotion-specific features based on EEG power spectral changes and assessed the relation between EEG dynamics and emotional states triggered by music. Koelstra et al. [24] introduced the database for emotion analysis using physiological signals (DEAP) for human affective states and, from 32 participants, extracted the spectral power features of five frequency bands. They noticed that the frontal and parietal lobe features can provide discriminative emotion-related information. Chanel et al. [25] employed the Naive Bayes classifier to classify three emotion classes from specific frequency bands at specific electrode locations. Zheng et al. [26] provided the SJTU emotion EEG dataset (SEED) for emotions recognition. During their study, the authors extracted six different spectral features to analyze the neural signatures of positive, negative, and neutral emotions; by this, they found that the EEG patterns were relatively stable within and between sessions at sensitive frequency bands and brain regions. Therefore, changes in the EEG power spectrum can predict distinct emotional states of subjects in different brain regions. Thammasan et al. [27] used physiological signals to extract power spectral densities and fractal dimensions for emotion analysis and found that using less familiar music improved the recognition accuracy regardless of whether the classifier was a support vector machine (SVM) or a multilayer perceptron. Kumagai et al. [28] evaluated the relationship between cortical response and music familiarity. They found that when listening to scrambled or unfamiliar music, the two peaks of the cross-correlation values were significantly larger than for familiar music. Such results collectively indicate that the cortical response to unfamiliar music is greater than to familiar music and is, therefore, highly valuable for BCI classification applications. Zhao et al. [29] analyzed volunteers' EEG signals while they were watching affective films. SVM was used as a classifier after extracting EEG features to recognize human emotions. Lu et al. [30] selected nine musical passages as stimuli and divided them into three categories according to the two-dimensional model of emotions using the variance test and t-test. To evaluate the EEG signals, power spectral densities were derived from different bands, and dimensionality reduction by principal component analysis (PCA) was used to select the features. They found that the accuracy of SVM emotion recognition was higher when using average beta and gamma-band power information than with other bands. Therefore, the beta and gamma bands appear to be effective for emotional discrimination. Yoon and Chung [31] suggested a classifier based on Bayes' theorem that used supervised learning algorithms to classify human emotions based on EEG signals from volunteers. In this work, Fast Fourier Transform (FFT) analysis was used for feature extraction. Li et al. [32] extracted 816 features from 16 electrodes and improved the recognition rate through CFS dimensionality reduction and machine learning. Signals from electrode positions O2, Fp1, F3, T3, and Fp2 were also suggested to be most closely associated with mild depression.


FIGURE 1. EEG recording while listening to music.

Yimin et al. [33] used a Correlation-Based Feature Selection (CFS) model to extract features from EEG signals to classify different emotions (relaxation, happiness, sadness, and grief) in 8 subjects. They used BP, SVM, LDA, and C4.5 classifiers for the classification, and concluded that the C4.5 classifier works better for emotions recognition than the other classifiers. Fatemeh et al. [34] used a fuzzy parallel cascade (FPC) model to predict a continuous subjective assessment of the emotional content of music from the EEG signals of 15 subjects. They also compared the FPC model with LSTM and linear regression (LR) models. The RMSE of their model was about 0.089, lower than that of the other models for estimating both valence and arousal. Panayu et al. [35] used a CNN to extract features from the EEG signals for arousal and valence classification in 12 subjects. Their network architecture consisted of six convolutional layers. They compared their proposed algorithm with SVM and concluded that CNN was better for emotions recognition. Yang et al. [36] used a hybrid neural network that combined a CNN and a Recurrent Neural Network (RNN) for automatic emotions recognition from EEG signals. In their experiments, these researchers used the DEAP benchmark dataset. Also, in their proposed method, 1D EEG signals were converted to 2D EEG frames. The reported accuracies for the valence and arousal classes are 90.80% and 91%, respectively. Yang et al. [37] used a multi-column structured CNN model for emotions recognition. The DEAP database was also used by these researchers. The final accuracy reported for their multi-column model for the classification of valence and arousal is 90% and 90%, respectively. Chen et al. [38] used parallel hybrid convolutional recurrent neural networks to classify the binary emotions of EEG signals. The DEAP database was also used by these researchers. The final accuracy reported for the classification of valence and arousal is 93.64% and 93.26%, respectively. Chen Wei et al. [39] used the Dual-tree Complex Wavelet Transform (DT-CWT) for feature extraction from EEG signals. These researchers also used the Simple Recurrent Units (SRU) network to train the emotion models. This method resulted in average accuracies of 85.65%, 85.45%, and 87.99% for the low/high arousal, valence, and liking classes, respectively.

The challenging step in emotions recognition is to select the discriminative features of the different emotion classes. In most available works, statistical features are first extracted, and then the best discriminatory features are selected manually or using common feature selection methods, which is a time-consuming procedure with high computational complexity. Also, the features that are best for one case may not be optimal for another. Therefore, implementing an algorithm that learns the correct features corresponding to each case is essential. This remains the main contribution of this research.

In the proposed method, pre-processing operations were performed on the data after recording it under auditory stimulation. Then a fusion of a deep convolutional network and an LSTM network is used to train the two-stage classification (positive and negative) and three-stage classification (positive, neutral, and negative) of emotions. The suggested approach can be used as an end-to-end classifier, in which there is no need for a feature selection/extraction method and the correct features of each class are learned automatically by the deep neural network.

The remainder of the paper is structured as follows: the experimental database based on auditory stimulation and the related mathematical background of the CNN and LSTM networks are given in Section II. The proposed method is presented in Section III. The simulation results and a comparison of the proposed method with common methods are given in Section IV; finally, Section V concludes the paper.


TABLE 1. Validation of subjects in the EEG signal recording process for emotions recognition.

II. MATERIALS AND METHODS
In this section, we first introduce the experiments for collecting EEG data based on auditory stimulation at the University of Tabriz. Then, the mathematical background of CNN and LSTM will be provided.

A. EEG COLLECTING
A database of three emotions (positive, neutral and negative) was created to recognize emotions from the EEG signal. The nine-degree version of the Self-Assessment Manikin (SAM) test was used in the testing process. In this test, a score below 3 is considered low and a score over 6 is considered high. Before recording the signal, all participants were asked to sign a consent form (no history of mental illness, no history of epilepsy, no use of psychiatric drugs, normal sleep the night before, no fatty food, no pre-test caffeine, and no pre-test hair washing). Participants were then asked to complete the Beck Depression Inventory (BDI). After completing the questionnaire, participants who scored more than 21 on the test were excluded from the processing and analysis, in accordance with psychological standards. To collect the database, 16 people (6 females and 10 males) between the ages of 20 and 28 were invited to participate in the experiment. Participants' EEG signals were recorded while listening to music. All EEG signals were recorded at 29 °C between 9 and 11 a.m. to ensure that the participants were not tired. Also, to avoid EOG noise, all subjects were asked to keep their eyes closed during the signal recording process. The experiment was conducted using an Encephalan 21-channel EEG recorder and a MacBook Air 2017 (Core i5, 8 GB RAM). All channel data were referenced to the two reference electrodes A1 and A2 and digitized from an international 10-20 system-based 21-channel electrode cap at 250 Hz. Fig. 1 shows the EEG recording while a participant is listening to music. The descriptive results of the BDI and SAM tests are shown in Table 1. For example, according to Table 1, subject 3 was excluded from processing due to a mismatch in the SAM test.


TABLE 2. The order and type of music used to stimulate emotions.

FIGURE 2. Validation of SAM test.

Subject 1 entered the process with a mean dimension of positive emotional capacity of 9 (greater than 6), a mean dimension of negative emotional induction of less than 9, and a BDI score of 16 (16 < 21). Details of the validation results of the SAM test are shown in Fig. 2.

Music stimulation is used to induce positive and negative emotions in the participants. Each music track is played for 1 minute and is followed by a 15-second pause to prevent any transfer of emotion between the music tracks; this pause is also treated as the neutral state in the processing. Headphones are used to play the songs (to strengthen the induction). Immediately after listening to the music stimulus, each subject began filling out the questionnaire again. Overall, the whole test process takes about 720 seconds. The theme and mood of music have a general and physiological effect on everyone through different mental and emotional mechanisms, but the magnitude and severity of this effect depend on the condition of the neurons, the mental history and the habits of the listener. Considering the Iranian nationality of the participants, and following previous studies, a sad theme was chosen for negative emotion induction and a historical theme for positive emotions. Iranian music tracks were used for each emotional theme. Table 2 shows the details of each selected music track and Fig. 3 shows the order of the played music tracks. Fig. 4 also shows samples of EEG signals for the three emotion classes on the C4 and F4 channels of subject 1.

B. DEEP CONVOLUTIONAL NEURAL NETWORK
CNN is a better replacement for the traditional neural network and is very effective for developing classification methods in the field of machine vision [40]. There are two phases of learning in a CNN: the feed-forward and the backpropagation (BP) phases [41].

A CNN consists of three main types of layers, namely convolutional, pooling and fully connected (FC) layers [41]–[43]. The output of a convolutional layer is called a feature map. In this research, the max-pooling layer is used, which selects only the maximum value within each pooling region of a feature map. The dropout technique is used to avoid overfitting; according to a given probability, each neuron is dropped from the network at each training stage, which effectively reduces the size of the network during training. The batch normalization (BN) layer is used to normalize the data inside the network. The BN transformation is given as follows:

\hat{y}^{(l-1)} = \frac{y^{*(l-1)} - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}, \qquad z^{*(l)} = \gamma^{(l)} \hat{y}^{(l-1)} + \beta^{(l)}   (1)

where y^{*(l-1)} is the input vector to the BN layer, z^{*(l)} is the output response of a neuron in layer l, \mu_B = E[y^{*(l-1)}], \sigma_B^2 = \mathrm{var}[y^{*(l-1)}], \varepsilon is a small constant for numerical stability, and \gamma^{(l)} and \beta^{(l)} are the scale and shift parameters, respectively, which are obtained by learning. An activation function is applied after each layer. In this study, ReLU and Softmax are used as the two types of activation functions. ReLU, defined in (2), is used in the convolutional layers and introduces nonlinearity and sparseness into the network structure:

R(d) = \begin{cases} d & \text{if } d > 0 \\ 0 & \text{otherwise} \end{cases}   (2)
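As an illustration, the following is a minimal NumPy sketch of the batch-normalization transform of Eq. (1) and the ReLU of Eq. (2). The array shapes, the epsilon value and the toy data are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def batch_norm(y, gamma, beta, eps=1e-5):
    """Batch-normalization transform of Eq. (1) over a mini-batch.

    y     : (batch, features) inputs to the BN layer
    gamma : (features,) learned scale parameters
    beta  : (features,) learned shift parameters
    """
    mu = y.mean(axis=0)                  # mu_B = E[y]
    var = y.var(axis=0)                  # sigma_B^2 = var[y]
    y_hat = (y - mu) / np.sqrt(var + eps)
    return gamma * y_hat + beta          # z = gamma * y_hat + beta

def relu(d):
    """ReLU activation of Eq. (2): d if d > 0, else 0."""
    return np.maximum(d, 0.0)

# toy usage with random values
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 4))
out = relu(batch_norm(x, gamma=np.ones(4), beta=np.zeros(4)))
```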


FIGURE 4. Part of the EEG signal for positive, negative and neutral stages of C4 and F4 channels for subject 1.

TABLE 3. Size of filters and steps recommended for suggested network.

The probability distribution over the output classes can be calculated by a Softmax activation function. Therefore, the Softmax function is used in the last FC layer and is defined as follows:

\sigma(\delta)_i = \frac{e^{\delta_i}}{\sum_{j=1}^{k} e^{\delta_j}}, \quad i = 1, \ldots, k, \qquad \delta = (\delta_1, \ldots, \delta_k) \in \mathbb{R}^k   (3)

where \delta is the input vector and the output values \sigma(\delta) lie between 0 and 1 and sum to 1 [41]–[43].

C. LONG SHORT-TERM MEMORY (LSTM)
Recurrent neural networks (RNN) are widely used to deal with variable-length sequence inputs. The long-distance history is stored in a recurrent hidden vector, which depends on the previous hidden vector [44]. LSTM is one of the popular variants of the RNN [45]. This network is designed to solve the vanishing-gradient problem and the instability of the RNN. Unlike the RNN, which only computes a weighted sum of the input signals and then passes it through an activation function, each LSTM unit uses a memory cell C_t at time t. The output, or activation, of the LSTM unit is h_t = \Gamma_o \cdot \tanh(C_t), where \Gamma_o is the output gate that controls how much of the memory content is delivered to the output. The output gate is calculated by Equation (4):

\Gamma_o = \sigma(W_o [h_{t-1}, x_t] + b_o)   (4)

In this equation, \sigma is the sigmoid activation function and W_o is the corresponding weight matrix.


FIGURE 5. The block diagram of the proposed method.

The memory cell C_t is updated according to Equation (5):

C_t = \Gamma_f \cdot C_{t-1} + \Gamma_u \cdot \hat{C}_t   (5)

where \hat{C}_t is the new memory content, obtained from Equation (6):

\hat{C}_t = \tanh(W_C [h_{t-1}, x_t] + b_C)   (6)

The amount of the current memory to forget is controlled by the forget gate \Gamma_f, and the amount of new memory content to be added to the memory cell is controlled by the update gate \Gamma_u. These gates are calculated by Equations (7) and (8) [46], [47]:

\Gamma_f = \sigma(W_f [h_{t-1}, x_t] + b_f)   (7)
\Gamma_u = \sigma(W_u [h_{t-1}, x_t] + b_u)   (8)

In this study, we used a fusion of CNN and LSTM networks to classify the two-stage and three-stage emotion problems. We will see that the fusion of these two networks increases the accuracy and reduces the oscillation.
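A minimal NumPy sketch of one LSTM step built directly from Eqs. (4)-(8) is given below. The dimensions, initialization, and helper names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM time step following Eqs. (4)-(8).

    W and b hold the weights/biases of the output, forget, update and
    candidate transformations, each acting on [h_{t-1}, x_t].
    """
    z = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    gamma_o = sigmoid(W["o"] @ z + b["o"])       # output gate, Eq. (4)
    gamma_f = sigmoid(W["f"] @ z + b["f"])       # forget gate, Eq. (7)
    gamma_u = sigmoid(W["u"] @ z + b["u"])       # update gate, Eq. (8)
    C_tilde = np.tanh(W["c"] @ z + b["c"])       # candidate memory, Eq. (6)
    C_t = gamma_f * C_prev + gamma_u * C_tilde   # memory update, Eq. (5)
    h_t = gamma_o * np.tanh(C_t)                 # unit activation h_t
    return h_t, C_t

# toy usage: 8 input features, 16 hidden units
rng = np.random.default_rng(1)
n_in, n_hid = 8, 16
W = {k: rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for k in "ofuc"}
b = {k: np.zeros(n_hid) for k in "ofuc"}
h, C = np.zeros(n_hid), np.zeros(n_hid)
h, C = lstm_step(rng.normal(size=n_in), h, C, W, b)
```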

III. PROPOSED METHOD
Details of the proposed emotions recognition method based on CNN-LSTM are provided in this section. Fig. 5 shows the general structure of the proposed method.

A. PREPROCESSING AND DISCUSSION
First, a notch filter is applied to the data to remove the 50 Hz power-line frequency; second, a first-order Butterworth band-pass filter from 0.5 to 45 Hz is applied to the data. Third, the data were normalized between 0 and 1. Considering that one of the goals of this study was to provide an algorithm based on the minimum number of EEG channels, it was necessary to identify the active channels. For this purpose, in the fourth step, according to [33], [34] and [35], only the Pz, T3, C3, C4, T4, F7, F3, Fz, F4, F8, Fp1, and Fp2 electrodes are used for simulation and data processing. Fig. 6 shows the electrodes selected for simulation and data processing.

FIGURE 6. The electrodes selected for simulation and processing of data.

As can be seen from Fig. 3, the amount of data belonging to the neutral class is smaller than for the positive and negative classes, which causes an imbalance between the data and may lead to an overfitting problem.
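A minimal sketch of the filtering and normalization steps just described, using SciPy, is shown below. The 250 Hz sampling rate comes from Section II-A, while the filter design choices (zero-phase filtering, notch quality factor) are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 250.0  # sampling rate (Hz), as reported in Section II-A

def preprocess_channel(x, fs=FS):
    """Notch at 50 Hz, 0.5-45 Hz Butterworth band-pass, then min-max scaling."""
    # 1) remove the 50 Hz power-line component
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
    x = filtfilt(b_notch, a_notch, x)
    # 2) first-order Butterworth band-pass between 0.5 and 45 Hz
    b_bp, a_bp = butter(N=1, Wn=[0.5, 45.0], btype="bandpass", fs=fs)
    x = filtfilt(b_bp, a_bp, x)
    # 3) normalize the channel to the [0, 1] range
    return (x - x.min()) / (x.max() - x.min())

# toy usage on a synthetic 5-minute channel
x_raw = np.random.default_rng(2).normal(size=int(300 * FS))
x_clean = preprocess_channel(x_raw)
```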


FIGURE 7. The overlap operation for 2-stage and 3-stage classification of emotion.

FIGURE 8. Proposed CNN-LSTM Network Architecture.


FIGURE 9. EEG data allocation in the proposed algorithm for classifying 2-stage and 3-stage of emotions.

FIGURE 10. The proposed network error for (a) 2-stage and (b) 3-stage classification of emotion.

FIGURE 11. The proposed network accuracy for (a) 2-stage and (b) 3-stage classification of emotion.

Furthermore, the lack of balance between the data of each class is a challenging situation, resulting in a bias in the classification outcomes and degraded accuracy. Overlapping windows are therefore used in the proposed approach to solve the challenge of unbalanced classes. In this process, all corresponding epochs of each emotion are concatenated to form a single long signal; then rectangular windows of specific duration and overlap are applied in such a way that the number of epochs collected for each of the emotion classes becomes equal.

Fifth, in the proposed method, 5 minutes of recorded signal per channel (according to Fig. 3) is selected for each emotion. In this case, we have 2 data classes (negative and positive) with 75,000 sampling points for each channel. Then, using the overlap technique to prevent overfitting, the data are split into 8-second segments per channel; depending on the size of the shift, each segment contains 2000 sampling points (8 seconds). For e electrodes, the dimension of the input matrix is (e × 360 × 2000). Since we have 7 subjects and 2 classes (positive and negative), the final dimension of the input matrix to the network is (2 × 7 × 360) × (e × 2000). According to Fig. 3, we also considered the three-stage classification of emotion (positive, negative, neutral). This step (three-stage classification of emotion) is handled in the same way as before; eventually, the final dimension of the input matrix to the network is (3 × 7 × 360) × (e × 2000). Fig. 7 shows this operation for the positive (5 × 60 s = 300 s), negative (5 × 60 s = 300 s) and neutral (8 × 15 s = 120 s) classes of emotion. Because the amount of data belonging to the neutral class is less than for the positive and negative classes, the shift used for the neutral class is smaller, so the overlap between neutral-class windows is higher.

B. PROPOSED NETWORK ARCHITECTURE
In the proposed network architecture, we used a fusion of 10 one-dimensional convolutional layers and 3 LSTM layers. The proposed CNN-LSTM network was implemented in the Python programming language. The CNN architecture was selected as follows: I. A dropout layer. II. A convolutional layer with the nonlinear Leaky-ReLU function, then a max-pooling layer followed by a batch normalization layer. III. The previous step is repeated 9 times. IV. The output of the previous stage is reshaped into a 2D matrix. V. This output is connected to 3 LSTM layers with Leaky-ReLU nonlinear functions in series, followed by a batch normalization layer. VI. Two fully connected layers are used to produce the output layer. Table 3 shows the details of the proposed deep neural network architecture.

FIGURE 12. The confusion matrices for (a) 2-stage and (b) 3-stage classification of emotion.

TABLE 4. F-measure obtained for the 2-stage (positive and negative) and 3-stage (positive, negative and neutral) classification of emotion.

As shown in Table 3, the dimensionality in the hidden layers is reduced from 24,000 (12 × 2000, the number of initial time samples) to 100. Finally, the resulting feature vector is connected to the fully connected layer with the nonlinear Softmax function. Fig. 8 shows the architecture of the suggested network.

C. PROPOSED DNN MODEL TRAINING AND ASSESSMENT
All hyper-parameters of the proposed CNN-LSTM network are specifically tuned to achieve the best rate of convergence, and a trial-and-error procedure is followed to determine them. The training process uses the cross-entropy cost function and the RMSprop optimizer [46] with a learning rate of 0.001 and a batch size of 10. The total number of parameters for the two-class and three-class problems is 167,822 and 167,923, respectively. The total number of samples for the two-class and three-class problems is 5040 and 7560, respectively; of these, 60% are randomly selected for training the network (3024 for two-class and 4536 for three-class) and the remaining 40% (2016 for two-class and 3024 for three-class) are used as the test set. In addition, 10% of the data are used for validation during training. After training the deep neural network, the proposed network model is evaluated using the 40% of the data held out as the test set. According to what has been explained above, Fig. 9 shows the allocation of the EEG data to the training and test sets.
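The block structure of Sections III-B and III-C can be written compactly as below. The paper states only that the network was implemented in Python, so the use of the Keras API here, as well as the filter counts, kernel sizes, pooling placement and LSTM width, are assumptions for illustration (Table 3 gives the authors' actual layer sizes, and the paper applies Leaky-ReLU inside the LSTM layers, which this sketch leaves at the Keras default).

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_lstm(n_channels=12, n_samples=2000, n_classes=2):
    """Sketch of the 10 Conv1D + 3 LSTM + 2 FC structure described above."""
    inputs = keras.Input(shape=(n_samples, n_channels))
    x = layers.Dropout(0.2)(inputs)                          # I. dropout layer
    for i in range(10):                                      # II-III. ten convolutional blocks
        x = layers.Conv1D(32, kernel_size=3, padding="same")(x)
        x = layers.LeakyReLU()(x)
        if i < 5:                                            # pool only in the first blocks so a
            x = layers.MaxPooling1D(pool_size=2)(x)          # usable sequence length remains
        x = layers.BatchNormalization()(x)
    x = layers.LSTM(64, return_sequences=True)(x)            # IV-V. three stacked LSTM layers
    x = layers.LSTM(64, return_sequences=True)(x)
    x = layers.LSTM(64)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(100, activation="relu")(x)              # VI. two fully connected layers
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=0.001),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
# model.fit(x_train, y_train, batch_size=10, epochs=200, validation_split=0.1)
```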
FIGURE 13. The bar-chart diagram for 2-stage and 3-stage classification of emotion.

IV. SIMULATION RESULTS
In this section, the simulation results of the proposed algorithm are presented for automatic emotion recognition. A laptop with 8 GB of RAM and a 2.4 GHz Core i7 CPU was used to simulate the proposed algorithm. Fig. 10 shows the loss function of the proposed network for the two-stage classification (negative and positive) and three-stage classification (negative, neutral and positive) of emotion. As shown in Fig. 10(a), the network error for the two-stage classification of emotion decreases as the number of iterations increases and reaches its steady-state value at about the 130th iteration; Fig. 10(b) shows that the steady state for the three-stage classification of emotion is reached at about the 145th iteration. Fig. 11 shows the accuracy of the proposed method for the two-stage classification (negative and positive) and three-stage classification (negative, neutral and positive) of emotion over 400 iterations on the validation data; the accuracy reaches 97.42% and 96.78%, respectively, at about 200 iterations. To further analyze the suggested method, the confusion matrices for the two-stage and three-stage classification are given in Fig. 12. The accuracy obtained from the proposed method is also promising.


FIGURE 14. The t-SNE chart of the proposed method for (a) 2-stage and (b) 3-stage classification of emotion.

FIGURE 15. The performance of the suggested method compared to the CNN, MLP and DBM networks for
classifying 3-stage of emotion.

Fig. 13 shows the bar-chart diagram of the sensitivity, specificity, and accuracy of the two-stage and three-stage classification; as this figure shows, the precision, sensitivity and specificity values of the two-stage and three-stage classification are very promising. Table 4 shows the F-measure obtained for the 2-stage (positive and negative) and 3-stage (positive, negative, and neutral) classification of emotion. According to Table 4, the F-measure for the 2-class and 3-class problems is 97.42% and 95.24%, respectively. Fig. 14 shows the t-SNE charts of the raw signal, the Conv5 layer, and the Softmax layer for the two-stage and three-stage classification of emotion. As is clear from the last layer, almost all the samples of the evaluation set are separated, indicating the good efficiency of the suggested method for both the two-stage and three-stage classification.

In order to show the performance of the proposed LSTM-CNN method with different data types as input, the classification accuracy is also obtained using other common methods for 3-stage emotion recognition. In this regard, the raw time-domain data and several manual features extracted from the time-domain data, along with DBM and MLP classifiers, are selected as comparative methods [43], [47], [48]. The number of hidden layers is set to 3 for the DBM and MLP, and the learning rate is chosen as 0.001. Also, for the CNN, the proposed architecture in Table 3 is used without the LSTM layers. The minimum, maximum, skewness, crest factor, variance, root mean square (RMS), mean, and kurtosis are chosen as the hand-crafted time-domain features (time features).
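A sketch of these time-domain features for a single 8-second epoch is given below; SciPy's skew and kurtosis are used, and the exact feature ordering is an assumption.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def time_domain_features(epoch):
    """Hand-crafted time features of one 1-D epoch (e.g. 2000 samples)."""
    rms = np.sqrt(np.mean(epoch ** 2))
    crest = np.max(np.abs(epoch)) / rms          # crest factor = peak / RMS
    return np.array([
        epoch.min(),        # minimum
        epoch.max(),        # maximum
        skew(epoch),        # skewness
        crest,              # crest factor
        epoch.var(),        # variance
        rms,                # root mean square
        epoch.mean(),       # mean
        kurtosis(epoch),    # kurtosis
    ])

# toy usage
feats = time_domain_features(np.random.default_rng(4).normal(size=2000))
```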


TABLE 5. Comparison of the suggested network’s computational complexity with CNN, MLP and DBM for classifying 3-stage of emotion in 180 iterations.

TABLE 6. Kappa coefficient of the proposed method for automatic 2-stage and 3-stage classification of emotion.

FIGURE 16. The ROC diagram for 2-stage and 3-stage classification of emotion.

FIGURE 17. Accuracy of the proposed network versus SNR in additive white Gaussian noise for emotions recognition.

The classification accuracies of the different methods, based on feature learning from the raw data and on the manual features, are presented in Fig. 15. The accuracy of CNN, DBM and MLP reaches 90%, 79% and 73%, respectively, after 180 iterations. As can be seen from Fig. 15, the performance of the proposed network is promising compared to CNN, DBM and MLP, and the proposed algorithm converges to the desired value faster. Also, the computational complexity, in terms of running time for the training and test phases, is given for the proposed network (for 3 stages) as well as for the CNN, DBM and MLP networks in Table 5. As can be seen from Table 5, the running time of the proposed network is approximately comparable to that of DBM and CNN, while MLP is faster than the proposed network, CNN and DBM; nevertheless, this comes at the cost of reduced classification accuracy.

To further examine the efficiency of the proposed method, the ROC diagram is provided in Fig. 16. Common approaches such as the wavelet transform, empirical mode decomposition, etc., have been used in most previous works to extract the essential features of the signal, which involves some common problems regarding the feature-extraction parameters, such as selecting the mother wavelet type, the number of decomposition levels, and so on. One of the most important advantages of the proposed method compared to the other methods is that feature extraction is performed automatically on the basis of deep learning and no feature selection procedure is needed. Table 6 shows the kappa coefficient of the proposed method for classifying 2-stage and 3-stage of emotion.

In order to evaluate the performance of the proposed, CNN, DBM and MLP methods against observation noise, white Gaussian noise with SNR from −4 to 20 dB is added as measurement noise to the EEG signals, and the classification accuracy of all methods is reported in Fig. 17. As can be seen, the classification performance of the proposed method is considerably robust to measurement noise over a wide range of SNR, so that the accuracy is still more than 90% for SNR from −4 to 20 dB.

V. CONCLUSION
In this work, a new method for emotions recognition is presented using a fusion of the CNN and LSTM networks. The proposed network consists of 10 CNN and 3 LSTM layers. As was observed, the fusion of these networks increases the accuracy and stability of the proposed algorithm. We achieved 97.42% and 95.23% accuracy for the 2-stage and 3-stage classification of emotion using 12 active channels, and Cohen's kappa coefficients for the 2-stage and 3-stage problems are 0.96 and 0.93, respectively, which is very promising compared to previous emotions recognition approaches. We also compared our proposed end-to-end LSTM-CNN network with hand-crafted methods based on MLP and DBM classifiers and achieved promising results compared to similar methods; it is also shown that the proposed network is robust to measurement noise levels of as much as 1 dB. Despite these contributions, this work has some limitations, as with other previous studies.


First, the proposed network parameters, such as the learning rate and the training algorithm parameters, were selected mostly based on experience or by trial and error. Therefore, it would be better to develop a more systematic method for selecting the appropriate parameters. Second, the training time of the proposed algorithm is relatively high, which can be addressed using graphical processing unit (GPU) systems. In addition, deep learning depends more heavily on powerful computation than traditional machine learning methods and spends much more time on training in general. In order to reduce the computational cost, future research could consider introducing pre-trained models or combining transfer learning methods to accelerate model training. We also intend to use a larger number of emotional states for classification purposes in future studies. The proposed method is also expected to be used in BCI applications.

REFERENCES
[1] A. Goldenberg, D. Garcia, E. Halperin, and J. J. Gross, ''Collective emotions,'' Current Directions Psychol. Sci., vol. 29, no. 2, pp. 154–160, 2020.
[2] T. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, ''Electron spectroscopy studies on magneto-optical media and plastic substrate interface,'' IEEE Transl. J. Magn. Jpn., vol. 2, no. 8, pp. 740–741, Aug. 1987.
[3] J. A. Russell, ''A circumplex model of affect,'' J. Personality Social Psychol., vol. 39, no. 6, p. 1161, Dec. 1980.
[4] P. C. Petrantonakis and L. J. Hadjileontiadis, ''Emotion recognition from EEG using higher order crossings,'' IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 2, pp. 186–197, Mar. 2010.
[5] A. M. Bhatti, M. Majid, S. M. Anwar, and B. Khan, ''Human emotion recognition and analysis in response to audio music using brain signals,'' Comput. Hum. Behav., vol. 65, pp. 267–275, Dec. 2016.
[6] Y.-P. Lin, C.-H. Wang, T.-P. Jung, T.-L. Wu, S.-K. Jeng, J.-R. Duann, and J.-H. Chen, ''EEG-based emotion recognition in music listening,'' IEEE Trans. Biomed. Eng., vol. 57, no. 7, pp. 1798–1806, Jul. 2010.
[7] G. Balasubramanian, A. Kanagasabai, J. Mohan, and N. P. G. Seshadri, ''Music induced emotion using wavelet packet decomposition—An EEG study,'' Biomed. Signal Process. Control, vol. 42, pp. 115–128, Apr. 2018.
[8] Y. Ding, X. Hu, Z. Xia, Y.-J. Liu, and D. Zhang, ''Inter-brain EEG feature extraction and analysis for continuous implicit emotion tagging during video watching,'' IEEE Trans. Affect. Comput., early access, Jun. 22, 2018, doi: 10.1109/TAFFC.2018.2849758.
[9] F. Noroozi, M. Marjanovic, A. Njegus, S. Escalera, and G. Anbarjafari, ''Audio-visual emotion recognition in video clips,'' IEEE Trans. Affect. Comput., vol. 10, no. 1, pp. 60–75, Jan. 2019.
[10] M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic, ''A multimodal database for affect recognition and implicit tagging,'' IEEE Trans. Affect. Comput., vol. 3, no. 1, pp. 42–55, Jan. 2012.
[11] S. Koelsch, Brain and Music. Hoboken, NJ, USA: Wiley, 2012.
[12] T. Eerola and J. K. Vuoskoski, ''A comparison of the discrete and dimensional models of emotion in music,'' Psychol. Music, vol. 39, no. 1, pp. 18–49, Jan. 2011.
[13] D. Sammler, M. Grigutsch, T. Fritz, and S. Koelsch, ''Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music,'' Psychophysiology, vol. 44, no. 2, pp. 293–304, Mar. 2007.
[14] L.-O. Lundqvist, F. Carlsson, P. Hilmersson, and P. N. Juslin, ''Emotional responses to music: Experience, expression, and physiology,'' Psychol. Music, vol. 37, no. 1, pp. 61–90, Jan. 2009.
[15] A. J. Blood and R. J. Zatorre, ''Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion,'' Proc. Nat. Acad. Sci. USA, vol. 98, no. 20, pp. 11818–11823, Sep. 2001.
[16] K. Mueller, T. Fritz, T. Mildner, M. Richter, K. Schulze, J. Lepsien, M. L. Schroeter, and H. E. Möller, ''Investigating the dynamics of the brain response to music: A central role of the ventral striatum/nucleus accumbens,'' NeuroImage, vol. 116, pp. 68–79, Aug. 2015.
[17] S. Moghimi, A. Kushki, S. Power, A. M. Guerguerian, and T. Chau, ''Automatic detection of a prefrontal cortical response to emotionally rated music using multi-channel near-infrared spectroscopy,'' J. Neural Eng., vol. 9, no. 2, Apr. 2012, Art. no. 026022.
[18] S. M. Alarcao and M. J. Fonseca, ''Emotions recognition using EEG signals: A survey,'' IEEE Trans. Affect. Comput., vol. 10, no. 3, pp. 374–393, Jul. 2019.
[19] M. Balconi, E. Grippa, and M. E. Vanutelli, ''What hemodynamic (fNIRS), electrophysiological (EEG) and autonomic integrated measures can tell us about emotional processing,'' Brain Cognition, vol. 95, pp. 67–76, Apr. 2015.
[20] W. Zheng, ''Multichannel EEG-based emotion recognition via group sparse canonical correlation analysis,'' IEEE Trans. Cognit. Develop. Syst., vol. 9, no. 3, pp. 281–290, Sep. 2017.
[21] P. Ozel, A. Akan, and B. Yilmaz, ''Synchrosqueezing transform based feature extraction from EEG signals for emotional state prediction,'' Biomed. Signal Process. Control, vol. 52, pp. 152–161, Jul. 2019.
[22] F. Hasanzadeh and S. Moghimi, ''Emotion estimation during listening to music by EEG signal and applying NARX model and genetic algorithm,'' presented at the Nat. Conf. Technol., Energy Data Elect. Comput. Eng., 2015.
[23] M. Soleymani, S. Asghari-Esfeden, Y. Fu, and M. Pantic, ''Analysis of EEG signals and facial expressions for continuous emotion detection,'' IEEE Trans. Affect. Comput., vol. 7, no. 1, pp. 17–28, Jan. 2016.
[24] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, ''DEAP: A database for emotion analysis using physiological signals,'' IEEE Trans. Affect. Comput., vol. 3, no. 1, pp. 18–31, Jan. 2012.
[25] G. Chanel, J. Kronegg, D. Grandjean, and T. Pun, ''Emotion assessment: Arousal evaluation using EEG's and peripheral physiological signals,'' in Proc. Int. Workshop Multimedia Content Represent., Classification Secur. Berlin, Germany: Springer, 2006, pp. 530–537.
[26] W.-L. Zheng, J.-Y. Zhu, and B.-L. Lu, ''Identifying stable patterns over time for emotion recognition from EEG,'' IEEE Trans. Affect. Comput., vol. 10, no. 3, pp. 417–429, Jul. 2019.
[27] N. Thammasan, K. Moriyama, K.-I. Fukui, and M. Numao, ''Familiarity effects in EEG-based emotion recognition,'' Brain Informat., vol. 4, no. 1, pp. 39–50, Mar. 2017.
[28] Y. Kumagai, M. Arvaneh, and T. Tanaka, ''Familiarity affects entrainment of EEG in music listening,'' Frontiers Hum. Neurosci., vol. 11, p. 384, Jul. 2017.
[29] G. Zhao, Y. Zhang, and Y. Ge, ''Frontal EEG asymmetry and middle line power difference in discrete emotions,'' Frontiers Behav. Neurosci., vol. 12, p. 225, Nov. 2018.
[30] J. Lu, D. Wu, H. Yang, C. Luo, C. Li, and D. Yao, ''Scale-free brain-wave music from simultaneously EEG and fMRI recordings,'' PLoS ONE, vol. 7, no. 11, Nov. 2012, Art. no. e49773.
[31] H. J. Yoon and S. Y. Chung, ''EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm,'' Comput. Biol. Med., vol. 43, no. 12, pp. 2230–2237, Dec. 2013.
[32] X. Li, B. Hu, S. Sun, and H. Cai, ''EEG-based mild depressive detection using feature selection methods and classifiers,'' Comput. Methods Programs Biomed., vol. 136, pp. 151–161, Nov. 2016.
[33] F. Hasanzadeh, M. Annabestani, and S. Moghimi, ''Continuous emotion recognition during music listening using EEG signals: A fuzzy parallel cascades model,'' 2019, arXiv:1910.10489. [Online]. Available: http://arxiv.org/abs/1910.10489
[34] Y. Hou and S. Chen, ''Distinguishing different emotions evoked by music via electroencephalographic signals,'' Comput. Intell. Neurosci., vol. 2019, Mar. 2019, Art. no. 3191903.
[35] P. Keelawat, N. Thammasan, M. Numao, and B. Kijsirikul, ''Spatiotemporal emotion recognition using deep CNN based on EEG during music listening,'' 2019, arXiv:1910.09719. [Online]. Available: http://arxiv.org/abs/1910.09719
[36] Y. Yang, Q. Wu, M. Qiu, Y. Wang, and X. Chen, ''Emotion recognition from multi-channel EEG through parallel convolutional recurrent neural network,'' in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2018, pp. 1–7.
[37] H. Yang, J. Han, and K. Min, ''A multi-column CNN model for emotion recognition from EEG signals,'' Sensors, vol. 19, no. 21, p. 4736, Oct. 2019.
[38] J. Chen, D. Jiang, Y. Zhang, and P. Zhang, ''Emotion recognition from spatiotemporal EEG representations with hybrid convolutional recurrent neural networks via wearable multi-channel headset,'' Comput. Commun., vol. 154, pp. 58–65, Mar. 2020.
[39] C. Wei, L.-L. Chen, Z.-Z. Song, X.-G. Lou, and D.-D. Li, ''EEG-based emotion recognition using simple recurrent units network and ensemble learning,'' Biomed. Signal Process. Control, vol. 58, Apr. 2020, Art. no. 101756.


[40] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016. [Online]. Available: http://www.deeplearningbook.org
[41] S. L. Hung and H. Adeli, ''Parallel backpropagation learning algorithms on CRAY Y-MP8/864 supercomputer,'' Neurocomputing, vol. 5, no. 6, pp. 287–302, Nov. 1993.
[42] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, ''Improving neural networks by preventing co-adaptation of feature detectors,'' 2012, arXiv:1207.0580. [Online]. Available: http://arxiv.org/abs/1207.0580
[43] Z. Mousavi, T. Yousefi Rezaii, S. Sheykhivand, A. Farzamnia, and S. N. Razavi, ''Deep convolutional neural network for classification of sleep stages from single-channel EEG signals,'' J. Neurosci. Methods, vol. 324, Aug. 2019, Art. no. 108312.
[44] A. Graves, ''Generating sequences with recurrent neural networks,'' 2013, arXiv:1308.0850. [Online]. Available: http://arxiv.org/abs/1308.0850
[45] S. Hochreiter and J. Schmidhuber, ''Long short-term memory,'' Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
[46] O. Wichrowska, N. Maheswaranathan, M. W. Hoffman, S. G. Colmenarejo, M. Denil, N. de Freitas, and J. Sohl-Dickstein, ''Learned optimizers that scale and generalize,'' in Proc. 34th Int. Conf. Mach. Learn., vol. 70, 2017, pp. 3751–3760.
[47] R. Salakhutdinov and G. Hinton, ''Deep Boltzmann machines,'' in Artificial Intelligence and Statistics. Clearwater Beach, FL, USA, Apr. 2009, pp. 448–455.
[48] Y.-L. Hsu, Y.-T. Yang, J.-S. Wang, and C.-Y. Hsu, ''Automatic sleep stage recurrent neural classifier using energy features of EEG signals,'' Neurocomputing, vol. 104, pp. 105–114, Mar. 2013.

SOBHAN SHEYKHIVAND received the B.Sc. degree in aviation-fighter pilot from the University of Shahid Satari, Tehran, Iran, in 2014, the B.Sc. degree in electronic engineering from the Islamic Azad University of Urmia, Urmia, Iran, in 2016, and the M.Sc. degree in biomedical engineering from the University of Tabriz, Tabriz, Iran, in 2018, where he is currently pursuing the Ph.D. degree in biomedical engineering. His current research interests include biomedical signal processing, data compression, and compressed sensing.

ZOHREH MOUSAVI received the B.Sc. degree in mechanical engineering from the University of Yasuj, Yasuj, Iran, in 2012, and the M.Sc. degree in mechanical engineering from the University of Vali Asr Rafsanjan, Kerman, Iran, in 2015. She is currently pursuing the Ph.D. degree in mechanical engineering with the University of Tabriz, Tabriz, Iran. Her current research interests include vibration and biomedical signal processing, compressed sensing, mechanical systems, structural health monitoring (SHM), machine learning, neural networks, and deep learning.

TOHID YOUSEFI REZAII received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering (communication) from the University of Tabriz, Tabriz, Iran, in 2006, 2008, and 2012, respectively. He is currently with the Faculty of Electrical and Computer Engineering, University of Tabriz. His current research interests include biomedical signal processing, data compression, compressed sensing, statistical signal processing, pattern recognition-statistical learning, and adaptive filters.

ALI FARZAMNIA (Senior Member, IEEE) received the B.Eng. degree in electrical engineering (telecommunication engineering) from the Islamic Azad University, Urmia, Iran, in 2005, the M.Sc. degree in electrical engineering (telecommunication engineering) from the University of Tabriz, in 2008, and the Ph.D. degree in electrical engineering (telecommunication engineering) from Universiti Teknologi Malaysia (UTM), in 2014. He is appointed as a Senior Lecturer (Assistant Professor) in the Electrical and Electronic Engineering program, Faculty of Engineering, Universiti Malaysia Sabah (UMS), since 2014. He is Chartered Engineer U.K. (C.Eng.) and a member of IET. His research interests are wireless communication, signal processing, network coding, information theory, and bio-medical signal processing.
