
This article has been accepted for publication in IEEE Access. This is the author's version, which has not been fully edited; content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2024.3385663

Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2023.DOI

A Modulation Classification Algorithm Based on Feature-Embedding Graph Convolutional Network
HUALI ZHU, HUA XU, YUNHAO SHI, YUE ZHANG AND LEI JIANG
Information and Navigation College, Air Force Engineering University, Xi’an 710077, China
Corresponding author: Yue Zhang (e-mail: y.zhang@nwpu.edu.cn).
This research was funded by the National Science Foundation of China, grant number 61906156 and Innovation and Practice Fund for
Graduate Student of Air Force Engineering University, grant number CXJ2021075.

ABSTRACT Deep learning is widely used in modulation classification to reduce labor and improve efficiency. The graph convolutional network (GCN) is a type of feature extraction network for graph data. Considering the signals as graph nodes and the similarity between signals as edges, the GCN propagates node information to similar nodes along the edges. The GCN extracts more features and achieves better classification results, particularly for characterless examples. In this paper, we propose a modulation classification algorithm based on a feature-embedding GCN (FE-GCN). It comprises three parts: a feature-embedding network (FEN), a similarity adjacent matrix calculation network (SAMCN), and a graph convolutional classification network (GCCN). The FEN embeds the signal data into a one-dimensional feature vector. The SAMCN calculates the similarity of all signal feature vectors to a matrix using a single convolutional neural network (CNN). The GCCN extracts the final features and classifies the signals in a graph. Simulation results on the public dataset RML2016.10A show that the FE-GCN performs effectively and outperforms a series of advanced deep-learning methods.

INDEX TERMS Feature Embedding, Modulation Recognition, GCN

I. INTRODUCTION

Signal modulation classification is required to identify adversary transmitting units for signal jamming and recovery in electronic warfare, surveillance, and threat analysis. Thus, automatic modulation classification (AMC), which includes feature extraction and classification, has been developed for military applications. In contrast to early modulation classification, in which engineers processed signals manually using signal observation and processing equipment, AMC processes signals automatically for efficiency [1].

Traditional AMC is broadly divided into two categories: likelihood-based (LB) and feature-based (FB) [2]. The common approach of an LB classifier consists of two steps: evaluating the likelihood of each modulation hypothesis against the observed signal samples, and comparing the likelihoods of the different hypotheses to conclude the classification. Likelihood functions that optimize and select a threshold improve classification performance but also require more tentative effort [3], [4]. Decision-making [5] determines the maximum likelihood by testing every pair of hypotheses among all candidates, which is easier to implement and does not require carefully designed thresholds. The FB classifier relies on manual feature extraction [6], and a high-quality classifier, such as a Support Vector Machine (SVM) or Decision Trees [7], is used to obtain better classification performance.

Compared with traditional AMC, deep-learning AMC (DL-AMC) can automatically extract data features and perform classification while improving accuracy and reducing the requirements of professional data processing. DL-AMC can be divided into statistical feature-based algorithms [8], [9], image processing-based algorithms [10], [11], and time-domain-waveform-based algorithms [12], [13]. In terms of network types, DL-AMC mainly uses an improvement or a combination of convolutional neural networks (CNN) and recurrent neural networks (RNN) [14]-[16]. In 2021, Zhang used the ResNeXt [14] framework, an improved CNN combined with four adaptive attention mechanism modules, to classify modulation signals. They adopted time-frequency representation data as inputs, and a transfer learning strategy was used for pre-training.

The ResNeXt model achieved the highest recognition accuracy of 96.10% on the RadioML2016.10B dataset and 99.70% for the 10 modulation modes of the RadioML2018.01A dataset with a high signal-to-noise ratio (SNR). In the same year, Zhang [15] extracted both temporal and spatial features of modulation signals using a CNN and a bidirectional long short-term memory (Bi-LSTM) network. Combined with the correlation between the radio signal channels, they managed to improve the recognition accuracy to 92.68% with a high SNR. In particular, it reduced the difficulty of identifying multiple Quadrature Amplitude Modulation (MQAM) signals and significantly improved the recognition accuracy of QAM16 and QAM64 signals. Simultaneously, Weng proposed a deep cascade network architecture (DCNA) [16] to address the difficulty of AMC under different SNRs. It includes an SNR Estimation Network (SEN) to identify the SNR level of the samples and a Modulation Recognition Clustering Network (MRCN) that contains several subnetworks for further modulation recognition under different SNR settings. Notably, DCNA does not exploit specific network structures and can be generalized to various network models through improvements.

In recent years, Graph Convolutional Networks (GCNs) have gained widespread popularity for processing graph data by transmitting feature information between adjacent nodes through a relation matrix. They have been successfully applied to various domains such as text classification, relation extraction, and image classification [17]. In 2023, Zhao [18] proposed a learnable Graph Convolutional Network based on Feature Fusion (LGCN-FF) to learn the underlying features and explore more discriminative graph fusion techniques; LGCN-FF outperformed several state-of-the-art methods in multiview semi-supervised classification tasks. Xiao [19] introduced a Dual Fusion-Propagation Graph Neural Network (DFP-GNN) for deep multiview clustering tasks, achieving significant results compared with several state-of-the-art algorithms on popular databases. Wu [20] employed an interpretable multiview Graph Convolutional Network (IMvGCN) for a multiview semi-supervised learning task, demonstrating its superiority over other state-of-the-art methods through comprehensive experiments.

In 2020, Liu [21] first applied a GCN to signal-modulation classification. A feature extraction network (FECNN) was used to extract the signal features, and a graph mapping network (GMCNN) was used to concatenate the extracted features and label information. An adjacency matrix was then constructed by calculating the distance between the concatenated features to create a graph. It achieved a higher recognition accuracy than the CNN and K-nearest neighbor (KNN) algorithms, particularly for a low SNR. In 2021, Xuan [22] proposed an Adaptive Visibility Graph (AVG) to extract time-series signal information and map each signal into a graph, and used a GNN model for feature extraction to achieve end-to-end signal recognition. On the open datasets RML2016.10A and RML2016.10B, the average recognition rates were 62.93% and 64.58%, respectively, which were higher than those of the other methods.

Currently, GCNs are not widely used in modulation classification, and existing approaches mainly adopt few-shot methods, which are complicated to calculate. The motivation of this paper is to apply a feature-embedding GCN (FE-GCN) to modulation recognition. It embeds signals using a simple CNN or LSTM to simplify the inputs, and computes the similarity matrix using another CNN to construct a graph. The GCN extracts features from both the signal and other similar signals to improve the classification accuracy. The main contributions of this paper are as follows:

1) This novel GCN model for modulation classification combines different embedding networks to fully extract the signal features and compensate for the classification efficiency of the GCN.
2) A similarity matrix is used to transfer the characteristic information between signals to determine characterless examples.
3) The FE-GCN simplifies the application of the GCN to AMC and does not require the few-shot method or label information in feature extraction and classification, which generalizes the GCN to universal scenarios.

The remainder of this paper is organized as follows. Section II briefly describes the principle of graph convolution, and Section III presents the structure of the proposed FE-GCN model. Experiments on FE-GCN utilization for modulation classification are discussed in Section IV, along with the simulation results and performance analysis. Finally, Section V concludes the paper.

II. PRINCIPLE OF GRAPH CONVOLUTIONAL NEURAL NETWORK

For a continuous function, convolution is a mathematical sum [23] $h$ of functions $f$ and $g$. It is widely used in signal processing, and the convolved signal is smoother than the original data. For discrete data, the convolution function $h(x, y)$ is the sum of the multiplication of the discrete data $f(x, y)$ and local filters $g(m, n)$ with a size of $(m, n)$. It is primarily used in image processing and is expressed as

$h(x, y) = f(x, y) * g(m, n) = \sum_{m,n} f(x - m, y - n)\, g(m, n)$.  (1)
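As a concrete illustration of Eq. (1), the short NumPy sketch below (ours, not code from the paper) applies a 2x2 local filter to a toy 4x4 array by direct summation; library routines such as scipy.signal.convolve2d compute the same quantity far more efficiently.

```python
import numpy as np

def conv2d(f, g):
    """Discrete 2-D convolution of Eq. (1): h(x, y) = sum_{m,n} f(x-m, y-n) g(m, n)."""
    M, N = g.shape
    H, W = f.shape
    h = np.zeros_like(f, dtype=float)
    for x in range(H):
        for y in range(W):
            for m in range(M):
                for n in range(N):
                    # only accumulate terms whose shifted index stays inside f
                    if 0 <= x - m < H and 0 <= y - n < W:
                        h[x, y] += f[x - m, y - n] * g[m, n]
    return h

f = np.arange(16.0).reshape(4, 4)        # toy "image"
g = np.array([[1.0, 0.0], [0.0, -1.0]])  # 2x2 local filter
print(conv2d(f, g))
```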
The convolution theorem applies to data in Euclidean space, such as images, but not to graph-structured data in a non-Euclidean space. A graph $G(V, E)$ with $N$ nodes consists of a node feature set $V = \{v_i\}$ and an edge set $E = \{e_{ij}\}, i, j = 1, 2, 3, ..., N$. Edges represent the relationships between the nodes. The purpose of graph convolution is to find a convolution method for graph data that is similar to the convolution of images. Following the principles of signal processing, early graph convolution transforms graph data into the frequency domain by the Fourier transform and then performs a convolution operation there; this is known as spectral graph convolution.

The output of the spectral graph convolution is transformed back to a graph using an inverse Fourier transform. The spectral method is complete in theory, complex in terms of calculation, and difficult to implement. Therefore, researchers later considered its practicability and replaced it with an approximate convolution kernel as a representative method.

A. SPECTRAL GRAPH CONVOLUTION

As mentioned previously, $E$ is the edge set of a graph with $N$ nodes, which is expressed as an adjacency matrix of size $(N, N)$, and $e_{ij}$ represents the relationship between nodes $i$ and $j$. In an undirected graph, the graph Laplacian matrix $L$ [24] is the difference between the diagonal degree matrix $D$ and the weighted adjacency matrix $W$; it describes the smoothness of the signal and defines the derivative of the graph as

$L = D - W, \quad d_{ii} = \sum_j w_{ij}$,  (2)

which is normalized to $L = I - D^{-1/2} W D^{-1/2}$, where $I$ is the identity matrix. The Laplacian is decomposed as $L = U \Lambda U^\top$ with a Fourier basis $U = \{u_l\}, l = 1, ..., N$, and eigenvalues $\Lambda = [\lambda_1, ..., \lambda_N]$.

A signal feature vector $x$ is converted to $\hat{x} = U^\top x$ by the Fourier transform, which represents its feature mapping under the orthogonal basis in the spectral domain [25]; the inverse transformation is $x = U \hat{x}$. The convolution of a signal $x$ and a graph convolution kernel $y_G$ is represented as

$x * y_G = U((U^\top x) \odot (U^\top y))$,  (3)

where $\odot$ is the Hadamard product and $U^\top y$ is the convolution kernel in the spectral domain. However, the Laplacian decomposition has high computational complexity, and the graph mapping does not have local feature convergence.
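The following NumPy sketch (our illustration, not code from the paper) walks through Eqs. (2) and (3) on a toy four-node graph: it forms the Laplacian, normalizes it, obtains the Fourier basis U by eigendecomposition, and filters a graph signal in the spectral domain.

```python
import numpy as np

# Toy undirected graph with symmetric weighted adjacency matrix W (an assumption).
N = 4
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

D = np.diag(W.sum(axis=1))                        # degree matrix, d_ii = sum_j w_ij
L = D - W                                         # combinatorial Laplacian, Eq. (2)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_norm = np.eye(N) - D_inv_sqrt @ W @ D_inv_sqrt  # normalized Laplacian

lam, U = np.linalg.eigh(L_norm)                   # L = U diag(lam) U^T, Fourier basis U

x = np.random.randn(N)                            # a graph signal, one value per node
y = np.random.randn(N)                            # a filter defined on the nodes
x_hat = U.T @ x                                   # graph Fourier transform
y_hat = U.T @ y                                   # filter in the spectral domain
x_conv = U @ (x_hat * y_hat)                      # Eq. (3): U((U^T x) ⊙ (U^T y))
print(x_conv)
```

The eigendecomposition is the expensive step here, which is precisely the practical objection the approximate methods below are designed to avoid.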
B. FAST APPROXIMATE CONVOLUTION ON GRAPHS

To obtain a fast approximate convolution model, Kipf [26] proposed a multi-layer GCN with the following layer-wise propagation rule:

$H^{(l+1)} = \sigma(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)})$,  (4)

where $\tilde{A} = A + I_N$ is the adjacency matrix of an undirected graph $G$ with added self-connections, $I_N$ is the identity matrix, $\tilde{d}_{ii} = \sum_j \tilde{a}_{ij}$ is the sum over all direct neighbors of node $i$, and $W^{(l)}$ is a layer-specific trainable weight matrix. $\sigma(\cdot)$ represents an activation function such as $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$. $H^{(l)}$ represents the feature matrix of layer $l$, and $H^{(0)} = X$.
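A minimal PyTorch rendering of the propagation rule in Eq. (4) is sketched below; it is a generic Kipf-Welling layer for illustration, not the FE-GCN layer defined later in Section III.

```python
import torch

def gcn_layer(H, A, W, act=torch.relu):
    """One propagation step of Eq. (4): H' = σ(D̃^{-1/2} Ã D̃^{-1/2} H W)."""
    N = A.shape[0]
    A_tilde = A + torch.eye(N)                 # add self-loops: Ã = A + I_N
    d_tilde = A_tilde.sum(dim=1)               # d̃_ii = sum_j ã_ij
    D_inv_sqrt = torch.diag(d_tilde.pow(-0.5)) # D̃^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    return act(A_hat @ H @ W)

A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy adjacency
H = torch.randn(3, 8)                                         # H^(0) = X: 3 nodes, 8 features
W = torch.randn(8, 16)                                        # layer weight W^(0)
print(gcn_layer(H, A, W).shape)                               # torch.Size([3, 16])
```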
III. MODULATION RECOGNITION ALGORITHM BASED ON FE-GCN

The FE-GCN mainly consists of three parts: a Feature Embedding Network (FEN), a Similarity Adjacent Matrix Calculation Network (SAMCN), and a Graph Convolutional Classification Network (GCCN). The input is a set of in-phase/quadrature (I/Q) signals; each signal is a one-dimensional feature whose length is the number of sampling points $L$ and whose width is two (I data and Q data).

A. FEATURE EMBEDDING NETWORKS

The GCN extracts features by aggregating similar features, which is more effective than a traditional feature extraction network. However, it is not ideal to use I/Q data for direct classification during verification. Feature embedding networks help extract more useful features: they reduce the length of the features and map the original data into feature vectors of specified dimensions and sizes, thus improving the learning efficiency of the GCN. Considering the spatial structure and timing of the signals, we chose a basic CNN and a Gated Recurrent Unit (GRU) to embed the signal features without enormous complexity. All signals in the graph were embedded into a one-dimensional sequence of size (1, 64).

1) CNN Feature Embedding Network

The CNN feature embedding network is a two-layer convolution network with a fully connected layer, the structure of which is illustrated in Fig. 1. The convolution kernels were (3, 1) and (2, 1), and the channel counts were 64 and 128, respectively.

FIGURE 1. CNN embedding network.
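A possible PyTorch realization of this embedding network is sketched below. The kernel sizes and channel counts follow the text; the activations, the absence of padding, and the exact flattening into the (1, 64) embedding are not fully specified by Fig. 1 here and should be treated as assumptions.

```python
import torch
import torch.nn as nn

class CNNEmbed(nn.Module):
    """Sketch of the CNN feature-embedding network (Sec. III-A.1): two conv layers
    with kernels (3,1) and (2,1), 64 and 128 channels, and a fully connected layer
    mapping to a 64-dimensional embedding."""
    def __init__(self, sig_len=128, emb_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(3, 1)),    # (B,1,L,2) -> (B,64,L-2,2)
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=(2, 1)),  # -> (B,128,L-3,2)
            nn.ReLU(),
        )
        self.fc = nn.Linear(128 * (sig_len - 3) * 2, emb_dim)

    def forward(self, x):             # x: (B, 1, L, 2) I/Q signal
        h = self.features(x)
        return self.fc(h.flatten(1))  # (B, 64) embedding

x = torch.randn(8, 1, 128, 2)         # 8 signals, length 128, I and Q channels
print(CNNEmbed()(x).shape)            # torch.Size([8, 64])
```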
2) GRU Feature Embedding Network

A GRU is a model that can process sequence data; it is a type of recurrent neural network and a variant of the LSTM. An LSTM unit has three gates (forget, input, and output), whereas a GRU uses two (reset and update). The GRU unit is shown in Fig. 2, and the GRU embedding network with 64 and 128 hidden units is shown in Fig. 3.

FIGURE 2. GRU unit structure.

FIGURE 3. GRU embedding network.
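The sketch below gives one plausible PyTorch layout of this GRU embedding network, with stacked hidden sizes 64 and 128 and a linear map to the 64-dimensional embedding; the precise wiring in Fig. 3 may differ, so treat this as an assumption.

```python
import torch
import torch.nn as nn

class GRUEmbed(nn.Module):
    """Sketch of the GRU feature-embedding network (Sec. III-A.2): stacked GRUs
    with 64 and 128 hidden units, last hidden state mapped to a 64-dim embedding."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.gru1 = nn.GRU(input_size=2, hidden_size=64, batch_first=True)
        self.gru2 = nn.GRU(input_size=64, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, emb_dim)

    def forward(self, x):                # x: (B, L, 2), I/Q samples as a sequence
        h, _ = self.gru1(x)
        _, h_n = self.gru2(h)            # h_n: (1, B, 128), final hidden state
        return self.fc(h_n.squeeze(0))   # (B, 64) embedding

x = torch.randn(8, 128, 2)
print(GRUEmbed()(x).shape)               # torch.Size([8, 64])
```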

B. GRAPH CONSTRUCTION

The signal feature vectors are considered as the nodes of a graph, and the edges are constructed from the potential relationships between the signal feature vectors. The edge between nodes $i$ and $j$ is expressed as

$e_{i,j} = \varphi(x_i, x_j)$,  (5)

where $\varphi$ is the symmetric function used to calculate the edge value. In this paper, the edge set $E$ was constructed by calculating the similarities of the nodes as

$\varphi(x_i, x_j) = I - \mathrm{SAMCN}(\mathrm{dif}(x_i, x_j))$,  (6)

where $I$ is the identity matrix of size $N \times N$ and $\mathrm{dif}(x_i, x_j)$ is the absolute difference between node vectors $i$ and $j$. The initial difference vector matrix $A$ is mapped into the numerical matrix $\tilde{A}$ using a similarity adjacent matrix calculation network (SAMCN). The SAMCN is a four-layer CNN, and its parameters are adjusted by feedback during training. $\varphi$ is the result of nonlinearly combining the absolute difference between two eigenvectors, such that $\varphi(x_i, x_j) = \varphi(x_j, x_i)$ and $\varphi(x_i, x_i) = 1$. The process of constructing the adjacency matrix with $N$ nodes is shown in Fig. 4.

FIGURE 4. Adjacency matrix construction.

After feature embedding and adjacency matrix construction, a graph of $N$ nodes can be built, whose nodes are the signal feature vectors and whose edges are given by the adjacency matrix $\tilde{A}$. The process of constructing a graph is shown in Fig. 5.

FIGURE 5. Graph construction.
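A hedged PyTorch sketch of Eq. (6) follows. The paper states only that the SAMCN is a four-layer CNN operating on the absolute differences of node embeddings; the 1x1-convolution stack, the sigmoid, and the explicit symmetrization below are our assumptions, and the final line follows Eq. (6) literally.

```python
import torch
import torch.nn as nn

class SAMCN(nn.Module):
    """Sketch of the similarity adjacent matrix calculation network of Eq. (6).
    The exact four-layer configuration is given by Fig. 4, which is not
    reproduced here; this 1x1-conv stack scores each node pair from the
    absolute difference of its embeddings."""
    def __init__(self, d=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(d, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 16, 1), nn.ReLU(),
            nn.Conv2d(16, 8, 1), nn.ReLU(),
            nn.Conv2d(8, 1, 1), nn.Sigmoid(),
        )

    def forward(self, X):                                 # X: (N, d) node embeddings
        dif = (X[:, None, :] - X[None, :, :]).abs()       # dif(x_i, x_j): (N, N, d)
        s = self.net(dif.permute(2, 0, 1).unsqueeze(0))   # (1, 1, N, N)
        s = s.squeeze(0).squeeze(0)
        s = 0.5 * (s + s.T)                # enforce φ(x_i, x_j) = φ(x_j, x_i)
        return torch.eye(X.shape[0]) - s   # Eq. (6): Ã = I − SAMCN(dif(x_i, x_j))

X = torch.randn(60, 64)                    # a graph of N = 60 embedded signals
print(SAMCN()(X).shape)                    # torch.Size([60, 60])
```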

C. GCN MODULATION RECOGNITION MODEL

All signal data were divided into $M$ subsets to construct graphs. Each subset in a graph is described by $x = \{(x_i, y_i)\}, i = 1, ..., N$, $y_i \in \{1, ..., C\}$, where $y_i$ is the category index of node $i$ and $C$ is the number of known categories. For a signal graph $G$, we obtain the adjacency matrix using Eq. (6). Layer $l$ of $\mathrm{gcn}(\cdot)$ [29] takes $x^{(l)}$ as input and produces $x^{(l+1)}$ as

$x^{(l+1)} = \mathrm{gcn}(x^{(l)}) = \rho(\varphi(x_i, x_j)\, x^{(l)} w^{(l)}) + x^{(l)}$,  (7)

where $w^{(l)}$ is a trainable parameter and $\rho$ is an activation function. Each node sends messages to its neighbors and merges the messages received from others by iteration. Normalization and activation functions were then used to enhance the ability of the GCN to fit the data distribution. At the end of the algorithm, $\hat{y} = \mathrm{softmax}(x_{out})$ predicts the probability of each category. The GCN algorithm is trained in a supervised manner: in the forward propagation process, the signal features are input into the GCN algorithm to minimize $\frac{1}{N} L(\hat{y}, y)$. The loss function of the network is the cross-entropy between the predicted and real labels:

$L = -\sum y \log \hat{y}$.  (8)

In the process of joint supervised training, the FEN and the SAMCN are forced to extract as much useful signal information as possible to produce a reasonable graph structure and help the GCN make the correct prediction. The entire process of GCN modulation recognition is presented in Table 1.

TABLE 1. Algorithm 1: Training process of the FE-GCN modulation recognition model

Step 1: Divide the training signals into M subsets of size N to build the graphs.
Step 2: for epoch = 1, ..., EPOCH do
    for m = 1, ..., M do
        Embed all signals of a graph into one-dimensional eigenvectors of length L, $F_m^l = \{F_{m1}^l, ..., F_{mN}^l\}$, with $F_m^0 = \{F_{m1}^0, ..., F_{mN}^0\}$.
        for l = 1, ..., GCN_Layer do
            • Compute the adjacency matrix $\tilde{A}_m^l$ using the SAMCN;
            • Perform the graph convolutional operation $\tilde{F}_m^l = F_m^l \odot \tilde{A}_m^l$;
            • Aggregate similar features using $F_m^{l+1} = F_m^l + \tilde{F}_m^l$.
        end for
        Return the loss and training accuracy, and train the model parameters.
    end for
    Validate with the validation sets.
    Adjust the learning-parameter schedule.
    Check the conditions for termination of training.
end for
Step 3: Save the trained model and parameters.

First, the training signals are divided into $M$ subsets of size $N$ to build graphs. Second, for each epoch, the signals of the $m$-th graph are embedded into the feature set $F_m^l = \{F_{m1}^l, ..., F_{mN}^l\}$, which forms the input of the GCCN. For each layer of the GCCN, the adjacency matrix $\tilde{A}_m^l$ of the feature eigenvectors is computed using the SAMCN, and a graph convolutional operation is performed by multiplying the features $F_m^l$ with the adjacency matrix $\tilde{A}_m^l$, $\tilde{F}_m^l = F_m^l \odot \tilde{A}_m^l$. The graph features are then updated using $F_m^{l+1} = F_m^l + \tilde{F}_m^l$. The model parameters were trained after each epoch until the termination conditions were met. Finally, the training was stopped, and the trained model and parameters were saved.

The processing of a signal graph in the FE-GCN modulation recognition network is shown in Fig. 6. Through the deepening of the GCN, abstract features were extracted, and the data features were mapped to the label space through the fully connected layer, which was used to classify all samples.

FIGURE 6. Processing of a signal graph in the FE-GCN modulation recognition network.
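Putting the pieces together, the sketch below mirrors Algorithm 1 for a single graph of N signals: embed, build Ã with the SAMCN, apply the residual update of Eq. (7) per layer, and train with the cross-entropy loss of Eq. (8). It reuses the GRUEmbed and SAMCN classes from the sketches above; the classifier head and layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FEGCN(nn.Module):
    """Sketch of the full FE-GCN pipeline of Algorithm 1 (assumed layout)."""
    def __init__(self, emb_dim=64, n_layers=3, n_classes=11):
        super().__init__()
        self.embed = GRUEmbed(emb_dim)              # FEN: signals -> (N, 64) embeddings
        self.samcn = nn.ModuleList([SAMCN(emb_dim) for _ in range(n_layers)])
        self.w = nn.ModuleList([nn.Linear(emb_dim, emb_dim) for _ in range(n_layers)])
        self.head = nn.Linear(emb_dim, n_classes)   # maps features to the label space

    def forward(self, signals):                     # signals: (N, L, 2), one graph
        Fm = self.embed(signals)                    # F^0
        for samcn, w in zip(self.samcn, self.w):
            A = samcn(Fm)                           # Ã^l from the SAMCN
            Fm = torch.relu(w(A @ Fm)) + Fm         # Eq. (7): ρ(φ x w) + x, residual
        return self.head(Fm)                        # logits; ŷ = softmax(x_out)

model = FEGCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
signals = torch.randn(60, 128, 2)                   # one graph of N = 60 signals
labels = torch.randint(0, 11, (60,))
logits = model(signals)
loss = F.cross_entropy(logits, labels)              # Eq. (8): L = −Σ y log ŷ
loss.backward()
opt.step()
print(float(loss))
```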

IV. EXPERIMENTAL RESULTS AND DISCUSSION

A. EXPERIMENTAL SETTINGS

We used the RML2016.10A [12] dataset, a public modulation recognition dataset created by O'Shea et al. Each signal is represented by two channels of I/Q data with a data length of 128. The basic information on the dataset is presented in Table 2. There are eight types of digital modulation signals (8PSK, BPSK, CPFSK, GFSK, PAM4, 16QAM, 64QAM, and QPSK) and three types of analog modulation signals (AM-DSB, AM-SSB, and WBFM). The SNRs range from -20 dB to 18 dB in 2 dB intervals. Each modulation type has 1,000 samples per SNR, and the dataset contains a total of 220,000 signal samples. In this paper, 70%, 15%, and 15% of the data were used for training, validation, and testing, respectively. All subsets were evenly and randomly selected across all modulation types and SNRs.

TABLE 2. Composition of dataset RML2016.10A

Classes | SNR | Signals per (category, SNR) | Training/Validation/Test rate
11 | -20 ~ 18 dB | 1000 | 0.7 / 0.15 / 0.15
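A sketch of this stratified 70/15/15 split is given below. It assumes the commonly distributed pickle of RML2016.10a, a dict keyed by (modulation, SNR) pairs holding arrays of shape (1000, 2, 128); the file name and layout should be checked against the actual download.

```python
import pickle
import numpy as np

# Stratified split over every (modulation, SNR) pair, as in Sec. IV-A.
with open("RML2016.10a_dict.pkl", "rb") as f:     # assumed file name
    data = pickle.load(f, encoding="latin1")

mods = sorted({k[0] for k in data})
train, val, test = [], [], []
rng = np.random.default_rng(0)
for (mod, snr), x in data.items():                # x: (1000, 2, 128) I/Q samples
    idx = rng.permutation(len(x))
    n_tr, n_va = int(0.7 * len(x)), int(0.15 * len(x))
    y = mods.index(mod)
    train += [(x[i], y) for i in idx[:n_tr]]
    val   += [(x[i], y) for i in idx[n_tr:n_tr + n_va]]
    test  += [(x[i], y) for i in idx[n_tr + n_va:]]

print(len(train), len(val), len(test))            # ≈ 154000, 33000, 33000
```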
to classification.
This experiment was conducted on the PyTorch framework; the hardware and software platform comprised Windows 10, 32 GB of memory, an Intel(R) Core(TM) i9 CPU, and an NVIDIA Quadro T2000 GPU. The experimental model adopted the end-to-end training mode and the Adam optimizer. The initial learning rate was set to 0.001, and every ten epochs the learning rate was multiplied by a coefficient of 0.7. The validation sets were used for verification after each training round.
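These settings map directly onto a standard PyTorch optimizer and scheduler, as in the sketch below; the model is a stand-in, and only the stated hyperparameters (Adam, lr 0.001, x0.7 every ten epochs) are taken from the paper.

```python
import torch

model = torch.nn.Linear(64, 11)  # stand-in module; use an FE-GCN instance in practice
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.7)

for epoch in range(30):
    # ... one pass over the training graphs, then validation ...
    optimizer.step()             # placeholder for the real update loop
    scheduler.step()             # decays the learning rate by 0.7 every 10 epochs
    if epoch % 10 == 0:
        print(epoch, scheduler.get_last_lr())
```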

B. COMPARISON OF RECOGNITION PERFORMANCE UNDER DIFFERENT PARAMETERS

1) Test Accuracy

To select an effective feature embedding network, the CNN [27] and GRU [28] embedding networks were validated for classification in the experiment, with the structures of the CNN and GRU set as described in Section III-A. The experimental results showed that both embedding networks performed well, with accuracies of 61.03% and 61.13%, respectively. As shown in Figs. 7 and 8, the FE-GCN based on the CNN embedding network exhibited slightly better performance for QAM16 signals from SNR = -10 dB to SNR = 0 dB, whereas the FE-GCN based on the GRU embedding network exhibited better performance for AM-DSB signals from SNR = -10 dB to SNR = -2 dB. The experiments show that the similar signals QAM64 and QAM16 can be clearly distinguished at high SNRs and that the classification accuracy of the WBFM signals is significantly improved. Figs. 9 and 10 show the confusion matrices of the FE-GCN based on the two embedding networks.

FIGURE 7. Accuracy of FE-GCN (embedding with CNN).

FIGURE 8. Accuracy of FE-GCN (embedding with GRU).

FIGURE 9. Confusion matrix of FE-GCN (embedding with CNN).

FIGURE 10. Confusion matrix of FE-GCN (embedding with GRU).

2) Comparison of Different Graph Sizes and Embedding Lengths

According to the graph structure, N signals and the similarity matrix form a graph. The influence of the number of nodes N on the experimental results was analyzed, as shown in Fig. 11. Because the network is based on a dense matrix, it is difficult to calculate for N > 70. Experiments showed that the best effect was achieved at N = 60; other graph sizes had less expressive feature aggregation, which was not conducive to classification.

FIGURE 11. Test accuracy for different graph sizes.

In the feature embedding network, the length of the embedding vectors also affects classification. Embedding feature vectors that are too long carry redundant information, which is not conducive to classification, whereas the system biases the center of gravity toward the embedding network when the embedding feature is too short; neither case is conducive to training the GCN. The experimental results are shown in Fig. 12. The best classification effect was obtained when the embedded feature length was 64. Both experiments in this section used the FE-GCN based on a GRU embedding network with three GCN layers.

FIGURE 12. Test accuracy for different embedding lengths.

C. COMPARISON OF RECOGNITION PERFORMANCE WITH OTHER METHODS

In this paper, several methods were selected for comparison, and the FE-GCN achieved state-of-the-art results: the classification accuracy of the FE-GCN based on the GRU embedding network was higher than those of the other methods. It is noteworthy that GCN deep-learning models have rarely been used for signal classification on public datasets. The classification results are listed in Table 3, and the recognition rate curves of each algorithm over all SNRs are shown in Fig. 13.

TABLE 3. Results of different networks on RML2016.10A

Method | Accuracy
CNN [27] | 53.17%
GRU [28] | 57.61%
ResNet18 [30] | 57.57%
AvgNet [22] | 61.12%
FE-GCN (Emb. CNN) | 61.03%
FE-GCN (Emb. GRU) | 61.13%

FIGURE 13. Accuracy of different methods.

D. T-SNE OF FE-GCN

t-Distributed Stochastic Neighbor Embedding (t-SNE) [31] is a tool for visualizing high-dimensional data. It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. To better illustrate the classification principle of the FE-GCN, we took random samples with an SNR of 10 dB and plotted t-SNEs during training. The t-SNEs of the FE-GCN (embedding with CNN) at different training stages are shown in Fig. 14. The FE-GCN (embedding with GRU) behaves similarly: the signals are mapped to a distribution with a rotation center. It is the multiplication of features and matrices that distinguishes the GCN from the CNN in the principle of feature extraction.

FIGURE 14. t-SNEs of different training periods of FE-GCN (embedding with CNN). (a) Untrained signals. (b) Signals after training 10 epochs. (c) Signals after training 20 epochs. (d) Signals after training 30 epochs.
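A sketch of this visualization with scikit-learn is shown below; the features here are placeholders standing in for intermediate FE-GCN activations at a given checkpoint, and the t-SNE hyperparameters are assumptions.

```python
import torch
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder (N, 64) features and class labels; in practice, take the
# intermediate FE-GCN features of 10 dB samples at a training checkpoint.
feats = torch.randn(600, 64).numpy()
labels = torch.randint(0, 11, (600,)).numpy()

emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5, cmap="tab20")
plt.title("t-SNE of FE-GCN features (10 dB)")
plt.show()
```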

V. CONCLUSIONS

We proposed a general signal-modulation classification model based on the GCN. In contrast to previous methods, our model does not require the use of few-shot methods

and does not rely on labels to spread information in the network, which makes it simpler and more suitable for general scenarios. In the FE-GCN model, the FEN embeds the signal data into a one-dimensional feature vector to simplify the input of the GCN, the SAMCN simplifies the matrix calculation to construct the edges of the graph, and the GCCN converges the features of similar signals to improve the classification efficiency and accuracy. Simulation results on the public dataset RML2016.10A have shown that the FE-GCN performs effectively and outperforms a series of advanced deep-learning methods. Owing to the feature embedding and relational-matrix construction modules in the model, the GCN has untapped potential on raw data. In future research, this problem will be further explored, and more efficient relational-matrix construction algorithms will be sought to increase the interpretability of the model. The GCN is also an exceptional semi-supervised and few-shot method for accurate identification in cases where a large number of labeled signals cannot be obtained in the battlefield environment.

REFERENCES
[1] Z. Zhu and A. K. Nandi, "Signal Models for Modulation Classification," in Automatic Modulation Classification: Principles, Algorithms and Applications, 2nd ed., West Sussex, UK, 2015, pp. 1-17.
[2] O. A. Dobre and A. Abdi, "Survey of Automatic Modulation Classification Techniques: Classical Approaches and New Trends," IET Commun., vol. 1, pp. 137-156, Feb. 2007.
[3] Q. Shi and Y. Karasawa, "Automatic Modulation Identification Based on the Probability Density Function of Signal Phase," IEEE Transactions on Communications, vol. 60, no. 4, pp. 1033-1044, April 2012, doi: 10.1109/TCOMM.2012.021712.100638.
[4] B. Ramkumar, "Automatic Modulation Classification for Cognitive Radios Using Cyclic Feature Detection," IEEE Circuits and Systems Magazine, vol. 9, no. 2, pp. 27-45, Second Quarter 2009, doi: 10.1109/MCAS.2008.931739.
[5] S. U. Pawar and J. F. Doherty, "Modulation Recognition in Continuous Phase Modulation Using Approximate Entropy," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 843-852, Sept. 2011, doi: 10.1109/TIFS.2011.2159000.
[6] Y. Deng and Z. Wang, "Modulation Recognition of MAPSK Signals Using Template Matching," Electron. Lett., vol. 50, pp. 1986-1988, Dec. 2014, doi: 10.1049/el.2014.2700.
[7] L. Xie and Q. Wan, "Automatic Modulation Recognition for Phase Shift Keying Signals With Compressive Measurements," IEEE Wireless Communications Letters, vol. 7, no. 2, pp. 194-197, April 2018, doi: 10.1109/LWC.2017.2764078.
[8] J. Fu, C. Zhao, and B. Li, "Deep Learning Based Digital Signal Modulation Recognition," Springer International Publishing, vol. 978, pp. 955-964, Dec. 2015.
[9] F. Wang, Y. Wang and X. Chen, "Graphic Constellations and DBN Based Automatic Modulation Classification," 2017 IEEE 85th Vehicular Technology Conference (VTC Spring), 2017, pp. 1-5, doi: 10.1109/VTCSpring.2017.8108670.
[10] G. Sun, "MPSK Signals Modulation Classification Using Sixth-Order Cumulants," 2010 3rd International Congress on Image and Signal Processing, 2010, pp. 4404-4407, doi: 10.1109/CISP.2010.5648132.
[11] A. Dai, H. Zhang and H. Sun, "Automatic Modulation Classification Using Stacked Sparse Auto-Encoders," 2016 IEEE 13th International Conference on Signal Processing (ICSP), 2016, pp. 248-252, doi: 10.1109/ICSP.2016.7877834.
[12] R. Liu, Y. Guo and S. Zhu, "Modulation Recognition Method of Complex Modulation Signal Based on Convolution Neural Network," 2020 IEEE 9th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), 2020, pp. 1179-1184, doi: 10.1109/ITAIC49862.2020.9338875.
[13] N. E. West and T. O'Shea, "Deep Architectures for Modulation Recognition," 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), 2017, pp. 1-6, doi: 10.1109/DySPAN.2017.7920754.
[14] Z. Liang, M. Tao, L. Wang, J. Su and X. Yang, "Automatic Modulation Recognition Based on Adaptive Attention Mechanism and ResNeXt WSL Model," IEEE Communications Letters, vol. 25, no. 9, pp. 2953-2957, Sept. 2021, doi: 10.1109/LCOMM.2021.3093485.
[15] C. Huang, M. Ji, H. Zhang and R. Luo, "A Multi-level Complex Feature Mining Method Based on Deep Learning for Automatic Modulation Recognition," 2022 3rd International Conference on Information Science, Parallel and Distributed Systems (ISPDS), 2022, pp. 335-339, doi: 10.1109/ISPDS56360.2022.9874223.
[16] L. Weng, Y. He and J. Peng, "Deep Cascading Network Architecture for Robust Automatic Modulation Classification," Neurocomputing, vol. 455, pp. 308-324, July 2021.
[17] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang and P. S. Yu, "A Comprehensive Survey on Graph Neural Networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 1, pp. 4-24, Jan. 2021, doi: 10.1109/TNNLS.2020.2978386.
[18] Z. Chen, L. Fu, J. Yao, W. Guo, C. Plant and S. Wang, "Learnable Graph Convolutional Network and Feature Fusion for Multi-View Learning," Information Fusion, vol. 95, pp. 109-119, 2023, doi: 10.1016/j.inffus.2023.02.013.
[19] S. Xiao, S. Du, Z. Chen, Y. Zhang and S. Wang, "Dual Fusion-Propagation Graph Neural Network for Multi-View Clustering," IEEE Transactions on Multimedia, doi: 10.1109/TMM.2023.3248173.
[20] Z. Wu, X. Lin, Z. Lin, Z. Chen, Y. Bai and S. Wang, "Interpretable Graph Convolutional Network for Multi-View Semi-Supervised Learning," IEEE Transactions on Multimedia, doi: 10.1109/TMM.2023.3260649.
[21] Y. Liu, Y. Liu and C. Yang, "Modulation Recognition With Graph Convolutional Network," IEEE Wireless Communications Letters, vol. 9, no. 5, pp. 624-627, May 2020, doi: 10.1109/LWC.2019.2963828.
[22] Q. Xuan et al., "AvgNet: Adaptive Visibility Graph Neural Network and Its Application in Modulation Classification," IEEE Transactions on Network Science and Engineering, vol. 9, no. 3, pp. 1516-1526, May-June 2022, doi: 10.1109/TNSE.2022.3146836.
[23] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega and P. Vandergheynst, "The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and Other Irregular Domains," IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 83-98, May 2013, doi: 10.1109/MSP.2012.2235192.
[24] S. Fu, W. Liu, and K. Zhang, "Semi-supervised Classification by Graph p-Laplacian Convolutional Networks," Information Sciences, vol. 560, pp. 92-106, June 2021.
[25] C. Li, X. Qin, X. Xu, D. Yang and G. Wei, "Scalable Graph Convolutional Networks With Fast Localized Spectral Filter for Directed Graphs," IEEE Access, vol. 8, pp. 105634-105644, 2020, doi: 10.1109/ACCESS.2020.2999520.
[26] T. Kipf and M. Welling, "Semi-Supervised Classification with Graph Convolutional Networks," arXiv:1609.02907, 2016.

[27] H. Zhang, M. Huang, J. Yang and W. Sun, "A Data Preprocessing Method for Automatic Modulation Classification Based on CNN," IEEE Communications Letters, vol. 25, no. 4, pp. 1206-1210, April 2021, doi: 10.1109/LCOMM.2020.3044755.
[28] D. Hong, Z. Zhang and X. Xu, "Automatic Modulation Classification Using Recurrent Neural Networks," 2017 3rd IEEE International Conference on Computer and Communications (ICCC), 2017, pp. 695-700, doi: 10.1109/CompComm.2017.8322633.
[29] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam and P. Vandergheynst, "Geometric Deep Learning: Going Beyond Euclidean Data," IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 18-42, July 2017, doi: 10.1109/MSP.2017.2693418.
[30] X. Chi, S. Huang and J. Li, "Handwriting Recognition Based on ResNet-18," 2021 2nd International Conference on Big Data, Artificial Intelligence and Software Engineering (ICBASE), 2021, pp. 456-459, doi: 10.1109/ICBASE53849.2021.00091.
[31] A. C. Belkina, C. O. Ciccolella, R. Anno, et al., "Automated Optimized Parameters for T-distributed Stochastic Neighbor Embedding Improve Visualization and Analysis of Large Datasets," Nature Communications, vol. 10, 5415, 2019, doi: 10.1038/s41467-019-13055-y.

HUALI ZHU received the B.S. and M.S. degrees in software engineering from Southwest Jiaotong University, Chengdu, China, in 2016. She is currently pursuing the Ph.D. degree with the Information and Navigation College, Air Force Engineering University, Xi'an. Her research interests include deep learning, modulation recognition, and few-shot learning.

HUA XU is currently a Professor at the Air Force Engineering University, Xi'an, China. He received the B.S. and M.S. degrees in communication engineering from Air Force Engineering University, Xi'an, China, in 2001, and the Ph.D. degree in communication signal processing from the Information Engineering University, Zhengzhou, China, in 2005. His research interests include communication signal processing, blind signal processing, and communication countermeasures.

YUNHAO SHI received the B.S. degree in information engineering from Xidian University, Xi'an, China, in 2018. He is currently pursuing the M.S. degree at the Information and Navigation College, Air Force Engineering University, Xi'an, China. His research interests include deep learning, signal processing, and few-shot learning.

YUE ZHANG received the B.S. degree in electronic information science and technology from the University of Electronic Science and Technology of China, Chengdu, China, in 2012, the M.S. degree in computational intelligence from Sheffield University, Sheffield, United Kingdom, in 2013, and the Ph.D. degree in communication and information systems from Xidian University in 2018. He was an associate professor at the Unmanned System Research Institute, Northwestern Polytechnical University, and is currently a postdoc at Air Force Engineering University. His research interests include machine learning, multi-agent reinforcement learning, deep reinforcement learning, game theory, the Internet of Things, intelligent transportation systems, and big data.

LEI JIANG is currently an associate Professor at the Air Force Engineering University, Xi'an, China. He received the B.S., M.S., and Ph.D. degrees in communication engineering and communication signal processing from Air Force Engineering University, Xi'an, China, in 2005. His research interests include communication systems, electronic countermeasures, and pattern recognition.

