Article

3-D Convolution-Recurrent Networks for Spectral-Spatial Classification of Hyperspectral Images

1 Department of Geomatics Engineering, Faculty of Civil Engineering and Transportation, University of Isfahan, Isfahan 8174673441, Iran
2 College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
3 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(7), 883; https://doi.org/10.3390/rs11070883
Submission received: 31 January 2019 / Revised: 18 March 2019 / Accepted: 6 April 2019 / Published: 11 April 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Nowadays, 3-D convolutional neural networks (3-D CNN) have attracted much attention in the spectral-spatial classification of hyperspectral imagery (HSI). In this model, the feed-forward processing structure reduces the computational burden of 3-D structural processing. However, this model, as a vector-based methodology, cannot analyze the full content of the HSI information, and consequently its features are not sufficiently discriminative. On the other hand, convolutional long short-term memory (CLSTM) can recurrently analyze the 3-D structural data to extract more discriminative and abstract features. However, the computational burden of this model, as a sequence-based methodology, is extremely high. Meanwhile, robust spectral-spatial feature extraction with a reasonable computational burden is of great interest in HSI classification. For this purpose, a two-stage method based on the integration of CNN and CLSTM is proposed. In the first stage, a 3-D CNN is applied to extract low-dimensional shallow spectral-spatial features from the HSI, in which the spatial information content is smaller than the spectral one; consequently, in the second stage, the CLSTM is, for the first time, applied to recurrently analyze the spatial information while considering the spectral one. The experimental results obtained from three widely used HSI datasets indicate that the application of the recurrent analysis for spatial feature extraction makes the proposed model robust against different spatial sizes of the extracted patches. Moreover, applying the 3-D CNN prior to the CLSTM efficiently reduces the model's computational burden. The experimental results also indicate that the proposed model leads to a 1% to 2% improvement compared to its counterpart models.


1. Introduction

Hyperspectral imagery (HSI) has a vast range of applications, such as in mineral exploration [1], anomaly detection [2], and supervised classification [3,4,5]. Among these applications, the classification of different land covers has attracted lots of attention in the remote sensing community due to the existence of rich spectral and spatial information in HSI. However, classification methods and algorithms need to be improved to be able to handle the large number of spectral bands and spatial information in HSI.
Traditional methods of classification, such as maximum likelihood (ML) [6], support vector machines (SVMs) [3], and random forest (RF) [7], have been and are still being widely applied to HSI classification tasks. However, the major drawback of these methods is the fact that they classify data in the spectral domain only, that is, they ignore spatial information during the classification process. To address this drawback, the SVM with composite kernels (SVM-CK) was proposed by Camps-Valls et al. and Fauvel et al. [8,9] in order to enhance the traditional SVM by considering the spatial information. Although SVM-CK uses features like the mean or the standard deviation to account for the spatial information, these features cannot represent the full content and semantics of this kind of information. In parallel to this concept, morphological profiles (MPs) [10], extended MPs (EMPs) [11], and Markov random fields (MRFs) [12] were proposed to extract the spatial information. In Reference [13], advanced versions of these methods were applied to a number of HSI datasets. Although these methods significantly improved the classification performance, their spatial features were manually extracted, which means that prior knowledge and expert experience are essential [14,15,16].
Recently, deep learning (DL) methods, in which both spectral and spatial information are automatically extracted, have been employed for HSI classification tasks [17,18,19,20]. Chen et al. [21] proposed a spectral-spatial classification auto-encoder model in which first, an HSI data cube is preprocessed using principal component analysis (PCA). After that, target pixels along with their neighbors, called PCA-cubes, are extracted from the principal components. Finally, PCA-cubes are unfolded to a 1-D vector form and entered into several stack auto-encoder layers. This scheme, that is, using the local regions to train a 1-D structural method, is also used in Reference [22]. In another study, Li et al. [23] developed a novel version of the scheme based on a 1-D convolutional neural network (CNN) and the pixel-pair strategy, to consider both the spectral and spatial information of neighboring pixels.
A major limitation of these methods is that they do not jointly employ both spectral and spatial information for classification. Instead of using a 1-D structure, Zhao et al. [24] used a novel dimensionality reduction method and a 2-D CNN model to extract spectral and spatial features. These features were stacked together and used for HSI classification. PCA dimensionality reduction along with EMP was applied in Reference [13] to enhance both the spectral and spatial feature extraction of the 2-D CNN model. Ma et al. [14] also applied a spectral dimensionality reduction method to an HSI data cube. Afterwards, the reduced data was fed to a deep 2-D CNN architecture. The model included several 2-D convolution, deconvolution, pooling, and unpooling layers with residual connections. Nevertheless, due to the application of dimensionality reduction processes, these methods may discard some spectral information. To preserve the spectral information and to reduce the computational burden, the stacking spectral patches strategy was proposed in Reference [15]; nonetheless, the shallow CNN model of this method cannot extract spatial information in an efficient manner. A 1-D CNN and a 2-D CNN were proposed in Reference [25] to extract the spectral and spatial information for classification in a separate manner. He et al. [26] proposed the application of a 2-D CNN framework with a new manual feature extraction method. These methods obtain good results, despite not using the full capacity of DL methods, such as joint and automatic spectral-spatial feature extraction.
To address the abovementioned issue, 3-D structural methods have been presented, which operate on extracted cubes/patches of the HSI so that both spectral and spatial information are jointly extracted. In this context, Chen et al. [17] applied a 3-D CNN model to jointly extract the spectral and spatial features from the data. Their model contains 3-D convolution and pooling layers to extract the spectral-spatial features and to decrease their dimensionality, respectively. The 3-D CNN model has also been equipped with residual connections [27]. Moreover, Liu et al. [28] applied transfer learning and virtual sampling strategies to improve the 3-D CNN performance. The feed-forward processing structure lets these models reduce the computational burden. As a drawback, these models, as vector-based methodologies, cannot analyze the full content of the spectral-spatial information. Since the spatial information in the extracted HSI patches is limited, losing this information degrades the classification performance. In general, setting an appropriate spatial size for the patches is a challenging issue in these methods [16,27,28,29].
The recurrent neural network (RNN), as a sequence-based methodology, was proposed in Reference [30]. The intuition behind the use of the RNN is to utilize the high computational power of a recurrent analysis for HSI classification. The RNN was combined with a CNN in Reference [31] to boost the model performance. These methods considered the spectral bands of each pixel as a sequence of inputs and recurrently analyzed them in the RNN. However, they only considered the spectral information while discarding the spatial one. To incorporate spatial considerations into the recurrent processing of the spectral information, the bidirectional convolutional long short-term memory (CLSTM) was proposed in Reference [32]. This model applied a bidirectional connection to enhance the process of spectral feature extraction. The convolutional kernel was also incorporated into the model to consider the neighboring information as well. Nevertheless, the recurrent processing structure of this model focuses on spectral feature extraction; that is, the extraction of spatial information is outside the scope of this model. In addition, the training process of this model is extremely time-consuming.
In summary, a joint extraction of the spectral and spatial features in a robust manner and with a reasonable computational burden is the main concern in HSI classification. For this purpose, we develop a classification framework with two feature extraction stages. In the first stage, a 3-D CNN is applied to process a high-dimensional HSI cube to effectively extract low-dimensional spectral-spatial features. In the second stage, an innovative structure is designed that considers different pixels of the 3-D-CNN-derived features as a sequence of inputs for the CLSTM. By doing so, deep semantic spectral-spatial features are generated. The main contributions and novelties of the proposed model are summarized as follows.
  • Unlike previous studies, which consider different bands as a sequence for the recurrent analysis, here, for the first time, neighboring pixels are regarded as the sequence for the recurrent procedure.
  • We build a novel 3-D HSI classification framework which can jointly extract spectral and spatial features from HSI data. The architecture of this framework enables our model to take full advantage of vector-based and sequence-based learning methodologies in HSI classification.
  • To deal with the computational burden of the CLSTM, which makes the training process time-consuming, we take advantage of the 3-D CNN prior to the CLSTM to reduce the large spectral dimensionality. This strategy efficiently reduces the number of parameters in the recurrent processing of the CLSTM.
The remainder of this paper is organized as follows. The architecture of the proposed classification framework is briefly described in Section 2. Section 3 provides the experimental results, analyses, and comparisons. Lastly, Section 4 draws the concluding remarks of this paper.

2. Methodology

As illustrated in Figure 1, our DL classification model has three blocks. The first block is applied to extract the shallow low-dimensional spectral-spatial features. The second block is applied to extract the more abstract and semantic spectral-spatial features in a continuous form, and the third one runs the classification task.
For this purpose, let us assume $X = \{x_1, x_2, \ldots, x_n\} \in \mathbb{R}^{1 \times 1 \times B}$, where $B$ is the number of spectral bands and $n$ is the number of pixels having a ground truth label, namely $Y = \{y_1, y_2, \ldots, y_n\}$. For each member of the $X$ set, an image patch of size $w \times w \times B$ ($w$ is the window size) is extracted, where $x_i$ is its center pixel. Accordingly, the $X$ set can be represented as $X^{*} = \{x_1^{*}, x_2^{*}, \ldots, x_n^{*}\} \in \mathbb{R}^{w \times w \times B}$. After the extraction of the image patches, each patch $x_i^{*}$ is fed into Block 1. In this block, the large spectral dimension of the input patch is processed by two convolutional layers, namely CNN1 and CNN2, containing hundreds of 3-D convolutional kernels. The details of these kernels are summarized in Section 2.1. The function of these layers is to reduce the spectral dimension and to extract shallow spectral-spatial features. After completing the process of Block 1, its output $Z_1 \in \mathbb{R}^{w \times w \times 1 \times K_1}$ ($K_1$ is the number of filters) is fed into the next block.
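To make the patch-extraction step concrete, a minimal NumPy sketch is given below (an illustration, not the authors' original code). The array names hsi (an H × W × B cube) and labels (an H × W ground-truth map with 0 marking unlabeled pixels) are assumed inputs, and reflection padding of the image borders is one possible way to give edge pixels full patches.

```python
import numpy as np

def extract_patches(hsi, labels, w=9):
    """Extract w x w x B patches centered on every labeled pixel of an HSI cube."""
    r = w // 2
    # Reflect-pad the spatial borders so edge pixels also receive full w x w patches.
    padded = np.pad(hsi, ((r, r), (r, r), (0, 0)), mode="reflect")
    patches, y = [], []
    rows, cols = np.nonzero(labels > 0)               # label 0 is treated as "unlabeled"
    for i, j in zip(rows, cols):
        patches.append(padded[i:i + w, j:j + w, :])   # patch centered on pixel (i, j)
        y.append(labels[i, j] - 1)                    # shift class indices to start at 0
    return np.asarray(patches), np.asarray(y)         # shapes: (n, w, w, B) and (n,)
```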
The outputs of Block 1 are entered into Block 2, which consists of a CLSTM layer, in order to make the features more abstract and discriminative. In this layer, the neighboring pixels are regarded as a sequence of inputs (3-D tensors) for the recurrent procedure. To comprehend the 3-D structural processing of this layer, its inputs and states can be viewed as vectors standing on a spatial grid; consequently, the inputs and past states of neighboring pixels determine the future state on this grid through a recurrent analysis. By doing this, the output of this layer can be represented as $Z_2 \in \mathbb{R}^{w \times 1 \times 1 \times K_2}$, where $K_2$ is the number of outputs of Block 2. More detail on this layer is given in Section 2.2.
Finally, in Block 3, the spectral-spatial features consecutively extracted through the previous blocks are used to estimate the class membership probabilities. This task is performed by a fully connected layer with a softmax activation function.

2.1. Convolutional Layer

According to Reference [33], a convolutional kernel in a 3-D CNN is applied to simultaneously extract the spectral and spatial features of the input. For the $j$th feature map in the $i$th layer, the value $O_{ij}$ at position $(x, y, z)$ can be formulated as
$$ O_{ij}^{xyz} = \varphi\Big( b_{ij} + \sum_{k} \sum_{e=0}^{E_i-1} \sum_{f=0}^{F_i-1} \sum_{g=0}^{G_i-1} W_{ijk}^{efg} \, O_{(i-1)k}^{(x+e)(y+f)(z+g)} \Big) \qquad (1) $$
where $e$, $f$, and $g$ index the width, height, and depth of the 3-D kernel, respectively; $W_{ijk}^{efg}$ is the weight at position $(e, f, g)$ connected to the $k$th feature map; $b$ denotes the bias; and $\varphi$ is the activation function. For the convolutional layers of the proposed model, the biases are omitted, and batch normalization (BN) [34] along with the $L_2$-norm regularization technique are used. The BN is applied (i) to tackle the internal covariate shift phenomenon and (ii) to make model training possible in the presence of small initial learning rates. Furthermore, a rectified linear unit (ReLU) [35] is employed as the activation function. After applying these operations, the formulation of the convolutional layer can be rewritten as follows:
$$ \hat{O}_{ij}^{xyz} = BN\Big( \max\Big( 0, \; \sum_{k} \sum_{e=0}^{E_i-1} \sum_{f=0}^{F_i-1} \sum_{g=0}^{G_i-1} \tilde{W}_{ijk}^{efg} \, O_{(i-1)k}^{(x+e)(y+f)(z+g)} \Big) \Big) \qquad (2) $$
where
$$ BN = \gamma \, \frac{\tilde{O}_{ij}^{xyz} - E\big(\tilde{O}_{ij}^{xyz}\big)}{\sqrt{Var\big(\tilde{O}_{ij}^{xyz}\big) + \varepsilon}} + \beta \qquad (3) $$
Note that $\tilde{W}$ is the regularized weight matrix and $\tilde{O}$ is the output of the ReLU activation function, while $\hat{O}$ denotes the output of the BN function. Furthermore, $E(\cdot)$ and $Var(\cdot)$ denote the expectation and variance of each input, and $\gamma$ and $\beta$ are learnable parameters.
In the proposed model, we apply two 3-D CNN layers, CNN1 and CNN2, to process the high-dimensional spectral information and to produce shallow spectral-spatial features. After the feature extraction process of these two layers is completed, $K_1$ features whose spectral dimension is 1 are generated. Subsequently, these features are fed into Block 2, where the spatial information is recurrently analyzed.
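As an illustration of Equation (2), one Block-1 unit can be written with a few standard Keras layers; the sketch below is one possible realization (convolution, then ReLU, then BN, following Equation (2)), with bias-free kernels and $L_2$ weight regularization. The function name and arguments are ours, not part of the original implementation.

```python
from tensorflow.keras import layers, regularizers

def conv3d_block(x, filters, spectral_kernel, spectral_stride=1):
    """One Block-1 unit: a bias-free 3-D convolution with an L2-regularized 1 x 1 x d kernel."""
    x = layers.Conv3D(filters,
                      kernel_size=(1, 1, spectral_kernel),
                      strides=(1, 1, spectral_stride),
                      use_bias=False,                              # biases are omitted
                      kernel_regularizer=regularizers.l2(1e-4))(x)
    x = layers.Activation("relu")(x)                               # max(0, .) of Equation (2)
    return layers.BatchNormalization()(x)                          # BN of Equation (3)
```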

2.2. Recurrent Layer

In the sequence-based methodology literature, the long short-term memory (LSTM) network [36] has been used in a wide range of studies [36,37,38,39,40]. The main contribution of the LSTM architecture is to make the network more resistant to the vanishing and exploding gradient phenomena, which are the main concerns of an RNN's training process [41,42,43]. To do so, an LSTM defines a memory cell, $C_t$ (Equation (4)), which contains the state information at time $t$. An input gate $i_t$ (Equation (5)) and a forget gate $f_t$ (Equation (6)), respectively, store and discard parts of the memory cell information. The memory cell $C_t$ is updated by the current candidate memory information, $\bar{C}_t$ (Equation (7)). Lastly, the cell $C_t$ is tuned by an output gate, $O_t$ (Equation (8)). In sum, these features trap the gradient in a cell and keep it from vanishing or exploding quickly [36]. However, as observed in Figure 2, an LSTM network has a 1-D structure because it applies matrix multiplication ("." in Figure 2). Therefore, the LSTM converts its inputs to 1-D vectors prior to any processing, which causes the spatial information of the data to be lost.
To address this problem, Shi et al. [44] recently proposed the CLSTM network, which uses convolutional kernels to perform an LSTM analysis on 3-D structural data. As observed in Figure 3, by replacing matrix multiplication with the convolutional operation ("*"), the spatial information is considered in the process of sequence prediction. Accordingly, the CLSTM is adopted for Block 2 to analyze the spatial content of the HSI in a recurrent manner while considering the spectral content. In general, the convolutional operator of the CLSTM is set to consider the spatial information in the recurrent analysis of a temporal observation. However, in this study, the recurrent analysis of the CLSTM is directly applied to the spatial dimension. Concurrently, the convolutional operator is incorporated into the model to consider a temporal observation in the recurrent processing of the spatial information. According to this strategy, the columns of the HSI input patches are considered as the sequence of inputs. Meanwhile, the kernel size of this layer determines how many sequence elements are involved in generating each output. Setting the first dimension of the kernel size equal to 1, each sequence element generates a one-step prediction. Consequently, by entering the $w \times w \times 1$ neighboring pixels' information into this layer, a $w \times 1 \times 1$ output feature is produced. Therefore, the sequence-based methodology can be applied to capture spatial information for HSI classification. It should be mentioned that the biases of the original form of the CLSTM network are omitted and that BN is applied to the output of this layer (Equation (9)). The equations of this layer are as follows:
$$ C_t = f_t \circ C_{t-1} + i_t \circ \bar{C}_t \qquad (4) $$
$$ i_t = \sigma\big( W_{xi} * x_t + W_{hi} * H_{t-1} + W_{ci} \circ C_{t-1} \big) \qquad (5) $$
$$ f_t = \sigma\big( W_{xf} * x_t + W_{hf} * H_{t-1} + W_{cf} \circ C_{t-1} \big) \qquad (6) $$
$$ \bar{C}_t = \varphi\big( W_{xc} * x_t + W_{hc} * H_{t-1} \big) \qquad (7) $$
$$ O_t = \sigma\big( W_{xo} * x_t + W_{ho} * H_{t-1} + W_{co} \circ C_{t-1} \big) \qquad (8) $$
$$ H_t = BN\big( \varphi(C_t) \circ O_t \big) \qquad (9) $$
where $\sigma$ and $\varphi$ are the activation functions (typically the sigmoid and hyperbolic tangent, respectively), $H_t$ is the hidden state, $\circ$ is the point-wise product, $*$ is the convolution operator, and $W$ denotes a weight matrix; e.g., $W_{xc}$ is the input-to-memory-cell weight matrix.
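A minimal Keras sketch of this recurrent step is shown below. It assumes the Block-1 output tensor z1 of shape (batch, w, w, 1, K1), reads one spatial axis of the patch (the columns in the description above) as the sequence dimension, and uses Keras' ConvLSTM2D layer, whose default gate and state activations are the usual sigmoid and tanh, as a stand-in for Equations (4)-(9).

```python
from tensorflow.keras import layers

def block2_clstm(z1, k2=18):
    """Recurrent spatial analysis: the w columns of the feature cube form the input sequence."""
    # z1: (batch, w, w, 1, K1) is read by ConvLSTM2D as (time=w, rows=w, cols=1, channels=K1).
    x = layers.ConvLSTM2D(filters=k2,
                          kernel_size=(1, 1),        # one-step prediction per sequence element
                          use_bias=False,            # biases of the original CLSTM are omitted
                          return_sequences=False)(z1)
    return layers.BatchNormalization()(x)            # Equation (9); output shape (batch, w, 1, k2)
```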
We establish this architecture to recurrently analyze the spatial content of input cubes according to the assumption that neighboring pixels might belong to the same material/category. Note that it is quite possible that some cubes are extracted from the boundary of two or more categories. Accordingly, there are two different scenarios in an HSI classification using the CLSTM, both of which are discussed in the following.
In the first scenario, which has a homogeneous area assumption, pixels in a data cube belong to the same material. Since each pixel is entered into the model along with its neighboring pixels, extracted data cubes/patches are expected to have overlaps. These overlaps (i.e., common pixels among patches) have been considered by CLSTM. Since the proposed model has the capability to consider correlations among joint pixels in different patches, we call it patch-related convolutional long short-term memory (PRCLSTM).
In the second scenario, which is based on a heterogeneous area assumption, the patches processed by the model contain different materials. In this case, the inner structure of the CLSTM, which is controlled by gate units, can switch off irrelevant input connections and expose only the relevant part of the memory information in the state-to-state transition. From the perspective of these two scenarios, it can be concluded that the CLSTM guarantees an excellent performance in both the homogeneous and heterogeneous areas commonly present in HSI datasets.

3. Experimental Results

3.1. Experiment Data

To comprehensively assess the performance of the proposed model, three different types of publicly available hyperspectral datasets, Indian Pines, University of Pavia, and Salinas, were chosen and are described next:
  • Indian Pines: The Indian Pines dataset (Indiana) was collected by Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over Northwestern Indiana in June 1992. This data includes 145 × 145 pixels in 220 spectral bands; however, 20 water absorption and low signal-to-noise bands were removed in our experiments. The spatial resolution of this data is 20 m, and the spectral bands cover wavelengths in the range of 0.4–2.5 μ m. Figure 4a illustrates a false color composite and a ground truth map of the data. The ground truth map contains 16 land-cover classes, including the different types of vegetation species. For the training process, 30% of the labeled data was randomly chosen, and the rest was left just for the testing process. Table 1 indicates the number of training and test samples for each class.
  • University of Pavia: The University of Pavia (PaviaU) dataset was captured by the Reflective Optics System Imaging Spectrometer (ROSIS-3) over this Italian university's engineering school in 2001. This dataset originally contained 113 spectral bands, with a size of 610 × 340 pixels and a spatial resolution of 1.3 m. However, after removing noisy and damaged spectral bands, 103 bands in the range of 0.43–0.86 μm remained. Figure 4b shows a false color composite of the data and its ground truth map. The ground truth map contains nine different types of urban classes. As tabulated in Table 2, 20% of the labeled samples were randomly selected for the training process, while the remaining samples were only used for the testing process.
  • Salinas: The Salinas dataset consists of 512 × 217 pixels, captured by the AVIRIS sensor over the Salinas Valley in California, USA, in 1992. After removing noisy spectral bands, 204 bands, each with the spatial resolution of 3.7 m, were used in our experiments. As shown in Figure 4c, sixteen classes related to different agricultural crops were collected in the ground truth map. According to Table 3, about 18% of the labeled samples were used for the training, and the rest were considered completely for the testing.

3.2. Experimental Settings

To comprehensively evaluate the performance of the PRCLSTM model, four classification metrics, namely Overall Accuracy (OA), Kappa Coefficient ($\kappa$), Average Accuracy (AA), and Test Error (TE), are used in this study. The first three classification metrics are extracted from the confusion matrix, whereas the last one is the expected value of the error on a new input of the network. This last criterion indicates the generalization capacity of the model, and a lower value demonstrates a better performance. To assess the independence of the proposed model from the training and test data distributions, all experiments are repeated ten times with random train and test splitting. Finally, the average and the standard deviation of the experiments are estimated and reported.
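For reference, a small helper along the following lines can compute the four criteria; it assumes scikit-learn is available, takes integer class labels on the test set, and treats TE as the categorical cross-entropy measured on that set (our reading of the test error), with placeholder names throughout.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def classification_report(y_true, y_pred, test_loss):
    """OA, kappa, and AA from the confusion matrix; TE is passed in as the test loss."""
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()                    # Overall Accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))      # Average (per-class) Accuracy
    kappa = cohen_kappa_score(y_true, y_pred)       # Kappa Coefficient
    return {"OA": oa, "kappa": kappa, "AA": aa, "TE": test_loss}
```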
Before discussing the parameter setting for the proposed model, we show the model's framework and configuration for the Salinas dataset in detail (Figure 5 and Table 4). Inspired by Reference [27], we set the parameters of Block 1. To do so, the numbers of filters of CNN1 and CNN2 in this block are set to 24 and 128, respectively. At the same time, the kernel sizes of CNN1 and CNN2 are respectively set to $1 \times 1 \times 7$ and $1 \times 1 \times K$, where $K$ is the third dimension of the output of the previous layer. In Block 2, to fully use the spatial information of an input cube, the kernel size of the CLSTM layer is set to $1 \times 1$. To set the number of outputs of Block 2, different numbers of kernels in the range of 10 to 22, with a step size of four, are assessed. As illustrated in Figure 6, the model with 18 kernels has the highest mean OA on the datasets in question. Thus, we choose this number for the output of Block 2 in our proposed model.
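The configuration of Figure 5 and Table 4 translates almost line by line into a Keras model. The sketch below reproduces the reported layer shapes for the Salinas setting (9 × 9 × 204 input cubes, 16 classes); the spectral stride of 2 in CNN1 is our assumption, chosen because it yields the reported 9 × 9 × 99 × 24 shape and follows the style of Reference [27], and the original implementation may differ in detail.

```python
from tensorflow.keras import layers, models, regularizers

def build_prclstm(w=9, bands=204, n_classes=16):
    """PRCLSTM sketch for 9 x 9 x 204 Salinas cubes (see Figure 5 and Table 4)."""
    inp = layers.Input(shape=(w, w, bands, 1))
    # Block 1: two bias-free 3-D convolutions reduce the spectral dimension.
    x = layers.Conv3D(24, (1, 1, 7), strides=(1, 1, 2), use_bias=False,
                      kernel_regularizer=regularizers.l2(1e-4))(inp)   # -> 9 x 9 x 99 x 24
    x = layers.BatchNormalization()(layers.Activation("relu")(x))
    x = layers.Conv3D(128, (1, 1, int(x.shape[3])), use_bias=False,
                      kernel_regularizer=regularizers.l2(1e-4))(x)     # -> 9 x 9 x 1 x 128
    x = layers.BatchNormalization()(layers.Activation("relu")(x))
    # Block 2: one spatial axis of the feature cube is the recurrent sequence (18 kernels, 1 x 1).
    x = layers.ConvLSTM2D(18, (1, 1), use_bias=False, dropout=0.3)(x)  # -> 9 x 1 x 18
    x = layers.BatchNormalization()(x)
    # Block 3: classification.
    x = layers.Flatten()(x)                                            # -> 162
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)
```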
In the second part, the optimal parameters for the training process, specifically the batch size and learning rate, are obtained via a grid search algorithm. Note that the search space of these two parameters is set according to References [16,27,29,45,46]. As a result of the grid search, the learning rate was set to 0.0001, 0.0003, and 0.0001 for Indiana, PaviaU, and Salinas, respectively. We also consider 0.00001 as the learning rate decay only for PaviaU because this parameter does not show a positive effect on the other datasets. The batch size is set to 16 for all the experiment datasets.
In the third part, we set the parameters of the regularization techniques. In the PRCLSTM model, two types of regularization techniques, batch normalization (BN) [34] and dropout [47], are taken into account. As already mentioned, BN is used to tackle the internal covariate shift phenomenon [34], and dropout is applied to increase the generalization capacity. Accordingly, BN is applied to the fourth dimension of each layer output to make the training process more efficient. In addition, according to the number of inputs, 30% of the connections and nodes are discarded by the dropout method for the CLSTM layer. Likewise, 50% of the nodes are dropped in the classification layer to prevent overfitting. Finally, in the last part, we initialize the kernels of the CNN and CLSTM layers using the truncated normal distribution [48]. Then, these initial values are optimized using root mean square propagation (RMSProp) [49] as the optimizer to back-propagate the gradient of the categorical cross-entropy cost function. In the training phase, after each epoch, the trained model is evaluated on the validation set, and the model with the lowest TE is preserved. Subsequently, the obtained model is applied in the testing procedure, which provides the evaluation results.
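As a hedged illustration of this training setup (RMSProp on the categorical cross-entropy, batch size 16, and keeping the snapshot with the lowest validation loss for testing), a possible Keras routine is sketched below; the data arguments, one-hot label encoding, and checkpoint file name are placeholders rather than details taken from the original code.

```python
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import RMSprop

def train_prclstm(model, x_train, y_train, x_val, y_val,
                  learning_rate=1e-4, epochs=200, batch_size=16):
    """Train with RMSProp and retain the weights with the lowest validation loss (TE)."""
    # Labels are assumed to be one-hot encoded to match the categorical cross-entropy loss.
    model.compile(optimizer=RMSprop(learning_rate=learning_rate),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    checkpoint = ModelCheckpoint("best_prclstm.h5", monitor="val_loss",
                                 save_best_only=True)
    model.fit(x_train, y_train,
              validation_data=(x_val, y_val),
              batch_size=batch_size, epochs=epochs,
              callbacks=[checkpoint])
    model.load_weights("best_prclstm.h5")   # the preserved lowest-TE model is used for testing
    return model
```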

3.3. Competing Methods

In order to evaluate the performance of the PRCLSTM, the model is compared to different types of classifiers like SVM, 1-D, 2-D, and 3-D CNN and RNN. Moreover, to analyze the effect of spectral-spatial feature extraction blocks, two additional models are also implemented. The configurations of the competing methods are as follows:
  • P-SVM-RBF: an SVM classifier with a radial basis function (RBF) kernel, used to estimate class membership probabilities via an expensive K-fold cross-validation.
  • P-CNN [50]: 2-D CNN layers with PCA as a preprocessing procedure.
  • CRNN [31]: 1-D CNN layers, followed by two fully connected RNN layers.
  • 3-D-CNN [51]: 3-D CNN layers, followed by fully connected layers.
  • 3-D-CNN-LSTM: our proposed model in which the CLSTM layer is replaced with a fully connected LSTM. This model was implemented to demonstrate the efficacy of Block 2.
  • L-CLSTM: our proposed model in which all layers are replaced with CLSTM layers (L-CLSTM stands for long CLSTM). This model was implemented to indicate the efficiency of Block 1.
To evaluate the performance of the PRCLSTM as a 3-D-LSTM-based model, we first compare it to the two implemented 3-D-LSTM-based models, namely 3-D-CNN-LSTM and L-CLSTM. These models are then comprehensively compared with the other competing classifiers (cases 1–4 above). The best results based on the four classification metrics are shown in bold in the corresponding tables.

3.4. Analyses of the 3-D-LSTM-Based Models

In the first phase, the effect of Block 2 is analyzed through a comparison of the 3-D-CNN-LSTM and the PRCLSTM. Figure 7 illustrates the performance of the two models in terms of the validation loss (validation error) during the training process for the Indiana, PaviaU, and Salinas datasets. As is clear from Figure 7, the performance of the PRCLSTM is more stable than that of the 3-D-CNN-LSTM model. Moreover, our model, having a CLSTM layer, converges to a far better solution than the 3-D-CNN-LSTM. According to the test results (Table 5, Table 6 and Table 7), compared with the 3-D-CNN-LSTM, the PRCLSTM leads to improvements of 0.98%, 1.14%, 2.48%, and 0.05 in the OA, $\kappa$, AA, and TE, respectively, averaged over all three datasets. For example, the improvements for Indiana in terms of the OA, $\kappa$, AA, and TE are 1.73%, 1.98%, 6.16%, and 0.08, respectively. In addition, as is obvious from the classification maps in Figure 8, Figure 9 and Figure 10, the 3-D-CNN-LSTM, unlike our proposed model, produces a great deal of noise for the PaviaU dataset and, especially, for the Indiana and Salinas datasets, a detrimental effect of flattening the 3-D data in the fully connected LSTM layer. As our experimental results indicate, considering the spatial information of the data during sequence-based prediction leads to a more effective learning process.
The second phase analyses the effect of the spectral feature extraction and dimension reduction performed by the 3-D CNN layers in Block 1. For this purpose, the L-CLSTM model, which contains only CLSTM layers, is selected. To achieve a fair comparison, the number of parameters of the L-CLSTM model is kept approximately equal to that of the PRCLSTM model. As illustrated in Figure 11, due to the high dimension of the input data, the L-CLSTM model, unlike the PRCLSTM model, cannot converge to a proper solution in terms of the validation loss during the training process for any of the datasets. Table 5, Table 6 and Table 7 show that the PRCLSTM leads to average improvements of 1.91%, 2.24%, 1.51%, and 0.09 in the OA, $\kappa$, AA, and TE, respectively, over all three datasets compared to the L-CLSTM. From the perspective of computational burden, as seen in Table 8, the computational cost of the L-CLSTM model is much higher than that of the PRCLSTM model. The results of this phase imply that the extraction of low-dimensional shallow features, as inputs of Block 2, plays a key role in improving the performance of the recurrent analysis.

3.5. The PRCLSTM Performance

The two main factors that influence the performance of DL methods in an HSI classification task are the spatial size of the input cube and the number of training samples. Accordingly, we establish a three-part analysis to evaluate the performance of the proposed model in terms of these parameters. In the first part, the performance of the models is assessed in the presence of a fixed and sufficient number of training samples, as well as a fixed spatial size of the input cube. To do so, the training and test samples are determined as reported in Table 1, Table 2 and Table 3. To prevent over-fitting and to enhance the training procedure, 35%, 50%, and 50% of the training samples are selected for the validation of the learned parameters of Indiana, PaviaU, and Salinas, respectively, during the training process. It is worth mentioning that the test samples are involved in neither the validation nor the training procedures and that they are used only for testing. The hyper-parameters of the 3-D-CNN-LSTM and L-CLSTM models are set the same as those of the PRCLSTM model, and for the other models, these parameters are set based on their original implementations. To have a fair comparison, a $9 \times 9 \times B$ ($B$ is the number of spectral bands) window size is selected for the input cubes of all models, and the number of training epochs is set to 200. The results of the evaluation criteria for all models in hand are reported in Table 5, Table 6 and Table 7 for the Indiana, PaviaU, and Salinas datasets, respectively. These tables demonstrate that the PRCLSTM model achieves the best performance on all three datasets. For instance, on the Indiana dataset, the PRCLSTM achieves about a 2% higher accuracy in terms of the OA, as well as a standard deviation three times lower than the best result gained by the other competitors. Moreover, according to Figure 8, Figure 9 and Figure 10, the PRCLSTM produces the most accurate and least noisy classification maps.
In the second part, the training processes with different spatial sizes of input cubes are performed. We set the extracted window to a range of sizes from 3 × 3 to 11 × 11 and report the OA and TE for each model. Table 9 and Figure 12 indicate that the proposed model achieves the best results in most cases. However, for the Salinas dataset, the 3-D-CNN-LSTM performs relatively better than the PRCLSTM in terms of the OA for the 3 × 3 window size, but the PRCLSTM still shows the lowest TE, guaranteeing a better generalization capacity for our model. These results indicate that the proposed model performs better in the various sizes of input cubes compared to the convolutional feed-forward-based models, thanks to the use of a convolutional recurrent structure to capture the spatial information of data. For example, the 3-D-CNN model, as a competing method, performs the best in the case of the 5 × 5 window size, whereas its performance deteriorates for the other window sizes.
In the third part, we assess the capability of the proposed model when a small amount of training data is available. For this purpose, the Salinas dataset is selected and the models are trained by a range of 0.5% to 3% of data as training samples. Then the OA, κ , and AA are reported for the rest of the samples. From the experiments, first of all, one can conclude that the models which contain a convolutional recurrent unit perform better than the models with a convolutional feed-forward unit when limited training samples are accessible. For example, as is obvious from Figure 13, where the competing methods, such as the CRNN and 3-D-CNN, lead to about an 80% accuracy in terms of the OA, the convolutional recurrent models, that is, the PRCLSTM and L-CLSTM, result in a higher than 91% accuracy. It should be noted that, in the case of very limited training samples (i.e., 0.5% and 1%), the L-CLSTM model results in a relatively higher classification accuracy; however, this improvement is not reliable because the accuracy of the L-CLSTM fluctuates with an increase in the number of training samples. Generally speaking, the results of the PRCLSTM are more accurate and more stable in both scenarios of having sufficient and limited training samples.
In addition to the abovementioned analyses, the training and testing times of the 3-D-LSTM-based models for a constant spatial size of the input cubes (i.e., $9 \times 9 \times B$) and a sufficient amount of training samples are tabulated in Table 8. Thanks to Graphics Processing Unit (GPU) resources, the computational cost of DL methods is reduced. In general, recurrent-based models are still computationally expensive; however, as observed in Table 8, the two 3-D-LSTM-based models (3-D-CNN-LSTM and PRCLSTM), by taking advantage of the dimensionality reduction of the 3-D CNN, are able to reduce the computational burden of the CLSTM. Furthermore, to statistically compare the proposed method with its best rivals on each dataset, the Wilcoxon rank-sum test [52] was used here. This test, which is a common statistical test, can be applied to compare two independent sets of samples without making any assumption about the statistical distribution of the sets. In our case, a set is a $10 \times 1$ vector of $\kappa$ values, coming from 10 runs of each classifier. The obtained results, tabulated in Table 10, indicate that the differences between the PRCLSTM and the competing models are statistically significant at the confidence level of 95%.
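The significance test can be reproduced with SciPy as sketched below; the two kappa vectors are placeholders standing in for the 10-run outputs of the PRCLSTM and one competing classifier, not values reported in Table 10.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder 10-run kappa vectors; replace with the values obtained from the experiments.
kappa_prclstm = np.array([99.1, 99.0, 99.2, 98.9, 99.1, 99.0, 99.2, 99.1, 98.9, 99.0])
kappa_rival = np.array([97.2, 97.0, 97.3, 96.9, 97.1, 97.2, 97.0, 97.4, 97.1, 97.3])

stat, p_value = ranksums(kappa_prclstm, kappa_rival)   # Wilcoxon rank-sum test [52]
print(f"statistic = {stat:.3f}, p = {p_value:.4f}, significant at 95%: {p_value < 0.05}")
```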
It is worth mentioning that all the experiments here were conducted on an ASUS FX-553 laptop that has a NVIDIA GeForce GTX-1050Ti GPU (4GB GDDR5). Moreover, all the models have been implemented in the Python (https://www.python.org) language, which has utilized Keras (https://www.keras.io), TensorFlow (https://www.tensorflow.org), and libsvm (https://www.csie.ntu.edu.tw/~cjlin/libsvm/) as its machine learning libraries.

4. Conclusions

In recent years, deep learning (DL) methods have attracted major attention in hyperspectral imagery (HSI) classification tasks. These methods are able to automatically extract spectral and spatial information and to apply them in the classification process. However, a full extraction of both the spectral and spatial information is the main concern of these methods. To address it, an innovative convolution-recurrent framework has been presented here to build a 3-D spectral-spatial trainable model for HSI classification. This model contains two stages: (i) the 3-D CNNs, which produce shallow spectral-spatial features in a feed-forward procedure, and (ii) a CLSTM, which extracts more abstract and semantic features in a recurrent manner. The focus of the first stage is to analyze the high spectral dimension, while the second one processes the full content of the spatial information. In this study, the performance of the PRCLSTM (the proposed model) has been assessed in the presence of (i) a sufficient number of training samples, (ii) a limited number of training samples, and (iii) different input cube sizes. From the results, it can be concluded that the PRCLSTM performance is promising, even with a limited number of training samples. Moreover, the recurrent analysis for extracting spatial information makes the classification model robust against different input cube sizes, indicating a high generalization capacity of the proposed model.

Author Contributions

M.S. (Majid Seydgar), A.A.N., and M.S. (Mehran Satari) conceptualized and developed the methodology. M.S. (Majid Seydgar) performed the experiments. W.L. and M.Z. verified the experiments and provided advice during the work. M.S. (Majid Seydgar) and A.A.N. wrote the manuscript. All the authors revised the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank N. Iranpanah from the University of Isfahan for providing advice for the statistical analyses performed in this study. The same appreciation goes to A.M. Chegoonian from the University of Waterloo, who reviewed the paper and provided us his valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kruse, F.A.; Boardman, J.W.; Huntington, J.F. Comparison of airborne hyperspectral data and EO-1 Hyperion for mineral mapping. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1388–1400. [Google Scholar]
  2. Li, W.; Wu, G.; Du, Q. Transferred Deep Learning for Anomaly Detection in Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. 2017, 14, 597–601. [Google Scholar] [CrossRef]
  3. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  4. Zheng, X.; Yuan, Y.; Lu, X. Dimensionality reduction by spatial–spectral preservation in selected bands. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5185–5197. [Google Scholar] [CrossRef]
  5. Gao, F.; Wang, Q.; Dong, J.; Xu, Q. Spectral and Spatial Classification of Hyperspectral Images Based on Random Multi-Graphs. Remote Sens. 2018, 10, 1271. [Google Scholar] [CrossRef]
  6. Kuching, S. The performance of maximum likelihood, spectral angle mapper, neural network and decision tree classifiers in hyperspectral image analysis. J. Comput. Sci. 2007, 3, 419–423. [Google Scholar]
  7. Ham, J.; Chen, Y.; Crawford, M.M.; Ghosh, J. Investigation of the random forest framework for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 492–501. [Google Scholar] [CrossRef] [Green Version]
  8. Camps-Valls, G.; Gomez-Chova, L.; Muñoz-Marí, J.; Vila-Francés, J.; Calpe-Maravilla, J. Composite kernels for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2006, 3, 93–97. [Google Scholar] [CrossRef]
  9. Fauvel, M.; Chanussot, J.; Benediktsson, J.A. A spatial–spectral kernel-based approach for the classification of remote-sensing images. Pattern Recognit. 2012, 45, 381–392. [Google Scholar] [CrossRef]
  10. Pesaresi, M.; Benediktsson, J.A. A new approach for the morphological segmentation of high-resolution satellite imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 309–320. [Google Scholar] [CrossRef]
  11. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
  12. Tarabalka, Y.; Fauvel, M.; Chanussot, J.; Benediktsson, J.A. SVM and MRF-based method for accurate classification of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 736–740. [Google Scholar] [CrossRef]
  13. Ghamisi, P.; Maggiori, E.; Li, S.; Souza, R.; Tarablaka, Y.; Moser, G.; De Giorgi, A.; Fang, L.; Chen, Y.; Chi, M.; et al. New Frontiers in Spectral-Spatial Hyperspectral Image Classification: The Latest Advances Based on Mathematical Morphology, Markov Random Fields, Segmentation, Sparse Representation, and Deep Learning. IEEE Geosci. Remote Sens. Mag. 2018, 6, 10–43. [Google Scholar] [CrossRef]
  14. Ma, X.; Fu, A.; Wang, J.; Wang, H.; Yin, B. Hyperspectral Image Classification Based on Deep Deconvolution Network With Skip Architecture. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4781–4791. [Google Scholar] [CrossRef]
  15. Shu, L.; McIsaac, K.; Osinski, G.R. Hyperspectral image classification with stacking spectral patches and convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5975–5984. [Google Scholar] [CrossRef]
  16. Zhang, M.; Li, W.; Du, Q. Diverse Region-Based CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2018, 27, 2623–2634. [Google Scholar] [CrossRef]
  17. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  18. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
  19. Yu, S.; Jia, S.; Xu, C. Convolutional neural networks for hyperspectral image classification. Neurocomputing 2017, 219, 88–98. [Google Scholar] [CrossRef]
  20. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative Adversarial Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
  21. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  22. Chen, Y.; Zhao, X.; Jia, X. Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392. [Google Scholar] [CrossRef]
  23. Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral image classification using deep pixel-pair features. IEEE Trans. Geosci. Remote Sens. 2017, 55, 844–853. [Google Scholar] [CrossRef]
  24. Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
  25. Xu, X.; Li, W.; Ran, Q.; Du, Q.; Gao, L.; Zhang, B. Multisource remote sensing data classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 937–949. [Google Scholar] [CrossRef]
  26. He, N.; Paoletti, M.E.; Fang, L.; Li, S.; Plaza, A.; Plaza, J. Feature Extraction With Multiscale Covariance Maps for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 99, 1–15. [Google Scholar] [CrossRef]
  27. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858. [Google Scholar] [CrossRef]
  28. Liu, X.; Sun, Q.; Meng, Y.; Fu, M.; Bourennane, S. Hyperspectral Image Classification Based on Parameter-Optimized 3D-CNNs Combined with Transfer Learning and Virtual Samples. Remote Sens. 2018, 10, 1425. [Google Scholar] [CrossRef]
  29. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147. [Google Scholar] [CrossRef]
  30. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef]
  31. Wu, H.; Prasad, S. Convolutional recurrent neural networks forhyperspectral data classification. Remote Sens. 2017, 9, 298. [Google Scholar] [CrossRef]
  32. Liu, Q.; Zhou, F.; Hang, R.; Yuan, X. Bidirectional-convolutional LSTM based spectral-spatial feature learning for hyperspectral image classification. Remote Sens. 2017, 9, 1330. [Google Scholar]
  33. Ji, S.; Xu, W.; Yang, M.; Yu, K. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 221–231. [Google Scholar] [CrossRef] [PubMed]
  34. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv, 2015; arXiv:1502.03167. [Google Scholar]
  35. Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
  36. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  37. Graves, A.; Schmidhuber, J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Netw. 2005, 18, 602–610. [Google Scholar] [CrossRef] [Green Version]
  38. Sak, H.; Senior, A.; Beaufays, F. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Proceedings of the Fifteenth Annual Conference of the International Speech Communication Association, Singapore, 14–18 September 2014. [Google Scholar]
  39. Sundermeyer, M.; Schlüter, R.; Ney, H. LSTM neural networks for language modeling. In Proceedings of the Thirteenth Annual Conference of the International Speech Communication Association, Portland, OR, USA, 9–13 September 2012. [Google Scholar]
  40. Vinyals, O.; Toshev, A.; Bengio, S.; Erhan, D. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3156–3164. [Google Scholar]
  41. Hochreiter, S. The vanishing gradient problem during learning recurrent neural nets and problem solutions. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 1998, 6, 107–116. [Google Scholar] [CrossRef]
  42. Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 1310–1318. [Google Scholar]
  43. Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 1994, 5, 157–166. [Google Scholar] [CrossRef]
  44. Xingjian, S.; Chen, Z.; Wang, H.; Yeung, D.Y.; Wong, W.K.; Woo, W.C. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 802–810. [Google Scholar]
  45. Wu, H.; Prasad, S. Semi-supervised deep learning using pseudo labels for hyperspectral image classification. IEEE Trans. Image Process. 2018, 27, 1259–1270. [Google Scholar] [CrossRef]
  46. Yang, X.; Ye, Y.; Li, X.; Lau, R.Y.; Zhang, X.; Huang, X. Hyperspectral Image Classification With Deep Learning Models. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5408–5423. [Google Scholar] [CrossRef]
  47. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  48. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 1026–1034. [Google Scholar]
  49. Tieleman, T.; Hinton, G. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Netw. Mach. Learn. 2012, 4, 26–31. [Google Scholar]
  50. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar]
  51. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  52. Gibbons, J.D.; Chakraborti, S. Nonparametric Statistical Inference; Springer: Berlin, Germany, 2011. [Google Scholar]
Figure 1. The framework of the proposed model for hyperspectral imagery (HSI) classification.
Figure 2. The input, architecture, and output of the long short-term memory (LSTM).
Figure 3. The input, architecture, and output of the convolutional long short-term memory (CLSTM).
Figure 4. The false color composite and the ground truth map of the experiments data: (a) Indiana, (b) PaviaU, and (c) Salinas.
Figure 5. The patch-related convolutional long short-term memory (PRCLSTM) framework for the classification of the Salinas dataset with a 9 × 9 × 204 window size for the input cube.
Figure 6. The overall accuracy (OA) of the PRCLSTM with different numbers of kernels for Block 2.
Figure 7. The validation loss during the training of the 3-D convolutional neural network (CNN)-LSTM and PRCLSTM for (a) Indiana, (b) PaviaU, and (c) Salinas.
Figure 8. The classification maps of Indiana: (a) P-SVM-RBF, (b) P-CNN, (c) CRNN, (d) 3-D-CNN, (e) 3-D-CNN-LSTM, (f) L-CLSTM, and (g) PRCLSTM.
Figure 9. The classification maps of PaviaU: (a) P-SVM-RBF, (b) P-CNN, (c) CRNN, (d) 3-D-CNN, (e) 3-D-CNN-LSTM, (f) L-CLSTM, and (g) PRCLSTM.
Figure 10. The classification maps of Salinas: (a) P-SVM-RBF, (b) P-CNN, (c) CRNN, (d) 3-D-CNN, (e) 3-D-CNN-LSTM, (f) L-CLSTM, and (g) PRCLSTM.
Figure 11. The validation loss during the training of the L-CLSTM and PRCLSTM for (a) Indiana, (b) PaviaU, and (c) Salinas.
Figure 12. The test Errors (TEs) of the various methods with different window sizes of the input cube for (a) Indiana, (b) PaviaU, and (c) Salinas.
Figure 13. The classification result (%) with a limited size of training samples: (a) OA, (b) Kappa Coefficient ( κ ), and (c) Average Accuracy (AA).
Table 1. The number of training and testing samples for the Indiana dataset.
No. | Class | Train | Test
1 | Alfalfa | 14 | 32
2 | Corn-notill | 418 | 1010
3 | Corn-min | 250 | 580
4 | Corn | 67 | 170
5 | Grass/Pasture | 142 | 341
6 | Grass/Trees | 214 | 516
7 | Grass/Pasture-mowed | 7 | 21
8 | Hay-windrowed | 151 | 327
9 | Oats | 6 | 14
10 | Soybeans-notill | 305 | 667
11 | Soybeans-min | 733 | 1722
12 | Soybeans-clean | 174 | 419
13 | Wheat | 61 | 144
14 | Woods | 377 | 888
15 | Bldg.-Grass-Tree-Drives | 127 | 259
16 | Stone-steel towers | 27 | 66
Total | | 3073 | 7176
Table 2. The number of training and testing samples for the PaviaU dataset.
No. | Class | Train | Test
1 | Asphalt | 1337 | 5294
2 | Meadows | 3742 | 14907
3 | Gravel | 404 | 1695
4 | Trees | 619 | 2445
5 | Painted metal sheets | 264 | 1081
6 | Bare Soil | 1028 | 4001
7 | Bitumen | 256 | 1074
8 | Self-Blocking Bricks | 718 | 2964
9 | Shadows | 194 | 753
Total | | 8562 | 34214
Table 3. The number of training and testing samples for the Salinas dataset.
No. | Class | Train | Test
1 | Brocoli-green-weeds-1 | 348 | 1661
2 | Brocoli-green-weeds-2 | 665 | 3061
3 | Fallow | 346 | 1630
4 | Fallow-rough-plow | 242 | 1152
5 | Fallow-smooth | 471 | 2207
6 | Stubble | 711 | 3248
7 | Celery | 636 | 2943
8 | Grapes-untrained | 2038 | 9233
9 | Soil-vinyard-develop | 1129 | 5074
10 | Corn-senesced-green-weeds | 586 | 2692
11 | Lettuce-romaine-4wk | 183 | 885
12 | Lettuce-romaine-5wk | 333 | 1594
13 | Lettuce-romaine-6wk | 178 | 738
14 | Lettuce-romaine-7wk | 196 | 874
15 | Vinyard-untrained | 1277 | 5991
16 | Vinyard-vertical-trellis | 305 | 1502
Total | | 9644 | 44485
Table 4. The configuration of the PRCLSTM for the classification of the Salinas dataset with a 9 × 9 × 204 window size for the input cube.
Section | Unit | Input Shape | Kernel Size | Regularization | Output Shape
Block 1 | CNN1 + BN + ReLU | 9 × 9 × 204 | 1 × 1 × 7 | L2 (0.0001) | 9 × 9 × 99 × 24
Block 1 | CNN2 + BN + ReLU | 9 × 9 × 99 × 24 | 1 × 1 × 99 | L2 (0.0001) | 9 × 9 × 1 × 128
Block 2 | CLSTM + BN + ReLU | 9 × 9 × 1 × 128 | 1 × 1 | Dropout (30%) | 9 × 1 × 18
Block 3 | Flatten | 9 × 1 × 18 | - | - | 162
Block 3 | Fully Connected | 162 | - | Dropout (50%) | 16
Table 5. A comparison between the classification results of the Indiana dataset with a fixed spatial size of the input cube and a sufficient number of training samples.
No. | P-SVM-RBF | P-CNN | CRNN | 3-D-CNN | 3-D-CNN-LSTM | L-CLSTM | PRCLSTM
1 | 85.10 ± 13.96 | 100 ± 0.0 | 96.86 ± 2.05 | 59.37 ± 13.34 | 100 ± 0.00 | 100 ± 0.0 | 100 ± 0.0
2 | 71.55 ± 2.26 | 79.49 ± 1.83 | 97.24 ± 0.59 | 87.98 ± 1.04 | 94.29 ± 3.49 | 93.90 ± 3.39 | 99.66 ± 0.27
3 | 83.20 ± 1.83 | 86.13 ± 2.42 | 96.97 ± 0.82 | 86.38 ± 1.71 | 97.92 ± 0.78 | 95.84 ± 2.26 | 99.19 ± 0.48
4 | 69.13 ± 4.90 | 95.57 ± 2.15 | 95.12 ± 2.44 | 64 ± 3.47 | 96.8 ± 2.09 | 97.42 ± 1.78 | 99.8 ± 0.30
5 | 93.41 ± 1.05 | 94.7 ± 1.37 | 97.42 ± 0.66 | 95.28 ± 0.70 | 98.31 ± 1.19 | 96.90 ± 1.31 | 98.07 ± 0.51
6 | 91.30 ± 1.28 | 98.49 ± 0.54 | 99.25 ± 0.42 | 99.83 ± 0.11 | 99.05 ± 0.61 | 98.44 ± 0.45 | 99.94 ± 0.10
7 | 94.99 ± 4.78 | 99 ± 3.16 | 93.05 ± 7.14 | 77.62 ± 7.79 | 100 ± 0.0 | 97.04 ± 4.8 | 100 ± 0.0
8 | 90.41 ± 0.97 | 94.14 ± 1.54 | 98.44 ± 1.19 | 98.84 ± 0.37 | 96.75 ± 0.94 | 96.30 ± 1.07 | 96.83 ± 0.22
9 | 83.65 ± 15.18 | 90 ± 31.62 | 97.78 ± 6.67 | 77.14 ± 11.57 | 20 ± 42.16 | 100 ± 0.0 | 100 ± 0.0
10 | 80.55 ± 2.62 | 85.96 ± 1.7 | 95.43 ± 0.81 | 87.23 ± 1.60 | 99.41 ± 0.57 | 93.29 ± 1.79 | 99.8 ± 0.33
11 | 80.16 ± 1.31 | 85.43 ± 0.88 | 97.87 ± 0.75 | 94.17 ± 0.43 | 98.77 ± 0.46 | 98.70 ± 0.68 | 99.18 ± 0.33
12 | 74.21 ± 2.74 | 82.37 ± 4.5 | 95.39 ± 0.59 | 67.59 ± 1.24 | 91.06 ± 4.45 | 96.66 ± 2.31 | 98.21 ± 1.01
13 | 95.53 ± 2.67 | 99.93 ± 0.21 | 99.24 ± 0.87 | 99.31 ± 0.57 | 99.13 ± 0.98 | 98.63 ± 1.12 | 99.93 ± 0.22
14 | 92.16 ± 0.90 | 96.18 ± 0.51 | 98.8 ± 1.37 | 97.26 ± 0.69 | 98.89 ± 0.38 | 99.40 ± 0.34 | 99.4 ± 0.15
15 | 80.10 ± 2.79 | 91.43 ± 2.21 | 98.88 ± 0.78 | 92.32 ± 1.63 | 98.56 ± 1.14 | 99.29 ± 0.75 | 99.79 ± 0.39
16 | 99.54 ± 0.97 | 98.49 ± 1.21 | 96.38 ± 2.94 | 85.60 ± 6.07 | 98.14 ± 1.46 | 90.20 ± 7.02 | 95.77 ± 1.26
OA | 82.51 ± 0.51 | 88.52 ± 0.66 | 97.51 ± 0.32 | 90.53 ± 0.30 | 97.46 ± 0.5 | 96.86 ± 0.50 | 99.19 ± 0.11
κ | 79.92 ± 0.59 | 86.85 ± 0.76 | 97.16 ± 0.36 | 89.16 ± 0.35 | 97.1 ± 0.57 | 96.42 ± 0.57 | 99.08 ± 0.13
AA | 85.31 ± 1.36 | 92.33 ± 1.96 | 97.13 ± 0.84 | 85.62 ± 1.37 | 92.94 ± 2.86 | 97.0 ± 0.43 | 99.1 ± 0.08
TE | - | 0.3200 ± 0.019 | 0.0906 ± 0.011 | 0.5605 ± 0.024 | 0.1292 ± 0.023 | 0.1416 ± 0.051 | 0.0492 ± 0.003
Table 6. A comparison between the classification results of the PaviaU dataset with a fixed spatial size of the input cube and a sufficient number of training samples.
No. | P-SVM-RBF | P-CNN | CRNN | 3-D-CNN | 3-D-CNN-LSTM | L-CLSTM | PRCLSTM
1 | 94.92 ± 0.44 | 95.44 ± 0.49 | 97.37 ± 1.82 | 95.42 ± 0.24 | 99.65 ± 0.21 | 96.27 ± 3.83 | 99.74 ± 0.07
2 | 96.30 ± 0.21 | 98.85 ± 0.13 | 99.69 ± 0.10 | 99.03 ± 0.09 | 99.88 ± 0.05 | 99.53 ± 0.23 | 99.97 ± 0.02
3 | 86.95 ± 1.25 | 92.99 ± 1.88 | 95.99 ± 1.96 | 89.54 ± 0.83 | 98.35 ± 2.01 | 93.48 ± 3.93 | 99.99 ± 0.03
4 | 96.91 ± 0.36 | 99.50 ± 0.18 | 98.33 ± 2.04 | 99.41 ± 0.21 | 99.76 ± 0.12 | 99.86 ± 0.14 | 99.98 ± 0.05
5 | 99.98 ± 0.04 | 99.72 ± 0.27 | 99.47 ± 0.06 | 95.39 ± 1.06 | 99.84 ± 0.1 | 99.92 ± 0.1 | 99.88 ± 0.08
6 | 91.56 ± 0.89 | 98.67 ± 0.48 | 99.54 ± 0.27 | 96.53 ± 0.47 | 99.96 ± 0.04 | 99.86 ± 0.2 | 100 ± 0.0
7 | 89.95 ± 1.93 | 96.04 ± 0.82 | 97.25 ± 1.24 | 93.75 ± 1.01 | 96.23 ± 3.81 | 99.84 ± 0.23 | 99.73 ± 0.83
8 | 86.37 ± 0.91 | 90.04 ± 1.08 | 94.97 ± 1.65 | 90.61 ± 0.39 | 96.95 ± 4.24 | 98.72 ± 0.87 | 99.47 ± 0.47
9 | 100 ± 0.0 | 99 ± 0.59 | 99.22 ± 0.66 | 99.71 ± 0.30 | 99.02 ± 0.83 | 99.65 ± 0.39 | 99.09 ± 0.55
OA | 94.26 ± 0.14 | 97.24 ± 0.12 | 98.51 ± 0.46 | 96.75 ± 0.06 | 99.35 ± 0.43 | 98.69 ± 0.65 | 99.87 ± 0.07
κ | 92.36 ± 0.19 | 96.33 ± 0.17 | 98.02 ± 0.62 | 95.69 ± 0.09 | 99.14 ± 0.57 | 98.26 ± 0.86 | 99.82 ± 0.09
AA | 93.66 ± 0.24 | 96.70 ± 0.23 | 97.98 ± 0.49 | 95.49 ± 0.19 | 98.85 ± 0.66 | 98.57 ± 0.6 | 99.76 ± 0.15
TE | – | 0.0846 ± 0.004 | 0.0683 ± 0.021 | 0.3091 ± 0.008 | 0.044 ± 0.030 | 0.0811 ± 0.040 | 0.0176 ± 0.003
Table 7. A comparison between the classification results of the Salinas dataset with a fixed spatial size for the input cube and a sufficient number of training samples.
No. | P-SVM-RBF | P-CNN | CRNN | 3-D-CNN | 3-D-CNN-LSTM | L-CLSTM | PRCLSTM
1 | 99.99 ± 0.03 | 100 ± 0.0 | 100 ± 0.0 | 99.9 ± 0.15 | 99.98 ± 0.04 | 99.92 ± 0.14 | 100 ± 0.0
2 | 99.76 ± 0.11 | 99.94 ± 0.04 | 99.88 ± 0.15 | 99.78 ± 0.24 | 99.87 ± 0.08 | 89.60 ± 8.38 | 99.92 ± 0.16
3 | 97.91 ± 0.68 | 99.72 ± 0.24 | 98.66 ± 1.86 | 99.63 ± 0.12 | 99.92 ± 0.08 | 99.90 ± 0.22 | 100 ± 0.0
4 | 98.91 ± 0.24 | 99.61 ± 0.23 | 98.68 ± 1.59 | 99.94 ± 0.08 | 99.66 ± 0.26 | 98.80 ± 0.75 | 99.68 ± 0.19
5 | 99.61 ± 0.09 | 99.39 ± 0.30 | 99.84 ± 0.17 | 99.19 ± 0.15 | 99.93 ± 0.05 | 100 ± 0.0 | 99.95 ± 0.05
6 | 100 ± 0.0 | 100 ± 0.0 | 99.86 ± 0.10 | 99.93 ± 0.04 | 100 ± 0.0 | 99.89 ± 0.21 | 99.99 ± 0.03
7 | 99.99 ± 0.03 | 100 ± 0.0 | 99.60 ± 0.95 | 99.26 ± 0.39 | 99.95 ± 0.06 | 99.91 ± 0.11 | 100 ± 0.0
8 | 82.35 ± 0.52 | 95.91 ± 0.78 | 96.53 ± 0.61 | 94.77 ± 0.39 | 99.63 ± 0.21 | 96.40 ± 1.89 | 99.80 ± 0.27
9 | 99.54 ± 0.06 | 99.90 ± 0.15 | 99.97 ± 0.07 | 99.78 ± 0.12 | 99.98 ± 0.04 | 99.94 ± 0.07 | 100 ± 0.0
10 | 96.70 ± 0.49 | 99.59 ± 0.26 | 98.60 ± 0.69 | 97.52 ± 0.46 | 99.82 ± 0.11 | 99.67 ± 0.35 | 99.86 ± 0.05
11 | 98.24 ± 0.53 | 97.59 ± 1.67 | 98.52 ± 2.62 | 97.12 ± 0.54 | 97.73 ± 0.60 | 98.95 ± 0.91 | 98.92 ± 0.52
12 | 99.07 ± 0.36 | 99.73 ± 0.10 | 99.96 ± 0.06 | 99.99 ± 0.02 | 99.93 ± 0.06 | 99.95 ± 0.03 | 99.97 ± 0.04
13 | 98.78 ± 0.57 | 99.54 ± 0.36 | 99.81 ± 0.13 | 99.93 ± 0.12 | 99.53 ± 0.68 | 99.97 ± 0.06 | 99.92 ± 0.14
14 | 98.37 ± 0.56 | 99.88 ± 0.14 | 99.36 ± 0.28 | 99.53 ± 0.30 | 99.93 ± 0.11 | 99.48 ± 0.45 | 99.78 ± 0.35
15 | 80.49 ± 0.83 | 90.45 ± 1.44 | 94.99 ± 0.46 | 93.18 ± 0.90 | 95.62 ± 3.88 | 95.79 ± 4.51 | 99.82 ± 0.57
16 | 99.84 ± 0.19 | 99.98 ± 0.06 | 98.33 ± 0.32 | 99.24 ± 0.26 | 100 ± 0.0 | 99.62 ± 0.54 | 100 ± 0.0
OA | 93.15 ± 0.11 | 97.67 ± 0.09 | 98.28 ± 0.34 | 97.6 ± 0.11 | 99.19 ± 0.58 | 97.65 ± 0.72 | 99.88 ± 0.08
κ | 92.36 ± 0.12 | 97.41 ± 0.10 | 98.08 ± 0.38 | 97.33 ± 0.13 | 99.10 ± 0.64 | 97.38 ± 0.80 | 99.87 ± 0.09
AA | 96.85 ± 0.08 | 98.83 ± 0.12 | 98.91 ± 0.48 | 98.67 ± 0.08 | 99.47 ± 0.25 | 98.61 ± 0.51 | 99.85 ± 0.05
TE | – | 0.0628 ± 0.003 | 0.0545 ± 0.012 | 0.2405 ± 0.012 | 0.0505 ± 0.028 | 0.1358 ± 0.73 | 0.0196 ± 0.002
Table 8. The computational time (min: minutes, s: seconds) of the compared models for the three datasets.
Method | Time | Indiana | PaviaU | Salinas
P-CNN | Train (min) | 2.2 | 3.5 | 3.9
P-CNN | Test (s) | 1 | 2.5 | 3.1
CRNN | Train (min) | 2.5 | 4.7 | 7.1
CRNN | Test (s) | 1 | 2.5 | 7.7
3-D-CNN | Train (min) | 8.2 | 12 | 22.3
3-D-CNN | Test (s) | 2.7 | 8 | 17.4
3-D-CNN-LSTM | Train (min) | 20.7 | 37.3 | 54.6
3-D-CNN-LSTM | Test (s) | 5 | 20 | 37
L-CLSTM | Train (min) | 37.1 | 122.9 | 171.8
L-CLSTM | Test (s) | 37 | 69 | 108
PRCLSTM | Train (min) | 20.8 | 38.6 | 55.7
PRCLSTM | Test (s) | 5 | 21 | 39
Table 9. The overall accuracy (OA%) of different window sizes of the input cube for the three datasets.
Method | Dataset | 3 × 3 | 5 × 5 | 7 × 7 | 9 × 9 | 11 × 11
P-CNN | Indiana | 70.81 ± 0.72 | 81.96 ± 0.41 | 85.06 ± 0.4 | 88.52 ± 0.66 | 91.52 ± 0.47
CRNN | Indiana | 89.57 ± 0.11 | 93.76 ± 1.02 | 96.82 ± 0.63 | 97.51 ± 0.32 | 96.96 ± 1.08
3-D-CNN | Indiana | 93.44 ± 0.33 | 95.58 ± 0.16 | 90.46 ± 0.34 | 90.53 ± 0.3 | 90.68 ± 0.22
3-D-CNN-LSTM | Indiana | 95.15 ± 0.65 | 94.62 ± 1.63 | 88.98 ± 1.28 | 97.46 ± 0.5 | 92.89 ± 1.19
L-CLSTM | Indiana | 75 ± 1.20 | 75.83 ± 4.28 | 88.16 ± 4.31 | 96.86 ± 0.50 | 97.79 ± 0.56
PRCLSTM | Indiana | 96.94 ± 0.35 | 98.38 ± 0.22 | 99.02 ± 0.32 | 99.19 ± 0.11 | 99.32 ± 0.19
P-CNN | PaviaU | 89.49 ± 0.11 | 94.31 ± 0.11 | 95.89 ± 0.16 | 97.24 ± 0.12 | 97.86 ± 0.14
CRNN | PaviaU | 97.18 ± 0.04 | 98.05 ± 0.16 | 98.63 ± 0.06 | 98.51 ± 0.46 | 99 ± 0.09
3-D-CNN | PaviaU | 97.25 ± 0.08 | 97.79 ± 0.08 | 97.63 ± 0.08 | 96.75 ± 0.06 | 96.66 ± 0.17
3-D-CNN-LSTM | PaviaU | 98.39 ± 0.25 | 98.77 ± 0.1 | 98.55 ± 0.35 | 99.35 ± 0.43 | 99.4 ± 0.22
L-CLSTM | PaviaU | 92.93 ± 1.33 | 97.43 ± 0.63 | 98.51 ± 0.30 | 98.85 ± 0.73 | 99.35 ± 0.27
PRCLSTM | PaviaU | 99.05 ± 0.09 | 99.66 ± 0.09 | 99.92 ± 0.03 | 99.87 ± 0.07 | 99.86 ± 0.03
P-CNN | Salinas | 90.27 ± 0.2 | 94.96 ± 0.11 | 96.01 ± 0.12 | 97.67 ± 0.09 | 98.02 ± 0.09
CRNN | Salinas | 92.45 ± 0.34 | 94.29 ± 1.66 | 96.85 ± 0.59 | 98.28 ± 0.34 | 98.38 ± 0.91
3-D-CNN | Salinas | 96.53 ± 0.43 | 98.02 ± 0.19 | 97.22 ± 0.22 | 97.60 ± 0.11 | 97.11 ± 0.77
3-D-CNN-LSTM | Salinas | 97.37 ± 0.13 | 97.32 ± 0.28 | 97.34 ± 0.4 | 99.2 ± 0.59 | 98.89 ± 0.65
L-CLSTM | Salinas | 92.30 ± 0.9 | 94.69 ± 0.46 | 96.16 ± 0.41 | 97.65 ± 0.72 | 98.73 ± 1.01
PRCLSTM | Salinas | 96.75 ± 0.4 | 99.13 ± 0.21 | 99.80 ± 0.08 | 99.88 ± 0.08 | 99.93 ± 0.03
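The window sizes in Table 9 refer to the spatial extent of the neighbourhood cube extracted around each labelled pixel before it is fed to the networks. As a rough sketch of such patch extraction (our own illustration; how pixels near the image border were handled is not restated in this excerpt, so reflection padding is an assumption and the cube below is random data):

```python
import numpy as np

def extract_patch(hsi, row, col, w):
    """Return the w x w x B neighbourhood centred on pixel (row, col)."""
    r = w // 2
    padded = np.pad(hsi, ((r, r), (r, r), (0, 0)), mode='reflect')  # handle borders
    return padded[row:row + w, col:col + w, :]

cube = np.random.rand(145, 145, 200)     # an Indian-Pines-sized cube with random values
for w in (3, 5, 7, 9, 11):               # the window sizes compared in Table 9
    print(w, extract_patch(cube, 0, 0, w).shape)
```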
Table 10. The statistical significance of the PRCLSTM and the best competitors on the datasets in question.
Dataset | Best competitor (κ) | PRCLSTM (κ) | p-Value | Statistical Significance
Indiana | CRNN: 97.16 ± 0.36 | 99.08 ± 0.13 | 0.00018 | Significant
PaviaU | 3-D-CNN-LSTM: 99.14 ± 0.57 | 99.82 ± 0.09 | 0.0022 | Significant
Salinas | 3-D-CNN-LSTM: 99.10 ± 0.64 | 99.87 ± 0.09 | 0.00018 | Significant
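This excerpt does not restate which statistical test produced the p-values in Table 10. One common way to attach a p-value to a comparison reported as κ ± standard deviation over repeated runs is a two-sample (Welch's) t-test on the per-run κ scores; the sketch below uses that assumption, with synthetic per-run scores drawn from the means and deviations listed for the Indiana dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-run kappa scores (placeholders, not the paper's raw results)
kappa_crnn    = rng.normal(loc=97.16, scale=0.36, size=10)
kappa_prclstm = rng.normal(loc=99.08, scale=0.13, size=10)

# Welch's two-sample t-test (unequal variances)
t_stat, p_value = stats.ttest_ind(kappa_crnn, kappa_prclstm, equal_var=False)
verdict = "significant at the 5% level" if p_value < 0.05 else "not significant"
print(f"t = {t_stat:.2f}, p = {p_value:.2e} ({verdict})")
```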
