
Neuroscience Informatics 2 (2022) 100034


Neuroscience Informatics
journal homepage: www.elsevier.com/locate/neuri

Artificial Intelligence in Brain Informatics

Multiclass skin cancer classification using EfficientNets – a first step towards preventing skin cancer
Karar Ali a,c,1, Zaffar Ahmed Shaikh a,1, Abdullah Ayub Khan a,b,1, Asif Ali Laghari b,∗,1
a Faculty of Computing Science & Information Technology, Benazir Bhutto Shaheed University, Lyari, Karachi, Sindh, Pakistan
b Department of Computer Science, Sindh Madressatul Islam University, Karachi, Sindh, Pakistan
c NexDegree Private Limited, House #421, Block 3, Siraj ud-Daulah Rd, Bahadurabad Bahadur Yar Jang CHS, Karachi, Sindh, Pakistan 2

ARTICLE INFO

Article history:
Received 24 October 2021
Received in revised form 1 December 2021
Accepted 6 December 2021

Keywords:
Convolutional neural networks
CNN
Deep learning
EfficientNet
HAM10000 dataset
Medical imaging
Multiclass skin cancer classification
Skin cancer classification
Transfer learning

ABSTRACT

Skin cancer is one of the most prevalent and deadly types of cancer. Dermatologists diagnose this disease primarily visually. Multiclass skin cancer classification is challenging due to the fine-grained variability in the appearance of its various diagnostic categories. On the other hand, recent studies have demonstrated that convolutional neural networks outperform dermatologists in multiclass skin cancer classification. We developed a preprocessing image pipeline for this work. We removed hairs from the images, augmented the dataset, and resized the imageries to meet the requirements of each model. By performing transfer learning on pre-trained ImageNet weights and fine-tuning the Convolutional Neural Networks, we trained the EfficientNets B0-B7 on the HAM10000 dataset. We evaluated the performance of all EfficientNet variants on this imbalanced multiclass classification task using metrics such as Precision, Recall, Accuracy, F1 Score, and Confusion Matrices to determine the effect of transfer learning with fine-tuning. This article presents the classification scores for each class as Confusion Matrices for all eight models. Our best model, the EfficientNet B4, achieved an F1 Score of 87 percent and a Top-1 Accuracy of 87.91 percent. We evaluated EfficientNet classifiers using metrics that take the high class imbalance into account. Our findings indicate that increased model complexity does not always imply improved classification performance. The best performance arose with intermediate complexity models, such as EfficientNet B4 and B5. The high classification scores resulted from many factors such as resolution scaling, data enhancement, noise removal, successful transfer learning of ImageNet weights, and fine-tuning [70–72]. Another discovery was that certain classes of skin cancer worked better at generalization than others using Confusion Matrices.

© 2021 The Author(s). Published by Elsevier Masson SAS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

According to the World Health Organization report, skin cancer is diagnosed in one out of three people worldwide. Furthermore, one in every five Americans will develop skin cancer during their lifetime, according to the Skin Cancer Foundation [1,75,76]. Melanoma and non-melanoma skin cancers are the most common types. Worldwide, approximately two to three million non-melanoma skin cancers and 132,000 melanoma skin cancers are diagnosed each year [1,2].

Melanoma is the deadliest type of skin cancer. A melanoma cell tends to travel to other body parts, including the lungs, liver, spleen, or brain [2,62]. Notably, metastatic melanoma ranks as the third most common origin of central nervous system (CNS) metastases. In particular, the advanced stage of melanoma can often cause brain metastasis, whose treatment requires radiation therapy and immunotherapy. Furthermore, melanoma accounts for almost 10 percent of brain metastases [3–5]. Melanoma is responsible for 10,000 deaths each year in the United States alone [6]. These figures appear bleak, but detecting cancer at an early stage reduces the risk of death significantly. Melanoma can be cured in nearly 95 percent of cases if detected early [7]. Thus, early-stage diagnosis of skin cancer is critical to prolonging patient survival.

The dermatologist's experience limits the visual evaluation of dermatoscopic images (or manual dermatoscopy). Due to the subjectivity of human decision-making, besides considerable inter-class similarity in skin lesions and other confounding factors, this method is prone to mistakes. General diagnostic procedures for identifying skin cancer, such as the ABCD (Asymmetry, Border, Color, Diameter) rule [8] or the 7-point checklist [9], can only guide the practitioner with certain rules of thumb. Medical professionals can misinterpret and misclassify the same dermatoscopic image sample as belonging to different kinds of skin cancer. Thus, an automated computational system needs to undertake large quantities of visual exploration using past data to imitate medical experts' expertise better, and maybe outperform them too, i.e., a data-driven strategy.

* Corresponding author. E-mail address: asifalilaghari@gmail.com (A.A. Laghari).
1 These authors contributed equally to this work.
2 https://g.co/kgs/LMViMh.

https://doi.org/10.1016/j.neuri.2021.100034
2772-5286/© 2021 The Author(s). Published by Elsevier Masson SAS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

With the current resurgence of interest in machine learning, deep learning, and neural networks, automated skin cancer classification has been an important topic of research [10–16]. Therefore, we leveraged the power of deep Convolutional Neural Networks (CNN) for skin lesion classification. Many machine learning-based techniques for binary classification of melanoma and non-melanoma exist. Sánchez-Reyes et al. [11] employed HSV color space, mathematical morphology, and a Gaussian filter for ROI identification to estimate four descriptors (symmetry, edge, color, and size) in dermatological and plain images. Using k-Fold Cross-Validation [66,67] and a multilayer perceptron as a classifier for malignant and benign melanoma, they attained Accuracies of 98.5 percent and 98.6 percent, respectively. Alquran et al. [12] segmented and extracted characteristics using the Gray Level Co-occurrence Matrix and the ABCD rule, followed by Principal Component Analysis. Finally, they employed a support vector machine to classify the data, achieving an Accuracy of 92.1 percent. Almaraz-Damian et al. [13] suggested a CAD system that incorporated characteristics based on texture, the ABCD rule, and a support vector machine for classification. Jafari et al. [14] developed an efficient pre-screening technique for binary classification of pigmented skin lesions into two groups, Melanoma and Benign. They employed directed filtering to enhance border detection of the lesion and then applied the ABCD rule for feature extraction. After that, they fed those discriminative features into a support vector machine for classification.

The preceding research has exhibited remarkable effectiveness for binary classification of Melanoma and Benign skin cancer. However, because of the significant inter-class similarity and intra-class variability, multiclass skin cancer classification (also known as automated skin cancer classification) remains problematic.

The success of neural networks in the ImageNet Large Scale Visual Recognition Challenge [10] fostered various deep learning-based solutions for multiclass skin lesion classification. Codella et al. [17] used sparse coding, support vector machines, and deep learning to achieve 93.1 percent Accuracy when analyzing archived images from the International Skin Imaging Collaboration (ISIC). These imageries corresponded to mel (i.e., melanoma), bkl (i.e., Benign), and nv (i.e., Atypical Nevi). Hosny et al. [18] employed data augmentation and transfer learning on AlexNet and achieved an Accuracy of 95.91 percent on the three-class ISIC multiclass dataset. Nugroho et al. [19] exploited the HAM10000 dataset to construct a skin cancer identification system utilizing a custom CNN. They utilized a scaled image of 90×120 resolution and attained an Accuracy of 78 percent. Bassi et al. [20] followed a deep-learning strategy on the HAM10000 dataset, i.e., transfer learning and fine-tuning. They resized HAM10000 dataset photos to 224×224 and applied a fine-tuned VGG-16 model, attaining an Accuracy of 82.8 percent. Moldovan et al. [21] employed a transfer-learning-based technique on the HAM10000 dataset and achieved an Accuracy of 85 percent in the first stage. Çevik et al. [22] scaled photos to 400×300 and leveraged the VGGNET architecture to build a bespoke CNN, obtaining an Accuracy of 85.62 percent.

In this paper, we investigate the classification performance of the EfficientNets B0-B7 on the HAM10000 dataset of dermatoscopic images [23,24]. The dataset consists of 10015 images belonging to seven skin cancer classes: akiec, bcc, bkl, df, mel, nv, and vasc. We used ImageNet pre-trained weights to perform transfer learning and fine-tune the CNNs for the HAM10000 dataset. Precision, Recall, Accuracy, F1 Score, Specificity, Roc Auc Score, and Confusion Matrices evaluated the EfficientNets B0-B7 performance on this imbalanced multiclass classification task. This paper also presents the per-class classification exactitudes in the form of Confusion Matrices for all eight models. In particular, our best model, the EfficientNet B4, achieved an 87 percent F1 Score and 87.91 percent Top-1 Accuracy. Our findings indicate the superior performance of the EfficientNets B0-B7 for multiclass skin cancer classification on the HAM10000 dataset.

The remainder of the paper is structured as follows. The HAM10000 dataset and its distribution appear in the Dataset section. The Methodology section depicts the research approach. EfficientNets B0-B7 implementation and training details emerge in the Implementation section. The performance evaluation measures for fine-tuning CNNs appear in the Performance Evaluation Metrics section. The Results and Discussion section explains the study's findings. Finally, the Conclusion section brings the article to a close.

2. Dataset

This section describes the HAM10000 dataset and its distribution for training, validation, and testing.

2.1. HAM10000 dataset

The standard HAM10000 dataset has benchmarked our approach. HAM10000 stands for Human Against Machine with 10000 training images. The final dataset consists of 10015 dermatoscopic images, released through the ISIC archive as a training set by ISIC [23]. In the HAM10000 dataset, the pigmented skin lesion classes are akiec, bcc, bkl, df, mel, nv, and vasc. The number and percentages of images in each class appear in Table 1. It is quite evident that there is a high class imbalance in the dataset, with more than two-thirds of the imageries belonging to the nv class.

Table 1
Dataset distribution.

Diagnostic category  Number of images  Percentage
akiec  327  3.27%
bcc  514  5.13%
bkl  1099  10.97%
df  115  1.15%
mel  1113  11.11%
nv  6705  66.95%
vasc  142  1.42%

2.2. Dataset distribution

We divided the HAM10000 dataset into three parts: training (72 percent), validation (8 percent), and testing (20 percent). The Testing set helped assess the effectiveness of our trained models. We made sure no image had duplicates when it came to the Validation and Testing sets. Table 2 shows the class-wise distribution of the HAM10000 dataset across the three sets; one way to construct such a split is sketched below.
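The paper does not publish its splitting code, so the following is only a minimal sketch of one way to obtain a leakage-free 72/8/20 split with scikit-learn. It assumes the standard HAM10000_metadata.csv file with lesion_id and image_id columns, and it interprets "no duplicates" as grouping by lesion_id (HAM10000 contains several images of the same lesion); both the file name and that interpretation are assumptions.

```python
# Sketch only: group-aware 72/8/20 split so images of one lesion never
# land in two different sets. File name and grouping key are assumptions.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

meta = pd.read_csv("HAM10000_metadata.csv")   # columns include lesion_id, image_id, dx

# Carve out the 20% test split first, keeping each lesion's images together.
outer = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=42)
trainval_idx, test_idx = next(outer.split(meta, groups=meta["lesion_id"]))
trainval, test = meta.iloc[trainval_idx], meta.iloc[test_idx]

# 8% of the full set for validation = 10% of the remaining 80%.
inner = GroupShuffleSplit(n_splits=1, test_size=0.10, random_state=42)
train_idx, val_idx = next(inner.split(trainval, groups=trainval["lesion_id"]))
train, val = trainval.iloc[train_idx], trainval.iloc[val_idx]

print(len(train), len(val), len(test))        # roughly 7211 / 823 / 1981 images
```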
3. Methodology

This section explains the preprocessing image pipeline that we built to remove hairs from images, augment the dataset, and resize images according to the requirements of each model of the EfficientNets B0-B7. The EfficientNet model architecture, the modifications in model architectures, and the transfer-learning process, which trains the HAM10000 dataset on pre-trained weights of ImageNet and fine-tunes the CNNs, are also explained.


Table 2
Class-wise distribution of the HAM10000 dataset.

Diagnostic category  Training  Validation  Testing
akiec  248  30  49
bcc  370  48  96
bkl  775  90  234
df  82  8  25
mel  883  85  145
nv  4745  550  1410
vasc  108  12  22
Total  7211  823  1981

3.1. Image preprocessing pipeline

In the HAM10000 dataset, each image has a dimension of 600×450. The images were resized (resolution scaling) based on the EfficientNet [24,39] variant used for training.

As the images in the HAM10000 dataset consist of pigmented skin lesions and our goal is the classification of skin cancer classes, the presence of hairs is not relevant. Hair in the images contributes to the noise: the CNN would have to learn that the arbitrary strands spread across a skin lesion image are irrelevant to our task. There is also a danger of the CNN model discovering correlations between the noise and the target (class of skin cancer). If we do not remove this noise from the images, the CNN has to learn to ignore it by gradient descent across a large dataset of images. Due to limitations in the size of the dataset (only 10015 images) and in computation steps, image preprocessing removed most of the noise while preserving the signal in the image (Fig. 1) using image inpainting [25,72–74]. The inpainting algorithm relies on the Fast Marching Method [26,27]. We need to create a mask corresponding to the area which must be inpainted (in our case, the hair strands in each image); the blackhat transform [28] produced this mask. Through these two algorithms, we obtained a cleaner dataset of skin lesion images. Samples of the algorithm output for each class are in Fig. 1.

Fig. 1. Preprocessing pigmented skin lesion images: (a) original image for each class; (b) preprocessed image for each class.

We also increased the dataset size through image augmentation. The size of the dataset has usually been an issue in the medical domain, as neural nets require a colossal amount of labeled data for training. Labeling medical images is expensive and requires a qualified medical professional, unlike other domains where non-experts can perform the labeling of the data. Previously, the importance of data augmentation for skin lesion analysis has been established [29]. We artificially augmented the dataset size through rotation, zooming, and horizontal and vertical flipping. Both preprocessing steps are sketched below.
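The paper does not give its preprocessing code; the snippet below is a hedged sketch of the two steps just described, using OpenCV for the blackhat-plus-inpainting hair removal and a Keras ImageDataGenerator for the augmentations. The kernel size, threshold, and augmentation ranges are illustrative assumptions, not values reported by the authors.

```python
import cv2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def remove_hairs(image_bgr, kernel_size=17, threshold=10):
    """Mask hair strands with a blackhat transform, then inpaint them."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # The blackhat transform highlights thin dark structures (hairs) on skin.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Binarize the blackhat response into the inpainting mask.
    _, mask = cv2.threshold(blackhat, threshold, 255, cv2.THRESH_BINARY)
    # Telea's Fast-Marching-Method inpainting fills the masked pixels
    # from their surroundings.
    return cv2.inpaint(image_bgr, mask, 3, cv2.INPAINT_TELEA)

# Augmentation limited to the label-preserving transforms named above;
# the exact parameter ranges are not reported in the paper.
augmenter = ImageDataGenerator(
    rotation_range=40,
    zoom_range=0.2,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode="nearest",
)
```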
3.2. EfficientNet model architecture

CNNs can be scaled to achieve better Accuracy. However, the scaling process was never thoroughly investigated: it entailed an iterative manual tuning process, either arbitrarily increasing the depth or width of the CNN or using a higher input image resolution. The EfficientNet family of architectures was developed by [24] with the intent to find a suitable method to scale CNNs for better Accuracy (i.e., model performance) and efficiency (i.e., model parameters and FLOPS). The authors propose a compound scaling method that uses a fixed set of coefficients to uniformly scale width, depth, and resolution. That method allowed the authors to produce an efficient CNN architecture, which they named EfficientNet B0.

Further, they obtained EfficientNets B1-B7 by scaling the baseline network (i.e., EfficientNet B0) using the same compound model scaling method. Thus, [24] presents eight CNN architectures at different scales and their performance on the ImageNet dataset [24]. While the EfficientNet B0 architecture has 5.3 million parameters and takes a 224×224 image as input, EfficientNet B7 has 66 million parameters and takes a 600×600 image as input.

Scaling the depth of the network allows CNNs to capture richer and more complex features; however, network training becomes more challenging due to the vanishing gradient problem [30]. Scaling the width of the network allows the network to capture more fine-grained features and is also easier to train, but wide and shallow networks are incapable of capturing high-level features. Finally, higher resolution images allow CNNs to capture finer-grained patterns, at the cost of more computational power and memory. In our experiments, we tested the performance of the eight EfficientNet models (EfficientNets B0-B7) on the HAM10000 dataset [23]. The compound scaling rule is reproduced below.
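For reference, the compound scaling rule of [24] fixes one set of coefficients found by grid search and scales all three dimensions together with a single user-chosen compound coefficient φ:

```latex
d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi},
\quad \text{s.t.} \quad \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2,
\quad \alpha \ge 1,\; \beta \ge 1,\; \gamma \ge 1
```

where d, w, and r multiply the baseline depth, width, and input resolution. [24] report α = 1.2, β = 1.1, γ = 1.15 for the B0 baseline, so total FLOPS grow by roughly 2^φ at each step from B1 up to B7.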
3.3. Transfer learning

Transfer learning, also known as domain adaptation, is a high-level concept that utilizes the knowledge acquired in one domain or task to solve related tasks. We leveraged the knowledge previously learned by models trained on the ImageNet dataset and used their parameters for our task. However, our approach evaluates EfficientNet models on a medical image dataset of pigmented skin lesions. Due to the difference between the domains of the dataset images, we cannot directly use the pre-trained weights for inference and expect high performance. Thus, we performed a fine-tuning process, in which the trained model's parameters are tweaked precisely to adapt to the new domain of images.

There are many ways to do fine-tuning. These include fine-tuning all or some parameters of the last few layers of a pre-trained model [31,32], or utilizing a pre-trained model as a fixed feature extractor whose features feed a separate classifier, e.g., a support vector machine [33]. We employed both transfer learning and fine-tuning in EfficientNets B0-B7.

3.4. Modifications in network architecture

The top three layers of the EfficientNet models (EfficientNets B0-B7) were suited to the ImageNet dataset. Therefore, new layers for our use case (seven-class skin cancer prediction) replaced the top three layers. In particular, EfficientNets B0-B6 were overfitting with the top three-layer structure. For this reason, we realized the necessity of adding further dense, batch normalization, and dropout layers at the top of each model after removing its top three layers. Thus, the top three layers, i.e., the Global Average Pooling 2D, dropout, and dense layers of each model, were entirely replaced with the layers defined in Table 3.


Table 3
Modified layer structure for EfficientNets B0-B6.

Layer name  Layer type  Size of feature map  Activation function
Global Avg Pooling 2D  Global Average Pooling 2D  Varies for all models  N/A
Dense_1  Dense  512  swish
BatchNormalization_1  BatchNormalization  512  N/A
Dropout_1  Dropout (0.5)  512  N/A
Dense_2  Dense  256  swish
BatchNormalization_2  BatchNormalization  256  N/A
Dropout_2  Dropout (0.5)  256  N/A
Dense_output  Dense  7  softmax

Fig. 2. The official and modified block diagrams of EfficientNet B0.

The visualized modification for EfficientNet B0 is in Fig. 2. The figure shows the block diagram of the official EfficientNet B0 baseline network and the improvements we made at the top of the EfficientNet B0 architecture, highlighted with a blue border. The base model (i.e., the feature extractor blocks) was kept unchanged; instead, we modified the top layers of the EfficientNet B0 architecture. The official B0 network has three top layers (i.e., global average pooling 2D, dropout, and a dense layer), causing the model to overfit. We modified the top layers and added the additional dense, batch normalization, and dropout layers at the top of the B0 base architecture, where for the dense (i.e., fully connected) layers we used the swish activation function [34], as used by [24], instead of the RELU activation function. A blue border around the top layers in Fig. 2 highlights the changes. Like EfficientNet B0, all other official EfficientNet models (i.e., B1-B7) had three layers (i.e., global average pooling 2D, dropout, dense) at the top, and the same modifications appeared in EfficientNets B1-B6: the same eight layers (i.e., keeping the same batch normalization, dropout rate, and feature map size of the dense layers) added at the top of EfficientNet B0 replaced the top three layers of EfficientNets B1-B6. Please refer to the Supporting information section at the end of this article.

For EfficientNet B7, we removed the top three layers, i.e., global average pooling 2D, dropout, and the dense output layer for 1000 classes, and replaced them with global average pooling 2D followed by a dropout of 0.5 and a dense output layer for seven classes. The modification standard used for EfficientNets B0-B6 was not suitable for EfficientNet B7: this high-complexity model was severely overfitting with the two additional dense blocks. Thus, the dropout and output layer directly follow the global average pooling to alleviate the overfitting issue. The methodology of fine-tuning EfficientNets B0-B7 is in Section 4.

The initialization of all model parameters employed ImageNet pre-trained weights. A sketch of the modified head follows.
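Below is a minimal Keras sketch of the modified B0 of Fig. 2 and Table 3, assuming TensorFlow 2.x, where the EfficientNet family ships in tf.keras.applications; the input shape follows the B0 row of Table 4.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Base model without the original ImageNet top (GAP / dropout / 1000-way dense).
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

# Re-create the head of Table 3 on top of the unchanged feature extractor.
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(512, activation="swish")(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(256, activation="swish")(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(7, activation="softmax")(x)   # seven lesion classes

model = models.Model(inputs=base.input, outputs=outputs)
```

The same head is attached to B1-B6 (with each variant's input size from Table 4), while for B7 only the GlobalAveragePooling2D, Dropout(0.5), and seven-way Dense layers would be kept, as described above.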
4. Implementation

To ensure reproducibility, we provide EfficientNets B0-B7 implementation and training details in this section.

4.1. Learning rate range

One of the most critical hyper-parameters for fast and stable neural network training is the learning rate. It decides how much of the loss gradient is applied to the current parameters to move them in the direction of lower loss.

We obtained reasonable bounds of the learning rate (for a model and dataset pair) using the method described in [35]. It was done by increasing the learning rate of a network (each trained for a few epochs) across a particular range of values. In our experiment, we increased the learning rate from 10⁻¹⁰ to 10¹ and observed the change in validation loss.


Fig. 3 shows the final plot of the validation loss versus the learning rate. As can be seen, effective model training occurs in the range of values from 0.0001 to 0.01; beyond this range, the validation loss is high.

Fig. 3. Obtaining reasonable bounds of the learning rate by plotting validation loss across different learning rates.

We used early stopping for all models [36]. The implementation relied on Keras, with TensorFlow serving as the backend. The models were trained on Google Colab, which has a 12 GB NVIDIA Tesla K80 GPU. The learning rate sweep is sketched below.
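A hedged sketch of the learning rate range test of [35] as a Keras callback follows, assuming TensorFlow 2.x. The ramp here is log-spaced so the resulting (learning rate, loss) pairs plot readably across the decades of Fig. 3; the step count is an assumption.

```python
import numpy as np
import tensorflow as tf

class LRRangeTest(tf.keras.callbacks.Callback):
    """Raise the learning rate every batch and record the resulting loss."""
    def __init__(self, min_lr=1e-10, max_lr=10.0, num_steps=1000):
        super().__init__()
        self.lrs = np.geomspace(min_lr, max_lr, num_steps)  # ramp schedule
        self.step, self.history = 0, []

    def on_train_batch_begin(self, batch, logs=None):
        lr = self.lrs[min(self.step, len(self.lrs) - 1)]
        tf.keras.backend.set_value(self.model.optimizer.learning_rate, lr)

    def on_train_batch_end(self, batch, logs=None):
        self.history.append(
            (float(self.lrs[min(self.step, len(self.lrs) - 1)]), logs["loss"]))
        self.step += 1

# Usage sketch: train for a few epochs, then plot lr vs. loss from
# finder.history to reproduce a curve like Fig. 3.
# finder = LRRangeTest(); model.fit(train_gen, epochs=3, callbacks=[finder])
```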
4.2. Fine-tuning B0-B5

For EfficientNets B0-B5, all layers received fine-tuning (i.e., all layers received gradient updates while training), starting from pre-trained weights that came from ImageNet, which contains images from 1000 different classes of day-to-day objects. The HAM10000 dataset, in contrast, is a highly domain-specific medical image dataset. Therefore, fine-tuning of all convolutional layers happened due to the significant differences between the two datasets' distributions. We used a Stochastic Gradient Descent (SGD) optimizer and a default learning rate decay for these models.

4.3. Fine-tuning B6-B7

For EfficientNets B6-B7, training was more unstable under the above conditions. Thus, fine-tuning happened in two steps. In the first step, we performed fine-tuning only for the newly added layers while keeping the convolutional blocks frozen; that is, the convolutional base received no gradient updates. Once the recently added layers had acquired some trained weights, for the second step we unfroze the last four convolutional blocks of the base model while keeping all other blocks frozen and performed fine-tuning again. Unfreezing only the last four convolutional blocks took computing limitations into account.

Also, the model complexity of the official EfficientNets B6-B7 (i.e., the models' ability to overfit on our dataset) is much larger, as EfficientNets B6 and B7 contain more parameters than EfficientNets B0-B5, so it was reasonable not to fine-tune all layers. In addition, rather than using SGD, we used the Adam optimizer, as [37,65] did. We found that in terms of stability and performance, the Adam optimizer [37,68] yielded better results than SGD while training big models like EfficientNets B6 and B7. Lastly, the polynomial decay learning rate scheduler [38,69,70] allowed for more stable convergence. We have summarized the model-specific modifications in image size, batch size, and other hyperparameters in Table 4; a sketch of this two-step schedule follows the table.


Table 4
Model-specific modifications.

Model variant Image size Batch size* Learning rate Optimizer Learning rate decay
EfficientNet B0 224×224 32 0.001 SGD SGD decay rate
EfficientNet B1 240×240 32 0.001 SGD SGD decay rate
EfficientNet B2 260×260 32 0.001 SGD SGD decay rate
EfficientNet B3 300×300 16 0.001 SGD SGD decay rate
EfficientNet B4 380×380 8 0.001 SGD SGD decay rate
EfficientNet B5 456×456 4 0.0006 SGD SGD decay rate
EfficientNet B6 528×528 16 0.0025 Adam Polynomial-decay
EfficientNet B7 600×600 16 0.0025 Adam Polynomial-decay
*
The batch size was varied for higher complexity models due to limitations in computational resources.
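The following is a minimal sketch of the two-step schedule for B6-B7 under stated assumptions: base and model are the objects from the B0 sketch above rebuilt for B6/B7, train_gen and val_gen are hypothetical data generators, the epoch counts are illustrative, and the decision that "the last four blocks" means blocks 4-7 plus the top convolution in Keras' EfficientNet layer naming is our reading, not the authors'.

```python
import tensorflow as tf

# Polynomial decay from Table 4's initial rate; decay_steps is an assumption.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=0.0025, decay_steps=10_000, end_learning_rate=1e-5)

# Step 1: train only the newly added head; the convolutional base is frozen.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=5)

# Step 2: unfreeze the last four convolutional blocks and fine-tune again.
# Recompiling is required after changing trainable flags.
base.trainable = True
for layer in base.layers:
    if not layer.name.startswith(("block4", "block5", "block6", "block7", "top")):
        layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=15)
```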

5. Performance evaluation metrics

This section describes the evaluation metrics used to assess each model's performance, namely Precision, Recall, Accuracy, F1 Score, Specificity, Roc Auc Score, and Confusion Matrices [71]. There are no universal standards for evaluating the performance of a classification model; in the literature, we see a standard set of performance measures whose choice depends upon the user's requirements. When classes are highly imbalanced (as shown in Table 1), the metrics Precision, Recall, Accuracy, F1 Score, Specificity, Roc Auc Score, and Confusion Matrices are useful. As a result, we relied on them.

5.1. Confusion matrix

We created an N×N table (where N is the number of classes) that summarizes how well a classification model's predictions performed. It is a correlation matrix between the actual label and the classification of the model (the predicted label). The results in the Confusion Matrices fall into one of four categories. A True Positive (TP) is when the model correctly predicts an image's positive class. When the model incorrectly predicts the positive class of an image, it generates a False Positive (FP). A True Negative (TN) corresponds to the case when the model correctly predicts an image's negative class. When the model mispredicts the negative class, the result is a False Negative (FN). In the case of a multiclass problem, the positive class is the label for which the calculation is being performed, and the negative class comprises the remaining labels.

5.2. Accuracy

The percentage of correctly predicted image classes out of the total number of images is referred to as Accuracy [39]. It is the most straightforward performance metric. Accuracy is, however, only valid when the class distribution is symmetric, that is, when the number of images (or observations) in each class is roughly the same [40]. For each of EfficientNets B0-B7, we report Top-1, Top-2, and Top-3 Accuracy. Top-1 Accuracy is the conventional Accuracy: the class with the highest probability in the model's prediction matches the expected, i.e., actual, class [41]. Top-k Accuracy means that any of the model's top k predictions by probability must match the actual class of the image for it to be considered a correct prediction [42].

5.3. Precision

Precision amounts to the fraction of images correctly labeled as belonging to the positive class divided by the total number of images labeled by the model as belonging to the positive class [43,44], as follows:

\mathrm{Precision} = \frac{\sum_{i=1}^{l} tp_i}{\sum_{i=1}^{l} (tp_i + fp_i)}.

5.4. Recall

The proportion of actual positives correctly identified by the model comes from the Recall metric. Alternatively, it is the number of true positives divided by the number of images in the positive class [45–47]:

\mathrm{Recall} = \frac{\sum_{i=1}^{l} tp_i}{\sum_{i=1}^{l} (tp_i + fn_i)}.

5.5. F1 score

By the definitions of Precision and Recall, there seems to be a trade-off between the two measures: when we improve Recall, we reduce Precision, and vice versa [48–50]. Depending upon the application domain and the user requirement, we might require maximizing one over the other. However, we use the F Beta Score in case we want an optimal blend of both metrics (i.e., to assign different weights to each metric). The F Beta Score amounts to the weighted harmonic mean between Precision and Recall [51]; it favors Recall over Precision by a factor of Beta. Precision and Recall are both equally important in this context, so with Beta = 1 it becomes the F1 Score, the harmonic mean of Precision and Recall. Higher F1 Score values indicate good predictive power [52–55]. In the case of a multiclass classification problem, we calculated the F1 Score over all classes to get a holistic view of the model's performance, as per the expression below. Note that F1 Score results are not necessarily in between the Precision and Recall results.

F_{\beta}\ \mathrm{Score} = \frac{(\beta^{2} + 1)\,\mathrm{Precision} \cdot \mathrm{Recall}}{\beta^{2}\,\mathrm{Precision} + \mathrm{Recall}}.

5.6. Specificity

Specificity is a metric to determine the model's percentage of actual negative cases correctly identified as negative [56–58]. Specificity is the ratio of TN to the sum of TN and FP. Higher Specificity implies a higher TN value and a lower FP value [59–61], according to:

\mathrm{Specificity} = \frac{\sum_{i=1}^{l} tn_i}{\sum_{i=1}^{l} (tn_i + fp_i)}.

5.7. Roc_Auc score

Roc_Auc, or AUC, is also known as AUROC, the area under the receiver operating characteristic curve. The Roc_Auc Score represents the degree of separability: it indicates how well the model can distinguish between classes. This metric usually appraises a binary classification task [62–64]. For a multiclass classification problem, a one-versus-all methodology is used to obtain Roc_Auc Scores. The range of the Roc_Auc Score is from 0 to 1, and a model with a Roc_Auc Score close to 1 is considered the best model. We employed the one-versus-all methodology to obtain a weighted Roc_Auc Score for our models. A sketch computing these metrics appears below.
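One way to compute all of the above with scikit-learn (0.24 or newer, for top_k_accuracy_score) is sketched below; probs, test_gen, and y_true are hypothetical placeholders for the softmax outputs on the held-out test set and its integer labels.

```python
import numpy as np
from sklearn.metrics import (classification_report, confusion_matrix,
                             roc_auc_score, top_k_accuracy_score)

probs = model.predict(test_gen)                  # shape (N, 7) softmax scores
y_pred = probs.argmax(axis=1)                    # Top-1 predictions

print(classification_report(y_true, y_pred, digits=3))  # per-class P/R/F1
cm = confusion_matrix(y_true, y_pred)                   # 7x7 matrix, as in Fig. 4

top2 = top_k_accuracy_score(y_true, probs, k=2)         # Top-k Accuracy
# Weighted one-vs-rest ROC AUC, matching the multiclass treatment above.
auc = roc_auc_score(y_true, probs, multi_class="ovr", average="weighted")

# Per-class Specificity = TN / (TN + FP), derived from the confusion matrix.
fp = cm.sum(axis=0) - np.diag(cm)
tn = cm.sum() - (cm.sum(axis=1) + fp)
specificity = tn / (tn + fp)
```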

6. Results and discussion

Table 5 exhibits the Top-1, Top-2, and Top-3 Accuracy results of EfficientNets B0-B7 assessed on the HAM10000 dataset.

Table 5
Accuracy test results of EfficientNets B0-B7.

Models  Top-1 accuracy  Top-2 accuracy  Top-3 accuracy
EfficientNet B0  83.02%  93.80%  97.39%
EfficientNet B1  83.69%  93.90%  97.34%
EfficientNet B2  83.95%  93.75%  97.39%
EfficientNet B3  83.90%  94.63%  97.65%
EfficientNet B4  87.91%  95.67%  97.81%
EfficientNet B5  87.62%  94.59%  97.55%
EfficientNet B6  85.36%  94.01%  96.97%
EfficientNet B7  85.52%  94.84%  98.12%

The EfficientNet B4 reports the highest Top-1 and Top-2 Accuracies among all eight models. The middle-level complexity models, EfficientNets B4 and B5, beat EfficientNets B6 and B7, which in turn beat the simpler models, EfficientNets B0-B3. As a result, the ranking of Top-1 Accuracy is B4∼B5 > B6∼B7 > B1∼B2∼B3 > B0. There is a difference of roughly 5 percent in Top-1 Accuracy between the best model, EfficientNet B4, and the worst model, EfficientNet B0. Across all models, the Top-2 Accuracy is practically the same, with a maximum absolute deviation of 1.25 percent; at the same time, a maximum absolute deviation of 0.9 percent emerges for Top-3 Accuracy.

As shown in the Dataset section, the class distribution of the HAM10000 dataset is very unsymmetrical, i.e., there is a high class imbalance. Table 6 illustrates the Precision, Recall, F1 Score, Specificity, and Roc Auc scores for each EfficientNet variant on the HAM10000 dataset.

Table 6
Model-wise Precision, Recall, F1 Score, Specificity, and Roc_Auc comparisons.

Models  Precision  Recall  F1 score  Specificity  Roc_Auc
EfficientNet B0  84%  83%  82%  84%  95.94%
EfficientNet B1  85%  84%  83%  84%  96.10%
EfficientNet B2  85%  84%  84%  86%  96.36%
EfficientNet B3  87%  84%  84%  91%  96.67%
EfficientNet B4  88%  88%  87%  88%  97.53%
EfficientNet B5  88%  88%  87%  88%  97.54%
EfficientNet B6  86%  85%  85%  89%  96.76%
EfficientNet B7  86%  86%  85%  87%  97.23%

Once again, the same pattern occurs for all these performance criteria, except Specificity: EfficientNets B4 and B5 beat EfficientNets B6 and B7, and we observed the general order B4∼B5 > B6∼B7 > B1∼B2∼B3 > B0. For Specificity, we noticed a different order, i.e., B3 > B6 > B4∼B5 > B7 > B2 > B0∼B1. Here, EfficientNet B3 outperformed the other models with a Specificity score of 91 percent. The highest performing models, i.e., EfficientNets B4 and B5, achieved a Precision of 88 percent, Recall of 88 percent, F1 Score of 87 percent, Specificity of 88 percent, and Roc Auc Score of 97.5 percent. In contrast, a dramatic difference of 5 percent also appeared between the F1 Score of the best model, EfficientNet B4, and that of the worst model, EfficientNet B0.

The size of the dataset and the model complexity justify this observed performance trend. With models of higher complexity, there is a higher probability of attaining a better performance metric; however, at the same time, they are more prone to overfitting the dataset. HAM10000 provides few images for a deep learning dataset (in comparison to the benchmark ImageNet collection of 1 million images).


Thus, the outstanding performance witnessed from the mid-level complexity models (EfficientNets B4 and B5) makes intuitive sense: lower complexity models such as EfficientNets B0-B3 have less discriminating capability, whereas the higher complexity models EfficientNets B6-B7 overfit on our dataset.

Fig. 4. Confusion matrices of EfficientNets B0-B7.

Fig. 4 illustrates the Confusion Matrices of EfficientNets B0-B7. Some general findings are that all models performed well on the majority class nv (Accuracy > 94 percent). The photos belonging to the akiec class were misclassified fairly commonly as the mel and df classes. Even though vasc was a minority class (only 1.42 percent of the photos belonged to this class), the average Accuracy across all eight models was 85.62 percent. The models, on average, fared poorly on the akiec (34.12 percent) and df (24 percent) classes. They delivered best on the nv (95.75 percent) and vasc (85.62 percent) classes. Thus, using the Confusion Matrices, we notice that the performance of the eight EfficientNet models differs substantially across the seven classes of cancer.


Table 7
Comparative study of the HAM10000 dataset.

Study  Preprocessing  Image type  No. of classes  Method  Accuracy
[19]  Yes  RGB  7  CNN  78%
[20]  Yes  RGB  7  CNN-transfer learning  82.8%
[21]  Yes  RGB  7  CNN-transfer learning  85%
[22]  Yes  RGB  7  CNN  85.62%
Our proposed EfficientNet B4  Yes  RGB  7  CNN-transfer learning  87.9%

In Table 7, the performance of our best model, EfficientNet B4, is compared with existing studies, all of which used the same HAM10000 dataset. Nugroho et al. [19] and Bassi et al. [20] attained Accuracies of 78 percent and 82.8 percent, respectively. In comparison, Moldovan et al. [21] and Çevik et al. [22] had Accuracies of 85 percent and 85.62 percent, correspondingly. This comparison reveals that our suggested method has outperformed existing multiclass skin cancer classification methods.

7. Conclusion

Skin cancer is one of the most prevalent and severe cancers. This condition is primarily diagnosed visually by dermatologists. Due to the fine-grained diversity in the appearance of its numerous diagnostic categories, multiclass skin cancer classification is a tough undertaking [75,76]. In recent studies, on the other hand, CNNs have outperformed dermatologists in multiclass skin cancer classification. For this effort, we constructed a preprocessing image pipeline in which we eliminated hairs from the photos, augmented the dataset, and scaled the images according to each model's needs. We trained the EfficientNets B0-B7 on the HAM10000 dataset by performing transfer learning on pre-trained weights of ImageNet and fine-tuning the Convolutional Neural Networks. To analyze the influence of transfer learning and fine-tuning, we evaluated the performance of all EfficientNet variants on this imbalanced multiclass classification problem using measures such as Precision, Recall, Accuracy, F1 Score, and Confusion Matrices. This study shows the per-class classification scores as Confusion Matrices for all eight models. In particular, our most robust model, the EfficientNet B4, scored an 87 percent F1 Score and 87.91 percent Top-1 Accuracy. As far as we know, this is the first study to examine the performance of EfficientNets B0-B7 on the HAM10000 dataset and the skin cancer classification challenge. We tested the EfficientNet classifiers using criteria that account for the high class imbalance. Our results demonstrate that more model complexity does not necessarily equal better classification performance [77,78]. We noticed the best performance with middle-level complexity models such as EfficientNets B4 and B5. The high classification scores resulted from a combination of factors, viz. the usage of resolution scaling, data augmentation, noise removal, successful transfer learning of ImageNet weights, and fine-tuning. Lastly, the Confusion Matrices revealed that some classes of skin cancer showed greater generalization performance than others. This suggests room for further advancement with models fine-tuned for any given kind of malignancy.

Declaration of competing interest

The authors did not have any conflict of interest.

Appendix A. Supplementary material

Supplementary material related to this article can be found online at https://doi.org/10.1016/j.neuri.2021.100034.

References

[1] World Health Organization, Radiation: ultraviolet (UV) radiation and skin cancer – how common is skin cancer, https://www.who.int/news-room/q-a-detail/radiation-ultraviolet-(uv)-radiation-and-skin-cancer#, 2017. (Accessed 12 June 2021).
[2] N. Nordmann, M. Hubbard, T. Nordmann, P.W. Sperduto, H.B. Clark, M.A. Hunt, Effect of gamma knife radiosurgery and programmed cell death 1 receptor antagonists on metastatic melanoma, Cureus 9 (2017).
[3] U. Chukwueke, T. Batchelor, P. Brastianos, Management of brain metastases in patients with melanoma, J. Oncol. Pract. 12 (2016) 536–542.
[4] M.R. Lekkala, S. Mullangi, Malignant melanoma metastatic to the central nervous system, StatPearls, 2020, https://www.statpearls.com/ArticleLibrary/viewarticle/24930. (Accessed 12 June 2021).
[5] S. Morais, A. Cabral, G. Santos, N. Madeira, Melanoma brain metastases presenting as delirium: a case report, Arch. Clin. Psychiatry 44 (2017) 53–54.
[6] H.W. Rogers, M.A.S. Weinstock, R. Feldman, B.M. Coldiron, Incidence estimate of non-melanoma skin cancer (keratinocyte carcinomas) in the US population 2012, JAMA Dermatol. 151 (2015) 1081–1086.
[7] M. Thörn, F. Ponté, R. Bergström, P. Sparén, H.O. Adami, Clinical and histopathologic predictors of survival in patients with malignant melanoma: a population-based study in Sweden, J. Natl. Cancer Inst. 86 (1994) 761–769.
[8] W. Stolz, A. Riemann, A.B. Cognetta, L. Pillet, W. Abmayr, D. Holzel, P. Bilek, F. Nachbar, M. Landthaler, O. Braun-Falco, ABCD rule of dermatoscopy: a new practical method for early recognition of malignant melanoma, Eur. J. Dermatol. 4 (1994) 521–527.
[9] G. Argenziano, G. Fabbrocini, P. Carli, V. De Giorgi, E. Sammarco, M. Delfino, Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions. Comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis, Arch. Dermatol. 134 (1998) 1536–1570.
[10] S.D. Pande, P.P. Jadhav, R. Joshi, A.D. Sawant, V. Muddebihalkar, S. Rathod, S. Das, Digitization of handwritten devanagari text using CNN transfer learning – a better customer service support, Neurosci. Inform. (2021) 100016.
[11] H. Chaves, F. Dorr, M.E. Costa, M.M. Serra, D.F. Slezak, M.F. Farez, C. Cejas, Brain volumes quantification from MRI in healthy controls: assessing correlation, agreement and robustness of a convolutional neural network-based software against FreeSurfer, CAT12 and FSL, J. Neuroradiol. 48 (3) (2021) 147–156.
[12] S.L. Bangare, Classification of optimal brain tissue using dynamic region growing and fuzzy min-max neural network in brain magnetic resonance images, Neurosci. Inform. (2021) 100019.
[13] H. Alquran, I.A. Qasmieh, A.M. Alqudah, S. Alhammouri, E. Alawneh, A. Abughazaleh, F. Hasayen, The melanoma skin cancer detection and classification using support vector machine, in: 2017 IEEE AEECT, 2017, pp. 1–5.
[14] A. Guarnizo, R. Glikstein, C. Torres, Imaging features of isolated hypoglossal nerve palsy, J. Neuroradiol. 47 (2) (2020) 136–150.
[15] K. Askaner, A. Rydelius, S. Engelholm, L. Knutsson, J. Lätt, K. Abul-Kasim, P.C. Sundgren, Differentiation between glioblastomas and brain metastases and regarding their primary site of malignancy using dynamic susceptibility contrast MRI at 3T, J. Neuroradiol. 46 (6) (2019) 367–372.
[16] J.C. Benson, V.T. Lehman, C.M. Carr, J.T. Wald, H.J. Cloft, G. Lanzino, W. Brinjikji, Beyond plaque: a pictorial review of non-atherosclerotic abnormalities of extracranial carotid arteries, J. Neuroradiol. 48 (1) (2021) 51–60.
[17] N. Codella, J. Cai, M. Abedini, R. Garnavi, A. Halpern, J.R. Smith, Deep learning, sparse coding, and SVM for melanoma recognition in dermoscopy images, in: International Workshop on Machine Learning in Medical Imaging, Springer, Cham, pp. 118–126.
[18] K.M. Hosny, M.A. Kassem, M.M. Foaud, Classification of skin lesions using transfer learning and augmentation with Alex-net, PLoS ONE 14 (2019) 14.
[19] A.A. Nugroho, I. Slamet, Sugiyanto, Skins cancer identification system of HAM10000 skin cancer dataset using convolutional neural network, in: Proceedings of the AIP 2019 Conference, vol. 2202, No. 1, AIP Publishing LLC, p. 020039, https://doi.org/10.1063/1.5141652.
[20] S. Bassi, A. Gomekar, Deep learning diagnosis of pigmented skin lesions, in: Proceedings of the 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), IEEE, 2019, pp. 1–6.
[21] D. Moldovan, Transfer learning based method for two-step skin cancer images classification, in: 2019 E-Health and Bioengineering Conference (EHB), IEEE, 2019, pp. 1–4.
[22] E. Çevik, K. Zengin, Classification of skin lesions in dermatoscopic images with deep convolution network, Avrupa Bilim ve Teknoloji Dergisi (2019) 309–318.


[23] P. Tschandl, C. Rosendahl, H. Kittler, The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions, Sci. Data (2018).
[24] M. Tan, Q.V. Le, EfficientNet: rethinking model scaling for convolutional neural networks, arXiv preprint, arXiv:1905.11946, 2019.
[25] A. Telea, An image inpainting technique based on the fast marching method, J. Graph. Tools 9 (2004) 23–34.
[26] R.M. Sumir, S. Mishra, N. Shastry, Segmentation of brain tumor from MRI images using fast marching method, in: Proceedings of the 2019 IEEE International Conference on Electrical, Computer and Communication Technologies, 2019, pp. 1–5.
[27] S.K. Siri, M.V. Latte, A novel approach to extract exact liver image boundary from abdominal CT scan using neutrosophic set and fast marching method, J. Intell. Syst. 28 (2019) 517–532.
[28] A. Yamada, A. Teramoto, K. Kudo, T. Otsuka, H. Anno, H. Fujita, Basic study on the automated detection method of skull fracture in head CT images using surface selective blackhat transform, J. Med. Imag. Health Inform. 8 (2018) 1069–1076.
[29] F. Perez, C. Vasconcelos, S. Avila, E. Valle, Data augmentation for skin lesion analysis, in: Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis, in: LNCS, vol. 11041, 2018, pp. 303–311.
[30] S. Zagoruyko, N. Komodakis, Wide residual networks, in: BMVC, 2016.
[31] R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in: Proceedings of the CVPR, 2014.
[32] M. Long, Y. Cao, J. Wang, M.I. Jordan, Learning transferable features with deep adaptation networks, in: Proceedings of the ICML, 2015.
[33] A. Sharif Razavian, H. Azizpour, J. Sullivan, S. Carlsson, CNN features off-the-shelf: an astounding baseline for recognition, in: DeepVision Workshop, Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[34] P. Ramachandran, B. Zoph, Q.V. Le, Searching for activation functions, arXiv preprint, arXiv:1710.05941, 2018.
[35] L.N. Smith, Cyclical learning rates for training neural networks, in: Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision, 2017, pp. 464–472.
[36] L. Prechelt, Early stopping – but when?, in: Neural Networks: Tricks of the Trade, 1998, pp. 55–69.
[37] A.M. Alqudah, H. Alquraan, I.A. Qasmieh, Segmented and non-segmented skin lesions classification using transfer learning and adaptive moment learning rate technique using pretrained convolutional neural network, J. Biomimet. Biomater. Biomed. Eng. 42 (2019) 67–78.
[38] Adrian Rosebrock, Deep learning for computer vision with Python, PyImageSearch, https://web.archive.org/web/20200119143500/https://www.pyimagesearch.com/deep-learning-computer-vision-python-book/, archived on 19 January 2019.
[39] Abdullah Ayub Khan, Aftab Ahmed Shaikh, Omar Cheikhrouhou, Asif Ali Laghari, Mamoon Rashid, Muhammad Shafiq, Habib Hamam, IMG-forensics: multimedia-enabled information hiding investigation using convolutional neural network, IET Image Process. (2021).
[40] Saleemullah Memon, Kamlesh Kumar Soothar, Kamran Ali Memon, Arif Hussain Magsi, Asif Ali Laghari, Muhammad Abbas, The design of wireless portable electrocardiograph monitoring system based on ZigBee, EAI Endorsed Transact. Scalable Inf. Syst. 7 (28) (2020) e6.
[41] Asif Ali Laghari, Hui He, Muhammad Shafiq, Asiya Khan, Assessment of quality of experience (QoE) of image compression in social cloud computing, Multiagent Grid Syst. 14 (2) (2018) 125–143.
[42] Shahid Karim, Ye Zhang, Shoulin Yin, Asif Ali Laghari, Ali Anwar Brohi, Impact of compressed and down-scaled training images on vehicle detection in remote sensing imagery, Multimed. Tools Appl. 78 (22) (2019) 32565–32583.
[43] V. Shestak, D. Gura, N. Khudyakova, Z.A. Shaikh, Y. Bokov, Chatbot design issues: building intelligence with the Cartesian paradigm, Evol. Intell. (2020) 1–9.
[44] Z.A. Shaikh, Keyword detection techniques, Eng. Tech. Appl. Sci. Res. 8 (1) (2018) 2590–2594.
[45] G. Rathee, A. Sharma, H. Saini, R. Kumar, R. Iqbal, A hybrid framework for multimedia data processing in IoT-healthcare using blockchain technology, Multimed. Tools Appl. 79 (15) (2020) 9711–9733.
[46] A. Sharma, R. Kumar, Service-level agreement—energy cooperative quickest ambulance routing for critical healthcare services, Arab. J. Sci. Eng. 44 (4) (2019).
[47] Z.A. Shaikh, N. Moiseev, A. Mikhaylov, S. Yüksel, Facile synthesis of copper oxide-cobalt oxide/nitrogen-doped carbon (Cu2O-Co3O4/CN) composite for efficient water splitting, Appl. Sci. 11 (21) (2021) 9974.
[48] A. Sharma, R. Tomar, N. Chilamkurti, B.G. Kim, Blockchain based smart contracts for Internet of medical things in e-healthcare, Electron. 9 (10) (2020) 1609.
[49] Z.Y. Liu, Z. Shaikh, F. Gazizova, Using the concept of game-based learning in education, Int. J. Emerg. Technol. Learn. 15 (14) (2020) 53–64.
[50] A. Sharma, R. Kumar, Computation of the reliable and quickest data path for healthcare services by using service-level agreements and energy constraints, Arab. J. Sci. Eng. 44 (11) (2019).
[51] Y.W. Zhong, Y. Jiang, S. Dong, W.J. Wu, L.X. Wang, J. Zhang, M.W. Huang, Tumor radiomics signature for artificial neural network-assisted detection of neck metastasis in patient with tongue cancer, J. Neuroradiol. (2021).
[52] A. Sharma, R. Kumar, An optimal routing scheme for critical healthcare HTH services—an IoT perspective, in: ICIIP, IEEE, 2017, pp. 1–5.
[53] Z.A. Shaikh, A.A. Laghari, O. Litvishko, V. Litvishko, T. Kalmykova, A. Meynkhard, Liquid-phase deposition synthesis of ZIF-67-derived synthesis of Co3O4@TiO2 composite for efficient electrochemical water splitting, Metals 11 (3) (2021) 420.
[54] H. Cebeci, A. Kilincer, H.İ. Duran, N. Seher, M. Şahinoğlu, H. Karabağlı, Y. Paksoy, Precise discrimination between meningiomas and schwannomas using time-to-signal intensity curves and percentage signal recoveries obtained from dynamic susceptibility perfusion imaging, J. Neuroradiol. 48 (3) (2021) 157–163.
[55] A. Sharma, G. Rathee, R. Kumar, H. Saini, V. Varadarajan, Y. Nam, N. Chilamkurti, A secure, energy- and SLA-efficient (SESE) E-healthcare framework for quickest data transmission using cyber-physical system, Sensors 19 (9) (2019) 2119.
[56] C. Kang, I.H. Lee, J.S. Park, Y. You, W. Jeong, H.J. Ahn, J.H. Min, Measuring global impairment of cerebral perfusion using dynamic susceptibility contrast perfusion-weighted imaging in out-of-hospital cardiac arrest survivors: a prospective preliminary study, J. Neuroradiol. 48 (5) (2021) 379–384.
[57] Z.A. Shaikh, S.A. Khoja, Role of teacher in personal learning environments, Dig. Educ. Rev. (2012) 23–32.
[58] A. Sharma, R. Kumar, A constrained framework for context-aware remote E-healthcare (CARE) services, Trans. Emerg. Telecommun. Technol. (2019) e3649.
[59] M. Poongodi, A. Sharma, M. Hamdi, M. Maode, N. Chilamkurti, Smart healthcare in smart cities: wireless patient monitoring system using IoT, J. Supercomput. (2021) 1–26.
[60] U.R. Kandula, D. Philip, S. Mathew, A. Subin, A.A. Godphy, N. Alex, B. Renju, Efficacy of video educational program on interception of urinary tract infection and neurological stress among teenage girls: an uncontrolled experimental study, Neurosci. Inform. 100026 (2021).
[61] Z.A. Shaikh, S.A. Khoja, Role of ICT in shaping the future of Pakistani higher education system, Turk. Online J. Educ. Technol. 10 (1) (2011) 149–161.
[62] Z. Najafpour, A. Fatemi, Z. Goudarzi, R. Goudarzi, K. Shayanfard, F. Noorizadeh, Cost-effectiveness of neuroimaging technologies in management of psychiatric and insomnia disorders: a meta-analysis and prospective cost analysis, J. Neuroradiol. (2020).
[63] A. Sharma, R. Kumar, Service level agreement and energy cooperative cyber physical system for quickest healthcare services, J. Intell. Fuzzy Syst. 36 (5) (2019) 4077–4089.
[64] Z.A. Shaikh, A.I. Umrani, A.K. Jumani, A.A. Laghari, Technology enhanced learning: a digital timeline learning system for higher educational institutes, Int. J. Comput. Sci. Netw. Secur. 19 (10) (2019) 1–5.
[65] E. Vosoughi, J.M. Lee, J.R. Miller, M. Nosrati, D.R. Minor, R.E. Abendroth, J.W. Lee, B.T. Andrews, L.Z. Leng, M. Wu, S.P. Leong, M. Kashani-Sabet, K.B. Kim, Survival and clinical outcomes of patients with melanoma brain metastasis in the era of checkpoint inhibitors and targeted therapies, BMC Cancer 18 (2018).
[66] V.V. Estrela, L.A. Rivera, P.C. Beggio, R.T. Lopes, Regularized pel-recursive motion estimation using generalized cross-validation and spatial adaptation, in: 16th Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI 2003), 2003, pp. 331–338.
[67] T. Wong, P. Yeh, Reliable accuracy estimates from k-fold cross validation, IEEE Trans. Knowl. Data Eng. 32 (2020) 1586–1594.
[68] A.M. Coelho, V.V. Estrela, EM-based mixture models applied to video event detection, in: Principal Component Analysis – Engineering Applications, Parinya Sanguansat, IntechOpen, 2012.
[69] I.T. Jolliffe, J. Cadima, Principal component analysis: a review and recent developments, Philos. Trans. - Royal Soc. A, Math. Phys. Eng. Sci. 374 (2065) (2016) 20150202.
[70] A. Deshpande, P. Patavardhan, V.V. Estrela, N. Razmjooy, J.D. Hemanth, Deep learning as an alternative to super-resolution imaging in UAV systems, Ch. 9, in: V.V. Estrela, J. Hemanth, O. Saotome, G. Nikolakopoulos, R. Sabatini (Eds.), Imaging and Sensing for Unmanned Aircraft Systems, vol. 2, IET, London, UK, 2020, pp. 177–212.
[71] A. Deshpande, V.V. Estrela, P. Patavardhan, The DCT-CNN-ResNet50 architecture to classify brain tumors with super-resolution, convolutional neural network, and the ResNet50, Neurosci. Inform. (2021).
[72] S.R. Fernandes, V.V. Estrela, H.A. Magalhaes, O. Saotome, On improving sub-pixel accuracy by means of B-spline, in: Proceedings of the 2014 IEEE International Conference on Imaging Systems and Techniques (IST 2014), 2014, pp. 68–72.
[73] V.V. Estrela, N. Razmjooy, A.C.B. Monteiro, R.P. França, M.A. de Jesus, Y. Iano, A computational intelligence perspective on multimodal image registration for unmanned aerial vehicles (UAVs), in: N. Razmjooy, M. Ashourian, Z. Foroozandeh (Eds.), Metaheuristics and Optimization in Computer and Electrical Engineering, in: Lecture Notes in Electrical Engineering, vol. 696, Springer, Cham, 2021.
[74] M. Lu, S. Niu, A detection approach using LSTM-CNN for object removal caused by exemplar-based image inpainting, Electron. 9 (2020) 858.

[75] Francesco Piccialli, Vittorio Di Somma, Fabio Giampaolo, Salvatore Cuomo, Giancarlo Fortino, A survey on deep learning in medicine: why, how and when?, Inf. Fusion 66 (2021) 111–137.
[76] N. Razmjooy, M. Ashourian, M. Karimifard, V.V. Estrela, H.J. Loschi, D. do Nascimento, R.P. França, M. Vishnevski, Computer-Aided Diagnosis of Skin Cancer: A Review, Current Medical Imaging, Bentham Science Publishers, Sharjah, U.A.E, 2020.
[77] J. Hemanth, V.V. Estrela, Deep Learning for Image Processing Applications, Advances in Parallel Computing, vol. 31, IOS Press, Amsterdam, Netherlands, 2017.
[78] A. Deshpande, V.V. Estrela, N. Razmjooy, Computational Intelligence Methods for Super-Resolution in Image Processing Applications, Springer Nature, Zurich, Switzerland, 2021.

