Diagnostics 12 02472 v2
Article
Automatic Malignant and Benign Skin Cancer Classification
Using a Hybrid Deep Learning Approach
Atheer Bassel 1, Amjed Basil Abdulkareem 2, Zaid Abdi Alkareem Alyasseri 3,4,5,*, Nor Samsiah Sani 2,* and Husam Jasim Mohammed 6
Citation: Bassel, A.; Abdulkareem, A.B.; Alyasseri, Z.A.A.; Sani, N.S.; Mohammed, H.J. Automatic Malignant and Benign Skin Cancer Classification Using a Hybrid Deep Learning Approach. Diagnostics 2022, 12, 2472. https://doi.org/10.3390/diagnostics12102472

Academic Editor: Vadim V. Grubov

Received: 26 August 2022; Accepted: 13 September 2022; Published: 12 October 2022

Abstract: Skin cancer is one of the major types of cancer with an increasing incidence in recent decades. The source of skin cancer arises in various dermatologic disorders. Skin cancer is classified into various types based on texture, color, morphological features, and structure. The conventional approach for skin cancer identification needs time and money for the predicted results. Currently, medical science is utilizing various tools based on digital technology for the classification of skin cancer. The machine learning-based classification approach is the robust and dominant approach for automatic methods of classifying skin cancer. The various existing and proposed methods of deep neural network, support vector machine (SVM), neural network (NN), random forest (RF), and K-nearest neighbor are used for malignant and benign skin cancer identification. In this study, a method was proposed based on the stacking of classifiers with three folds towards the classification of melanoma and benign skin cancers. The system was trained with 1000 skin images with the categories of melanoma and benign. The training and testing were performed using 70 and 30 percent of the overall data set, respectively. The primary feature extraction was conducted using the Resnet50, Xception, and VGG16 methods. The accuracy, F1 scores, AUC, and sensitivity metrics were used for the overall performance evaluation. In the proposed Stacked CV method, the system was trained in three levels by deep learning, SVM, RF, NN, KNN, and logistic regression methods. The proposed method for Xception techniques of feature extraction achieved 90.9% accuracy and was stronger compared to ResNet50 and VGG 16 methods. The improvement and optimization of the proposed method with a large training dataset could provide a reliable and robust skin cancer classification system.

Keywords: skin cancer; deep learning; CNN; machine learning; prediction
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

The goal of detecting and curing cancer in humans is a difficult one for medical science. In the United States, skin cancer is the most frequent type of cancer. Melanoma is one of the fastest-growing and most dangerous cancers. In its advanced stages, treating this cancer is very difficult. The goal of early identification and treatment of this form of cancer is to reduce the number of cancer patients in the United States. Malignant melanoma is the deadliest kind of skin cancer, accounting for 5000 fatalities per year in the United States [1,2]. Early detection of the type of cancer is particularly critical, because patients' health problems become more severe as time passes. Melanoma begins in the melanocytes, the cells that produce the pigment melanin, which gives the skin its color. It has the ability to travel to the lower layers of our skin, enter the circulation, and then spread to other regions of our bodies.
Computer-assisted technologies and methods are required for early skin cancer diagnosis and detection. The accuracy of clinical diagnosis for cancer detection is improved by computer-aided techniques and equipment. The most significant non-invasive method for detecting malignant, benign, and other pigmented skin cancers is dermoscopy [3]. The eye-based examination and recording of color changes in the skin are the traditional methods of melanoma detection and main feature identification. This classic technique for skin cancer detection relies on the surface structure and color of the skin. Dermoscopy allows for improved classification of cancer types based on their appearance and morphological characteristics [4]. Dermatologists rely on their experience while inspecting dermoscopy photos. Computerized analysis of dermoscopy pictures has become an important study topic to decrease diagnostic mistakes due to the complexity and subjectivity of human interpretation [5]. The accuracy of skin cancer diagnosis can be improved by using dermoscopy pictures to identify cancer. Figure 1 shows a graphical illustration of the distinctions between melanoma and benign skin cancer.
Figure 1. Representation for (a) "benign cancer" (b) "Melanoma" [6].
Related Work
Many studies on the detection and diagnosis of malignant and benign skin cancer have
been conducted in the last decade. Numerous datasets have been provided for the research
community. Researchers have applied strategies based on splitting, merging, clustering,
and classification to the identification and treatment of skin cancer. Each approach has its
own set of limitations and advancements from the medical community to assist medical
experts in making decisions.
Rajasekhar et al. (2020) suggested an automated melanoma detection and classifi-
cation approach based on border and wavelet-based texturing algorithms. For wavelet-
decomposition and boundary-series models, the suggested approach used texture, border,
and geometry information. SVM, random forest, logistic model tree, and hidden naive
Bayes algorithms were used to classify the data [16].
A malignant skin cancer recognition system based on a support vector machine was
proposed by Murugan et al. (2019). The asymmetry, border irregularity, color variation,
diameter, and texture features were used for the classification of the system. The texture of
the skin is the dominant feature used for decision making. The convolution neural network
using the VGG net is used for the problem solving of skin cancer identification. The system
is trained using the transfer learning approach [17].
Seeja et al. (2019) presented the heuristic hybrid rough set particle swarm optimization
(HRSPSO) method for segmenting and classifying a digital picture into multiple segments
that are more relevant and easier to study [18].
Goyal et al. (2019) offered three classification algorithms and proposed a multi-scale
integration strategy for segmentation [19]. Multiclass classification, binary classification,
and an ensemble model are all examples of classification methods. Taghanaki et al. (2020)
employed the discrete wavelet transform to extract features and analyze texture. These
collected characteristics were then used to train and assess the lesions as malignant and
benign using stack auto encoders (SAEs) [20].
The early diagnosis of cancer categorization based on interpretation, according to
Hasan, Md Kamrul et al. (2020), is time-consuming and subjective. Adaptive threshold,
gradient vector flow, adaptive snake, level set technique, expectation-maximization level
set, fuzzy based split, and merging algorithm were among the six segmentation methods
employed in the suggested system. The system’s performance is measured using four
Figure 2. Summary used for skin cancer identification [22].
The graphical representation of the sample images from the dataset in the two-class category for benign is represented in Figure 3.
Figure 3. Samples for benign and malignant cancer in the dataset.
2.2. Methods for Implementation
Over the past few years, convolution neural networks have been developed and advanced by researchers to solve computer vision problems more precisely within minimum time. The features were extracted using the pre-trained (Xception) model for obtaining the features of each image in the dataset [29]. The deep convolution neural networks were pre-trained using TensorFlow. TensorFlow is a deep learning framework developed by Google [30]. The structure of the full convolution neural network is described in Figure 4.
Figure 4. Structure of convolution neural network [31].
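The pre-trained feature-extraction step described above can be sketched as follows, assuming a TensorFlow/Keras environment. The sketch uses `weights=None` so it runs without downloading the ImageNet weights; in practice the transfer-learning setup described in the text implies `weights="imagenet"`. Image shapes and the helper name `extract_features` are illustrative, not the paper's exact code.

```python
# Sketch of CNN-based feature extraction with a pre-trained Xception backbone.
import numpy as np
from tensorflow.keras.applications.xception import Xception, preprocess_input

# include_top=False drops the ImageNet classification head; pooling="avg"
# collapses the final feature map into one 2048-dimensional vector per image.
extractor = Xception(weights=None, include_top=False, pooling="avg")

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: (n, 299, 299, 3) array of RGB pixel values in [0, 255]."""
    return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

features = extract_features(np.random.randint(0, 256, size=(2, 299, 299, 3)))
print(features.shape)  # (2, 2048)
```

The resulting 2048-dimensional vectors are what the downstream classifiers (SVM, RF, KNN, etc.) are trained on.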
The implementation of deep learning mechanisms has various modules and forms. This research concentrated on auto encoders, which belong to the unsupervised learning class of neural networks. The graphical representation of the auto encoder is shown in Figure 5.
Figure 5. Auto encoder module for convolution neural network [32].
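The encode-compress-decode idea behind Figure 5 can be illustrated with a minimal linear autoencoder trained by plain gradient descent in NumPy. This is an illustrative sketch on random data, not the (convolutional) autoencoder used in the paper; the sizes and learning rate are assumptions.

```python
# Minimal linear autoencoder: encode 8 features into a 3-dim bottleneck,
# then decode back, minimizing mean squared reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # 200 samples, 8 features
W_enc = rng.normal(scale=0.1, size=(8, 3))    # encoder: 8 -> 3 (bottleneck)
W_dec = rng.normal(scale=0.1, size=(3, 8))    # decoder: 3 -> 8

def loss(X, W_enc, W_dec):
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

lr, first = 0.05, loss(X, W_enc, W_dec)
for _ in range(1000):
    H = X @ W_enc                 # latent codes
    G = 2 * (H @ W_dec - X) / X.size   # gradient of the loss w.r.t. the output
    g_dec = H.T @ G               # backprop through the decoder
    g_enc = X.T @ (G @ W_dec.T)   # ...and through the encoder
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
print(first, "->", loss(X, W_enc, W_dec))  # reconstruction loss decreases
```

A trained autoencoder of this shape yields the compressed latent codes `H`, which is what makes autoencoders useful as unsupervised feature learners.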
The dataset was tested using the regression, SVM, KNN, RF, and deep learning techniques. Our stacking approach was compared to the performance of testing results in the literature. The experiment was conducted on a computer system with a core Intel4 processor with 12 GB RAM. A brief conceptual block diagram is illustrated in Figure 6.
Figure 7. The basic architecture of stacked CV algorithm [35].
Based on the functionality and basic architecture of stacked CV, the proposed block diagram for the research is shown in Figure 8.
The proposed stacking-based classification method has three folds. The original training data are passed to the level 1 model for classification, such as deep learning. The outcome of the deep learning classifier becomes prediction 1, which becomes a feature for the level 2 training data.

The level 2 training data are trained using support vector machine (SVM), neural network (NN), random forest (RF), and K-nearest neighbor (KNN) classifiers. The outcome of each classifier of the level 2 model is a prediction, and it acts as a feature for the level 3 training data.

The level 3 training data are passed to the level 3 model towards the classification. The output of the level 3 model is the final prediction and is used as the outcome result. The final prediction outcome detects the class of skin cancer.
The level 3 model uses logistic regression. The comparative analysis is conducted with the proposed approach and other traditional techniques used for the classification.
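The three-level pipeline described above can be sketched with scikit-learn on synthetic data. An `MLPClassifier` stands in for the deep learning level 1 model, and the level 3 combiner is logistic regression; all names and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Three-level stacking: level-1 prediction becomes a feature for level 2,
# and the level-2 predictions become the features for the level-3 combiner.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Level 1: the "deep" model's predicted probability is appended as a feature.
level1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X_tr, y_tr)
def add_pred(model, X):
    return np.column_stack([X, model.predict_proba(X)[:, 1]])
X_tr2, X_te2 = add_pred(level1, X_tr), add_pred(level1, X_te)

# Level 2: each classifier's prediction becomes a feature for level 3.
level2 = [SVC(probability=True, random_state=0),
          RandomForestClassifier(random_state=0),
          KNeighborsClassifier()]
for clf in level2:
    clf.fit(X_tr2, y_tr)
X_tr3 = np.column_stack([c.predict_proba(X_tr2)[:, 1] for c in level2])
X_te3 = np.column_stack([c.predict_proba(X_te2)[:, 1] for c in level2])

# Level 3: logistic regression produces the final prediction.
level3 = LogisticRegression().fit(X_tr3, y_tr)
print("stacked accuracy:", level3.score(X_te3, y_te))
```

Note that the "CV" in stacking CV refers to generating the meta-features with cross-validated (out-of-fold) predictions to avoid training-data leakage; that detail is omitted here for brevity.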
AUC provides the area under the ROC curve integrated from (0, 0) to (1, 1). It gives an aggregate measure over all possible classification thresholds. AUC has a range of 0 to 1. A model whose classifications are 100% correct has an AUC of 1.0, and one whose classifications are 100% wrong has an AUC of 0.0. The F1 score is calculated based on precision and recall. The mathematical representations of precision and recall are explained below [38,39].

Precision checks how precise the model is by measuring the share of correct true positives among the predicted positives.

Precision = TP / (TP + FP)

Recall calculates how many actual true positives the model has captured by labeling them as positives.

Recall = TP / (TP + FN)

F1 = 2 × (Precision × Recall) / (Precision + Recall)

The accuracy is the most important performance measure. Accuracy determines how many items were correctly classified in terms of the true positives TP, true negatives TN, false positives FP, and false negatives FN [39–41].

Accuracy = (TP + TN) / (TP + TN + FP + FN)

The sensitivity is the performance measure calculated as the proportion of positive items correctly identified.

Sensitivity = TP / (TP + FN)
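The four metrics above follow directly from the confusion-matrix counts; the worked example below uses illustrative counts, not the paper's results.

```python
# The evaluation metrics computed directly from TP, TN, FP, FN counts.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):          # identical to sensitivity
    return tp / (tp + fn)

def f1(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Example: 80 TP, 90 TN, 10 FP, 20 FN out of 200 test items.
print(precision(80, 10))         # 0.888...
print(recall(80, 20))            # 0.8
print(accuracy(80, 90, 10, 20))  # 0.85
```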
3. Experimental Analysis
The experiment is tested with three modes of feature extraction: Resnet50, Xception, and VGG16. From the extracted features, the system is passed through the classification modes of SVM, KNN, regression, AdaBoost, RF, decision tree, and GaussianNB. The system is then tested with our proposed stacking approach, which is a hybrid combination of the proposed models. This proposed approach aims to improve the classification performance of the system. This research splits 70% of the dataset as a training set, 15% as a validation set, and 15% as the testing set to evaluate the performance. For the evaluation of the performance of the system, the accuracy, F1 score, sensitivity, and area under the ROC curve (AUC) metrics are used. The numerical outcomes of the Resnet50 features with the performance evaluation metrics are described in Table 2. The graphical representation of the comparative performance of Resnet50 features with a given classification approach is shown in Figure 9.
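The 70/15/15 split described above can be sketched by applying scikit-learn's `train_test_split` twice; the arrays here are stand-ins for the 1000 image feature vectors and their labels.

```python
# 70% train / 15% validation / 15% test split via two successive splits.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)   # stand-in for 1000 feature vectors
y = np.array([0, 1] * 500)           # stand-in benign/melanoma labels

# First hold out 30%, then halve the held-out part into validation and test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 700 150 150
```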
Figure 9. Performance of the system for Resnet50 features.
The numerical results of the Xception features with the performance evaluation metrics are described in Table 3. The graphical representation of the comparative performance of Xception features with a given classification approach is shown in Figure 10.

Table 3. Performance evaluation of the system for the Xception feature extraction method.

Classifier              Accuracy (%)   F1-Score   Sensitivity   AUC
StackingCV (Proposed)   90.9           0.896      0.886         0.917
SVM                     86.7           0.838      0.834         0.862
Regression              86.3           0.837      0.853         0.862
KNN                     79.5           0.732      0.678         0.778
AdaBoost                83.7           0.801      0.798         0.831
RF                      80.3           0.739      0.678         0.784
DecisionTree            75.7           0.676      0.614         0.736
GaussianNB              76.1           0.731      0.788         0.765
Figure 10. Performance of the system for Xception features.
The experimental outcome results of the VGG16 features with the performance evaluation metrics are described in Table 4. The graphical representation of the comparative performance of VGG16 features with a given classification approach is shown in Figure 11.

Table 4. Performance evaluation of the system for VGG16 feature extraction.

Classifier              Accuracy (%)   F1-Score   Sensitivity   AUC
StackingCV (Proposed)   86.5           0.842      0.804         0.843
SVM                     86.7           0.835      0.810         0.859
Regression              87.5           0.847      0.844         0.870
KNN                     81             0.761      0.733         0.799
AdaBoost                79.9           0.766      0.798         0.799
RF                      84             0.805      0.798         0.834
DecisionTree            76.1           0.701      0.678         0.749
GaussianNB              77.6           0.723      0.706         0.766
Figure 11. Performance of the system for VGG16 features.
The comparative performance of the system with all classification approaches is cal-
culated in Table 5. The graphical representation of the comparative performance of the
system is shown in Figure 12.
The ROC curve and the area under the ROC curve are the most prominent results for the performance evaluation. The graphical representation of the ROC curve of this research is shown in Figure 13.
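Computing the ROC curve and its AUC can be sketched with scikit-learn's metrics; the labels and scores below are illustrative, not the paper's model outputs.

```python
# ROC curve and AUC for a small set of illustrative prediction scores.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.3])

# fpr/tpr trace the curve over every threshold; the AUC summarizes it.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))  # 0.9375
```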
The performance testing based on the proposed model was carried out by the researchers. In this analysis, the classification of malignant and benign cancer was performed using the stacking CV model implemented with a deep learning approach. The experiment was tested in a three-fold training mechanism. The original dataset was trained using a deep learning approach. The output of deep learning became a feature set for the level 2 models, such as the SVM, RF, NN, and KNN techniques. The second level utilized the predictions of the previous classifier and processed them into new predictions. The third-level prediction in the stacking CV algorithm was extracted based on the previous level's output. For the proposed approach, stacking CV on the Xception feature extraction mode proved dominant and promising, with 90.9% accuracy.
Figure 12. Comparative performance of the proposed and available classification approaches.
Author Contributions: Data curation, A.B.; Formal analysis, Z.A.A.A.; Funding acquisition, N.S.S.;
Investigation, A.B. and Z.A.A.A.; Methodology, A.B. and A.B.A.; Project administration, Z.A.A.A.;
Resources, A.B.A. and N.S.S.; Software, A.B.A.; Supervision, Z.A.A.A. and N.S.S.; Visualization,
H.J.M.; Writing—original draft, A.B. and A.B.A.; Writing—review & editing, Z.A.A.A., N.S.S. and
H.J.M. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by Universiti Kebangsaan Malaysia (Grant code: GUP2019-060).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available in the article.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Chaturvedi, S.S.; Gupta, K.; Prasad, P.S. Skin lesion analyzer: An efficient seven-way multi-class skin cancer classification using
MobileNet. In Proceedings of the International Conference on Advanced Machine Learning Technologies and Applications,
Cairo, Egypt, 20–22 March 2020; Springer: Singapore, 2020.
2. Cancer Facts and Figures 2019. American Cancer Society. 2019. Available online: https://www.cancer.org/content/dam/cancer-
org/research/cancer-facts-and-statistics/annual-cancerfacts-and-figures/2019/cancer-facts-and-figures-2019.pdf (accessed on
22 June 2020).
3. Zghal, N.S.; Derbel, N. Melanoma Skin Cancer Detection based on Image Processing. Curr. Med. Imaging 2020, 16, 50–58.
[CrossRef] [PubMed]
4. Polat, K.; Koc, K.O. Detection of skin diseases from dermoscopy image using the combination of convolutional neural network
and one-versus-all. J. Artif. Intell. Syst. 2020, 2, 80–97. [CrossRef]
5. Wei, L.; Ding, K.; Hu, H. Automatic Skin Cancer Detection in Dermoscopy Images based on Ensemble Lightweight Deep Learning
Network. IEEE Access 2020, 8, 99633–99647. [CrossRef]
6. Giuffrida, R.; Conforti, C.; Di Meo, N.; Deinlein, T.; Guida, S.; Zalaudek, I.; Giuffrida, R.; Conforti, C.; Di Meo, N.; Deinlein, T.;
et al. Use of noninvasive imaging in the management of skin cancer. Curr. Opin. Oncol. 2020, 32, 98–105. [CrossRef] [PubMed]
7. Ech-Cherif, A.; Misbhauddin, M.; Ech-Cherif, M. Deep Neural Network-based mobile dermoscopy application for triaging skin
cancer detection. In Proceedings of the 2019 2nd International Conference on Computer Applications & Information Security
(ICCAIS), Riyadh, Saudi Arabia, 1–3 May 2019.
8. Milton, M.A.A. Automated skin lesion classification using an ensemble of deep neural networks in ISIC 2018: Skin lesion analysis
towards melanoma detection challenge. arXiv 2019, arXiv:1901.10802.
9. Nasif, A.; Othman, Z.A.; Sani, N.S. The deep learning solutions on lossless compression methods for alleviating data load on IoT
nodes in smart cities. Sensors 2021, 21, 4223. [CrossRef]
10. Yélamos, O.; Braun, R.P.; Liopyris, K.; Wolner, Z.J.; Kerl, K.; Gerami, P.; Marghoob, A.A. Usefulness of dermoscopy to improve
the clinical and histopathologic diagnosis of skin cancers. J. Am. Acad. Dermatol. 2019, 80, 365–377. [CrossRef] [PubMed]
11. Holliday, J.; Sani, N.; Willett, P. Ligand-based virtual screening using a genetic algorithm with data fusion. Match Commun. Math.
Comput. Chem. 2018, 80, 623–638.
12. Othman, Z.A.; Bakar, A.A.; Sani, N.S.; Sallim, J. Household Overspending Model Amongst B40, M40 and T20 using Classification
Algorithm. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 392–399. [CrossRef]
13. Wolner, Z.J.; Yélamos, O.; Liopyris, K.; Rogers, T.; Marchetti, M.A.; Marghoob, A.A. Enhancing skin cancer diagnosis with
dermoscopy. Dermatol. Clin. 2017, 35, 417–437. [CrossRef] [PubMed]
14. Dascalu, A.; David, E.O. Skin cancer detection by deep learning and sound analysis algorithms: A prospective clinical study of an
elementary dermoscopy. EBioMedicine 2019, 43, 107–113. [CrossRef] [PubMed]
15. Kassani, S.H.; Kassani, P.H. A comparative study of deep learning architectures on melanoma detection. Tissue Cell 2019, 58,
76–83. [CrossRef] [PubMed]
16. Rajasekhar, K.S.; Babu, T.R. Analysis and Classification of Dermoscopic Images Using Spectral Graph Wavelet Transform. Period.
Polytech. Electr. Eng. Comput. Sci. 2020, 64, 313–323. [CrossRef]
17. Murugan, A.; Nair, S.H.; Kumar, K.P.S. Detection of skin cancer using SVM, Random Forest, and kNN classifiers. J. Med. Syst.
2019, 43, 269. [CrossRef]
18. Seeja, R.D.; Suresh, A. Deep learning-based skin lesion segmentation and classification of melanoma using support vector
machine (SVM). Asian Pac. J. Cancer Prev. APJCP 2019, 20, 1555–1561.
19. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin lesion segmentation in dermoscopic images with ensemble deep
learning methods. IEEE Access 2019, 8, 4171–4181. [CrossRef]
20. Taghanaki, S.A.; Abhishek, K.; Cohen, J.P.; Cohen-Adad, J.; Hamarneh, G. Deep semantic segmentation of natural and medical
images: A review. Artif. Intell. Rev. 2020, 54, 137–178. [CrossRef]
21. Hasan, K.; Dahal, L.; Samarakoon, P.N.; Tushar, F.I.; Martí, R. DSNet: Automatic dermoscopic skin lesion segmentation. Comput.
Biol. Med. 2020, 120, 103738. [CrossRef]
22. Munir, K.; Elahi, H.; Ayub, A.; Frezza, F.; Rizzi, A. Cancer diagnosis using deep learning: A bibliographic review. Cancers 2019, 11, 1235.
[CrossRef]
23. Jianu, S.R.S.; Ichim, L.; Popescu, D. Automatic diagnosis of skin cancer using neural networks. In Proceedings of the 2019 11th
International Symposium on Advanced Topics in Electrical Engineering (ATEE), Bucharest, Romania, 28–30 March 2019.
24. Garg, N.; Sharma, V.; Kaur, P. Melanoma skin cancer detection using image processing. In Sensors and Image Processing; Springer:
Singapore, 2018; pp. 111–119.
25. Nafea, A.A.; Omar, N.; Al-Ani, M.M. Adverse Drug Reaction Detection Using Latent Semantic Analysis. J. Comput. Sci. 2021, 17,
960–970. [CrossRef]
26. AL-Ani, M.M.; Omar, N.; Nafea, A.A. A Hybrid Method of Long Short-Term Memory and Auto-Encoder Architectures for
Sarcasm Detection. J. Comput. Sci. 2021, 17, 1093–1098. [CrossRef]
27. Jamal, N.; Mohd, M.; Noah, S.A. Poetry classification using support vector machines. J. Comput. Sci. 2012, 8, 1441–1446.
28. Kassem, M.A.; Hosny, K.M.; Fouad, M.M. Skin lesions classification into eight classes for ISIC 2019 using deep convolutional
neural network and transfer learning. IEEE Access 2020, 8, 114822–114832. [CrossRef]
29. Chaturvedi, S.S.; Tembhurne, J.V.; Diwan, T. A multi-class skin Cancer classification using deep convolution neural networks.
Multimed. Tools Appl. 2020, 79, 28477–28498. [CrossRef]
30. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. Tensorflow:
A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and
Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016.
31. Dorj, U.-O.; Lee, K.-K.; Choi, J.-Y.; Lee, M. The skin cancer classification using deep convolution neural network. Multimed. Tools
Appl. 2018, 77, 9909–9924. [CrossRef]
32. Kucharski, D.; Kleczek, P.; Jaworek-Korjakowska, J.; Dyduch, G.; Gorgon, M. Semi-Supervised Nests of Melanocytes Segmentation
Method Using Convolutional Autoencoders. Sensors 2020, 20, 1546. [CrossRef]
33. Kim, M.; Lee, M.; An, M.; Lee, H. Effective automatic defect classification process based on CNN with stacking ensemble model
for TFT-LCD panel. J. Intell. Manuf. 2020, 31, 1165–1174. [CrossRef]
34. Sánchez-Morales, A.; Sancho-Gómez, J.L.; Figueiras-Vidal, A.R. Complete auto encoders for classification with missing values.
Neural Comput. Appl. 2021, 33, 1951–1957. [CrossRef]
35. Kadam, V.J.; Jadhav, S.M.; Kurdukar, A.A.; Shirsath, M.R. Arrhythmia Classification using Feature Ensemble Learning based on
Stacked Sparse Autoencoders with GA-SVM Guided Features. In Proceedings of the 2020 International Conference on Industry
4.0 Technology (I4Tech), Pune, India, 13–15 February 2020.
36. Chen, M.; Chen, W.; Chen, W.; Cai, L.; Chai, G. Skin Cancer Classification with Deep Convolution Neural Networks. J. Med.
Imaging Health Inform. 2020, 10, 1707–1713. [CrossRef]
37. Nahata, H.; Singh, S.P. Deep Learning Solutions for Skin Cancer Detection and Diagnosis. In Machine Learning with Health Care
Perspective; Springer: Cham, Switzerland, 2020; pp. 159–182.
38. Tr, G.B. An Efficient Skin Cancer Diagnostic System Using Bendlet Transform and Support Vector Machine. An. Acad. Bras.
Ciências 2020, 92, e20190554.
39. Abdulkareem, A.B.; Sani, N.S.; Sahran, S.; Alyessari, Z.A.A.; Adam, A.; Rahman, A.H.A.; Abdulkarem, A.B. Predicting COVID-19
based on environmental factors with machine learning. Intell. Autom. Soft Comput. 2021, 28, 305–320. [CrossRef]
40. Khamparia, A.; Singh, P.K.; Rani, P.; Samanta, D.; Khanna, A.; Bhushan, B. An internet of health things-driven deep learning framework
for detection and classification of skin cancer using transfer learning. Trans. Emerg. Telecommun. Technol. 2020, 32, e3963. [CrossRef]
41. Alameri, S.A.; Mohd, M. Comparison of fake news detection using machine learning and deep learning techniques. In Proceedings
of the 2021 3rd International Cyber Resilience Conference (CRC), Langkawi Island, Malaysia, 29–31 January 2021; pp. 1–6.