
DenseNet Powered Skin Cancer Detection: A Deep Learning Approach

A PROJECT REPORT

Submitted by

SYED KALEEMUDEEN Z (310620106132)


VIGNESH C (310620106145)

in partial fulfillment for the award of the

degree of

BACHELOR OF ENGINEERING

in

ELECTRONICS AND COMMUNICATION ENGINEERING

EASWARI ENGINEERING COLLEGE, CHENNAI


(Autonomous Institution)
affiliated to
ANNA UNIVERSITY: CHENNAI - 600025

NOVEMBER 2023
EASWARI ENGINEERING COLLEGE, CHENNAI
(AUTONOMOUS INSTITUTION)

AFFILIATED TO ANNA UNIVERSITY, CHENNAI 600025

BONAFIDE CERTIFICATE

Certified that this project report “DenseNet Powered Skin Cancer Detection: A Deep
Learning Approach” is the bonafide work of “Syed Kaleemudeen Z (310620106132)
& Vignesh C (310620106145)” who carried out the project work under my supervision.

SIGNATURE                                        SIGNATURE
Dr. M. DEVARAJU, M.Tech., Ph.D.,                 Dr. S. SUDHA, M.E., Ph.D.,
HEAD OF THE DEPARTMENT                           SUPERVISOR
Professor and Head                               Professor
Department of Electronics and                    Department of Electronics and
Communication Engineering                        Communication Engineering
Easwari Engineering College                      Easwari Engineering College
Ramapuram, Chennai - 600089                      Ramapuram, Chennai - 600089

Submitted for Semester Examination held on

INTERNAL EXAMINER EXTERNAL EXAMINER


ACKNOWLEDGEMENT

We are indebted to many people who helped us complete our project and
assisted us from the inception of the idea until the day we documented it.
Firstly, we would like to express our gratitude to Dr. T R
PAARIVENDHAR, Founder Chairman, SRM Group of Institutions,
without whose support our project could not have been completed
successfully.

We would also like to thank Dr. R SHIVA KUMAR, Chairman, SRM Group
of Institutions, Ramapuram Campus for providing us with the required
assistance and Dr. R S KUMAR, Principal, Easwari Engineering College for
his continuous encouragement that led us to where we are right now.

We feel glad to express our sincere thanks to Dr. M DEVARAJU, Head of the
Department, Electronics and Communication Engineering and Dr. D
JESSINTHA, Project Coordinator, Electronics and Communication
Engineering for their suggestions, support, and encouragement towards the
completion of the project with perfection.

We are thankful to our supervisor Dr. S SUDHA, Professor, Electronics and


Communication Engineering for her intellectual guidance and valuable
suggestions in making the project a successful one.

We would extend our hearty thanks to all our panel members for their
valuable suggestions in completing this project.
SYED KALEEMUDEEN Z
VIGNESH C
ABSTRACT

Skin cancer, a prevalent and life-threatening disease, poses a significant global
public health concern, with millions of new cases diagnosed annually. Traditional
diagnostic methods relying on visual skin examination exhibit subjectivity and
variable accuracy. Leveraging the convergence of artificial intelligence and
dermatology, this project proposes a pioneering approach to skin cancer detection
utilizing DenseNet, a deep convolutional neural network known for its efficient
feature extraction and representation learning. A vast dataset of skin lesion images,
meticulously annotated by dermatologists, serves as the foundation for training and
optimizing the DenseNet model. Our methodology encompasses comprehensive data
collection, preprocessing, and model architecture design. The model's training
involves the HAM10000 dataset, consisting of 10,015 skin images across seven
diagnostic categories, focusing on four cancer types: Actinic Keratosis, Basal Cell
Carcinoma, Benign Keratosis-like Lesions, and Dermatofibroma. Data preprocessing
ensures image quality, standardizes pixel values, and applies augmentation
techniques for robustness. The DenseNet architecture, characterized by dense
connectivity between layers, forms the basis of our skin cancer detection approach,
addressing the vanishing gradient problem and enhancing model performance.
Feature extraction employs a Convolutional Neural Network (CNN) implemented
with Keras, facilitating the detection and extraction of relevant features from skin
lesion images. The training phase iteratively refines internal parameters, enabling the
model to learn discriminative features associated with skin cancer. The project attains
real-world applicability through deployment, with a focus on integration into
telemedicine platforms for remote consultations and extending access to underserved
regions. Continuous monitoring and updates ensure the model's adaptability to
evolving dermatological practices.

TABLE OF CONTENTS

CHAPTER NO.   TITLE

              ABSTRACT
              LIST OF FIGURES
              LIST OF ABBREVIATIONS
1             INTRODUCTION
              1.1 BACKGROUND ON SKIN CANCER
              1.2 LIMITATIONS OF TRADITIONAL DIAGNOSTIC METHODS
              1.3 CONCEPT OF AI IN DERMATOLOGY
              1.4 INTRODUCTION TO OUR PROJECT AND MODEL
              1.5 DEEP LEARNING
2             LITERATURE SURVEY
              2.1 DEEP LEARNING TECHNIQUES
              2.2 MACHINE LEARNING APPROACHES
3             PROBLEM STATEMENT / MOTIVATIONS
4             PROPOSED SYSTEM
              4.1 STEPS FOR IMPLEMENTATION OF SKIN CANCER DETECTION USING DENSENET DEEP LEARNING MODEL
              4.2 LIBRARIES USED IN PROPOSED SYSTEM
              4.4 OUTCOMES
              4.5 PERFORMANCE METRICS
5             RESULTS AND DISCUSSION
6             CONCLUSION & FUTURE WORKS
              REFERENCES

LIST OF FIGURES

FIGURE NO.   TITLE
4.1          Sample images of Actinic Keratosis, Dermatofibroma, Basal Cell Carcinoma, and Benign Keratosis
4.2          DenseNet Model
4.3          Training and Validation Accuracy
4.4          Training and Validation Loss
5.1          User input and predicted output with the label
5.2          Plot of training and validation accuracy
5.3          Plot of training and validation loss

LIST OF ABBREVIATIONS

ABBREVIATIONS EXPLANATION

WHO World Health Organization

CNN Convolutional Neural Network

AKIEC Actinic Keratosis

BCC Basal Cell Carcinoma

BKL Benign Keratosis

DF Dermatofibroma

NV Nevus

OS Operating System

Conv2D Convolutional 2-D

TP True Positive

TN True Negative

FP False Positive

FN False Negative

CHAPTER 1

INTRODUCTION

1.1 Background on Skin Cancer

Skin cancer, a pervasive and potentially fatal affliction, stands as a significant
global health challenge. The incidence of skin cancer continues to escalate, with
millions of new cases being diagnosed each year. It encompasses various types,
including melanoma, basal cell carcinoma (BCC), squamous cell carcinoma (SCC),
and less common types like Merkel cell carcinoma. Among these, melanoma is
particularly notorious for its aggressive nature and potential for metastasis.

The primary risk factor for skin cancer is ultraviolet (UV) radiation exposure,
primarily from sunlight. As lifestyles evolve, outdoor activities increase,
contributing to the rising prevalence of skin cancer. Moreover, factors such as
genetic predisposition, immunosuppression, and exposure to certain chemicals also
play roles in skin cancer development.

Early detection of skin cancer is imperative for successful treatment and


improved patient outcomes. Timely identification allows for less invasive
interventions, reducing morbidity and mortality associated with advanced stages.
Consequently, there is a pressing need for accurate and efficient diagnostic methods
to address the growing burden of skin cancer.

1.2 Limitations of Traditional Diagnostic Methods

Traditional diagnostic methods for skin cancer primarily rely on visual skin
examination by dermatologists. While this approach has been the cornerstone of
diagnosis for decades, it suffers from inherent limitations. Dermatologists'
assessments are subjective and may vary based on experience, leading to
inconsistencies in diagnosis. Moreover, the accuracy of visual inspection heavily
depends on the expertise of the practitioner, potentially resulting in missed
diagnoses or false positives.

Skin cancer lesions can manifest in diverse forms, making it challenging to


differentiate benign from malignant lesions through visual examination alone.
Additionally, certain subtypes of skin cancer, such as melanoma, may present with
atypical features that are easily overlooked. The limitations of traditional methods
underscore the necessity for innovative approaches that can enhance diagnostic
accuracy and efficiency.

1.3 Concept of AI in Dermatology

The convergence of artificial intelligence (AI) and dermatology has emerged


as a promising avenue to address the challenges posed by traditional diagnostic
methods. Machine learning, a subset of AI, facilitates the development of models
capable of learning patterns and features from vast datasets. In dermatology, AI is
applied to automate medical image analysis tasks, particularly in the realm of skin
lesion detection and classification.

AI-based systems leverage computational algorithms to analyze images,


detect abnormalities, and provide diagnostic insights. These systems can process
information at a speed and scale beyond human capability, potentially
revolutionizing the field of dermatology. By integrating AI into dermatological
practices, there is an opportunity to enhance diagnostic accuracy, reduce
subjectivity, and facilitate early detection of skin cancer.

1.4 Introduction to Our Model and Project
In response to the limitations of traditional diagnostic methods and the
potential of AI in dermatology, this project introduces a novel approach for skin
cancer detection. Our methodology revolves around the utilization of DenseNet, a
deep convolutional neural network architecture renowned for its effective feature
extraction and representation learning.

DenseNet's distinctive architecture, characterized by dense connectivity


between layers, addresses challenges such as the vanishing gradient problem. This
enables the model to efficiently learn intricate features from skin lesion images,
contributing to superior performance in skin cancer detection. The project employs
a diverse and meticulously annotated dataset, encompassing various types of skin
cancer, to train and optimize the DenseNet-based model.

Through comprehensive data preprocessing, model architecture design, and


feature extraction, our approach aims to achieve state-of-the-art accuracy in skin
cancer classification. The project not only focuses on model development but also
emphasizes real-world applicability through deployment, including integration into
telemedicine platforms for remote consultations.

As we delve into the details of our working methodology, outcomes, and


discussions, this project envisions contributing significantly to the advancement of
skin cancer detection technology. By leveraging the power of AI and deep learning,
we aspire to enhance early diagnosis, improve patient care, and ultimately mitigate
the impact of this life-threatening disease.

1.5 DEEP LEARNING

Deep learning, a subset of machine learning, has emerged as a powerful paradigm


in artificial intelligence, enabling machines to autonomously learn and make
decisions from data. At the forefront of this revolution is the application of deep
neural networks, intricate models inspired by the human brain's neural architecture.
In the realm of computer vision and medical image analysis, deep learning models
have demonstrated remarkable efficacy, especially in tasks like skin cancer
detection.
Our project harnesses the potential of deep learning, employing a specific
architecture known as DenseNet, or Densely Connected Convolutional Networks.
Unlike traditional machine learning approaches, which often rely on handcrafted
features, deep learning models learn hierarchical representations directly from raw
data. In the context of skin cancer detection, this means that the model can
automatically discern intricate patterns and features indicative of various skin
conditions.
DenseNet introduces a dense connectivity pattern among layers, fostering a more
efficient flow of information through the network. This architecture enhances
feature reuse, enables the model to learn intricate representations with fewer
parameters, and mitigates issues like vanishing gradients. As a result, DenseNet has
proven to be highly effective in tasks involving medical image analysis, making it a
compelling choice for our skin cancer detection project.
In the subsequent sections of this report, we delve into the intricacies of our
chosen model, exploring the architecture of DenseNet, the rationale behind its
selection, and its role in extracting meaningful features from dermatological images.
The deep learning model serves as the backbone of our project, contributing to the
advancement of automated and accurate skin cancer diagnosis.

CHAPTER 2

LITERATURE SURVEY

Skin Cancer Detection using Deep Learning [1]

The study contributes to the growing body of research at the intersection of


dermatology and deep learning, addressing the need for accurate and efficient
methods in skin cancer diagnosis. In the literature, there is a recognized trend
towards leveraging deep learning techniques for medical image analysis, and
particularly for skin cancer detection. Previous studies have demonstrated the
effectiveness of deep learning models, such as convolutional neural networks
(CNNs), in accurately classifying skin lesions. Kumar et al.'s work aligns with this
trend, and it is crucial to place their research in the context of the broader movement
toward integrating advanced computational methods into dermatological
diagnostics. The geographic context of the conference, held in Tuticorin, India, may
introduce regional nuances in the dataset and diagnostic challenges that need to be
considered. Local variations in skin types and conditions can impact the
generalizability of models, and addressing these factors is essential for the
successful implementation of skin cancer detection systems in diverse populations.
Furthermore, the specific deep learning methodology employed in the paper needs
to be reviewed in the context of existing approaches. The study's novelty may lie in
the unique combination of deep learning techniques or the utilization of a specific
architecture. A comprehensive literature survey should explore how Kumar et al.'s
work builds upon or deviates from existing methods and the implications of these
choices for the accuracy and robustness of skin cancer detection systems.

Skin Cancer Detection using Machine Learning Techniques [2]

This work was presented at the 2020 IEEE International Conference on Electronics,
Computing and Communication Technologies (CONECCT) in Bangalore, India. In the
evolving landscape of medical image analysis, the application of machine learning
methods to dermatology has gained substantial attention, and Vidya and Karki
contribute to this domain by focusing on skin cancer detection. The literature reveals
a consistent interest in exploring machine learning techniques for skin cancer
diagnosis, emphasizing their potential to enhance accuracy and efficiency. Machine
learning models have been employed in previous studies for image classification
and feature extraction in the context of dermatological images. Vidya and Karki's
work can be positioned within this broader context, contributing to the ongoing
efforts to develop robust and reliable systems for skin cancer detection. It is crucial
to acknowledge the potential impact of regional variations in skin types and
conditions on the study's outcomes. Local factors, such as prevalent skin disorders
and demographic characteristics, may influence the generalizability of the proposed
machine learning model. A comprehensive literature survey should explore how
existing research accounts for regional disparities and whether Vidya and Karki
address these factors in their skin cancer detection approach. Additionally, delving
into the specific machine learning techniques employed in their study is essential for
a thorough literature survey. Understanding the methodology and comparing it with
existing approaches will provide insights into the novelty and effectiveness of their
proposed techniques. Evaluating the model's performance metrics and
benchmarking against established standards will further contribute to the critical
assessment of Vidya and Karki's work in the broader context of skin cancer detection
research.

Melanoma Skin Cancer Detection Using Deep Learning and Advanced


Regularizer [3]

The literature reflects a well-established trend toward employing deep learning
techniques for skin cancer detection due to their capacity for feature extraction and
representation learning. Hossin et al.'s work builds upon this foundation by incorporating an
advanced regularizer, suggesting a novel approach to improving the model's
robustness and generalization. Existing literature in skin cancer detection
acknowledges the challenges posed by imbalanced datasets, limited interpretability,
and the need for regularization techniques. Hossin et al. contribute to this discourse
by proposing an advanced regularizer tailored for melanoma detection, providing
potential solutions to address these challenges. By aligning their work with the
prevailing issues in the field, the authors demonstrate a keen awareness of the
practical hurdles in deploying machine learning models for real-world medical
applications. The choice of presenting their research at ICACSIS in Depok,
Indonesia, introduces a geographic dimension to the study. Understanding regional
variations in skin types, melanoma prevalence, and healthcare infrastructure is
crucial for assessing the external validity and applicability of Hossin et al.'s model.
A comprehensive literature survey should explore how other studies account for
regional disparities and whether the proposed method considers these factors in
melanoma detection, thereby contributing to the contextual understanding of the
research. Finally, a thorough analysis of the advanced regularizer introduced by
Hossin et al. is essential for evaluating the innovation and effectiveness of their
approach. Comparing their method with established regularization techniques and
assessing its impact on model performance will provide insights into the potential
advancements it brings to the field of skin cancer detection.

Review on Different Skin Cancer Detection and Classification Techniques [4]

The review spans multiple skin cancer detection approaches, including


traditional image processing methods and contemporary machine learning
techniques. By presenting an inclusive examination of the field, the authors offer
insights into the evolution of skin cancer detection methodologies over time. This
comprehensive approach allows readers to gain a holistic understanding of the
strengths and limitations of various techniques, aiding in the identification of gaps
and opportunities for further research. Given the conference's location in Chennai,
India, the review may provide insights into regional considerations within skin
cancer research. It is important to investigate whether Keerthi and Kumar highlight
unique challenges or opportunities specific to the Indian context, such as prevalent
skin conditions, demographics, or healthcare infrastructure. Understanding the
regional nuances enhances the applicability of the review's findings in diverse
healthcare settings. Moreover, an exploration of the review's citations and references
can reveal emerging trends, key contributors, and potential areas of consensus or
divergence in skin cancer detection research. Evaluating the comprehensiveness of
the literature covered and identifying any biases in the selection of studies will
contribute to a nuanced understanding of the state of the art in skin cancer detection
and classification techniques as presented by Keerthi and Kumar.

Pre-trained CNN Models and Machine Learning in Skin Cancer Diagnosis [5]

In addition to their exploration of pre-trained CNN models and machine learning


integration, Gairola et al. (2022) shed light on the practical implications of their
findings. The study not only demonstrates the technical feasibility of combining
these technologies but also discusses the potential for real-world application in the
clinical setting. Emphasizing the translational aspect of their research, the authors
discuss how the fusion of pre-trained CNN models and machine learning can
contribute to more accurate and efficient skin cancer diagnosis, addressing
challenges faced by healthcare practitioners. This pragmatic approach enhances the
paper's significance, showcasing a pathway for the adoption of advanced
technologies in real-world medical scenarios.

Furthermore, Gairola et al. (2022) acknowledge the importance of
interpretability in medical decision-making. The paper delves into the
interpretability of the combined model, elucidating how clinicians can gain insights
into the diagnostic process. By fostering a deeper understanding of the decision-
making process, the study not only contributes to the technical advancements in skin
cancer detection but also aligns with the broader trend in AI and healthcare,
emphasizing the importance of transparent and interpretable models for gaining trust
and acceptance within the medical community. This dual focus on technical
robustness and practical applicability positions the research as a valuable
contribution to the field of skin cancer diagnosis using advanced computational
techniques.

Exploration of convolutional neural networks (CNNs) for skin cancer


classification [6]

It extends beyond the technical aspects to highlight the broader implications for
healthcare. The paper emphasizes the transformative potential of CNNs in
automating skin cancer classification, thereby offering a solution to the increasing
demand for efficient and accurate diagnostic tools. In doing so, the research
addresses a critical need within dermatology, where timely and precise diagnosis
plays a pivotal role in treatment planning. By presenting their findings at the 2015
IEEE/RSJ International Conference on Intelligent Robots and Systems, the authors
contribute to the discourse on the integration of advanced computational methods
into the medical field, positioning CNNs as a promising technology for
revolutionizing dermatological diagnostics. The emphasis on the evolving landscape
of medical image analysis reflects a forward-looking perspective, anticipating the
growing influence of AI in healthcare. This foresight is particularly relevant in the
context of dermatology, where early detection and accurate classification of skin
lesions can significantly impact patient outcomes. As the research community
continues to build upon these insights, Haloi and Dutta's foundational work serves
as a catalyst for the ongoing development of intelligent systems in healthcare, with
potential applications reaching far beyond skin cancer classification.

Melanoma Disease Detection Using Convolutional Neural Networks [7]

Skin lesion detection and classification are critical in diagnosing skin
malignancy. Existing deep learning-based computer-aided diagnosis (CAD)
methods still perform poorly on challenging skin lesions with complex features such
as fuzzy boundaries, the presence of artifacts, low contrast with the background, and
limited training datasets. They also rely heavily on suitable tuning of millions of
parameters, which often leads to over-fitting, poor generalization, and heavy
consumption of computing resources. This study proposes a new framework that
performs both segmentation and classification of skin lesions for automated
detection of skin cancer.
The proposed framework consists of two stages: the first stage leverages an
encoder-decoder Fully Convolutional Network (FCN) to learn the complex and
inhomogeneous skin lesion features, with the encoder learning the coarse
appearance and the decoder learning the details of the lesion borders. The FCN is
designed with sub-networks connected through a series of skip pathways that
incorporate both long skip and short-cut connections, unlike the traditional FCN,
which commonly uses only long skip connections, enabling a residual learning
strategy and effective training. The network also integrates a Conditional Random
Field (CRF) module, which employs a linear combination of Gaussian kernels for its
pairwise edge potentials, for contour refinement and localization of lesion
boundaries. The second stage proposes a novel FCN-based DenseNet framework
composed of dense blocks that are merged and connected via a concatenation
strategy and transition layers. The system also employs hyper-parameter
optimization techniques to reduce network complexity and improve computing
efficiency. This approach encourages feature reuse, requires a small number of
parameters, and is effective with limited data. The proposed model was evaluated on
the publicly available HAM10000 dataset of over 10,000 images spanning 7 disease
categories, achieving 98% accuracy, 98.5% recall, and a 99% AUC score.

A Refined Approach for Classification and Detection of Melanoma Skin


Cancer using Deep Neural Network [8]

This paper focuses on the classification of dermoscopic images to identify the


type of Skin lesion whether it is benign or malignant. Dermoscopic images provide
deep insight for the analysis of any type of skin lesion. Initially, a custom
Convolutional Neural Network (CNN) model is developed to classify the images for
lesion identification. This model is trained across different train-test splits, and a
30% split of the training data is found to produce better accuracy. To further improve
the classification accuracy, a Batch Normalized Convolutional Neural Network (BN-
CNN) is proposed. The proposed solution consists of 6 layers of convolutional
blocks with batch normalization, followed by a fully connected layer that performs
binary classification. The custom CNN model is similar to the proposed model, with
the absence of batch normalization and the presence of dropout at the fully connected
layer. Experimental results for the proposed model provided better accuracy of
89.30%. Final work includes analysis of the proposed model to identify the best
tuning parameters.

Skin Cancer Detection Using Machine Learning Algorithms [9]

Skin cancer is the most frequent type of cancer. It must be detected and treated
as soon as possible because it can be fatal. The distinction between cancerous and
benign skin lesions is difficult to discern with the naked eye, making cancer
detection difficult. Because several lesions may have similar appearances, precise
identification of skin cancer is challenging. With the growth of technology and
computer vision, several machine learning and deep learning techniques are getting
popular in skin lesion categorization tasks.
In this paper we proposed a deep learning model using a pretrained DenseNet
architecture. For our work, we used the HAM10000 dataset, which contains 10015
dermoscopic images. To demonstrate the significance of using a balanced dataset in
classification tasks, we conducted two experiments. The imbalanced dataset was
employed in the first experiment, while a resampled dataset with balanced classes
was used in the second. In the classification task, employing balanced classes
resulted in better performance, with an accuracy of 82.1%.

Detection of Skin Cancer Using Deep Neural Networks [10]

The complex detection background and lesion features make the automatic
detection of lesions in dermoscopy images face many challenges. Previous
solutions mainly focus on using larger and more complex models to improve the
accuracy of detection; there is a lack of research on the significant intra-class
differences and inter-class similarity of lesion features. At the same time, the larger
model size also brings challenges to further algorithm application. In this paper, we
proposed a lightweight skin cancer recognition model with feature discrimination
based on the fine-grained classification principle.
The proposed model includes a lesion classification network and a feature
discrimination network that share common feature extraction modules. Firstly, two
sets of training samples (positive and negative sample pairs) are input into the
feature extraction module (a lightweight CNN) of the recognition model. Then, the
two sets of feature vectors output from the feature extraction module are used to
train the classification network and the feature discrimination network of the
recognition model at the same time, and a model fusion strategy is applied to further
improve performance. The proposed recognition method can extract more
discriminative lesion features and improve the recognition performance of the model
with a small number of model parameters. In addition, based on the feature
extraction module of the proposed recognition model, a U-Net architecture, and a
transfer training strategy, a lightweight semantic segmentation model of the lesion
area of dermoscopy images is built, which achieves high-precision lesion area
segmentation end-to-end without complicated image preprocessing operations. The
performance of the approach was appraised through extensive comparative
experiments and feature visualization analysis; the outcome indicates that the
proposed method outperforms state-of-the-art deep learning-based approaches on
the ISBI 2016 skin lesion analysis towards melanoma detection challenge dataset.

Skin Cancer Detection: A Review Using Deep Learning Techniques [11]

Melanoma is considered the most serious type of skin cancer. All over the world,
the mortality rate is much higher for melanoma than for other cancers. Various
computer-aided solutions have been proposed to correctly identify melanoma.
However, the difficult visual appearance of the nevus makes it very difficult
to design a reliable Computer-Aided Diagnosis (CAD) system for accurate
melanoma detection. Existing systems either use traditional machine learning
models that focus on handpicked features or use deep learning-based
methods that use complete images for feature learning. The automatic extraction of
the most discriminative features for skin cancer remains an important research
problem that can further be used for better deep learning training. Furthermore, the
limited number of available images also creates a problem for deep learning
models. From this line of research, we propose an intelligent Region of Interest
(ROI) based system to identify and discriminate melanoma from nevus by
using a transfer learning approach. An improved k-means algorithm is used to
extract ROIs from the images. This ROI-based approach helps to identify
discriminative features, as images containing only melanoma cells are used to
train the system. We further use a Convolutional Neural Network (CNN) based
transfer learning model with data augmentation for ROI images of the DermIS and
DermQuest datasets. The proposed system gives 97.9% and 97.4% accuracy for
DermIS and DermQuest respectively. The proposed ROI-based transfer learning
approach outperforms existing methods that use complete images for classification.

The Melanoma Skin Cancer Detection and Classification Using Support Vector
Machine [12]

Melanoma skin cancer detection at an early stage is crucial for an efficient


treatment. Recently, it is well known that, the most dangerous form of skin cancer
among the other types of skin cancer is melanoma because it’s much more likely to
spread to other parts of the body if not diagnosed and treated early. The non-invasive
medical computer vision or medical image processing plays increasingly significant
role in clinical diagnosis of different diseases. Such techniques provide an automatic
image analysis tool for an accurate and fast evaluation of the lesion. The steps
involved in this study are collecting dermoscopy image database, preprocessing,
segmentation using thresholding, statistical feature extraction using Gray Level Co-
occurrence Matrix (GLCM), Asymmetry, Border, Color, Diameter (ABCD) etc..,
features selection using Principal component analysis (PCA), calculating total
Dermoscopy Score and then classification using Support Vector Machine(SVM).
The accuracy shows 92.1%
Skin cancer detection and classification [13]

Skin cancer is the most common type of cancer, which affects the life of
millions of people every year. About three million people are diagnosed with the
disease every year in the United States alone. The rate of survival decreases steeply
as the disease progresses. However, detection of skin cancer in the early stages
is a difficult and expensive process. In this study, we propose a methodology that
detects and identifies skin lesions as benign or malignant based upon images taken
from general cameras. The images are segmented, features extracted by applying the
ABCD rule and a Neural Network is trained to classify the lesions to a high degree
of accuracy. The trained Neural Network achieved an overall classification accuracy
of 76.9% on a dataset of 463 images, divided into six distinct classes.

Automatic Skin Cancer Detection using Machine Learning Techniques [14]

The incidence of skin cancer, particularly malignant melanoma, the most dangerous
type, is increasing every year. Detecting skin cancer from a skin lesion is
difficult due to artifacts, low contrast, and similar-looking structures such as moles
and scars. Hence, automatic detection of skin lesions is performed using dedicated
lesion detection techniques to meet accuracy, efficiency, and performance criteria.
The proposed algorithm applies feature extraction using the ABCD rule, GLCM, and
HOG features for early detection of skin lesions. In the proposed work, pre-
processing improves the skin lesion quality and clarity by reducing artifacts, skin
colour variation, hair, etc. Segmentation was performed using a Geodesic Active
Contour (GAC), which segments the lesion separately and is further useful for
feature extraction. The ABCD scoring method was used for extracting features of
symmetry, border, color, and diameter, while HOG and GLCM were used for extracting
textural features. The extracted features are directly passed to classifiers to classify
skin lesion between benign and melanoma using different machine learning
techniques such as SVM, KNN, and the Naïve Bayes classifier. In this work, skin lesion
images were downloaded from the International Skin Imaging Collaboration (ISIC),
comprising 328 benign and 672 melanoma images. The classification result
obtained is 97.8% accuracy and 0.94 Area Under the Curve using the SVM classifier;
additionally, the sensitivity obtained was 86.2% and the specificity was
85% using KNN.

CHAPTER 3

PROBLEM STATEMENT / MOTIVATIONS

Skin cancer is an escalating global health concern, with millions of new cases
diagnosed annually. The conventional approach to diagnosis, primarily relying on
visual skin examination, is hindered by subjectivity and variable accuracy. This
human-dependent method can lead to delayed or inaccurate diagnoses, emphasizing
the need for more reliable and efficient diagnostic tools.

The importance of early detection in skin cancer cannot be overstated, as timely


intervention significantly improves treatment outcomes and reduces the complexity
and cost of subsequent medical interventions. Dermoscopic imaging, while offering
detailed views of skin lesions, presents challenges such as lighting variability,
diverse skin types, and the need for precise lesion boundary identification,
necessitating a more advanced and automated approach to diagnosis. This project
seeks to leverage the power of deep learning, specifically utilizing DenseNet, to
enhance accuracy in skin cancer detection. The motivation stems from the potential
for automation in dermatology, offering a more objective and scalable approach to
screening. Beyond the immediate application in skin cancer detection, the project
aspires to contribute to the broader field of medical image analysis. By providing a
reproducible framework and methodology, the aim is to establish a standardized
approach that promotes collaboration, transparency, and validation within the
scientific community.
The anticipated clinical impact on patient outcomes further underscores the
significance of accurate and early skin cancer detection. Success in this endeavor
not only addresses the specific challenges in skin cancer diagnosis but also holds the
potential for generalization to other areas of healthcare, where precise image
interpretation is paramount for effective medical decision-making.

CHAPTER 4

PROPOSED SYSTEM

4.1 STEPS FOR IMPLEMENTATION OF SKIN CANCER DETECTION USING
DENSENET DEEP LEARNING MODEL

4.1.1 Data Collection


The classification model was trained and validated on the HAM10000 dataset, which
is composed of 10,015 skin images with corresponding class labels. The dataset
covers 7 important diagnostic categories of skin lesions, from which we used
four types represented as Actinic Keratosis (AKIEC), Basal Cell
Carcinoma (BCC), Benign Keratosis-like Lesions (BKL), and Dermatofibroma (DF).
The first step in our working methodology involves the careful curation of a diverse
and extensive dataset of skin images.
These images serve as the foundation of our deep learning model. The dataset
comprises a wide range of skin lesions, including several types of skin cancer such
as Melanoma, Vascular Lesions, Basal Cell carcinoma, Actinic Keratosis,
Dermatofibroma, Benign keratosis, and Nevus. To ensure the dataset's accuracy,
each image is meticulously annotated by dermatologists. This comprehensive
dataset is essential for training and evaluating our skin cancer detection model.
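
As an illustration of this step, the short Python sketch below loads the HAM10000 metadata with pandas and keeps only the four lesion classes used in this project; the file names ("HAM10000_metadata.csv", the image folder) are assumed placeholders, not the exact paths used in our experiments.

```python
import os
import pandas as pd

# Four HAM10000 diagnosis codes used in this project.
CLASSES = ["akiec", "bcc", "bkl", "df"]  # AKIEC, BCC, BKL, DF

# The metadata file name and image folder below are assumed placeholders.
metadata = pd.read_csv("HAM10000_metadata.csv")         # 'dx' column holds the label
subset = metadata[metadata["dx"].isin(CLASSES)].copy()  # keep the four classes
subset["path"] = subset["image_id"].apply(
    lambda image_id: os.path.join("HAM10000_images", image_id + ".jpg")
)

print(subset["dx"].value_counts())  # class distribution of the curated subset
```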

Fig 4.1: Sample images of Actinic Keratosis, Dermatofibroma, Basal cell carcinoma, and Benign
keratosis.
4.1.2 Data Preprocessing

Before feeding the data into our deep learning model, we perform data
preprocessing to enhance image quality and suitability for training. This
preprocessing includes resizing images to a consistent format, normalization to bring
pixel values within a standardized range, and applying data augmentation techniques
to increase the model's robustness. These steps help mitigate the risk of overfitting
and improve the model's ability to generalize to unseen data.
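
A minimal sketch of this preprocessing stage is shown below, assuming OpenCV for resizing, the 32x32 input size mentioned in Section 4.1.4, and the Keras ImageDataGenerator for augmentation; the specific augmentation parameters are illustrative choices rather than the exact settings used in training.

```python
import cv2
from keras.preprocessing.image import ImageDataGenerator

def preprocess_image(path, size=(32, 32)):
    """Read an image, resize it to a consistent shape and scale pixels to [0, 1]."""
    image = cv2.imread(path)                  # BGR image loaded from disk
    image = cv2.resize(image, size)           # uniform input size for the network
    return image.astype("float32") / 255.0    # normalized pixel values

# Augmentation generator; the settings are assumptions for illustration.
augmenter = ImageDataGenerator(
    rotation_range=20,        # small random rotations
    width_shift_range=0.1,    # horizontal shifts
    height_shift_range=0.1,   # vertical shifts
    zoom_range=0.1,
    horizontal_flip=True,     # lesions have no canonical orientation
)
# augmenter.flow(x_train, y_train, batch_size=122) would then feed the model.
```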

4.1.3 Model Architecture

The basis of our skin cancer detection approach is the DenseNet architecture.
Densely Connected Convolutional Networks are selected for their unique
architecture characterized by dense connectivity between layers.

Fig 4.2: Densenet Model

The main concept of a DenseNet is to connect each layer to every other layer,
creating a densely connected network. The structure is built from several dense
blocks, each of which consists of multiple convolutional layers that are tightly
coupled to one another. The output of every layer in a dense block is concatenated
with the outputs of all preceding layers and provided as input to the following layer
in the block. The densely connected blocks allow for deep networks without
suffering from the vanishing gradient problem, ultimately improving the model's
performance and enabling the training of even deeper networks with a relatively
small number of parameters. This concept has gained prominence in various
computer vision tasks, including image classification and object detection, by
offering state-of-the-art results with efficient parameter utilization and improved
accuracy.
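
To make the architecture concrete, the sketch below builds a DenseNet-based classifier using the pretrained DenseNet121 from keras.applications with a four-class softmax head; this is an assumed stand-in for the report's model, and the input shape and learning rate are illustrative.

```python
from keras.applications import DenseNet121
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import Adam

# Pretrained densely connected backbone without the 1000-class ImageNet head.
backbone = DenseNet121(include_top=False, weights="imagenet",
                       input_shape=(32, 32, 3))

x = GlobalAveragePooling2D()(backbone.output)   # pool the dense feature maps
outputs = Dense(4, activation="softmax")(x)     # AKIEC, BCC, BKL, DF

model = Model(inputs=backbone.input, outputs=outputs)
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```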

4.1.4 Feature extraction


Feature extraction is performed using a Convolutional Neural Network
(CNN) implemented with Keras. The CNN model consists of convolutional layers,
max-pooling layers, and dropout layers that detect and extract relevant features from
skin lesion images; the extracted features are then used to classify the different types
of skin lesions. Before feature extraction, all images are resized to a uniform size of
32x32 pixels and normalized to the range [0, 1].
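
The following sketch shows one possible Keras implementation of such a CNN feature extractor over 32x32 inputs, using the Conv2D, MaxPooling2D, Dropout, Flatten and Dense layers listed in Section 4.2; the exact number of filters and units is an assumption.

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

cnn = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),                  # regularization against overfitting
    Flatten(),
    Dense(128, activation="relu"),  # learned feature vector
    Dropout(0.5),
    Dense(4, activation="softmax"), # AKIEC, BCC, BKL, DF
])
cnn.compile(optimizer="adam", loss="categorical_crossentropy",
            metrics=["accuracy"])
```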

4.1.5 Training and Validation

During the training phase, we feed the pre-processed skin images into our
DenseNet-based model. The model iteratively refines its internal parameters to learn
discriminative features and patterns associated with skin cancer. To monitor the
model's progress and prevent overfitting, we employ a validation dataset that is
separate from the training data. Our model is trained for image classification, and
we employ a dataset consisting of a diverse range of images. The training process
spans 50 epochs, and we carefully monitor the model's performance using various
metrics.
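
A hedged sketch of this training loop is given below, using the 50 epochs and batch size of 122 reported in Section 4.4.7 together with the Keras callbacks listed in Section 4.2; x_train, y_train, x_val and y_val are placeholder arrays, and the checkpoint file name is an assumption.

```python
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping

callbacks = [
    ModelCheckpoint("densenet_skin.h5", monitor="val_loss",
                    save_best_only=True),                        # keep best weights
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3),
    EarlyStopping(monitor="val_loss", patience=8,
                  restore_best_weights=True),
]

history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),  # validation split kept separate from training
    epochs=50,
    batch_size=122,
    callbacks=callbacks,
)
```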

4.1.6 Deployment and Integration
The last step in our working methodology involves deploying the trained
model for real-world clinical applications. We explore deployment options,
including integration into telemedicine platforms. This ensures that our skin cancer
detection system can be readily accessed by dermatologists, facilitating remote
consultations, and extending its reach to underserved regions. Additionally,
continuous monitoring and updates are planned to ensure the model's ongoing
performance and adaptability to evolving dermatological practices.

4.2 LIBRARIES USED IN PROPOSED SYSTEM

• os: Provides a way to interact with the operating system and access files and
  directories.
• cv2: A computer vision library that can be used for image processing tasks.
• numpy: A numerical computing library that provides support for multi-
  dimensional arrays and matrix operations.
• matplotlib: A plotting library that can be used to visualize data.
• sklearn: A machine learning library that provides tools for data pre-processing,
  model selection, and evaluation.
• keras: A deep learning library that provides a high-level API for building and
  training neural networks.
• backend: Provides a way to interact with the backend engine of Keras (e.g.,
  TensorFlow or Theano).
• optimizers: Provides a way to specify optimization algorithms for training neural
  networks.
• Sequential: A Keras model type that allows for a linear stack of layers.
• Conv2D: A 2D convolutional layer that can be used for image processing tasks.
• MaxPooling2D: A 2D pooling layer that can be used to reduce the spatial
  dimensions of the input data.
• Activation: A layer type that applies an activation function to the input data.
• Dropout: A regularization technique that randomly drops out a fraction of the
  input units during training to reduce overfitting.
• Flatten: A layer type that flattens the input data into a 1D array.
• Dense: A fully connected layer type that applies a linear transformation to the
  input data.
• ModelCheckpoint: A Keras callback that saves the model weights after each
  epoch.
• ReduceLROnPlateau: A Keras callback that reduces the learning rate when the
  validation loss stops improving.
• EarlyStopping: A Keras callback that stops training when the validation loss stops
  improving.
• classification_report: A function that generates a report of classification metrics
  (e.g., precision, recall, F1-score).
• confusion_matrix: A function that generates a confusion matrix to evaluate the
  performance of a classification model.
• accuracy_score: A function that calculates the accuracy of a classification model.
• model_from_json: A Keras function that loads a saved model architecture from a
  JSON file.
• plot_model: A Keras function that generates a visualization of a model
  architecture.
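
For reference, a typical import block gathering the libraries listed above might look like the sketch below; it is illustrative rather than a verbatim copy of the project's source.

```python
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt

from keras import backend as K
from keras.models import Sequential, model_from_json
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping
from keras.utils import plot_model
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
```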

4.4 OUTCOMES
The expected outcome of our project is a skin cancer detection system that is
accurate, reliable, and accessible for clinical use, contributing to early diagnosis and
improved patient care in the field of dermatology.

4.4.1 Enhanced Skin Cancer Detection Model


Our primary goal is to create an advanced deep-learning model that can
effectively identify four types of skin cancer, such as Actinic Keratosis, Basal Cell
Carcinoma, Benign Keratosis-like lesions, and Dermatofibroma. We do this by
meticulously selecting and organizing a diverse set of skin cancer images and
leveraging the robust DenseNet architecture. Our commitment to this project is
driven by the potential to save lives and improve the early detection of skin cancer,
making it a vital endeavor in the realm of healthcare.

4.4.2 Enhanced Data Quality


We take the utmost care to ensure the quality of our data. By employing
techniques like resizing, normalization, and data augmentation, we optimize the
images used for training. This guarantees that our model learns from the highest
quality data, which, in turn, improves its ability to generalize and make accurate
predictions. Our dedication to data quality reflects our unwavering commitment to
the accuracy and reliability of our skin cancer detection model.

4.4.3 Efficient Feature Extraction


Our choice of the DenseNet architecture is not arbitrary; it is because it excels at
efficiently extracting essential features from skin images. These features are crucial
for detecting skin cancer and capturing intricate details that might be missed by other
models. This emphasis on feature extraction ensures that our model does not just
identify skin cancer but also provides valuable insights for dermatologists to make
more informed decisions.
4.4.4 Real-world Deployment
Our project does not stop at building a model; it extends to its real-world
application. We aim to deploy the trained model in clinical settings, making it
accessible to dermatologists through telemedicine platforms. This not only
facilitates remote consultations but also reaches underserved regions, thus
broadening its impact. By bridging the gap between technology and medical
practice, we are contributing to more accessible and efficient healthcare services.

4.4.5 Cost Savings in Healthcare


The accurate prediction of skin cancer using DenseNet can lead to cost savings
in healthcare systems. Early detection and more precise diagnoses can result in less
aggressive treatments and fewer complications, ultimately reducing the financial
burden on patients and healthcare providers alike. This not only improves healthcare
access but also contributes to cost-effective care.

4.4.6 Continuous Monitoring and Updates


We do not consider our project finished with the initial model. To ensure ongoing
success and adapt to changes in dermatological practices, we have planned for
continuous monitoring and updates. This approach guarantees our model remains
relevant and effective over time. Our dedication to continuous improvement is an
essential part of our commitment to staying at the forefront of skin cancer detection
technology.

4.4.7 Model Training and Validation


We conducted the training process with a batch size of 122, and each epoch takes
approximately 39 seconds to complete. The training process is computationally
intensive, and we train our model for a total of 50 epochs. After the completion of
50 epochs, we achieved an impressive test accuracy of approximately 94.7%.
Fig 4.3: Training and Validation Accuracy

Fig 4.4: Training and Validation loss
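
Curves like those in Figs 4.3 and 4.4 can be produced from the History object returned by model.fit; the sketch below assumes that object is named history and that the newer Keras metric keys ("accuracy"/"val_accuracy") are in use.

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot training/validation accuracy and loss from a Keras History object."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(history.history["accuracy"], label="training")
    ax1.plot(history.history["val_accuracy"], label="validation")
    ax1.set_title("Accuracy")
    ax1.legend()
    ax2.plot(history.history["loss"], label="training")
    ax2.plot(history.history["val_loss"], label="validation")
    ax2.set_title("Loss")
    ax2.legend()
    plt.show()
```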

4.4.8 Confusion matrix


A confusion matrix is a matrix that summarizes the performance of a machine
learning model on a set of test data. It is frequently used to measure the overall
performance of classification models, which aim to predict a specific label for every
input instance. The matrix reports the number of true positives (TP), true negatives
(TN), false positives (FP), and false negatives (FN) produced by the model on the
test data. The confusion matrix obtained from our classification model corresponds
to a multi-class classification task with four classes, denoted as Class 0, Class 1,
Class 2, and Class 3. Each row represents the true class, and each column represents
the predicted class.
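
A minimal sketch of how such a confusion matrix and the related report can be computed with scikit-learn is shown below; x_test and y_test are placeholder arrays for the held-out test set.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

y_pred = np.argmax(model.predict(x_test), axis=1)  # predicted class indices
y_true = np.argmax(y_test, axis=1)                 # one-hot labels -> class indices

cm = confusion_matrix(y_true, y_pred)              # rows: true class, columns: predicted class
print(cm)
print(classification_report(y_true, y_pred,
                            target_names=["AKIEC", "BCC", "BKL", "DF"]))
```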

4.5 PERFORMANCE METRICS


Sensitivity:
Sensitivity is a measure of how well a diagnostic test correctly identifies true
positive cases. For skin cancer classification, sensitivity represents the ability of a
model to correctly identify individuals with skin cancer among those who have the
condition.
It is defined as the ratio of correctly predicted positive instances to all actual
positive instances. Sensitivity for each class has been obtained and the average has
been calculated.
Sensitivity (Recall) = TP / (TP + FN)

Class 0: Sensitivity (R0) = TP0 / (TP0 + FN0)


= 510 / (510 + 20)
≈ 0.9623

Class 1: Sensitivity (R1) = TP1 / (TP1 + FN1)


= 440 / (440 + 25)
≈ 0.9462

Class 2: Sensitivity (R2) = TP2 / (TP2 + FN2)


= 440 / (440 + 58)
≈ 0.8835

Class 3: Sensitivity (R3) = TP3 / (TP3 + FN3)


= 500 / (500 + 0)
=1

Average Sensitivity = (R0 + R1 + R2 + R3) / 4
                    = (0.96 + 0.94 + 0.88 + 1) / 4
                    = 0.945 (94.5%)
Specificity:
Specificity measures how well a diagnostic test correctly identifies true
negative cases. In the context of skin cancer, it represents the ability of a model to
correctly identify individuals without skin cancer among those who are free of the
condition.
It is defined as the ratio of correctly predicted negative instances to all actual
negative instances. Specificity for each class has been obtained and the average has
been calculated.
Specificity = TN / (TN + FP)

Class 0: Specificity (S0) = TN0 / (TN0 + FP0)


= 1224 / (1224 + 30)
≈ 0.9764

Class 1: Specificity (S1) = TN1 / (TN1 + FP1)


= 1221 / (1221 + 50)
≈ 0.9607

Class 2: Specificity (S2) = TN2 / (TN2 + FP2)


= 1233 / (1233 + 18)
≈ 0.9856

Class 3: Specificity (S3) = TN3 / (TN3 + FP3)


= 1040 / (1040 + 0)
=1

Average Specificity = (S0 + S1 + S2 + S3) / 4
                    = (0.97 + 0.96 + 0.98 + 1) / 4
                    = 0.977 (97.7%)

Precision:
Precision, also known as positive predictive value, measures the accuracy of
positive predictions made by a model. In the context of skin cancer, it indicates the
proportion of correctly predicted skin cancer cases among all predicted skin cancer
cases. Precision for each class has been obtained and the average has been
calculated.

Precision (P) = TP / (TP + FP)

Class 0: Precision (P0) = TP0 / (TP0 + FP0)


= 510 / (510 + 30)
≈ 0.9444

Class 1: Precision (P1) = TP1 / (TP1 + FP1)


= 440 / (440 + 50)
≈ 0.8974

Class 2: Precision (P2) = TP2 / (TP2 + FP2)


= 440 / (440 + 18)
≈ 0.9609

Class 3: Precision (P3) = TP3 / (TP3 + FP3)


= 500 / (500 + 0)
=1

Average Precision = (P0 + P1 + P2 + P3) / 4
                  = (0.94 + 0.89 + 0.96 + 1) / 4
                  = 0.947 (94.7%)
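
The per-class calculations above can be reproduced directly from the confusion matrix; the sketch below, assuming the matrix is stored in cm with rows as true classes and columns as predicted classes, derives sensitivity, specificity and precision for each class and their averages.

```python
import numpy as np

def per_class_metrics(cm):
    """Return per-class sensitivity, specificity and precision from a confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                # true positives per class
    fp = cm.sum(axis=0) - tp        # false positives per class
    fn = cm.sum(axis=1) - tp        # false negatives per class
    tn = cm.sum() - (tp + fp + fn)  # true negatives per class
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return sensitivity, specificity, precision

sens, spec, prec = per_class_metrics(cm)
print("Average sensitivity:", sens.mean())
print("Average specificity:", spec.mean())
print("Average precision:", prec.mean())
```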

CHAPTER 5

RESULTS AND DISCUSSION

The input is obtained from the user as an image file path from the local
system. When entered, the given input image is predicted from the four classes of
cancers represented as Actinic Keratosis (AKIEC), Basal Cell Carcinoma (BCC),
Benign Keratosis-like Lesions (BKL), Dermatofibroma (DF), and gives the
predicted label below the input image given by the user. The image below is the
output obtained by running our program: whenever a user gives the file path of the
image to be classified, the input image is passed through the model and the predicted
label is displayed at the output.
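
An illustrative inference sketch matching this description is given below: the user supplies an image path, the image is preprocessed in the same way as during training, and the predicted lesion label is printed; the class ordering is an assumption and must match the label encoding used at training time.

```python
import cv2
import numpy as np

# Assumed class order; it must match the encoding used when training the model.
LABELS = ["Actinic Keratosis (AKIEC)", "Basal Cell Carcinoma (BCC)",
          "Benign Keratosis-like Lesions (BKL)", "Dermatofibroma (DF)"]

path = input("Enter the path of the skin lesion image: ")
image = cv2.resize(cv2.imread(path), (32, 32)).astype("float32") / 255.0
probs = model.predict(np.expand_dims(image, axis=0))[0]  # class probabilities
print("Predicted label:", LABELS[int(np.argmax(probs))])
```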

Fig 5.1: User input and Predicted output with the label

Fig 5.2: Plot of training and validation accuracy

Fig 5.3: Plot of Training and Validation loss

CHAPTER 6

CONCLUSION & FUTURE WORKS

In this project, a DenseNet-based deep learning model was developed for the
detection of skin cancer from dermoscopic images. The model was trained and
validated on four diagnostic categories of the HAM10000 dataset, namely Actinic
Keratosis, Basal Cell Carcinoma, Benign Keratosis-like Lesions, and
Dermatofibroma, and achieved a test accuracy of approximately 94.7%, with an
average sensitivity of 94.5%, specificity of 97.7%, and precision of 94.7%. These
results indicate that dense connectivity enables efficient learning of discriminative
lesion features with a relatively small number of parameters.
There remain several directions in which this work can be extended. The
classification can be broadened to cover all seven diagnostic categories of the
HAM10000 dataset, and training on larger and more diverse datasets can improve
generalization across skin types and imaging conditions. Integration of the trained
model into telemedicine platforms would support remote consultations and extend
access to underserved regions, while continuous monitoring and periodic retraining
are planned so that the system remains aligned with evolving dermatological
practices.

REFERENCES

[1] R. S. Kumar, A. Singh, S. Srinath, N. K. Thomas and V. Arasu, "Skin Cancer


Detection using Deep Learning," 2022 International Conference on Electronics and
Renewable Systems (ICEARS), Tuticorin, India, 2022, pp. 1724-1730, doi:
10.1109/ICEARS53579.2022.9751826.

[2] M. Vidya and M. V. Karki, "Skin Cancer Detection using Machine Learning
Techniques," 2020 IEEE International Conference on Electronics, Computing and
Communication Technologies (CONECCT), Bangalore, India, 2020, pp. 1-5, doi:
10.1109/CONECCT50063.2020.9198489.

[3] M. A. Hossin, F. F. Rupom, H. R. Mahi, A. Sarker, F. Ahsan, and S. Warech,


"Melanoma Skin Cancer Detection Using Deep Learning and Advanced
Regularizer," 2020 International Conference on Advanced Computer Science and
Information Systems (ICACSIS), Depok, Indonesia, 2020, pp. 89-94, doi:
10.1109/ICACSIS51025.2020.9263118.

[4] K. V. Keerthi and V. V. Kumar, "Review on Different Skin Cancer Detection


and Classification Techniques," 2020 International Conference on Communication
and Signal Processing (ICCSP), Chennai, India, 2020, pp. 0426-0431, doi:
10.1109/ICCSP48568.2020.9182096.

[5] A. K. Gairola, V. Kumar, and A. K. Sahoo, "Exploring the strengths of Pre-


trained CNN Models with Machine Learning Techniques for Skin Cancer
Diagnosis," 2022 IEEE 2nd Mysore Sub Section International Conference
(MysuruCon), Mysuru, India, 2022, pp. 1-6, doi:
10.1109/MysuruCon55714.2022.9972741.

[6] A. Haloi and P. R. Dutta, "Skin cancer classification using convolutional neural
networks," in 2015 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Hamburg, Germany, 2015, pp. 3927-3932, doi:
10.1109/IROS.2015.7354069.

[7] Ravva Sai Sanketh, M. Madhu Bala, Panati Viswa Narendra Reddy and G. V. S.
Phani Kumar, “Melanoma Disease Detection Using Convolutional Neural
Networks,” International Conference on Intelligent Computing and Control Systems
(ICICCS 2020).

[8] M. Babar, R. T. Butt, H. Batool, M. A. Asghar, A. R. Majeed and M. J. Khan,
"A Refined Approach for Classification and Detection of Melanoma Skin Cancer
using Deep Neural Network," 2021 International Conference on Digital Futures and
Transformative Technologies (ICoDT2), Islamabad, Pakistan, 2021, pp. 1-6, doi:
10.1109/ICoDT252288.2021.9441520.

[9] M. Ramachandro, T. Daniya and B. Saritha, "Skin Cancer Detection Using


Machine Learning Algorithms," 2021 Innovations in Power and Advanced
Computing Technologies (i-PACT), Kuala Lumpur, Malaysia, 2021, pp. 1-7, doi:
10.1109/i-PACT52855.2021.9696874.

[10] M. M. I. Rahi, F. T. Khan, M. T. Mahtab, A. K. M. Amanat Ullah, M. G. R.
Alam, and M. A. Alam, "Detection of Skin Cancer Using Deep Neural Networks,"
2019 IEEE Asia-Pacific Conference on Computer Science and Data Engineering
(CSDE), Melbourne, VIC, Australia, 2019, pp. 1-7, doi: 10.1109/CSDE48274.2019.9162400.

[11] Dildar, M.; Akram, S.; Irfan, M.; Khan, H.U.; Ramzan, M.; Mahmood, A.R.;
Alsaiari, S.A.; Saeed, A.H.M.; Alraddadi, M.O.; Mahnashi, M.H. Skin Cancer
Detection: A Review Using Deep Learning Techniques. Int. J. Environ. Res. Public
Health 2021, 18, 5479.

[12] H. Alquran et al., "The melanoma skin cancer detection and classification using
support vector machine," 2017 IEEE Jordan Conference on Applied Electrical
Engineering and Computing Technologies (AEECT), Aqaba, Jordan, 2017, pp. 1-5,
doi: 10.1109/AEECT.2017.8257738.

[13] P. Dubal, S. Bhatt, C. Joglekar, and S. Patil, "Skin cancer detection and
classification," 2017 6th International Conference on Electrical Engineering and
Informatics (ICEEI), Langkawi, Malaysia, 2017, pp. 1-6, doi:
10.1109/ICEEI.2017.8312419.

[14] Azadeh Noori Hoshyar, A. Al-Jumaily and R. Sulaiman, "Review on automatic


early skin cancer detection," 2011 International Conference on Computer Science
and Service System (CSSS), Nanjing, China, 2011, pp. 4036-4039, doi:
10.1109/CSSS.2011.5974581.

[15] R. B. Aswin, J. A. Jaleel and S. Salim, "Hybrid genetic algorithm — Artificial


neural network classifier for skin cancer detection," 2014 International Conference
on Control, Instrumentation, Communication and Computational Technologies
(ICCICCT), Kanyakumari, India, 2014, pp. 1304-1309, doi: 10.1109/ICCICCT.2014.6993162.

[16] J. Daghrir, L. Tlig, M. Bouchouicha and M. Sayadi, "Melanoma skin cancer


detection using deep learning and classical machine learning techniques: A hybrid
approach," 2020 5th International Conference on Advanced Technologies for Signal
and Image Processing (ATSIP), Sousse, Tunisia, 2020, pp. 1-5, doi:
10.1109/ATSIP49331.2020.9231544.

[17] S. Mane and S. Shinde, "A Method for Melanoma Skin Cancer Detection Using
Dermoscopy Images," 2018 Fourth International Conference on Computing
Communication Control and Automation (ICCUBEA), Pune, India, 2018, pp. 1-6,
doi: 10.1109/ICCUBEA.2018.8697804.

[18] J. Zhao, H. Lui, D. I. McLean and H. Zeng, "Real-time raman spectroscopy for
non-invasive skin cancer detection - preliminary results," 2008 30th Annual
International Conference of the IEEE Engineering in Medicine and Biology Society,
Vancouver, BC, Canada, 2008, pp. 3107-3109, doi: 10.1109/IEMBS.2008.4649861.

[19] H. Ramos, R. Giraldo, and F. Arbeláez, "Early diagnosis of melanoma from 3D


body scans: A deep learning approach," in 2018 IEEE 15th International
Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 2018, pp.
1304-1307, doi: 10.1109/ISBI.2018.8363730.


[21] Usharani Bhimavarapu and Gopi Battineni, Healthcare, 2022, Volume 10,
Number 5, Page 962, doi: 10.3390/healthcare10050962.

[22] R. Ashraf et al., "Region-of-Interest Based Transfer Learning Assisted


Framework for Skin Cancer Detection," in IEEE Access, vol. 8, pp. 147858-147871,
2020, doi: 10.1109/ACCESS.2020.3014701.

[23] N. A. Al-Dmour, M. Salahat, H. K. G. Nair, N. Kanwal, M. Saleem and N.


Aziz, "Intelligence Skin Cancer Detection using IoT with a Fuzzy Expert System,"
2022 International Conference on Cyber Resilience (ICCR), Dubai, United Arab
Emirates, 2022, pp. 1-6, doi: 10.1109/ICCR56254.2022.9995733.

[24] L. Wei, K. Ding and H. Hu, "Automatic Skin Cancer Detection in Dermoscopy
Images Based on Ensemble Lightweight Deep Learning Network," in IEEE Access,
vol. 8, pp. 99633-99647, 2020, doi: 10.1109/ACCESS.2020.2997710.

[25] N.A and C. S. Nair, "Multiclass Skin Lesion Classification Using Densenet,"
2022 Third International Conference on Intelligent Computing Instrumentation and
Control Technologies (ICICICT), Kannur, India, 2022, pp. 506-510,
doi: 10.1109/ICICICT54557.2022.9917913.

[26] R. Desai, J. Panchal, A. Mitra, S. Mamlekar, H. Chari and S. Aswale, "Skin


Cancer Detection Using Machine Learning: A Survey," 2022 3rd International
Conference on Intelligent Engineering and Management (ICIEM), London, United
Kingdom, 2022, pp. 31-35, doi:10.1109/ICIEM54221.2022.9853084.

[27] Heibel, H.D., Hooey, L. & Cockerell, C.J. A Review of Non-invasive


Techniques for Skin Cancer Detection in Dermatology. Am J Clin Dermatol 21,
513–524 (2020).

[28] G. S. Jayalakshmi and V. S. Kumar, "Performance analysis of Convolutional


Neural Network (CNN) based Cancerous Skin Lesion Detection System," 2019
International Conference on Computational Intelligence in Data Science (ICCIDS),
Chennai, India, 2019, pp. 1-6, doi: 10.1109/ICCIDS.2019.8862143.

[29] A. Rasmi and A. Jayanthiladevi, "Skin Cancer Detection and Classification
System by Applying Image Processing and Machine Learning Techniques," Opt.
Mem. Neural Networks 32, 197–203 (2023), doi: 10.3103/S1060992X23030086.

[30] A. A. Adegun and S. Viriri, "FCN-Based Densenet Framework for Automated


Detection and Classification of Skin Lesions in Dermoscopy Images," in IEEE
Access, vol. 8, pp. 150377-150396, 2020, doi: 10.1109/ACCESS.2020.3016651.

PUBLICATION PROOF
