

Prediction of Cervical Cancer in Women with Colposcopy Images Using CNN Models

Dr. R. Thangarajan BE, ME, PhD
Faculty, Department of Computer Science and Design,
Kongu Engineering College, Perundurai, Erode-638060, Tamilnadu, India
rt.cse@kongu.edu

Dr. S. Anitha BE, ME, PhD
Faculty, Department of Computer Science and Design,
Kongu Engineering College, Perundurai, Erode-638060, Tamilnadu, India
anithame.it@kongu.edu

A. Deenu Mol BE, ME
Faculty, Department of Information Technology,
Kongu Engineering College, Perundurai, Erode-638060, Tamilnadu, India
deenumol.it@kongu.ac.in

P. Revathi
UG Student, Department of Information Technology,
Kongu Engineering College, Perundurai, Erode-638060, Tamilnadu, India
revathip.21it@kongu.edu

V. Swetha
UG Student, Department of Information Technology,
Kongu Engineering College, Perundurai, Erode-638060, Tamilnadu, India
swethav.21it@kongu.edu

U. Vimalan
UG Student, Department of Information Technology,
Kongu Engineering College, Perundurai, Erode-638060, Tamilnadu, India
vimalanu.21it@kongu.edu

Abstract - Cervical cancer remains one of the most prevalent diseases globally, posing a significant health challenge, particularly for women in low-resource settings. Delayed diagnosis in these contexts often leads to high morbidity and mortality rates, highlighting the critical need for effective early detection methods. This study explores the potential of Convolutional Neural Networks (CNNs), an advanced deep learning architecture, to enhance diagnostic accuracy using colposcopy images. The research systematically evaluates the performance of pre-trained models, including DenseNet121, ResNet50, and VGG19, as feature extractors within a binary classification framework. Each model is integrated with a custom fully connected network for the classification of cervical neoplasia within a transfer learning pipeline. After training and validating the models on a carefully curated dataset of colposcopic images, DenseNet121 demonstrated the highest diagnostic accuracy at 69.39%. Although this accuracy is moderate, DenseNet121 shows potential for improving diagnostic workflows by providing non-invasive, automated screening methods. The study suggests that deep learning models can deliver reliable and rapid diagnostic outcomes, offering valuable tools for cervical cancer screening.

1. INTRODUCTION

As one of the most prevalent diseases affecting women globally, cervical cancer continues to pose a serious threat to global health. The World Health Organization (WHO) reports that a significant portion of women's cancer-related fatalities are caused by cervical cancer, especially in low- and middle-income nations with insufficient access to preventive healthcare. The effect of this disease can be reduced by early identification and treatment; unfortunately, standard screening techniques, such as Pap smears and HPV tests, are frequently unavailable, expensive, or prone to delays in settings with limited resources. This emphasizes the critical need for widely deployable, accurate, efficient, and affordable diagnostic technologies.

The field of medical image analysis has undergone a revolution thanks to recent advances in deep learning, namely in Convolutional Neural Networks (CNNs). In a variety of medical disciplines, CNNs have shown impressive performance in tasks like object identification, segmentation, and image classification. CNNs have the ability to automatically identify anomalies in colposcopy images, which are frequently used for visual inspection of the cervix in the context of cervical cancer screening. By utilizing CNNs, it is feasible to improve the accessibility of early detection, lower human error, and improve diagnostic accuracy.

The effectiveness of various cutting-edge pre-trained architectures inside a transfer learning framework is the main focus of this study's investigation into the use of CNN models for cervical cancer prediction using colposcopy images. In particular, the performance of models like DenseNet121, ResNet50, and VGG19 as feature extractors was assessed in a binary classification task designed to detect cervical neoplasia. Through optimization of these models and integration with fully connected layers customized for the classification task, this inquiry strives to identify the most efficient architecture.

The sections that follow describe the model training and evaluation methodology, present the experimental results, and consider the implications of using CNN-based methods for cervical cancer detection.

2. LITERATURE REVIEW

Madhura M. Kalbhor and Swati V. Shinde (2023) explored the use of transfer learning and pre-trained convolutional neural networks (CNNs) for cervical cancer detection. Their study evaluated several deep learning models, including ResNet-50, GoogleNet, ResNet-18, and AlexNet, using a dataset of cervical cancer images. The results revealed that ResNet-50 achieved a testing accuracy of 92.02%, GoogleNet 96.01%, ResNet-18 88.76%, and AlexNet 87.68%. Despite the limitations posed by the dataset size and the models' adaptability to varying cancer stages and datasets, the study demonstrated the potential of pre-trained models in enhancing classification accuracy.

In a separate study, Nina Youneszade, Mohsen Marjani, and Sayan Kumar Ray (2023) developed a predictive model utilizing CNN algorithms on digital colposcopy images to accurately detect cervical abnormalities. The model involved preprocessing the images, extracting significant features, and classifying them. It achieved a classification accuracy of 94%, with a sensitivity of 93% and specificity of 92%. However, the small dataset size and variations in image quality posed challenges to the model's generalizability across different clinical settings and stages of cervical disease.

Similarly, a study by Sandeep Kumar Mathivanan, Divya Francis, Saravanan Srinivasan, Vaibhav Khatavkar, Karthikeyan P, and Mohd Asif Shah (2024) focused on implementing CNNs for detecting cervical abnormalities using digital colposcopy images. Their CNN-based model attained an overall accuracy of 94.7%, indicating its capability to differentiate between positive and negative cases. Nevertheless, the study highlighted the need for greater generalization across various datasets to improve the model's adaptability in clinical environments, considering the variability in imaging conditions and patient demographics.

Nina Youneszade, Mohsen Marjani, and Chong Pei (2023) further investigated deep learning techniques for cervical cancer diagnosis, emphasizing the challenges and opportunities in improving feature extraction and classification processes. The study employed CNNs to classify colposcopy images, achieving an accuracy of 93.5%, sensitivity of 92%, and specificity of 91.8%. Despite these promising results, the study encountered issues such as inconsistent image quality, a limited dataset, and difficulty in generalizing the findings to different clinical contexts and cancer stages. Addressing these challenges is crucial for enhancing the robustness and applicability of these models in real-world medical diagnostics.

Hannah Ahmadzadeh Sarhangi et al. (2024) introduced a deep learning-based approach for cervical cancer screening using an enhanced CNN architecture. The model, trained and validated on a dataset of cervical images, achieved a classification accuracy of 91.45%. However, the study's generalizability was constrained by the limited dataset size, which may affect its applicability to various clinical settings and stages of cervical abnormalities.

Additionally, research by Shefa Tawalbeh, Hiam Alquran, and Mohammed Alsalatie (2023) focused on deep feature engineering for detecting cervical cancer using colposcopy images. The study involved preprocessing images, applying deep learning techniques for feature extraction, and comparing different machine learning classifiers to identify the most effective approach. The classification results demonstrated the effectiveness of deep feature extraction, with an accuracy of approximately 96%. However, similar to other studies, the research was limited by the small dataset size and inconsistent image quality, which could impact the model's stability and generalizability across different clinical settings.

Sai Prasanthi Neerukonda (2023) employed transfer learning techniques to classify cervical cancer images using pre-trained CNNs such as VGG16, ResNet, and InceptionV3. The study refined these models through preprocessing techniques to enhance image quality and feature extraction, achieving a high classification accuracy exceeding 90%. Despite these promising results, the study faced challenges related to the small dataset size and image quality variability, which could limit the model's applicability in diverse clinical scenarios. Suggestions for improving accuracy included advanced data augmentation and exploring alternative model architectures.

2.1 VGG 19

VGG19 is a deep convolutional neural network with 19 layers, known for its straightforward architecture of sequential 3x3 convolution layers and max-pooling layers. It excels in feature extraction, especially for complex images, making it a popular choice for transfer learning due to its well-structured design and reliable performance.
Fig.1 VGG19 Architecture for Deep Feature Extraction in Colposcopy Image Analysis

2.2 ResNet50

ResNet50 is a 50-layer deep network that utilizes residual connections to address the vanishing gradient problem, allowing for effective training of very deep networks. Its ability to maintain performance even in deep architectures makes it highly effective for image recognition tasks, enabling more accurate and detailed image analysis.

Fig.2 ResNet50 Architecture for Deep Feature Extraction in Colposcopy Image Analysis

2.3 DenseNet121

DenseNet121 is a deep network where each layer is directly connected to every other layer, enhancing feature reuse and reducing the total number of parameters. This connectivity leads to better generalization and often higher accuracy, making it particularly effective for complex image classification tasks by leveraging its efficient parameter usage.

Fig.3 DenseNet121 Architecture for Feature Extraction in Colposcopy Image Analysis

2.4 Deep Learning Classification of Cervical Images

This research focuses on the application of deep learning techniques to classify cervical images for detecting both cancerous and non-cancerous conditions. The study utilized three well-established CNN models (VGG19, ResNet50, and DenseNet121), selected for their effectiveness in extracting features from medical images. Each model was adapted by incorporating additional dense layers to facilitate binary classification and was trained with augmented data to enhance performance and generalization. Model evaluation was conducted using metrics such as specificity, sensitivity, positive predictive value (PPV), and negative predictive value (NPV), alongside accuracy and confusion matrices. DenseNet121 outperformed the other models, demonstrating superior capability in discerning fine details indicative of cervical abnormalities.
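To make the adaptation described above concrete, the following is a minimal sketch of one such model, assuming TensorFlow/Keras with ImageNet weights from keras.applications; the head layout (a 256-unit dense layer with 0.5 dropout) is an illustrative choice, not the paper's exact configuration. Swapping DenseNet121 for ResNet50 or VGG19 yields the other two variants.

```python
# Minimal transfer-learning sketch: a frozen pretrained backbone with a
# custom dense head for binary (cancer vs. negative) classification.
# Head sizes and optimizer settings are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

def build_classifier(input_shape=(224, 224, 3)):
    # Load ImageNet weights and drop the original classification top.
    base = DenseNet121(weights="imagenet", include_top=False,
                       input_shape=input_shape)
    base.trainable = False  # freeze the backbone; train only the new head

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # binary output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```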
3. MAIN TASK

Colposcopy is an essential diagnostic process that uses enlarged images of the cervix to assess cervical abnormalities. Colposcopy images are first acquired from human subjects to obtain high-resolution visual data in a variety of lighting scenarios. Careful image acquisition is essential to ensure that the acquired images accurately depict the cervical tissues, including normal, precancerous, and malignant lesions.

The analytical process is divided into six primary stages:

1) Quality Assessment and Optimization of Colposcopy Images (Section III-A).
2) Image Preparation and Normalization (Section III-B).
3) Division of Anatomical Regions (Section III-C).
4) Multimodal Image Registration (Section III-D).
5) Identification of Pathological Features (Section III-E).
6) Characterization of Global Features (Section III-F).

After this series of methodical procedures, a thorough analysis of the colposcopy images is produced, which offers insightful information for the early diagnosis and identification of cervical cancer and supports prompt clinical intervention and therapy. Usually, these tasks are applied one after the other, as Fig.4 shows.
Fig.4 Sequential Workflow for Colposcopy Image Analysis in Cervical Cancer Detection

A. QUALITY ASSESSMENT AND OPTIMIZATION

In order to guarantee the dependability of colposcopy images for further analysis, quality assessment and optimization are essential processes. Several elements can impact the quality of these images, including focus, lighting, motion artifacts, and noise. Consequently, in order to increase image clarity and diagnostic utility, it is critical to assess these factors and implement the required improvements.

Evaluation of Image Quality: First, the images are examined for noise and artifacts, as well as for brightness, contrast, and sharpness. Finding these problems is crucial, since inferior images can cause abnormalities to be mistakenly detected.
Noise Reduction: Background noise is a prevalent issue in medical imaging. Denoising algorithms, including median filtering, Gaussian smoothing, or more sophisticated techniques like Non-Local Means (NLM), can be applied to eliminate undesired noise while keeping crucial details in the image.
Adjustments to Contrast and Brightness: Improving contrast or brightness can help highlight significant details that may be hard to detect in dimly lit or underexposed images, such as lesions or aberrant tissues. For this, histogram equalization is frequently employed.
Image Sharpening: By improving the edges of tissues, anatomical features become more recognizable for examination.
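A lightweight sketch of these enhancement steps, assuming OpenCV; the filter strengths, CLAHE clip limit, and sharpening weights are illustrative values, not clinically tuned settings.

```python
# Illustrative quality-optimization steps with OpenCV: denoising,
# adaptive contrast adjustment (CLAHE), and unsharp-mask sharpening.
import cv2

def enhance(image_bgr):
    # Non-Local Means denoising preserves edges better than plain blurring.
    den = cv2.fastNlMeansDenoisingColored(image_bgr, None, 7, 7, 7, 21)

    # CLAHE on the luminance channel boosts local contrast.
    lab = cv2.cvtColor(den, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # Unsharp masking: subtract a blurred copy to emphasize tissue edges.
    blur = cv2.GaussianBlur(enhanced, (0, 0), sigmaX=3)
    return cv2.addWeighted(enhanced, 1.5, blur, -0.5, 0)
```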
B. PREPROCESSING AND NORMALIZATION

To provide accurate and consistent analysis of the colposcopy images, preprocessing and normalization are crucial. By standardizing the data, these procedures guarantee that the images are in the proper format and state for subsequent processing.

Resizing: Resizing the images to a uniform dimension is important, since, depending on the capture device, colposcopy images come in a variety of sizes. Images are usually downsized to standard input sizes (e.g., 224x224 pixels for DenseNet or ResNet architectures) in order to prepare them for deep learning models.
Image Normalization: This process makes sure that pixel values fall within a predetermined range, usually between 0 and 1. This phase helps standardize the intensity across the dataset so that deep learning models converge well.
Data Enrichment: To improve the diversity of the training set, augmentation methods such as flipping, rotating, zooming, and brightness alteration are used. This reduces the likelihood of overfitting and improves the model's ability to generalize.
Artifact Removal: During the colposcopy process, some undesired aspects, such as specular highlights or reflections from the light source, might mask important features. Techniques for minimizing or eliminating these artifacts are frequently included in the preprocessing stages.
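The artifact-removal and normalization steps above might be sketched as follows, assuming OpenCV; the brightness threshold of 240 used to flag specular highlights is an illustrative assumption.

```python
# Sketch of two preprocessing steps named above: masking specular
# highlights and inpainting them, then resizing and scaling to [0, 1].
import cv2
import numpy as np

def preprocess(image_bgr, size=(224, 224)):
    # Specular reflections appear as near-saturated pixels; mask and inpaint.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mask = (gray > 240).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))
    clean = cv2.inpaint(image_bgr, mask, inpaintRadius=5,
                        flags=cv2.INPAINT_TELEA)

    # Resize to the network's input size and normalize pixel values.
    resized = cv2.resize(clean, size)
    return resized.astype(np.float32) / 255.0
```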
C. ANATOMICAL REGION SEGMENTATION

Anatomical region segmentation is an essential first step to isolate particular regions of interest, like the cervix, lesions, or other pertinent areas, from the background or surrounding tissues. By ensuring that only pertinent anatomical regions are examined, segmentation enhances the precision of feature extraction and the ensuing predictions.

Segmentation Techniques: Expert manual segmentation was the standard in the past, but it is time-consuming and prone to human error. To precisely segment anatomical regions, modern methods use automated algorithms such as Fully Convolutional Networks (FCNs), U-Net, or level-set methods.
Cervix Boundary Delineation: Precisely determining the cervix's boundary is essential for the identification of cervical cancer. The segmentation algorithms often separate the background and irrelevant tissues while concentrating on detecting the cervix and any lesion sites inside it.
Semantic Segmentation: Semantic segmentation differs from instance segmentation in that the former allocates each pixel to a certain class (such as cervix, lesion, or background), while the latter detects individual objects (such as distinct lesions). Either approach can be used, depending on how complicated the analysis is.
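As a sketch of the automated route, a pared-down U-Net-style encoder-decoder for cervix/lesion masks could look like the following in Keras; the depth and filter counts are illustrative, and a full U-Net would use more levels with skip connections at each one.

```python
# Minimal U-Net-style segmentation sketch: per-pixel probability of the
# cervix/lesion class. Architecture sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(256, 256, 3)):
    inp = layers.Input(shape=input_shape)

    # Encoder: two downsampling blocks.
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, activation="relu", padding="same")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = layers.Conv2D(128, 3, activation="relu", padding="same")(p2)

    # Decoder: upsample and concatenate the skip connections.
    u2 = layers.UpSampling2D()(b)
    u2 = layers.Concatenate()([u2, c2])
    c3 = layers.Conv2D(64, 3, activation="relu", padding="same")(u2)
    u1 = layers.UpSampling2D()(c3)
    u1 = layers.Concatenate()([u1, c1])
    c4 = layers.Conv2D(32, 3, activation="relu", padding="same")(u1)

    # One-channel sigmoid output: the per-pixel mask.
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inp, out)
```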
D. MULTIMODAL IMAGE REGISTRATION

Images may occasionally be taken at different times or with various colposcopy methods (e.g., Hinselmann, Schiller). In order to enable a coherent analysis of the data, image registration makes sure that these multiple modalities or temporal images are aligned in a single reference frame.

Image Alignment: Under multimodal image registration, two or more images of the same scene taken under various lighting situations are aligned. This procedure adjusts for differences in camera angles, patient movement, or device settings that may have resulted in spatial disparities.
Feature-Based and Intensity-Based Registration: In feature-based registration, important points in each image, including anatomical landmarks, are found and matched between modalities. Intensity-based registration aligns the images by comparing the intensities of corresponding pixels and minimizing the discrepancies between them.
Rigid vs. Non-Rigid Registration: Rigid registration employs basic transformations like translation and rotation and is predicated on the idea that the anatomical features in the images do not deform. Non-rigid (or deformable) registration accounts for complex deformations, which is helpful when soft tissues, such as the cervix, change shape between images.
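A feature-based registration sketch, assuming OpenCV; ORB keypoints stand in for anatomical landmarks here, and a homography is an illustrative choice of transform (a rigid or affine model would match the stricter assumptions discussed above).

```python
# Feature-based registration sketch: match ORB keypoints between two
# acquisitions and warp one onto the other's reference frame.
import cv2
import numpy as np

def register(moving_bgr, fixed_bgr):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(moving_bgr, None)
    k2, d2 = orb.detectAndCompute(fixed_bgr, None)

    # Brute-force Hamming matching, keeping the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched landmark pairs before estimating the warp.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = fixed_bgr.shape[:2]
    return cv2.warpPerspective(moving_bgr, H, (w, h))
```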
E. PATHOLOGICAL FEATURE DETECTION

Detecting diseased characteristics comes after segmenting the necessary anatomical regions and aligning the various modalities. This entails using certain image properties to identify aberrant tissues, such as precancerous or cancerous tumors.

Feature Extraction: To detect abnormalities in tissue texture, color, or shape, pathological detection algorithms rely on feature extraction techniques. To discern between normal and pathological tissue, texture-based techniques like Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and others are frequently used.
Lesion Detection: To identify lesions that might be indicative of dysplasia or other precancerous conditions, the colposcopy images are analyzed. Machine learning or deep learning models trained on large annotated datasets are commonly used to classify tissues as normal or abnormal based on appearance.
Edge Identification and Morphology Analysis: It is critical to detect suspicious locations that might require additional study by looking for sharp edges and irregular morphological traits, such as thickened tissues or atypical growth patterns.
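A sketch of the texture descriptors named above, assuming scikit-image; the HOG cell sizes and LBP radius are illustrative parameters.

```python
# HOG + LBP texture features for a grayscale region of interest, ready
# to feed a downstream normal/abnormal classifier.
import numpy as np
from skimage.feature import hog, local_binary_pattern

def texture_features(gray_image):
    # Histogram of Oriented Gradients over the whole region.
    hog_vec = hog(gray_image, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))

    # Uniform LBP histogram: a compact summary of local micro-texture
    # (P=8, R=1 yields values 0..9, hence 10 histogram bins).
    lbp = local_binary_pattern(gray_image, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)

    # Concatenate into one feature vector.
    return np.concatenate([hog_vec, lbp_hist])
```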
F. GLOBAL FEATURE CHARACTERIZATION

Beyond localized lesions, global feature characterization offers an overall perspective of the entire colposcopy image, enabling a more comprehensive evaluation of tissue health. The main goal of this step is the analysis of broad patterns, textures, and color distributions that might point to underlying anomalies.

Texture Analysis: To examine the cervix's overall surface appearance, global texture features are extracted. Tissue health can be indicated by the degree of smoothness, roughness, and regularity; abnormal textures are frequently a sign of pathological alterations.
Color Distribution: Because distinct tones and saturations may correspond with certain tissue states, the global distribution of colors throughout the image is examined. High redness, for instance, can be a symptom of inflammation, whereas pale or discolored spots could be an indication of lesions or aberrant tissue growth.
Structural Analysis: The goal of global structural analysis is to find any significant anomalies or deformities in the overall shape of the cervix. This could include anomalous swelling, asymmetry, or indentations that could be connected to underlying diseases.

4. PROPOSED METHODOLOGY

This research investigates the application of state-of-the-art convolutional neural networks (CNNs) for the classification of cervical images, specifically focusing on DenseNet121, ResNet50, and VGG19. Each of these models brings distinct advantages to medical image classification. DenseNet121 is celebrated for its densely connected layers, which facilitate efficient feature reuse and mitigate the vanishing gradient problem, thereby enabling the model to learn richer representations with fewer parameters. ResNet50, on the other hand, introduces residual learning, where shortcut connections allow for deeper networks without performance degradation, excelling at capturing complex visual patterns in cervical images. Lastly, VGG19 is known for its deep yet simple architecture, employing small convolution filters to extract fine-grained details from images. Although computationally heavier than the other two models, VGG19's depth and structure allow it to perform well in tasks requiring detailed feature extraction. Data augmentation techniques, including random rotation, flipping, and zooming, are employed to enhance model robustness and prevent overfitting. The performance of these models is rigorously evaluated using key metrics like accuracy, specificity, sensitivity, positive predictive value (PPV), and negative predictive value (NPV). Furthermore, the research explores an ensemble approach, combining the strengths of DenseNet121, ResNet50, and VGG19 to potentially improve classification accuracy and robustness. The ensemble aims to capitalize on the diverse feature extraction capabilities of these models, ultimately improving the classification of cervical images.

4.1 Methodology I: Classification of Hinselmann Test Images Using Pretrained CNN Models

This approach involves applying pretrained Convolutional Neural Networks (CNNs) to classify images from the Hinselmann test. Three well-established CNN architectures (VGG19, ResNet50, and DenseNet121) are chosen for their advanced feature extraction capabilities. Each model is modified by removing the top layers and adding new layers tailored for binary classification tasks. The models are trained on a dataset of Hinselmann test images, which undergoes preprocessing that includes rescaling, shearing, zooming, and flipping to improve model robustness. The training and validation datasets are converted into TensorFlow tf.data.Dataset objects to optimize data processing. After training the models for 15 epochs, their performance is assessed using metrics such as accuracy and sensitivity. Evaluation results, including confusion matrices and classification reports, are generated to compare the effectiveness of each model in classifying Hinselmann test images. A sketch of this pipeline follows.
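A condensed sketch of this workflow, assuming Keras with images arranged in class subfolders; the directory names, batch size, and augmentation ranges are illustrative assumptions, the build_classifier helper is reused from the sketch in Section 2.4, and the plain generator flow stands in for the tf.data.Dataset conversion described above. The closing function also illustrates the ensemble idea from Section 4.

```python
# Illustrative training pipeline for the Hinselmann experiment; paths
# and hyperparameters are assumptions, not the paper's exact settings.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1.0 / 255, shear_range=0.2,
                               zoom_range=0.2, horizontal_flip=True)
val_gen = ImageDataGenerator(rescale=1.0 / 255)

train = train_gen.flow_from_directory("hinselmann/train",
                                      target_size=(224, 224),
                                      class_mode="binary", batch_size=16)
val = val_gen.flow_from_directory("hinselmann/val",
                                  target_size=(224, 224),
                                  class_mode="binary", batch_size=16)

# Train one transfer-learning model for 15 epochs.
model = build_classifier()
model.fit(train, validation_data=val, epochs=15)

# Ensemble idea from Section 4: average the sigmoid outputs of the three
# fine-tuned networks (equal weights are an illustrative assumption).
def ensemble_predict(models, images):
    probs = [m.predict(images, verbose=0) for m in models]
    return (np.mean(probs, axis=0) >= 0.5).astype(int)  # 1 = cancer
```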

Fig.5 A Comparative Display of Hinselmann Colposcopy Images: Negative and Cancerous Tissue

Table 1: Performance Comparison of Pretrained CNN Models on Hinselmann Test Image Classification
_________________________________________________
Model         Accuracy   Class      Precision   Recall   F1 score
_________________________________________________
DenseNet121   98%        Cancer     0.61        0.50     0.55
                         Negative   0.46        0.57     0.51
ResNet50      88%        Cancer     0.41        0.58     0.60
                         Negative   0.31        0.41     0.53
VGG19         66%        Cancer     0.59        1.00     0.74
                         Negative   0.00        0.00     0.12
_________________________________________________

4.2 Methodology II: Classification of Schiller Test Images Using Pretrained CNN Models

This methodology leverages pretrained Convolutional Neural Networks (CNNs) for the classification of Schiller test images. The approach involves adapting several advanced CNN architectures, initially trained on large image datasets, for the specific task of binary classification. By replacing the final layers of these models with new dense layers suited for binary classification, the pretrained networks are fine-tuned to classify Schiller test images. The images are preprocessed with techniques such as rescaling and augmentation to improve the robustness of the models. The performance of these models is evaluated based on various metrics, including accuracy, sensitivity, and specificity, to determine their effectiveness in accurately categorizing Schiller test images. This methodology aims to identify the most effective CNN architecture for this classification task.

Fig.6 A Comparative Display of Schiller Colposcopy Images: Negative and Cancerous Tissue
Table 2: Performance Comparison of Pretrained CNN Models on Schiller Test Image Classification
_________________________________________________
Model         Accuracy   Class      Precision   Recall   F1 score
_________________________________________________
DenseNet121   98%        Cancer     0.61        0.50     0.55
                         Negative   0.46        0.57     0.51
ResNet50      96%        Cancer     0.00        0.00     0.00
                         Negative   0.43        1.00     0.60
VGG19         65%        Cancer     0.57        1.00     0.73
                         Negative   0.00        0.00     0.10
_________________________________________________

4.3 Methodology III: Classification of Green Test Images Using Pretrained CNN Models

In this approach, we leverage pretrained Convolutional Neural Networks (CNNs) to classify images from the Green test dataset. The method involves adapting well-known CNN architectures, which have been previously trained on extensive image datasets, for the specific task of Green image classification. Each model is fine-tuned by adding new layers designed for binary classification. Data augmentation techniques are applied to improve model robustness, followed by evaluation to measure the performance of each CNN. This strategy seeks to identify which pretrained model provides the best accuracy and efficiency for classifying the Green test images.

Fig.7 A Comparative Display of Green Colposcopy Images: Negative and Cancerous Tissue

Table 3: Performance Comparison of Pretrained CNN Models on Green Test Image Classification
_________________________________________________
Model         Accuracy   Class      Precision   Recall   F1 score
_________________________________________________
DenseNet121   99%        Cancer     0.62        0.73     0.67
                         Negative   0.53        0.40     0.45
ResNet50      90%        Cancer     0.60        1.00     0.75
                         Negative   1.00        0.12     0.21
VGG19         64%        Cancer     0.57        1.00     0.73
                         Negative   0.00        0.00     0.10
_________________________________________________

4.4 Formulas and Equations


4.4.1 Confusion Matrix
A Confusion Matrix is a table used to evaluate the
performance of a classification model. It outlines the true
positive (TP), true negative (TN), false positive (FP), and false
negative (FN) outcomes.

True Positives (TP): Correctly predicted positive cases.


True Negatives (TN): Correctly predicted negative cases.
False Positives (FP): Incorrectly predicted positive cases.
False Negatives (FN): Actual positive cases that were
incorrectly predicted as negative.

4.4.2 Precision
Precision, also known as Positive Predictive Value (PPV), measures the accuracy of positive predictions.

Precision = TP / (TP + FP)

4.4.3 Recall (Sensitivity)


Recall, also known as Sensitivity or True Positive Rate (TPR), measures the ability of a model to identify all relevant positive cases.

Recall = TP / (TP + FN)

4.4.4 F1-Score
The F1-Score is the harmonic mean of Precision and Recall, balancing the trade-off between them.

F1-Score = (2 × Precision × Recall) / (Precision + Recall)
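These formulas can be computed directly from the confusion matrix; a small sketch, assuming scikit-learn and illustrative y_true/y_pred placeholders for the test labels and model predictions:

```python
# Compute the metrics above from a binary confusion matrix.
from sklearn.metrics import confusion_matrix

def binary_metrics(y_true, y_pred):
    # ravel() order for the binary case is TN, FP, FN, TP.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    precision = tp / (tp + fp)            # PPV
    recall = tp / (tp + fn)               # sensitivity / TPR
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "NPV": npv, "F1": f1}
```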

5. EXPERIMENTATION AND RESULTS

The study involved classifying cervical images using three datasets (Hinselmann, Schiller, and Green), each containing
two classes of images. The Hinselmann dataset included 221
training images and 49 testing images, the Schiller dataset had
195 training images and 49 testing images, and the Green
dataset contained 226 training images and 58 testing images.
These datasets were utilized to train and evaluate pretrained
CNN models to determine their effectiveness in classifying the
images. Performance was assessed using metrics like
accuracy, sensitivity, specificity, and precision, providing a
comprehensive evaluation of each model's diagnostic
capability. The results demonstrate the models' potential in
supporting cervical cancer screening through accurate image
classification (Table 4).

Table 4: Accuracy of Pretrained CNN Models Across Colposcopy Methods
_________________________________________________
Model         Hinselmann   Schiller   Green
_________________________________________________
DenseNet121   98%          98%        99%
ResNet50      88%          96%        90%
VGG19         66%          65%        64%
_________________________________________________

6. CONCLUSION

The comparative analysis of different colposcopy methods and CNN models reveals significant variations in model performance depending on the image type and method used. DenseNet121 demonstrated the highest accuracy across all three methods (98% on the Hinselmann and Schiller images and 99% on the Green images), suggesting that its densely connected architecture is particularly well-suited for distinguishing cervical abnormalities in colposcopy images. ResNet50's performance varied more widely, peaking at 96% on the Schiller method, whose images depend on enhanced contrast and color differentiation, while VGG19 trailed at roughly 64-66% across all three methods.

The results underscore the need for selecting model architectures tailored to specific colposcopy techniques, as different image characteristics can impact model effectiveness. They also highlight the diverse nature of colposcopy image datasets and the challenges in achieving uniformly high performance across different testing conditions. This variability in performance suggests that integrating multiple models or employing ensemble techniques might further enhance classification outcomes in real-world clinical applications. Overall, the study reinforces the importance of method-specific model selection to optimize diagnostic accuracy and reliability in cervical cancer detection using deep learning.


References

[1] M. M. Kalbhor and S. V. Shinde, "Cervical cancer diagnosis using convolution neural network: feature learning and transfer learning approaches," Soft Computing, 2023.
[2] N. Youneszade, M. Marjani, and S. K. Ray, "A Predictive Model to Detect Cervical Diseases Using Convolutional Neural Network Algorithms and Digital Colposcopy Images," IEEE Access, vol. 11, pp. 59882-59898, 2023.
[3] Z. Z. R. Permana and A. W. Setiawan, "Research Challenges in Cervical Cancer Segmentation and Classification Using Colposcopy Images," 2023 10th International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE), Semarang, Indonesia, 2023, pp. 371-376.
[4] S. K. Mathivanan, D. Francis, S. Srinivasan et al., "Enhancing cervical cancer detection and robust classification through a fusion of deep learning models," Scientific Reports, vol. 14, 10812, 2024.
[5] H. Ahmadzadeh Sarhangi, D. Beigifard, Elahe, and H. Bolhasani, "Deep learning techniques for cervical cancer diagnosis based on pathology and colposcopy images."
[6] N. Youneszade, M. Marjani, and C. P. Pei, "Deep Learning in Cervical Cancer Diagnosis: Architecture, Opportunities, and Open Research Challenges," IEEE Access, vol. 11, pp. 6133-6149, 2023.
[7] WHO, WHO Guidelines for Screening and Treatment of Precancerous Lesions for Cervical Cancer Prevention, Geneva, Switzerland: WHO, 2013.
[8] S. Tawalbeh, H. Alquran, and M. Alsalatie, "Deep Feature Engineering in Colposcopy Image Recognition: A Comparative Study," Bioengineering, vol. 10, no. 1, 105, 2023.
[9] K. Rao, M. P. Nanda, and M. Suchithra, "Detection and classification of cervical cancer using deep learning techniques," AIP Conf. Proc. 3075, 020159, 2024.
[10] S. P. Neerukonda, "Transfer Learning for Cervical Cancer Image Classification," California State University, Northridge, May 2023.
