Diagnosis of cystic lesions using panoramic and CBCT images based on deep learning neural
network
Department of Periodontology, Daejeon Dental Hospital, Institute of Wonkwang Dental Research, Wonkwang
University College of Dentistry, Daejeon, Korea
*Correspondence:
Jae-Hong Lee, PhD, Department of Periodontology, Daejeon Dental Hospital, Wonkwang University College of
Dentistry, 77, Dunsan-ro, Seo-gu, Daejeon 35233, Korea
This article has been accepted for publication and undergone full peer review but has not been through the
copyediting, typesetting, pagination and proofreading process, which may lead to differences between this
version and the Version of Record. Please cite this article as doi: 10.1111/ODI.13223
Objectives: The aim of the current study was to evaluate the detection and diagnosis of three types of odontogenic
cystic lesions (OCLs)—odontogenic keratocysts, dentigerous cysts, and periapical cysts—using dental panoramic
radiography and cone beam computed tomographic (CBCT) images based on a deep convolutional neural network
(CNN).
Methods: The GoogLeNet Inception-v3 architecture was used to enhance the overall performance of the detection and
diagnosis of OCLs based on transfer learning. Diagnostic indices (area under the ROC curve [AUC], sensitivity,
specificity, and confusion matrix with and without normalization) were calculated and compared between pretrained
models using panoramic and CBCT images.
Results: The pretrained model using CBCT images showed good diagnostic performance (AUC = 0.914, sensitivity =
96.1%, specificity = 77.1%), which was significantly greater than that achieved by other models using panoramic
images (AUC = 0.847, sensitivity = 88.2%, specificity = 77.0%) (P = 0.014).
Conclusions: This study demonstrated that the three types of OCLs in the panoramic and CBCT image datasets can be effectively detected and diagnosed using the deep CNN architecture. In particular, we
found that the deep CNN architecture trained with CBCT images achieved higher diagnostic performance than that
trained with panoramic images.
1 INTRODUCTION
Odontogenic cystic lesions (OCLs) are pathological epithelium-lined cavities containing fluid, semi-fluid, or solid contents (Binnie, 1999). Dentigerous and periapical cysts, which account for most OCLs, are benign and noninvasive, whereas the odontogenic keratocyst (OKC) is highly likely to recur, exhibits locally aggressive behavior, and has the potential to undergo malignant transformation (Stoelinga, 2012; Kaczmarzyk et al., 2012). Therefore, early
detection and diagnosis of OCLs are expected to reduce morbidity and mortality through long-term follow-up and
early intervention.
In the dental field, the usefulness of deep learning has been assessed for the detection, classification, and segmentation of orthodontic landmarks, dental caries, periodontal disease, and osteoporosis, but these applications are still at a very preliminary stage (J. H. Lee et al., 2018a, 2018b; J. S. Lee et al., 2018; C. Lee et al., 2019). In particular, to the best of our knowledge, no published studies have directly compared dental panoramic radiography and cone beam computed tomographic (CBCT) images for the identification and classification of major OCLs based on deep CNN architectures. Therefore, the present study was conducted to compare panoramic and CBCT images for the detection and diagnosis of three types of major OCLs based on a deep CNN architecture trained with supervised learning.
2 METHODS
2.1 Datasets
This study was conducted at the Department of Periodontology, Daejeon Dental Hospital, Wonkwang University and
approved by the Institutional Review Board of Daejeon Dental Hospital, Wonkwang University (approval no.
W1908/002-001). Based on histopathological examinations by a board-certified oral pathologist at Daejeon Dental
Hospital, Wonkwang University, a dental panoramic and CBCT image dataset (INFINITT PACS, Infinitt, Seoul,
Korea) containing diagnoses of three OCL types (OKC, dentigerous cyst, and periapical cyst) was acquired between
January 2014 and December 2018 (Figure 1). Even if histologically confirmed, cases that were difficult to identify clearly on the radiographic images were excluded from the dataset.
The dataset consisted of 2,126 images, including 1,140 (53.6%) panoramic and 986 (46.4%) CBCT images. There
were 260 (12.2%) panoramic and 188 (8.8%) CBCT images classified as OKCs, 463 (21.8%) panoramic and 396
(18.6%) CBCT images classified as dentigerous cysts, and 417 (19.6%) panoramic and 402 (18.9%) CBCT images
classified as periapical cysts. We randomly split 80% of the dataset (panoramic images, n = 912; CBCT images, n = 789) into training (panoramic images, n = 684; CBCT images, n = 592) and validation (panoramic images, n = 228; CBCT images, n = 197) sets, and reserved the remaining 20% (panoramic images, n = 228; CBCT images, n = 197) for testing. We used the training set for deep learning. As a technical and strategic measure to avoid overfitting, the training and validation datasets were randomly augmented 100 times (panoramic images, n = 68,400; CBCT images, n = 59,200) using horizontal and vertical flipping, rotation (within a range of 20°), width and height shifting (within a range of 0.2), shearing (within a range of 0.5), and zooming (in the range of 0.8–1.2) (Shin et al., 2016).
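For illustration, the augmentation described above can be expressed with the ImageDataGenerator class of Keras, the framework cited in this study (Chollet, 2017); the exact implementation is not reported, and the directory layout, target image size, and batch size in the following sketch are assumptions rather than details of this study.

```python
# Minimal sketch of the described augmentation using Keras's ImageDataGenerator.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,          # scale pixel values to [0, 1]
    horizontal_flip=True,       # random horizontal flipping
    vertical_flip=True,         # random vertical flipping
    rotation_range=20,          # rotation within a range of 20 degrees
    width_shift_range=0.2,      # horizontal shift within a range of 0.2
    height_shift_range=0.2,     # vertical shift within a range of 0.2
    shear_range=0.5,            # shear within a range of 0.5
    zoom_range=[0.8, 1.2],      # zoom in the range 0.8-1.2
)

# Hypothetical directory of training images, one subfolder per OCL class.
train_generator = train_datagen.flow_from_directory(
    "data/train",               # e.g. data/train/okc, data/train/dentigerous, ...
    target_size=(299, 299),     # default Inception v3 input size
    batch_size=32,
    class_mode="categorical",   # three-class one-hot labels
)
```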
In this study, we exploited a pretrained deep CNN architecture derived from the GoogLeNet Inception v3 model,
which was developed by the Google research team and is known for excellent performance in image detection and
classification (Figure 2) (Chollet, 2017). This architecture consists of three convolutional layers, nine inception modules containing convolution filters at various scales, two fully connected layers, and a softmax function (Szegedy et al., 2016). Transfer learning from the pretrained weights was used to compensate for the relatively small number of cases and to enhance the overall detection and diagnosis performance for OCLs (Szegedy et al., 2016). We
optimized the weights by adjusting the hyperparameters including the learning rate (in the range of 0.0001–0.1), batch
size (in the range of 16–64), and dropout rate (in the range of 0.25–0.75) and by using batch normalization (Abadi et
al., 2016). A brief description of the terms related to deep learning is provided in Appendix 1.
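A minimal sketch of this transfer-learning setup is given below. It loads ImageNet-pretrained Inception v3 weights and adds two fully connected layers, batch normalization, dropout, and a three-class softmax head, as described above; the layer widths and the particular learning rate and dropout values are illustrative choices within the reported ranges, not the exact configuration used in this study.

```python
# Sketch of transfer learning from ImageNet-pretrained Inception v3 to a
# three-class OCL classifier (illustrative layer widths and hyperparameters).
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 3  # OKC, dentigerous cyst, periapical cyst

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False  # freeze the convolutional base for the initial phase

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1024, activation="relu"),   # first fully connected layer
    layers.BatchNormalization(),
    layers.Dropout(0.5),                     # dropout within the 0.25-0.75 range
    layers.Dense(256, activation="relu"),    # second fully connected layer
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=optimizers.Adam(learning_rate=1e-4),  # within the 0.0001-0.1 range
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(train_generator, validation_data=val_generator, epochs=...) would
# then train the classifier head; selected layers of the base can later be
# unfrozen for fine-tuning.
```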
Chi-squared tests were used to compare categorical data (sex, age group, and location) among the three OCL groups. Diagnostic indices (area under the receiver operating characteristic curve [AUC], sensitivity, specificity, and confusion matrix with and without normalization) were calculated and compared between the pretrained models using panoramic and CBCT images.
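The following sketch illustrates how these indices could be computed with scikit-learn for a three-class test set. The labels and softmax outputs are random placeholders, and the per-class sensitivity and specificity are derived from the confusion matrix in a one-versus-rest manner; this is not the authors' evaluation code, only an illustration of the reported indices.

```python
# Illustration of the reported diagnostic indices with scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Placeholder test-set outputs: replace with the true labels and the model's
# softmax predictions for the 197 CBCT (or 228 panoramic) test images.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=197)          # integer labels for 3 OCL classes
y_prob = rng.dirichlet(np.ones(3), size=197)   # softmax-like class probabilities
y_pred = y_prob.argmax(axis=1)

# Multi-class AUC (one-vs-rest, macro-averaged).
auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")

# Confusion matrix without and with row-wise normalization (as in Figure 4).
cm = confusion_matrix(y_true, y_pred)
cm_norm = cm.astype(float) / cm.sum(axis=1, keepdims=True)

# Per-class sensitivity (recall) and specificity derived from the matrix.
for k in range(cm.shape[0]):
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"class {k}: sensitivity = {tp / (tp + fn):.3f}, "
          f"specificity = {tn / (tn + fp):.3f}")
print(f"macro AUC = {auc:.3f}")
```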
3 RESULTS
The baseline characteristics of the patients, who were diagnosed with OKCs, dentigerous cysts, and periapical cysts,
are presented in Table 1. A total of 247 patients were enrolled in this study, comprising 167 (67.6%) males and 80 (32.4%)
females. In terms of age, the number of individuals in their 20s was the smallest (n = 14; 5.7%), and the number of
those aged 40–59 was the highest (n = 98; 39.7%). The OCLs were most frequently located in the mandibular molar
region (n = 133; 53.8%), and the least frequently in the maxillary premolar region (n = 4; 1.6%).
The pretrained deep CNN architecture using CBCT images provided higher diagnostic accuracy and AUC, which were significantly greater than those achieved by the model using panoramic images (AUC difference = 0.067, 95% CI
0.013–0.122, P = 0.014) (Table 2 and Figure 3). When using panoramic images, the AUC was 0.847 (95% CI 0.760–
0.911), the sensitivity was 88.2%, and the specificity was 77.0%. For CBCT images, the AUC was 0.914 (95% CI
0.841–0.961), the sensitivity was 96.1%, and the specificity was 77.1%.
Figure 4 presents the confusion matrix, with and without normalization, showing the diagnostic results of the OCLs.
When using panoramic images, the total diagnostic accuracy was 84.6%; the diagnostic accuracy was highest for
periapical cysts (87.0%) and lowest for OKCs (81.8%). For CBCT images, the total diagnostic accuracy was 91.4%;
the diagnostic accuracy was highest for periapical cysts (93.7%), and lowest for OKCs (87.2%).
4 DISCUSSION
In the last few years, several deep learning models have been proposed and have achieved significant success in various fields. In particular, supervised learning of deep CNN architectures has shown pronounced and efficient results that rival or surpass human experts in several types of medical imaging (Gulshan et al., 2016; Esteva et al., 2017). In this paper, we have demonstrated that a deep CNN architecture can be used for the detection and diagnosis of major OCLs, including OKCs, dentigerous cysts, and periapical cysts, using panoramic and CBCT images, and have confirmed the possibility of its use in clinical practice with computer-aided diagnostic (CAD) systems.
When the crown or root of an impacted or nonvital tooth is associated with a solitary and unilocular cyst, it is difficult
to clearly distinguish among the three major types of OCLs based only on radiological findings (Scholl et al., 1999;
Borghesi et al., 2018). Nevertheless, our results achieved good and predictable accuracy (total accuracy = 87.8%).
Although more panoramic images than CBCT images were used for training, we found that the CBCT image dataset–based deep CNN architecture showed higher diagnostic accuracy, sensitivity, and specificity than the model trained on panoramic images (panoramic images = 84.7%, 95% CI 76.0–91.1; CBCT images = 91.4%, 95% CI 84.1–96.1). This
result indicates that CBCT images have important advantages over conventional panoramic images, such as higher detail and fewer artifacts at the anatomical boundaries between the region of interest (ROI) and the background (Scarfe et al., 2006).
OKCs account for approximately 20% of OCLs and occur, and frequently recur, mainly in the posterior mandible of males (MacDonald-Jankowski, 2011). Approximately 75% of dentigerous cysts occur in the mandible, mostly in young and middle-aged adults, whereas periapical cysts occur mainly in middle-aged to older adults (Dunfee et al., 2006). Recent
studies have reported better diagnostic performance and prognostic stratification when applying conventional risk and
sociodemographic factors to the deep learning algorithm (Muhammad et al., 2017; Yala et al., 2019). Therefore,
hybrid deep learning models using both conventional risk and sociodemographic factors and radiological images are
expected to show improved diagnostic accuracy compared with conventional deep CNN architectures.
In this study, we used a pretrained deep CNN architecture derived from the GoogLeNet Inception v3 model and fine-tuned it with various training hyperparameters, including the learning rate, batch size, dropout rate, and number of fully connected layers, which were carefully determined during preliminary trials. Despite the wide variability in the shape, size,
irregular border, and density and viscosity of the cystic contents of OCLs in radiological images, the pretrained deep
CNN architecture showed efficient edge detection and texture feature extraction through convolutional and dense
layers with hierarchical structure representations (Wang, 2016).
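The preliminary trials mentioned above are not described in detail in this study; the sketch below shows one plausible way a search over the reported hyperparameter ranges could be organized. The helper functions build_model and make_generators are hypothetical stand-ins for the model and data pipeline sketched in the Methods section.

```python
# Illustrative grid search over the reported hyperparameter ranges; not the
# authors' actual tuning procedure.
import itertools

learning_rates = [1e-4, 1e-3, 1e-2]      # within the reported 0.0001-0.1
batch_sizes = [16, 32, 64]               # within the reported 16-64
dropout_rates = [0.25, 0.5, 0.75]        # within the reported 0.25-0.75

best_acc, best_params = 0.0, None
for lr, bs, dr in itertools.product(learning_rates, batch_sizes, dropout_rates):
    # Hypothetical helpers: build_model returns a compiled Inception v3
    # classifier like the one sketched earlier; make_generators rebuilds the
    # data generators with the requested batch size (the batch size is fixed
    # when the generator is created).
    model = build_model(learning_rate=lr, dropout=dr)
    train_gen, val_gen = make_generators(batch_size=bs)
    history = model.fit(train_gen, validation_data=val_gen,
                        epochs=10, verbose=0)      # epoch count is illustrative
    val_acc = max(history.history["val_accuracy"])
    if val_acc > best_acc:
        best_acc, best_params = val_acc, (lr, bs, dr)

print(f"best validation accuracy {best_acc:.3f} "
      f"with (lr, batch size, dropout) = {best_params}")
```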
Despite its potential application to CAD systems in clinical practice, we identified several limitations in our approach.
First, the absolute size of the training dataset was small. Owing to the data-driven nature of deep learning algorithms, performance generally increases with the size of the training dataset. Therefore, to extract the essential local features and avoid overfitting, a large and high-quality dataset is very important for building a good detection and diagnosis model. To mitigate overfitting during training, we adopted data augmentation and transfer learning with fine-tuning. Nevertheless, the small dataset remains a major limitation of this study.
5 CONCLUSION
The results show that the three types of OCLs in our radiological image dataset were effectively detected and diagnosed with the presented deep CNN architecture. However, the diagnostic accuracy of OCLs using radiological assessment alone is lower than that of histological examination, and accurate diagnosis based on radiological images alone
is still challenging. We hope that the current study provides insight for future large dataset–based deep learning
research.
ACKNOWLEDGEMENTS
CONFLICTS OF INTEREST
None to declare.
AUTHOR CONTRIBUTIONS
Jae-Hong Lee conducted a literature review, organized the data and drafted the manuscript. Do-Hyung Kim was
responsible for data interpretation and edited the manuscript. Seong-Nyum Jeong critically reviewed the manuscript.
REFERENCES
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., . . . Zheng, X. (2016). TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv e-print arXiv:1603.04467.
Binnie, W. H. (1999). Periodontal cysts and epulides. Periodontology 2000 21: 16-32.
Borghesi, A., Nardi, C., Giannitto, C., Tironi, A., Maroldi, R., Di Bartolomeo, F., & Preda, L. (2018).
Odontogenic keratocyst: imaging features of a benign lesion with an aggressive behaviour. Insights Into
Imaging 9(5): 883-897. https://doi.org/10.1007/s13244-018-0644-z
Chollet, F. (2017). Keras. Available: https://github.com/fchollet/keras.
Dunfee, B. L., Sakai, O., Pistey, R., & Gohel, A. (2006). Radiologic and pathologic characteristics of benign and
malignant lesions of the mandible. Radiographics 26(6): 1751-1768. https://doi.org/10.1148/rg.266055189
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-
level classification of skin cancer with deep neural networks. Nature 542(7639): 115-118.
https://doi.org/10.1038/nature21056
Goodfellow, I. J., Warde-Farley, D., Mirza, M., Courville, A., & Bengio, Y. (2013). Maxout networks. arXiv e-
print arXiv:1302.4389.
Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., . . . Webster, D. R. (2016).
Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal
Fundus Photographs. JAMA 316(22): 2402-2410. https://doi.org/10.1001/jama.2016.17216
Hricak, H. (2018). 2016 New Horizons Lecture: Beyond Imaging-Radiology of Tomorrow. Radiology 286(3): 764-
775. https://doi.org/10.1148/radiol.2017171503
Kaczmarzyk, T., Mojsa, I., & Stypulkowska, J. (2012). A systematic review of the recurrence rate for keratocystic
odontogenic tumour in relation to treatment modalities. International Journal of Oral and Maxillofacial
Surgery 41(6): 756-767. https://doi.org/10.1016/j.ijom.2012.02.008
Kim, J. R., Shim, W. H., Yoon, H. M., Hong, S. H., Lee, J. S., Cho, Y. A., & Kim, S. (2017). Computerized
Bone Age Estimation Using Deep Learning Based Program: Evaluation of the Accuracy and Efficiency. AJR.
American Journal of Roentgenology 209(6): 1374-1380. https://doi.org/10.2214/AJR.17.18224
Lee, C., Tanikawa, C., Lim, J.-Y., & Yamashiro, T. (2019). Deep Learning based Cephalometric Landmark
Identification using Landmark-dependent Multi-scale Patches. arXiv e-print arXiv:1906.02961.
Lee, J. G., Jun, S., Cho, Y. W., Lee, H., Kim, G. B., Seo, J. B., & Kim, N. (2017). Deep Learning in Medical
Imaging: General Overview. Korean Journal of Radiology 18(4): 570-584.
https://doi.org/10.3348/kjr.2017.18.4.570
Lee, J. H., Kim, D. H., Jeong, S. N., & Choi, S. H. (2018b). Diagnosis and prediction of periodontally
compromised teeth using a deep learning-based convolutional neural network algorithm. Journal of
Periodontal & Implant Science 48(2): 114-123. https://doi.org/10.5051/jpis.2018.48.2.114
Lee, J. S., Adhikari, S., Liu, L., Jeong, H. G., Kim, H., & Yoon, S. J. (2018). Osteoporosis detection in
panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system:
a preliminary study. Dento Maxillo Facial Radiology: 20170344. https://doi.org/10.1259/dmfr.20170344
MacDonald-Jankowski, D. S. (2011). Keratocystic odontogenic tumour: systematic review. Dento Maxillo Facial
Radiology 40(1): 1-23. https://doi.org/10.1259/dmfr/29949053
Muhammad, H., Fuchs, T. J., De Cuir, N., De Moraes, C. G., Blumberg, D. M., Liebmann, J. M., . . . Hood,
D. C. (2017). Hybrid Deep Learning on Single Wide-field Optical Coherence tomography Scans Accurately
Classifies Glaucoma Suspects. Journal of Glaucoma 26(12): 1086-1094.
https://doi.org/10.1097/IJG.0000000000000765
Nurtanio, I., Astuti, E. R., Purnama, I. K. E., Mochamad Hariadi, & Purnomo, M. H. (2013). Classifying Cyst
and Tumor Lesion Using Support Vector Machine Based on Dental Panoramic Images Texture Features.
IAENG International Journal of Computer Science 40(4): 04.
Nurtanio, I., Purnama, I. K. E., Hariadi, M., & Purnomo, M. H. (2011). Cyst and Tumor Lesion Segmentation on
Dental Panoramic Images using Active Contour Models. IPTEK The Journal of Technology and Science
22(3): 152-158.
Scarfe, W. C., Farman, A. G., & Sukovic, P. (2006). Clinical applications of cone-beam computed tomography in
dental practice. Journal (Canadian Dental Association) 72(1): 75-80.
Scholl, R. J., Kellett, H. M., Neumann, D. P., & Lurie, A. G. (1999). Cysts and cystic lesions of the mandible:
clinical and radiologic-histopathologic review. Radiographics 19(5): 1107-1124.
https://doi.org/10.1148/radiographics.19.5.g99se021107
Shin, H. C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., . . . Summers, R. M. (2016). Deep
Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics
and Transfer Learning. IEEE Transactions on Medical Imaging 35(5): 1285-1298.
https://doi.org/10.1109/TMI.2016.2528162
Soffer, S., Ben-Cohen, A., Shimon, O., Amitai, M. M., Greenspan, H., & Klang, E. (2019). Convolutional
Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 290(3): 590-606.
https://doi.org/10.1148/radiol.2018180547
Stoelinga, P. J. (2012). Kaczmarzyc et al.: A systematic review of the recurrence rate for keratocystic odontogenic
tumour in relation to treatment modalities. International Journal of Oral and Maxillofacial Surgery 41(12):
1585-1586; author reply 1586-1587. https://doi.org/10.1016/j.ijom.2012.08.003
Suzuki, K. (2017). Overview of deep learning in medical imaging. Radiological Physics and Technology 10(3): 257-
273. https://doi.org/10.1007/s12194-017-0406-5
TABLES
Table 1. Baseline characteristics of the patients diagnosed with odontogenic keratocysts, dentigerous cysts, and periapical cysts.
P-values were calculated using the chi-squared test. Italics denote statistical significance (P < 0.05).
Table 2. AUC, sensitivity, and specificity for the detection and diagnosis of oral cystic lesions using dental panoramic
radiography and cone beam computed tomographic images based on deep learning neural network
FIGURE LEGEND
Figure 1. Dental panoramic radiography and cone beam computed tomographic (CBCT) image dataset, which
included diagnoses of three oral cystic lesion types—keratocysts, dentigerous cysts, and periapical cysts—based on
histopathological examinations.
Figure 2. Simplified overall scheme of pretrained deep CNN architecture derived from the GoogLeNet Inception v3
model.
Figure 3. Receiver operating characteristic (ROC) curves for the detection and diagnosis of oral cystic lesions using dental panoramic radiography and CBCT images.
Figure 4. Confusion matrix with and without normalization, showing the diagnostic results of oral cystic lesions. (A, C) without normalization, (B, D) with normalization.