
Open Med. 2020; 15: 190-197

Research Article

Shudong Wang, Liyuan Dong, Xun Wang, Xingguang Wang*

Classification of pathological types of lung cancer from CT images by deep residual neural networks with transfer learning strategy

https://doi.org/10.1515/med-2020-0028
received August 13, 2019; accepted December 24, 2019
Abstract: Lung cancer is one of the malignant tumors most harmful to human health. Accurate judgment of the pathological type of lung cancer is vital for treatment. Traditionally, determining the pathological type requires a histopathological examination, which is invasive and time-consuming. In this work, a novel residual neural network is proposed to identify the pathological type of lung cancer from CT images. Because only a small number of labeled CT images are available in practice, we explore a medical-to-medical transfer learning strategy: the residual neural network is pre-trained on the public medical image dataset Luna16 and then fine-tuned on our intellectual property lung cancer dataset collected at Shandong Provincial Hospital. Experiments show that our method achieves 85.71% accuracy in identifying pathological types of lung cancer from CT images, outperforming AlexNet, VGG16 and DenseNet trained on the same 2054 labeled images. Our method thus provides an efficient, non-invasive detection tool for pathological diagnosis.

Keywords: Pathological type; Lung cancer; Residual neural network; Transfer learning; CT images

*Corresponding author: Xingguang Wang, Department of Respiratory Medicine, Shandong Provincial Hospital Affiliated to Shandong University, Jinan 250021, Shandong, China, E-mail: wangsyun@upc.edu.cn
Shudong Wang, Liyuan Dong, Xun Wang, College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China
Shudong Wang, Xun Wang, School of Electrical Engineering and Automation, Tiangong University, Tianjin 300387, China

Open Access. © 2020 Shudong Wang et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 License.

1 Introduction

Lung cancer accounts for more than a quarter of all cancer deaths and is one of the major threats to human health worldwide [1]. Pathologically, lung cancer is mainly divided into two groups: small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) [2]. NSCLC includes squamous cell cancer (SCC), large cell cancer and lung adenocarcinoma, and lung adenocarcinoma in turn has two types: adenocarcinoma in situ (ISA) and invasive adenocarcinoma (IA). The characteristics and treatments of the different pathological subtypes of lung cancer differ.

Correct and timely diagnosis enables an effective treatment plan and prolongs patient survival. Nowadays, histopathology and molecular biology are the standard for pathological tumor diagnosis, but they can usually be performed only on excised tissue specimens obtained by surgical resection or needle biopsy [3]. Radiology, by contrast, is a data-centric field concerned with extracting quantitative features that characterize the radiographic phenotype of solid tumors [4]. It is hypothesized in [5] that radiographic phenotypes reflect the underlying pathophysiology, which manifests as different features in CT images.

Hardware advances in high-resolution image acquisition equipment, coupled with novel artificial intelligence (AI) algorithms and large amounts of data, have contributed to a proliferation of AI applications in medical imaging. Convolutional neural networks (CNNs) have allowed significant gains in the ability to classify images and detect objects in them [6-11]. In lieu of the often subjective visual assessment of images by trained clinicians, deep learning methods can automatically identify complex patterns [12-15]. In [16], a sparse autoencoder with a denoising strategy (SDAE) was used to differentiate breast ultrasound lesions and lung CT nodules. A hierarchical learning framework with a multi-scale CNN was proposed in [17, 18] to capture lung nodules of various sizes. After that, a CAD framework with a CNN was developed to classify breast cancer in [19]. With the disclosure of lung nodule datasets and various challenges, lung nodule detection, segmentation and classification algorithms have emerged in an endless stream [20-23].
Transfer learning was proposed to deal with limited data. It is motivated by the fact that people can intelligently apply knowledge learned previously to solve new problems faster or with better solutions, and we observe many examples of transfer learning in practice. Transfer learning was used in lung medical image processing to improve the classification accuracy of computer-aided lung nodule detection in [24]. Few transfer learning strategies, however, focus on the classification of lung cancer pathological types. To the best of our knowledge, there is no medical-to-medical transfer learning strategy; learned representations are usually transferred from general imagery.

In this work, we propose a deep learning method with a transfer learning strategy to identify the pathological types of lung cancer. Specifically, a novel residual neural network is proposed, and a medical-to-medical transfer learning strategy is developed to process medical images, thus providing an accurate and timely diagnosis of the pathological type. Our residual neural network is pre-trained on the public medical image dataset Luna16 and then fine-tuned on our intellectual property lung cancer dataset collected at Shandong Provincial Hospital. Experiments show that our method achieves 85.71% accuracy in identifying pathological types of lung cancer from CT images, outperforming other models trained on the same 2054 labeled images. Our method performs better than AlexNet, VGG16 and DenseNet, providing an efficient, non-invasive detection tool for pathological diagnosis.

To the best of our knowledge, this is the first attempt at developing a medical-to-medical transfer learning strategy for classifying pathological types of lung cancer from CT images.

2 Methods

2.1 Materials

We use two independent CT image datasets of lung cancer. The public dataset Luna16 is used for pre-training our model, i.e., for the transfer learning. The other dataset, our intellectual property collection of lung cancer CT images, was collected at Shandong Provincial Hospital and is used for fine-tuning.

2.1.1 Luna16

Luna16 is derived from the LUNA2016 challenge, which was originally created from the publicly available LIDC/IDRI database of lung CT scans. The LIDC/IDRI database contains nodule annotations collected during a two-phase annotation process by 4 experienced radiologists. LUNA2016 organized much of the LIDC/IDRI database so as to make it readily available to groups working on medical imaging classifiers. LUNA2016 contains 888 CT scans, which include 1186 nodules, and provides 551,065 candidates to be classified. Each candidate has an (x, y, z) position in world coordinates and a label as either nodule or non-nodule; note that there can be multiple candidates per nodule. The dimensions of the images are 512×512×Z, where Z varies with the height of the scanned patient. We crop each candidate image to a 50×50 ROI image on the basis of the candidate's coordinates.
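As an illustration of this cropping step, here is a minimal sketch (the function name is ours, and we assume the candidate's world coordinates have already been converted to voxel indices using the scan origin and spacing):

```python
import numpy as np

def crop_candidate_roi(ct_slice: np.ndarray, cx: int, cy: int,
                       size: int = 50) -> np.ndarray:
    """Crop a size x size ROI centered on a candidate position.

    ct_slice: one axial 512x512 slice of the scan;
    cx, cy: candidate center in voxel coordinates (assumed already
    converted from LUNA16 world coordinates via origin and spacing).
    """
    half = size // 2
    # Clamp the window so the crop stays inside the slice.
    x0 = min(max(cx - half, 0), ct_slice.shape[1] - size)
    y0 = min(max(cy - half, 0), ct_slice.shape[0] - size)
    return ct_slice[y0:y0 + size, x0:x0 + size]
```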
2.1.2 Our intellectual property lung cancer dataset

We collected 125 chest CT scan cases with tumors labeled by experienced respiratory specialists from the Department of Radiology, together with the corresponding pathology reports, at Shandong Provincial Hospital in 2018; about 1500 CT images were obtained per lung cancer patient. Before training, each image goes through a tiered grading system consisting of multiple layers of trained graders of increasing expertise for verification and correction of the image labels. The first tier performs initial quality control: all low-quality or unreadable scans are removed, tumor areas are labeled, and the tumor regions are manually cropped to obtain 50×50 ROI images. The second tier is composed of three respiratory specialists who independently graded each image that passed the first tier and verified its true label. Finally, the data were classified into four types: ISA, SCLC, SCC and IA, which are shown in Figures 1 and 2.

2.2 Residual architecture

In a residual neural network, residual learning is applied to every few stacked layers, which relieves the vanishing and exploding gradient problems. A residual block is shown in Figure 3, where x is the input value and F(x), known as the residual, is the output after the first layer of linear transformation and activation. Before the second layer is activated, the input value x is added back, so the input of the activation function becomes F(x) + x; adding x before the second activation is called a shortcut connection. Residual networks allow the training of deep networks by building the network from modules called residual blocks, and residual connections significantly boost the performance of deep neural networks.

Figure 3: Schematic diagram of a residual block.
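As a concrete sketch of such a block, here is a minimal Keras version (the 3×3 kernel and the assumption that the input already has `filters` channels are ours):

```python
from tensorflow.keras import layers

def residual_block(x, filters, kernel_size=3):
    """Two stacked conv layers with a shortcut connection:
    the output is ReLU(F(x) + x), as in Figure 3."""
    f = layers.Conv2D(filters, kernel_size, padding="same")(x)
    f = layers.ReLU()(f)       # first linear change + activation -> F(x)
    f = layers.Conv2D(filters, kernel_size, padding="same")(f)
    y = layers.Add()([f, x])   # shortcut: add x before the second activation
    return layers.ReLU()(y)    # activation applied to F(x) + x
```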

Figure 1: CT images of lung cancer pathological types; from left to right: ISA (adenocarcinoma in situ), SCLC (small cell lung cancer), SCC (squamous cell cancer) and IA (invasive adenocarcinoma). The green boxes mark the tumor ROI areas.

Figure 2: ROI areas of the four tumor types; from left to right: ISA (adenocarcinoma in situ), SCLC (small cell lung cancer), SCC (squamous cell cancer) and IA (invasive adenocarcinoma).

For lung cancer pathological type identification, we propose a novel deep residual network model, whose topological structure is shown in Figure 4. Our model has one convolutional layer, 16 residual units and 2 fully connected layers, along with their associated padding, pooling and dropout layers; the structure of each residual unit is consistent with the block shown in Figure 3. The ReLU nonlinear activation function is used in all convolutional layers, and the two fully connected layers are then adopted to extract global features.

Our model differs from the traditional ResNet-34 in the following three aspects:

(1) We use kernels of size 5×5 for the first convolutional layer, while the original ResNet-34 uses 7×7 kernels there. This choice is driven by the size of the images in our database (50×50); in our experiments, 5×5 kernels performed better than 7×7 kernels.

Figure 4: Architecture of our model, based on residual blocks, with the corresponding kernel size and number of feature maps for each convolutional layer.

(2) Our model has two fully connected layers, whereas the original ResNet-34 has only one. Since we use a transfer learning strategy, two fully connected layers are needed to improve the transferability of the model: the fully connected layers act as a "firewall" for the model's representation capability, and in our fine-tuning experiments a network without the extra fully connected layer performed worse than one with it.

(3) In our model, the sigmoid activation function is used in pre-training, and the softmax activation function is used in fine-tuning in the fully connected output layer.

The final output layer predicts the type of the input image and serves as input to the class loss term and the centering loss term during training. In the dense layers, we have two options for the activation function:

(1) softmax activation with a categorical cross-entropy loss and one-hot encoded labels;

(2) sigmoid activation with a binary cross-entropy loss.
ized network takes the image bottlenecks as input and
In pre-training, we select the sigmoid activation with binary cross-entropy as the loss function, so the network outputs 0 or 1 according to the predicted class. In fine-tuning, we use the softmax activation with a categorical cross-entropy loss and one-hot encoded labels to output the probabilities of the pathological types. The retrained model with the lowest validation loss on our own dataset is kept.
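A minimal Keras sketch of these two head configurations (the dense width of 256 and the use of Flatten are illustrative assumptions; the activation/loss pairings and learning rates follow the text and Table 1):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.optimizers import Adam

def add_head(backbone, num_classes=4, pretraining=True):
    """Attach the two fully connected layers and compile with the
    activation/loss pairing described above."""
    x = layers.Flatten()(backbone.output)
    x = layers.Dense(256, activation="relu")(x)  # illustrative width
    if pretraining:
        # Pre-training on Luna16: binary nodule / non-nodule output.
        out = layers.Dense(1, activation="sigmoid")(x)
        loss, lr = "binary_crossentropy", 0.001
    else:
        # Fine-tuning: softmax probabilities over the 4 pathological types.
        out = layers.Dense(num_classes, activation="softmax")(x)
        loss, lr = "categorical_crossentropy", 0.0001
    model = models.Model(backbone.input, out)
    model.compile(optimizer=Adam(learning_rate=lr), loss=loss,
                  metrics=["accuracy"])
    return model
```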

2.3 The transfer learning strategy

Our model is trained with the transfer learning strategy on a public dataset. In detail, the model is pre-trained on the Luna16 dataset using Keras. In pre-training, the model parameters are randomly initialized, the whole model is trained, and the weights are saved; the pre-trained model achieves an accuracy of 96.69% on Luna16. In fine-tuning, the model parameters are initialized to the weights saved after pre-training. Specifically, the first 27 convolutional layers are frozen with the loaded pre-trained weights, and the remaining convolutional layers and the fully connected layers are retrained from scratch to recognize our classes. In identifying lung cancer pathological types from our intellectual property lung cancer dataset, the frozen first 27 convolutional layers are used as fixed extractors of generic features (such as edges and color), while the later layers learn abstract features related to a particular category.

Our transfer learning strategy fine-tunes the last 8 layers. The convolutional "bottlenecks" are the values of each training and testing image after it has passed through the frozen layers of the model. Since the frozen convolutional weights cannot be updated, these values are computed once and stored in order to avoid redundant computation and speed up training. The newly initialized head takes the image bottlenecks as input and is retrained to classify our specific categories; see Figure 5. Attempts to fine-tune the model by unfreezing the pre-trained weights and updating them with backpropagation tended to decrease performance due to overfitting on our medical images.
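A sketch of the freezing step in Keras (the helper name is ours; the model is assumed to be the pre-trained network with its weights already loaded):

```python
from tensorflow.keras.layers import Conv2D

def freeze_backbone(model, n_frozen=27):
    """Freeze the first n_frozen convolutional layers so they act as
    fixed generic feature extractors; later conv layers and the two
    fully connected layers remain trainable."""
    conv_layers = [l for l in model.layers if isinstance(l, Conv2D)]
    for layer in conv_layers[:n_frozen]:
        layer.trainable = False
```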

2.4 Model details

Since the Luna16 dataset is large, it is impossible to load the whole set into memory. We therefore batch-process the ".png" files with Keras's ImageDataGenerator, generating batches directly from the image directory. This has two advantages: images are verified and resized before being fed into the network, and no manual loading is required to feed pixels into the neural network. The main drawback is that cross-validation becomes more complex, because the generator does not support automatic partitioning of the data; partitioning for validation is done manually here.
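A minimal sketch of this directory-based batching (the directory layout and rescaling are our assumptions; the target size and batch size follow the text):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stream 50x50 ROI images from disk in batches instead of loading
# the whole Luna16 set into memory; one subfolder per class is assumed.
datagen = ImageDataGenerator(rescale=1.0 / 255)
train_gen = datagen.flow_from_directory(
    "data/train",              # hypothetical path
    target_size=(50, 50),      # images are verified and resized on the fly
    color_mode="grayscale",
    batch_size=32,             # 32 in pre-training, 20 in fine-tuning
    class_mode="categorical",  # "binary" for the nodule/non-nodule stage
)
```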

The lung cancer dataset is unbalanced across classes. After data preprocessing, 67 SCLC images, 98 SCC images, 1818 IA images and 71 ISA images are available for training. Since the number of IA images is about 20 times that of the other categories, we divided the 1818 IA images into 20 groups of about 90 images each to alleviate the imbalance; in each epoch, only one of the 20 groups, chosen at random, is included in the training set.
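A sketch of this per-epoch subsampling (variable names are ours; the 20 groups follow the text):

```python
import random

def epoch_training_set(minority_images, ia_groups):
    """Build one epoch's training set: all SCLC/SCC/ISA images plus a
    single randomly chosen group of ~90 IA images (one of 20 groups)."""
    return list(minority_images) + random.choice(ia_groups)
```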
Our model is pre-trained on Ubuntu 16.04 with 2 Intel Xeon CPUs and 256 GB RAM, using an NVIDIA GTX 2080 8 GB GPU for training and testing and an NVIDIA GTX 1080 4 GB GPU for fine-tuning.

Pre-training is performed in batches of 32 images per step using the Adam optimizer with a learning rate of 0.001. Since the pre-trained weights are better than randomly initialized weights, we set a smaller learning rate for fine-tuning, which is performed in batches of 20 images per step using the Adam optimizer with a learning rate of 0.0001. Pre-training and fine-tuning are both run for 100 epochs, which is enough for the final layers to converge on all classes. Holdout testing is performed after every step by feeding images to the network without performing gradient descent and backpropagation, and the model with the best performance is kept for analysis. The parameters are summarized in Table 1.

Table 1: Parameters during pre-training and fine-tuning

                Pre-train                  Fine-tune
Hardware        NVIDIA GTX 2080 8Gb GPU    NVIDIA GTX 1080 4Gb GPU
Batch size      32                         20
Epoch           100                        100
Learning rate   0.001                      0.0001

Figure 5: The general framework of the transfer learning strategy. The upper part is pre-training and the lower part is fine-tuning; in the fine-tuning process, the weights of only some layers are updated.

Informed consent: Informed consent has been obtained from all individuals included in this study.

Ethical approval: The research related to human use complies with all relevant national regulations and institutional policies, is in accordance with the tenets of the Helsinki Declaration, and has been approved by the authors' institutional review board or equivalent committee.

3 Results

The images used for training and testing are tabulated in Table 2. Specifically, 2054 images are used for training and the remaining 168 images for testing. The training set includes 67 SCLC images, 98 SCC images, 1818 IA images and 71 ISA images; the test set holds 29 SCLC images, 43 SCC images, 65 IA images and 31 ISA images.

Table 2: The image data used for training and testing

Class          Training   Testing
SCLC           67         29
SCC            98         43
IA             1818       65
ISA            71         31
Total Images   2054       168

The average accuracy over 5 runs is 85.71%. The test results of one run of our method are illustrated by the confusion matrix in Figure 6. Of the 29 SCLC test images, 25 are correctly predicted, 3 are misclassified as ISA and 1 as IA. Of the 43 SCC images, 33 are correctly predicted, while 4 are misclassified as SCLC, 5 as ISA and 1 as IA. Of the 65 IA images, 58 are correctly predicted as IA, 2 are misclassified as SCLC and 5 as SCC. Of the 31 ISA images, 28 are correctly predicted as ISA, 2 are misclassified as SCLC and 1 as SCC.

These results show that our model is capable of predicting the medical conditions of lung cancer; the per-class recall and precision of the proposed method are listed in Table 3. Our transfer learning strategy can identify the four lung cancer pathological types from CT images. The results also show that the model is slightly better at distinguishing IA from the other categories, which may be due to the fact that there is much more IA data than data of the other types.

Figure 7 provides an analysis of the accuracy and loss values of the network during training with transfer learning. With the benefit of transfer learning, our model achieves high precision at early epochs.

Binary classifiers based on our model are also implemented to distinguish SCLC, squamous cell carcinoma and invasive lung adenocarcinoma from each other, using the same datasets, to give a breakdown of the model's performance (Table 4). The classifier distinguishing SCLC from squamous cell carcinoma achieves an accuracy of 94.5%, with a sensitivity of 100.0% and a specificity of 89.65%. The classifier distinguishing SCLC from invasive lung adenocarcinoma achieves an accuracy of 93.25%, with a sensitivity of 93.1% and a specificity of 93.84%. The classifier distinguishing squamous cell carcinoma from invasive lung adenocarcinoma achieves an accuracy of 95.75%, with a sensitivity of 95.34% and a specificity of 96.92%.
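As a worked check of Table 3, the per-class recall and precision follow directly from these confusion counts; for SCLC, for example:

```python
# SCLC: 25 of the 29 test images are predicted correctly.
recall_sclc = 25 / 29                    # = 0.8621, matching Table 3
# Images predicted as SCLC: 25 true SCLC + 4 SCC + 2 IA + 2 ISA.
precision_sclc = 25 / (25 + 4 + 2 + 2)   # = 0.7576 (0.7575 in Table 3)
print(recall_sclc, precision_sclc)
```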
Table 3: Proposed CT lung image classification with transfer learning results

Type   Recall   Precision
ISA    0.8923   0.8787
SCLC   0.8621   0.7575
SCC    0.7674   0.8461
IA     0.9032   0.9333

Table 4: Results of binary classifiers based on our model

Binary classifiers   Accuracy   Sensitivity   Specificity
SCLC/SCC             0.9450     1.00          0.8965
SCLC/IA              0.9325     0.9310        0.9384
SCC/IA               0.9575     0.9534        0.9692

Figure 6: Confusion matrix of the test result.

Figure 7: Training accuracy and cross-entropy loss plotted against the training epoch. Plots were normalized with a smoothing factor of 0.5 to clearly visualize trends.

Table 5: Accuracy (%) on our dataset over 5 runs

AlexNet [26]   VGG16 [27]   DenseNet [28]   Ours without transfer learning   Ours with transfer learning
63.98±0.10     78.42        80.7±0.89       79.53±2.64                       85.71±0.29

We compare our approach with classical statistical machine-learning classifiers and recent state-of-the-art methods to demonstrate its superiority. The AlexNet used in this experiment includes 5 convolutional layers and 3 fully connected layers. The DenseNet has 121 layers with 4 dense blocks. The original VGG16 network includes 13 convolutional layers and 3 fully connected layers. We find that 5 runs are sufficient to assess the performance of the models. All comparison models were trained on our lung cancer data without transfer learning, using the training and testing split tabulated in Table 2. As shown in Table 5, AlexNet trained on our dataset yields a relatively low accuracy of 63.98%, DenseNet reaches 80.7%, VGG16 reaches 78.42%, and our model without transfer learning reaches 79.53%. These accuracies indicate a high misdiagnosis rate, with a number of lung cancer cases left undiscriminated: with a limited training set, classical machine-learning models cannot be trained effectively. These results show that our method achieves better performance than the others.
4 Discussion

This study used CT images to analyze and identify the pathological types of lung cancer, an approach that is non-invasive, specific and reproducible. We describe a general AI model for the diagnosis of pathological subtypes of lung cancer. By employing a transfer learning strategy, our model demonstrated competitive performance in CT image analysis without the need for a highly specialized deep learning machine or a database of millions of example images. Moreover, the model's performance in diagnosing lung cancer CT images was comparable to that of human experts with significant clinical experience of lung cancer, and it outperformed the other methods. When the model was trained with a much smaller number of images (about 100 images per class), it retained high accuracy, sensitivity and specificity for achieving the correct diagnosis, illustrating the power of the transfer learning system to make highly effective classifications even with a very limited training dataset.

Our AI model was trained and validated on our intellectual property lung cancer dataset collected at Shandong Provincial Hospital, but differences in how manufacturers implement the Digital Imaging and Communications in Medicine standard cause inconsistencies among CT images from different scanners. Future studies could entail the use of images from different manufacturers in both the training and testing datasets so that the system becomes universally useful. Moreover, the efficacy of the transfer learning technique for image analysis very likely extends beyond the realm of CT images and lung cancer; in principle, the techniques we have described could be employed on a wide range of medical images across multiple disciplines.

Conflict of interest: Authors state no conflict of interest.

References

[1] Hoffman P.C., Mauer A.M., Vokes E.E., Lung cancer, Lancet, 2000, 355(9202), 479-485; DOI: 10.1016/S0140-6736(00)82038-3
[2] Travis W.D., Pathology of lung cancer, Clin. Chest Med., 2011, 32(4), 669-692; DOI: 10.1016/j.ccm.2011.08.005
[3] Song T., Rodríguez-Patón A., Pan Z., Zeng X., Spiking neural P systems with colored spikes, IEEE Transactions on Cognitive and Developmental Systems, 2018; DOI: 10.1109/TCDS.2017.2785332
[4] Hugo J.W.L.A., Emmanuel R.V., Ralph T.H.L., Chintan P., Patrick G., Sara C., et al., Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach, Nat. Commun., 2014, 5, 4006; DOI: 10.1038/ncomms5006
[5] Lambin P., Rios-Velazquez E., Leijenaar R., Carvalho S., Aerts H.J.W.L., Radiomics: extracting more information from medical images using advanced feature analysis, Eur. J. Cancer, 2012, 48(4), 441-446; DOI: 10.1016/j.ejca.2011.11.036
[6] Zeiler M.D., Fergus R., Visualizing and understanding convolutional networks, Lecture Notes in Computer Science, 2013, 8689; DOI: 10.1007/978-3-319-10590-1_53
[7] Lecun Y., Bottou L., Bengio Y., Haffner P., Gradient-based learning applied to document recognition, P. IEEE, 1998, 86(11), 2278-2324; DOI: 10.1109/5.726791
[8] Song T., Zeng X., Pan Z., Jiang M., Rodriguez-Paton A., A parallel workflow pattern modelling using spiking neural P systems with colored spikes, IEEE Transactions on Nanobioscience, 2018; DOI: 10.1109/TNB.2018.2873221

[9] Song T., Pan L., Wu T., Pan Z., Wong M.L.D., Rodriguez-Paton A., Spiking neural P systems with learning functions, IEEE Trans. Nanobioscience, 2019; DOI: 10.1109/TNB.2019.2896981
[10] Ballester P., Araujo R.M., On the performance of GoogLeNet and AlexNet applied to sketches, In: AAAI Conference on Artificial Intelligence (12-17 February 2016, Phoenix, Arizona, USA), AAAI, 2016
[11] Song T., Pan Z., Wong D.M., Wang X., Design of logic gates using spiking neural P systems with homogeneous neurons and astrocytes-like control, Information Sciences, 2016, 372, 380-391
[12] Litjens G., Kooi T., Bejnordi B.E., Setio A.A.A., Ciompi F., Ghafoorian M., et al., A survey on deep learning in medical image analysis, Med. Image Anal., 2017, 42, 60-88; DOI: 10.1016/j.media.2017.07.005
[13] Cruz-Roa A., Gilmore H., Basavanhally A., Feldman M., Ganesan S., Shih N.N.C., et al., Accurate and reproducible invasive breast cancer detection in whole-slide images: a deep learning approach for quantifying tumor extent, Sci. Rep-UK., 2017, 7, 46450; DOI: 10.1038/srep46450
[14] Esteva A., Kuprel B., Novoa R.A., Ko J., Swetter S.M., Blau H.M., et al., Dermatologist-level classification of skin cancer with deep neural networks, Nature, 2017, 542(7639), 115-118; DOI: 10.1038/nature21056
[15] Gulshan V., Peng L., Coram M., Stumpe M.C., Wu D., Narayanaswamy A., et al., Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs, JAMA, 2016, 316(22), 2402-2410; DOI: 10.1001/jama.2016.17216
[16] Cheng J.Z., Ni D., Chou Y.H., Qin J., Tiu C.M., Chang Y.C., et al., Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans, Sci. Rep-UK., 2016, 6, 24454; DOI: 10.1038/srep24454
[17] Shen W., Zhou M., Multi-scale convolutional neural networks for lung nodule classification, In: S. Ourselin (Ed.), International Conference on Information Processing in Medical Imaging (28 June-3 July 2015, Sabhal Mor Ostaig, Isle of Skye, United Kingdom), Springer International Publishing, 2015
[18] Suk H.I., Lee S.W., Shen D., Latent feature representation with stacked auto-encoder for AD/MCI diagnosis, Brain Struct. Funct., 2013, 220(2), 841-859; DOI: 10.1007/s00429-013-0687-3
[19] Chougrad H., Zouaki H., Alheyane O., Deep convolutional neural networks for breast cancer screening, Comput. Meth. Prog. Bio., 2018, 157, 19-22; DOI: 10.1016/j.cmpb.2018.01.011
[20] Liu X., Hou F., Qin H., Hao A., Multi-view multi-scale CNNs for lung nodule type classification from CT images, Pattern Recogn., 2018; DOI: 10.1016/j.patcog.2017.12.022
[21] Chen Z., Ying C., Lin C., Liu S., Li W., Multi-view vehicle type recognition with feedback-enhancement multi-branch CNNs, IEEE T. Circ. Syst. Vid., 2017; DOI: 10.1109/TCSVT.2017.2737460
[22] Çiçek Ö., Abdulkadir A., Lienkamp S.S., Brox T., Ronneberger O., 3D U-Net: learning dense volumetric segmentation from sparse annotation, In: S. Ourselin (Ed.), International Conference on Medical Image Computing and Computer-Assisted Intervention (17-21 October 2016, Athens, Greece), MICCAI, 2016, 424-432
[23] Lo S.C.B., Chan H.P., Lin J.S., Li H., Freedman M.T., Mun S.K., Artificial convolution neural network for medical image pattern recognition, Neural Networks, 1995, 8(7-8), 1201-1214; DOI: 10.1016/0893-6080(95)00061-5
[24] Yin X., Han J., Yang J., Yu P., Efficient classification across multiple database relations: a crossmine approach, IEEE T. Knowl. Data E., 2006, 18(6), 770-783; DOI: 10.1109/TKDE.2006.94
[25] Kermany D.S., Goldbaum M., Cai W., Valentim C.C.S., Zhang K., Identifying medical diagnoses and treatable diseases by image-based deep learning, Cell, 2018, 172(5), 1122-1131.e9; DOI: 10.1016/j.cell.2018.02.010
[26] Krizhevsky A., Sutskever I., Hinton G., ImageNet classification with deep convolutional neural networks, In: Conference and Workshop on Neural Information Processing Systems (3-8 December 2012, Lake Tahoe), 2012
[27] Pang S., Ding T., Rodríguez-Patón A., Song T., Zheng P., A parallel bioinspired framework for numerical calculations using enzymatic P system with an enzymatic environment, IEEE Access, 2018; DOI: 10.1109/ACCESS.2018.2876364
[28] Huang G., Liu Z., Van Der Maaten L., Weinberger K.Q., Densely connected convolutional networks, In: IEEE Conference on Computer Vision and Pattern Recognition (22-25 July 2017, Honolulu, Hawaii), IEEE Computer Society, 2017
