
UNCERTAINTY MODELING AND INTERPRETABILITY IN CONVOLUTIONAL NEURAL NETWORKS FOR POLYP SEGMENTATION

Kristoffer Wickstrøm*, Michael Kampffmeyer, Robert Jenssen

UiT The Arctic University of Norway


Dept. of Physics and Technology
UiT Machine Learning Group

ABSTRACT

Convolutional Neural Networks (CNNs) are propelling advances in a range of different computer vision tasks such as object detection and object segmentation. Their success has motivated research into applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise, interpretable, and uncertainty in predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06% mean IoU accuracy on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.

Index Terms— Polyp Segmentation, Deep Learning, Fully Convolutional Networks, Uncertainty Modeling, CNN interpretability

1. INTRODUCTION

Colon cancer prevention is currently primarily done with the help of regular colonoscopy screenings. However, depending on their size and type, roughly 8-37% of polyps are missed during the procedure [1]. Missed polyps can have fatal consequences, as they are potential precursors to colon cancer, which is the third leading cause of cancer deaths globally [2]. Hence, increasing the detection rate of polyps is an important topic of research. Towards this end, automated detection procedures have been proposed [3, 4], which have the advantage of not being influenced by factors such as the fatigue of medical personnel towards the end of long operations. However, they present additional novel challenges. For instance, to enable effective use of such methods, medical staff should be able to understand why the model believes that a specific region contains a polyp (interpretability), and there should be some notion of uncertainty in predictions.

The last couple of years have seen several works on automatic procedures based on deep Convolutional Neural Networks (CNNs) for health domain tasks such as interstitial lung disease classification [5], cell detection [6], estimation of cardiothoracic ratio [7], and polyp detection [8]. CNNs have improved the state of the art in a number of computer vision tasks such as image classification [9], object detection [10] and semantic segmentation [11, 12]. Recently, CNNs have shown promising performance for the task of polyp segmentation [2]. However, despite the promising results obtained on polyp segmentation, model interpretability and uncertainty quantification have been lacking, and recent advances in deep learning have not been incorporated [2].

In this paper, we enhance and evaluate two recent CNN architectures for pixel-to-pixel segmentation of colorectal polyps, referred to as Fully Convolutional Networks (FCNs). Furthermore, we incorporate and develop recent advances in the field of deep learning for semantic segmentation of colorectal polyps in order to model uncertainty and increase model interpretability, leading to novel uncertainty maps [13, 14] in a polyp segmentation context as well as the visualization of descriptive regions in the input image using Guided Backpropagation [15]. To the authors' knowledge, these techniques have not previously been explored in the field of semantic segmentation of colorectal polyps.

This work is partially funded by the Norwegian Research Council FRIPRO grant no. 239844 on developing the Next Generation Learning Machines.
*Corresponding author: kwi030@uit.no

2. ENHANCED FULLY CONVOLUTIONAL NETWORKS FOR POLYP SEGMENTATION

We choose two architectures for the task of polyp segmentation, namely the Fully Convolutional Network 8 (FCN-8) [11] and the more recent SegNet [16]. Previous use of FCNs for polyp segmentation has shown promising results, and we hypothesized that the inclusion of recent advances in deep learning would improve these results further. SegNet has been shown to achieve comparable results to FCNs in some application domains but is a less memory-intensive approach with fewer parameters to optimize.
The FCN is a CNN architecture particularly well suited to tackle per-pixel prediction problems like semantic segmentation. FCNs employ an encoder-decoder architecture and are capable of end-to-end learning. The encoder extracts useful features from an image and maps it to a low-resolution representation. The decoder is tasked with mapping the low-resolution representation back to the same resolution as the input image. Upsampling in FCNs is performed using either bilinear interpolation or transposed (fractionally strided) convolutions, where the convolution filters are learned as part of the optimization procedure. Learned upsampling filters add additional parameters to the network architecture, but tend to provide better overall results [11]. Upsampling can further be improved by including skip connections, which combine coarse-level semantic information with higher-resolution segmentation from previous network layers. Due to the lack of fully connected layers, inference can be performed on images of arbitrary size.

The SegNet architecture builds on the general idea of FCNs but proposes a novel approach to upsampling. Instead of learning the upsampling, SegNet utilizes the pooling indices from the encoder to upsample activations in the decoder, thereby producing sparse feature maps. These sparse representations are then processed by additional convolutional layers to produce dense activations and predictions. The advantage of SegNet compared to the FCN is the reduction in learnable parameters, only 29.5 vs. 134.5 million, as the upsampling filters in FCNs tend to be large.
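The pooling-index trick is easiest to see in code. The paper does not state which framework was used, so the following is a minimal PyTorch sketch of the idea only: the encoder's max-pooling returns the argmax indices, and the decoder's unpooling uses them to place activations back at their original spatial positions, yielding the sparse feature maps that subsequent convolutions densify.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 224, 224)  # a batch of encoder activations

# Encoder: 2x2 max pooling that also records where each maximum came from.
pooled, indices = F.max_pool2d(x, kernel_size=2, stride=2, return_indices=True)

# Decoder: unpool with the stored indices; every other position stays zero,
# producing the sparse feature map that the following convolutions densify.
unpooled = F.max_unpool2d(pooled, indices, kernel_size=2, stride=2)

print(pooled.shape, unpooled.shape)  # (1, 64, 112, 112) and (1, 64, 224, 224)
```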
For our experiments, we propose an Enhanced FCN-8 (EFCN-8) architecture which leverages recent developments in the deep learning field. The architecture is depicted in Figure 1. The second architecture that we propose is an Enhanced variant of SegNet (ESegNet), depicted in Figure 2. To improve overall performance, we propose to include several recent advances in deep learning which were not present in the original architectures. For the FCN-8, we make use of Batch Normalization [17] after each layer. Batch Normalization is a procedure that normalizes the output of each layer, allowing for a larger learning rate that accelerates the training procedure. For ESegNet, we include Dropout [18] after the three central encoders and decoders, inspired by [13]. Dropout is a regularization technique that randomly sets units in a layer to zero and can be interpreted as an ensemble of several networks. Including Dropout serves two purposes. It regularizes the model, which encourages better generalization capabilities, and, as we will see in Section 3, enables estimation of uncertainty in the model's prediction. The encoder of both models corresponds to the VGG16 network [19], which allows us to initialize the encoder with pretrained weights from a VGG16 model that was previously trained on the ImageNet dataset, an approach referred to as transfer learning. Utilizing pretrained weights was incorporated in the original architectures, but not included in recent work on segmentation of colorectal polyps using FCNs [2].

Fig. 1. An illustration of the Enhanced Fully Convolutional Network-8. Color codes: Blue - Convolution (3x3), Batch Normalization and ReLU; Yellow - Upsampling; Pink - Summing; Red - Pooling (2x2); Green - Soft-max. Dropout was included according to [19].

Fig. 2. Depiction of the standard SegNet architecture, obtained from [16]. Our implementation includes Dropout in the three central encoders and decoders for regularization and to enable uncertainty estimation.
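The exact layer configuration of the enhanced models is only summarized above and in Figures 1 and 2, so the following PyTorch sketch is purely illustrative of the two enhancements: a VGG-style 3x3 convolution block with Batch Normalization and ReLU, and Dropout attached to a central encoder stage as in ESegNet. The dropout probability of 0.5 is an assumption, following common practice in [13].

```python
import torch.nn as nn

def conv_bn_relu(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 convolution followed by Batch Normalization and ReLU (cf. Figure 1)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# Illustrative central encoder stage of an ESegNet-like model: two conv blocks,
# 2x2 max pooling, and Dropout (p=0.5 is an assumption, not stated in the paper).
central_encoder_stage = nn.Sequential(
    conv_bn_relu(256, 512),
    conv_bn_relu(512, 512),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Dropout2d(p=0.5),
)
```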
3. UNCERTAINTY AND INTERPRETABILITY IN FULLY CONVOLUTIONAL NETWORKS FOR POLYP SEGMENTATION

Despite their success on a number of different tasks, CNNs are not without their flaws. Most CNNs are unable to provide any notion of uncertainty in their predictions, and determining what features in the input affect a prediction is challenging as a result of the complexity of the model. Such limitations have not stopped CNNs from being applied to many computer vision tasks, yet they become especially apparent when developing CNNs for medical applications. Most physicians will be reluctant to make a diagnosis based on a single segmentation map with no notion of uncertainty and no indication of what features were the basis of the prediction. This section will describe two recently proposed methods which address the limitations just outlined.

3.1. Uncertainty

Modeling uncertainty is crucial for designing trustworthy automatic procedures, yet CNNs have no natural way of providing such uncertainties to accompany their predictions. In contrast, Bayesian models provide a framework which naturally includes uncertainty by modeling a posterior distribution over the quantities in question. Given a dataset X = {x_1, ..., x_N} with labels Y = {y_1, ..., y_N}, the predictive distribution of a Bayesian neural network can be modeled as:

    p(y \mid x, X, Y) = \int p(y \mid x, W) \, p(W \mid x, X, Y) \, dW    (1)

where W refers to the weights of the model, p(y | x, W) is the softmax function applied to the output of the model, denoted by f^W(x), and p(W | x, X, Y) is the posterior over the weights, which captures the set of plausible model parameters for the given data. Obtaining p(y | x, W) only requires a forward pass of the network, but the inability to evaluate the posterior of the weights analytically makes Bayesian neural networks computationally unfeasible. To sidestep the problematic posterior of the weights, [20] proposed to incorporate Dropout [18] as a method for sampling sets of weights from the trained network to approximate the posterior of the weights. The predictive distribution in Equation 1 can then be approximated using Monte Carlo integration as follows:

    p(y \mid x, X, Y) \approx \frac{1}{T} \sum_{t=1}^{T} \mathrm{Softmax}\big(f^{\widehat{W}_t}(x)\big)    (2)

where T is the number of sampled sets of weights and \widehat{W}_t is a set of sampled weights. In practice, the predictive distribution from Equation 2 can be estimated by running T forward passes of a model with Dropout applied to produce T predictions, which in turn can be used to estimate the uncertainty associated with the sample in question. The authors of [20] refer to this method of sampling from the posterior of the predictive distribution as Monte Carlo Dropout.
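Equation 2 maps directly onto code: keep Dropout active at test time, run T stochastic forward passes, average the softmax outputs to obtain the prediction, and use the per-pixel spread across the samples as an uncertainty map. A minimal PyTorch sketch, assuming `model` is a trained segmentation network containing Dropout layers (T = 10 matches the setting used in Section 4.3; the paper does not specify a framework):

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, image, T: int = 10):
    """Approximate Eq. (2): average softmax over T stochastic forward passes."""
    model.eval()
    # Re-enable only the Dropout layers, so Batch Normalization stays in eval mode.
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()

    samples = torch.stack(
        [torch.softmax(model(image), dim=1) for _ in range(T)]
    )                                  # shape: (T, batch, classes, H, W)

    mean_prob = samples.mean(dim=0)    # Monte Carlo estimate of p(y | x, X, Y)
    uncertainty = samples.std(dim=0)   # per-pixel standard deviation (cf. Sec. 4.3)
    return mean_prob, uncertainty
```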
3.2. Interpretability

Another desirable property that CNNs lack is interpretability, i.e. being able to determine what features induce the network to produce a particular prediction. However, several recent works have proposed different methods to increase network interpretability [21, 22]. In this paper, we evaluate and develop the Guided Backpropagation [15] technique for FCNs on the task of semantic segmentation of colorectal polyps in order to assess which pixels in the input image the network deems important for identifying polyps. We choose Guided Backpropagation as it is known to produce clear visualizations of salient input pixels and is more straightforward to employ compared to other methods.

The central idea of Guided Backpropagation is the interpretation of the gradients of the network with respect to an input image. In [23] it was noted that, for a given image, the magnitude of the gradients indicates which pixels in the input image need to be changed the least to affect the prediction the most. By utilizing backpropagation, they obtained the gradients corresponding to each pixel in the input such that they could visualize what features the network considers essential. In [15] it was argued that positive gradients with a large magnitude indicate pixels of high importance, while negative gradients with a large magnitude indicate pixels which the network wants to suppress and which, if included in the visualization of important pixels, can result in noisy images. To avoid this, Guided Backpropagation alters the backward pass of a neural network such that negative gradients are set to zero in each layer, thus allowing only positive gradients to flow backward through the network and highlighting pixels which the system finds important.
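One way to realize this modified backward pass, assuming a PyTorch model built from `nn.ReLU` modules, is to clamp negative gradients at every ReLU with backward hooks. The choice of backpropagating the summed logits of one class is our own illustration; the paper does not specify how the output map is reduced to a scalar before backpropagation.

```python
import torch
import torch.nn as nn

def attach_guided_backprop(model: nn.Module):
    """Zero out negative gradients at every ReLU during the backward pass.

    Assumes the ReLU modules are not in-place (in-place ops and full
    backward hooks do not mix well in PyTorch).
    """
    def clamp_negative_grads(module, grad_input, grad_output):
        return tuple(torch.clamp(g, min=0.0) if g is not None else None
                     for g in grad_input)

    return [m.register_full_backward_hook(clamp_negative_grads)
            for m in model.modules() if isinstance(m, nn.ReLU)]

def guided_saliency(model, image, target_class: int = 1):
    """Gradient of the summed logits of one class w.r.t. the input pixels."""
    image = image.clone().requires_grad_(True)
    model(image)[:, target_class].sum().backward()
    return image.grad.detach()

# Usage sketch: handles = attach_guided_backprop(model)
# saliency = guided_saliency(model, image); then h.remove() for each handle.
```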
4. RESULTS

In this section, we present quantitative and qualitative results on semantic segmentation of colorectal polyps for both architectures, along with details regarding the training of the two models. We also present the results of using Monte Carlo Dropout to model the uncertainty associated with the predictions, and the results of using Guided Backpropagation to visualize which pixels are considered important.

4.1. Training Approach

We evaluate our methods on the EndoScene [2] dataset for semantic segmentation of colorectal polyps, which consists of 912 RGB images obtained from colonoscopies of 36 patients. Each input image comes with a corresponding annotated image provided by physicians, where pixels belonging to a polyp are marked in white and pixels belonging to the colon are marked in black. The first row of Figures 3 and 4 displays examples from the dataset. We consider the two-class problem, where the task is to classify each pixel as polyp or as part of the colon (background class). Following the approach of [2], we separate the dataset into training, validation and test sets, where the training set consists of 20 patients and 547 images, the validation set consists of 8 patients and 183 images, and the test set consists of 8 patients and 182 images. All RGB input images are normalized to the range [0, 1].

For performance evaluation, we calculate Intersection over Union (IoU) and global accuracy (per-pixel accuracy) on the test set. For a given class c, prediction \hat{y}_i and ground truth y_i, the IoU is defined as

    \mathrm{IoU}(c) = \frac{\sum_i (\hat{y}_i == c \wedge y_i == c)}{\sum_i (\hat{y}_i == c \vee y_i == c)}    (3)

where \wedge is the logical and operation and \vee is the logical or operation.
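Equation 3 can be computed directly from the predicted and ground-truth label maps; a small PyTorch sketch:

```python
import torch

def iou(pred: torch.Tensor, target: torch.Tensor, cls: int) -> float:
    """Intersection over Union for one class, as in Eq. (3)."""
    pred_c = pred == cls
    target_c = target == cls
    intersection = (pred_c & target_c).sum().item()  # logical and
    union = (pred_c | target_c).sum().item()         # logical or
    return intersection / union if union > 0 else float("nan")

# Example: mean IoU over the background (0) and polyp (1) classes,
# given integer label maps `pred` and `target` of identical shape:
# mean_iou = (iou(pred, target, 0) + iou(pred, target, 1)) / 2
```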
We initialize the decoder weights of both the EFCN-8 and ESegNet using HeNormal initialization [24] and employ pretrained weights for the encoders, as mentioned previously. All models were trained using ADAM [25] with a batch size of 10 and a cross-entropy loss [11]. We use the validation set to apply early stopping, monitoring the polyp IoU score with a patience of 30. Class balancing was not applied as it gave no significant improvement, and no weight decay was used.
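A sketch of this training configuration (Adam, batch size 10, cross-entropy loss, early stopping on validation polyp IoU with a patience of 30). The learning rate is not reported in the paper, so the value below is a placeholder, and `validate_polyp_iou` is a hypothetical helper returning the polyp IoU on the validation set.

```python
import torch
from torch.utils.data import DataLoader

def train(model, train_set, val_set, validate_polyp_iou, max_epochs=500):
    """Training loop sketch matching the setup described in Section 4.1."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr is a placeholder
    criterion = torch.nn.CrossEntropyLoss()                    # no class balancing
    loader = DataLoader(train_set, batch_size=10, shuffle=True)

    best_iou, stale_epochs = 0.0, 0
    for epoch in range(max_epochs):
        model.train()
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

        val_iou = validate_polyp_iou(model, val_set)
        if val_iou > best_iou:
            best_iou, stale_epochs = val_iou, 0
        else:
            stale_epochs += 1
            if stale_epochs >= 30:   # early-stopping patience of 30
                break
```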
Data augmentation was applied according to best practices to artificially increase the number of training images. We utilize a dynamic augmentation scheme that applies cropping, rotation, zoom, and shearing. During training, we crop images into 224x224 patches randomly chosen from the center or one of the corners, following the example of [9]. We apply random rotation between -90 and 90 degrees, random zoom from 0.8-1.2 and random shearing from 0-0.4.
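The augmentation scheme is described only at the level of parameter ranges; a torchvision-based sketch with those ranges might look as follows. The shear units are an assumption (torchvision expects degrees and the paper does not state the unit), and in practice the same random transformation must also be applied to the corresponding ground-truth mask.

```python
import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

class RandomFiveCrop:
    """Randomly pick the centre crop or one of the four corner crops,
    mimicking the AlexNet-style cropping scheme of [9]."""
    def __init__(self, size: int = 224):
        self.size = size

    def __call__(self, img):
        crops = TF.five_crop(img, self.size)  # (tl, tr, bl, br, centre)
        return random.choice(crops)

# Rotation, zoom and shear ranges as reported in Section 4.1.
augment = T.Compose([
    RandomFiveCrop(224),
    T.RandomAffine(degrees=90, scale=(0.8, 1.2), shear=(0.0, 0.4)),
])
```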

4.2. Quantitative and Qualitative Results

In this section, we present the results for both architectures. Table 1 presents the quantitative results for our EFCN-8 and ESegNet on the test set, along with previous results on this dataset obtained for a vanilla FCN-8 [2] and a previously state-of-the-art method based on non-deep learning techniques [4]. Row two of Figures 3 and 4 displays predictions from both models based on samples from the test set.

Model       # P(M)   IoU B   IoU P   IoU M   Acc M
SDEM [4]    << 1     0.739   0.221   0.480   0.756
FCN-8 [2]   134.5    0.946   0.509   0.728   0.949
ESegNet     29.5     0.933   0.522   0.728   0.935
EFCN-8      134.5    0.946   0.587   0.767   0.949

Table 1. Results for the two-class problem on the EndoScene dataset. Abbreviations: # P(M) = number of parameters in millions; IoU B, IoU P, IoU M = IoU for the background class, the polyp class, and their mean; Acc M = mean accuracy.
4.3. Uncertainty and Interpretability Results

The third row of Figures 3 and 4 shows the estimated standard deviation of each pixel in the predictions of both models. Using Monte Carlo Dropout, we obtain ten predictions which we use to estimate the standard deviation of each pixel. Row four of Figures 3 and 4 displays the results of using Guided Backpropagation to highlight what pixels in the input image both models consider important to their prediction.

Fig. 3. Qualitative results on the EndoScene test set: (a) input image, (b) ground truth, (c) EFCN-8 prediction, (d) ESegNet prediction, (e) EFCN-8 uncertainty, (f) ESegNet uncertainty, (g) EFCN-8 interpretability, (h) ESegNet interpretability. The first row, from left to right, displays the input image and its corresponding ground truth. The second row displays the prediction of both models. The third row displays the uncertainty associated with the prediction for both models, and the fourth row displays which features are highlighted as important for both models.

Fig. 4. Qualitative results on the EndoScene test set, with the same layout as Figure 3: (a) input image, (b) ground truth, (c) EFCN-8 prediction, (d) ESegNet prediction, (e) EFCN-8 uncertainty, (f) ESegNet uncertainty, (g) EFCN-8 interpretability, (h) ESegNet interpretability.
5. DISCUSSION

From Table 1 it is evident that the deep learning based models provide predictions of higher precision compared to previous methods based on traditional machine learning techniques. Also, the difference in performance between the FCN-8 of [2] and our EFCN-8 indicates that the inclusion of recent techniques such as transfer learning and Batch Normalization is vital to increase the capabilities of deep models. Furthermore, ESegNet can achieve comparable results to the FCN-8 even though it has far fewer parameters. However, when recent techniques are included, we get a gap in performance between ESegNet and the EFCN-8. This might imply that the increased complexity of the EFCN-8 is beneficial to performance.

For the qualitative results, Figure 3 displays an example where both models produce a correct prediction, while Figure 4 shows an example where both models correctly segment the polyp present in the input image, but ESegNet also predicts a polyp where there is none. We have included the first example to illustrate that both models are capable of producing precise and correct predictions. The second example is included to highlight the difficulty of comparing models. For instance, given the predictions in Figure 4 and no ground truth, which prediction would you trust? We know the EFCN-8 achieved a higher performance on the test set, but that does not necessarily mean that it is correct in this particular case. Without further information, it would be difficult to choose between the two models without consulting a medical expert to assess the images.

However, if we could say something about the uncertainty of the two predictions, we might choose the prediction with the lowest uncertainty associated with it. For the uncertainty estimates shown in row three of Figure 3, where both models successfully segment the polyp in the image, we see that the only pixels associated with high uncertainty are those around the border of the prediction. Such uncertainties are understandable, as even physicians are unable to state precisely where the colon ends and the polyp starts. However, the uncertainty estimates shown in row three of Figure 4 tell a different story. Notice that both models exhibit similar uncertainty for the region where they both correctly segment a polyp, but ESegNet also has a large area of uncertainty for pixels associated with a ridge going along the colon. The falsely segmented polyp lies on this ridge of uncertainty, which might indicate that we should be careful about trusting the polyp prediction toward the bottom right of the image.

Another interesting question is, what pixels in the input image are influencing these predictions? In row four of Figure 3, we see that pixels associated with the edges of the polyp are highlighted, indicating that the models are leveraging edge information to identify the polyp. Also, notice that the EFCN-8 considers the entire top edge of the polyp while ESegNet only considers the left edge of the polyp, which might imply that the EFCN-8 has obtained a deeper understanding of the shape and form of polyps and can extract more useful information from the input image. In row four of Figure 4 we again see that both models are reacting to edges in the input image. But while the EFCN-8 correctly identifies pixels which belong to an actual polyp, ESegNet is also considering pixels which correspond to ridges of the colon.
Visualizing important pixels and modeling uncertainty is not just important for designing automatic procedures which are trustworthy but also, as we have seen, allows for model analysis and model comparison. Of course, the results of such methods are still somewhat open to interpretation, and deep learning would benefit from a more theoretical framework for analyzing models, yet including techniques such as Monte Carlo Dropout and Guided Backpropagation can lead towards a better understanding of CNNs.

6. CONCLUSION

In this paper, we improved and applied two established CNN architectures for pixel-wise segmentation, evaluated their performance on colorectal polyp segmentation, and conclude that CNNs can achieve high performance in this context of medical image analysis. We also argued that modeling the uncertainty of the network's output and visualizing descriptive image regions in the input image can increase interpretability and make models based on deep learning more applicable for medical personnel.

The field of deep learning is continually improving, and several recent architectures for semantic segmentation such as [26] show promising results. Increasing network interpretability is an active field of research, where Relevance Propagation [22] is a particularly interesting approach for future experimentation. Post-processing procedures have also been shown to improve the performance of CNNs, and including Conditional Random Fields could improve spatial coherence [12].
ditional Random Fields could improve spatial coherence [12]. arXiv preprint arXiv:1412.6806, 2014.
[16] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla, “Segnet:
7. REFERENCES A deep convolutional encoder-decoder architecture for image segmen-
tation,” IEEE Transactions on Pattern Analysis and Machine Intelli-
[1] Jeroen C Van Rijn, Johannes B Reitsma, Jaap Stoker, Patrick M
gence, , no. 12, pp. 2481–2495, 2017.
Bossuyt, Sander J Van Deventer, and Evelien Dekker, “Polyp miss
rate determined by tandem colonoscopy: a systematic review,” The [17] Sergey Ioffe and Christian Szegedy, “Batch normalization: Acceler-
American journal of gastroenterology, vol. 101, no. 2, pp. 343, 2006. ating deep network training by reducing internal covariate shift,” in
International Conference on Machine Learning, 2015, pp. 448–456.
[2] David Vázquez, Jorge Bernal, F Javier Sánchez, Gloria Fernández-
Esparrach, Antonio M López, Adriana Romero, Michal Drozdzal, and [18] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever,
Aaron Courville, “A benchmark for endoluminal scene segmentation of and Ruslan Salakhutdinov, “Dropout: A simple way to prevent neural
colonoscopy images,” Journal of Healthcare Engineering, vol. 2017, networks from overfitting,” Journal of Machine Learning Research,
2017. vol. 15, pp. 1929–1958, 2014.
[19] Karen Simonyan and Andrew Zisserman, “Very deep convolutional
[3] Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Debora
networks for large-scale image recognition,” International Conference
Gil, Cristina Rodrı́guez, and Fernando Vilariño, “Wm-dova maps for
on Learning Representations, 2015.
accurate polyp highlighting in colonoscopy: Validation vs. saliency
maps from physicians,” Computerized Medical Imaging and Graph- [20] Yarin Gal and Zoubin Ghahramani, “Dropout as a bayesian approxi-
ics, vol. 43, pp. 99–111, 2015. mation: Representing model uncertainty in deep learning,” in Interna-
tional Conference on Machine Learning, 2016, pp. 1050–1059.
[4] Jorge Bernal, Joan Manel Núñez, F Javier Sánchez, and Fernando Vi-
lariño, “Polyp segmentation method in colonoscopy videos by means [21] Matthew D. Zeiler and Rob Fergus, “Visualizing and understanding
of msa-dova energy maps calculation,” in Workshop on Clinical Image- convolutional networks,” CoRR, vol. abs/1311.2901, 2013.
Based Procedures. Springer, 2014, pp. 41–49. [22] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick
Klauschen, Klaus-Robert Müller, and Wojciech Samek, “On pixel-wise
[5] Q. Li, W. Cai, X. Wang, Y. Zhou, D. D. Feng, and M. Chen, “Med-
explanations for non-linear classifier decisions by layer-wise relevance
ical image classification with convolutional neural network,” in 2014
propagation,” PloS one, vol. 10, no. 7, pp. e0130140, 2015.
13th International Conference on Control Automation Robotics Vision
(ICARCV), Dec 2014, pp. 844–848. [23] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman, “Deep in-
side convolutional networks: Visualising image classification models
[6] Yao Xue and Nilanjan Ray, “Cell detection with deep convo-
and saliency maps,” CoRR, vol. abs/1312.6034, 2013.
lutional neural network and compressed sensing,” arXiv preprint
arXiv:1708.03307, 2017. [24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Delving
deep into rectifiers: Surpassing human-level performance on imagenet
[7] Nanqing Dong, Michael Kampffmeyer, Xiaodan Liang, Zeya Wang, classification,” CoRR, vol. abs/1502.01852, 2015.
Wei Dai, and Eric P Xing, “Unsupervised domain adaptation for au-
tomatic estimation of cardiothoracic ratio,” in International Confer- [25] Diederik P. Kingma and Jimmy Ba, “Adam: A method for stochastic
ence on Medical Image Computing and Computer Assisted Interven- optimization,” CoRR, vol. abs/1412.6980, 2014.
tion. Springer, 2018. [26] Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, and
Yoshua Bengio, “The one hundred layers tiramisu: Fully convolutional
[8] E. Ribeiro, A. Uhl, and M. Hfner, “Colonic polyp classification with
densenets for semantic segmentation,” in Computer Vision and Pattern
convolutional neural networks,” in 2016 IEEE 29th International Sym-
Recognition Workshops (CVPRW), 2017 IEEE Conference on. IEEE,
posium on Computer-Based Medical Systems (CBMS), 2016, pp. 253–
2017, pp. 1175–1183.
258.
