[Figure 1 diagram: the two input stereo images pass through shared-weight 2D convolution towers into a cost volume (height × width × disparity), followed by multi-scale 3D convolution, 3D deconvolution, and a soft argmax that outputs disparities. Pipeline labels: Input Stereo Images, 2D Convolution, Cost Volume, Multi-Scale 3D Convolution, 3D Deconvolution, Soft ArgMax, Disparities.]
Figure 1: Our end-to-end deep stereo regression architecture, GC-Net (Geometry and Context Network).
components in more detail. In Section 4 we evaluate our model on the synthetic Scene Flow dataset [36] and set a new state-of-the-art benchmark on the KITTI 2012 and 2015 datasets [14, 35]. Finally, in Section 4.3 we present evidence that our model has the capacity to learn semantic reasoning and contextual information.

2. Related Work

The problem of computing depth from stereo image pairs has been studied for quite some time [5]. A survey by Scharstein and Szeliski [39] provides a taxonomy of stereo algorithms as performing some subset of: matching cost computation, cost support aggregation, disparity computation and optimization, or disparity refinement. This survey also described the first Middlebury dataset and associated evaluation metrics, using structured light to provide ground truth. The KITTI dataset [14, 35] is a larger dataset from data collected from a moving vehicle with ground truth supplied by LIDAR. These datasets first motivated improved hand-engineered techniques for all components of stereo, of which we mention a few notable examples.

The matching cost is a measure of pixel dissimilarity for potentially corresponding image locations [25], of which absolute differences, squared differences, and truncated differences are examples. Local descriptors based on gradients [16] or binary patterns, such as CENSUS [45] or BRIEF [7, 22], can be employed. Instead of aggregating neighboring pixels equally as patch-based matching costs do, awareness of the image content can more heavily incorporate neighboring pixels possessing similar appearance, under the assumption that they are more likely to come from the same surface and disparity. A survey of these techniques is provided by Tombari et al. [43]. Local matching costs may also be optimized within a global framework, usually minimizing an energy function combining a local data term and a pairwise smoothness term. Global optimization can be accomplished using graph cuts [27] or belief propagation [26], which can be extended to slanted surfaces [6]. A popular and effective approximation to global optimization is the Semi-Global Matching (SGM) of Hirschmüller [24], where dynamic programming optimizes a pathwise form of the energy function in many directions.
In addition to providing a basis for comparing stereo algorithms, the ground truth depth data from these datasets provides the opportunity to use machine learning to improve stereo algorithms in a variety of ways. Zhang and Seitz [52] alternately optimized disparity and Markov random field regularization parameters. Scharstein and Pal [38] learn conditional random field (CRF) parameters, and Li and Huttenlocher [29] train a non-parametric CRF model using the structured support vector machine. Learning can also be employed to estimate the confidence of a traditional stereo algorithm, such as the random forest approach of Haeusler et al. [19]. Such confidence measures can improve the result of SGM, as shown by Park and Yoon [37].

Deep convolutional neural networks can be trained to match image patches [46]. A deep network trained to match 9×9 image patches, followed by non-learned cost aggregation and regularization, was shown by Žbontar and LeCun [47, 49] to produce then state-of-the-art results. Luo et al. presented a notably faster network for computing local matching costs as a multi-label classification of disparities using a Siamese network [33]. A multi-scale embedding model from Chen et al. [9] also provided good local matching scores. Also noteworthy is the DeepStereo work of Flynn et al. [12], which learns a cost volume combined with a separate conditional color model to predict novel viewpoints in a multi-view stereo setting.

Mayer et al. created a large synthetic dataset to train a network for disparity estimation (as well as optical flow) [34], improving the state of the art. As one variant of the network, a 1-D correlation was proposed along the disparity line, which is a multiplicative approximation to the stereo cost volume. In addition, this volume is concatenated with convolutional features from a single image and succeeded by a series of further convolutions. In contrast, our work does not collapse the feature dimension when computing the cost volume, and uses 3-D convolutions to incorporate context.
Though the focus of this work is on binocular stereo, it is worth noting that the representational power of deep convolutional networks also enables depth estimation from a single monocular image [10]. Deep learning is combined with a conditional random field by Liu et al. [30].

In our work, we apply no post-processing or regularization. Our network can explicitly reason about geometry by forming a fully differentiable cost volume. Our network learns to incorporate context from the data with a 3-D convolutional architecture. We don't learn a probability distribution, cost function, or classification result. Rather, our network is able to directly regress a sub-pixel estimate of disparity from a stereo image pair.

3. Learning End-to-end Disparity Regression

Rather than design any step of the stereo algorithm by hand, we would like to learn an end-to-end mapping from an image pair to disparity maps using deep learning. We hope to learn a more optimal function directly from the data. Additionally, this approach promises to reduce much of the engineering design complexity.

Layer | Description | Output Tensor Dim.
 | Input image | H×W×C
Unary features (section 3.1)
1 | 5×5 conv, 32 features, stride 2 | 1/2H × 1/2W × F
… | … | …
 | add layer 1 and 3 features (residual connection) | 1/2H × 1/2W × F
… | … | …
19 | 3-D conv, 3×3×3, 32 features | 1/2D × 1/2H × 1/2W × F
20 | 3-D conv, 3×3×3, 32 features | 1/2D × 1/2H × 1/2W × F
21 | From 18: 3-D conv, 3×3×3, 64 features, stride 2 | 1/4D × 1/4H × 1/4W × 2F
22 | 3-D conv, 3×3×3, 64 features | 1/4D × 1/4H × 1/4W × 2F
23 | 3-D conv, 3×3×3, 64 features | 1/4D × 1/4H × 1/4W × 2F
24 | From 21: 3-D conv, 3×3×3, 64 features, stride 2 | 1/8D × 1/8H × 1/8W × 2F
25 | 3-D conv, 3×3×3, 64 features | 1/8D × 1/8H × 1/8W × 2F
26 | 3-D conv, 3×3×3, 64 features | 1/8D × 1/8H × 1/8W × 2F
27 | From 24: 3-D conv, 3×3×3, 64 features, stride 2 | 1/16D × 1/16H × 1/16W × 2F
28 | 3-D conv, 3×3×3, 64 features | 1/16D × 1/16H × 1/16W × 2F
29 | 3-D conv, 3×3×3, 64 features | 1/16D × 1/16H × 1/16W × 2F
30 | From 27: 3-D conv, 3×3×3, 128 features, stride 2 | 1/32D × 1/32H × 1/32W × 4F
… | … | …
32 | 3-D conv, 3×3×3, 128 features | 1/32D × 1/32H × 1/32W × 4F
… | … | …
 | add layer 34 and 26 features (residual connection) | 1/8D × 1/8H × 1/8W × 2F
35 | 3-D transposed conv, 3×3×3, 64 features, stride 2 | 1/4D × 1/4H × 1/4W × 2F
… | … | …
 | add layer 36 and 20 features (residual connection) | 1/2D × 1/2H × 1/2W × F
… | … | …

Table 1: Our end-to-end deep stereo regression architecture, GC-Net.
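To make the recovered unary-feature rows of Table 1 concrete, here is a PyTorch sketch: a 5×5 stride-2 convolution (layer 1) followed by residually connected 3×3 convolutions, as in the "add layer 1 and 3 features" row. The class name, block count, and ReLU placement are our assumptions; shared weights follow from applying the same module to both images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnaryFeatures(nn.Module):
    """Sketch of Table 1's unary feature tower. The same module (and
    hence the same weights) is applied to both stereo images."""
    def __init__(self, c_in=3, f=32, num_blocks=2):
        super().__init__()
        self.stem = nn.Conv2d(c_in, f, kernel_size=5, stride=2, padding=2)  # layer 1
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(f, f, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(f, f, 3, padding=1))
            for _ in range(num_blocks))

    def forward(self, x):                 # x: N x C x H x W
        x = self.stem(x)                  # sub-sampled to 1/2 H x 1/2 W
        for block in self.blocks:
            x = F.relu(x + block(x))      # residual connection
        return x

# Usage: shared weights across the stereo pair.
unary = UnaryFeatures()
left_feat = unary(torch.randn(1, 3, 256, 512))
right_feat = unary(torch.randn(1, 3, 256, 512))
```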
We concatenate unary features rather than use a dot-product style operation which decimates the feature dimension [32]. This allows us to learn to incorporate context which can operate over feature unaries (Section 3.3). We find that forming a cost volume with concatenated features improves performance over subtracting features or using a distance metric. Our intuition is that by maintaining the feature unaries, the network has the opportunity to learn an absolute representation (because it is not a distance metric) and carry this through to the cost volume. This gives the architecture the capacity to learn semantics. In contrast, using a distance metric restricts the network to only learning relative representations between features, and cannot carry absolute feature representations through to the cost volume.
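A minimal sketch of how such a concatenation-based cost volume could be formed from the two unary feature maps follows. This is our illustration, not the authors' code; the function name, the N × F × H × W tensor layout, and zero padding where x − d falls out of range are assumptions.

```python
import torch

def concat_cost_volume(left_feat, right_feat, max_disp):
    """For each candidate disparity d, pair the left unary at x with the
    right unary at x - d, keeping the full feature dimension.
    Output: N x 2F x max_disp x H x W."""
    n, f, h, w = left_feat.shape
    volume = left_feat.new_zeros(n, 2 * f, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, :f, d] = left_feat
            volume[:, f:, d] = right_feat
        else:
            volume[:, :f, d, :, d:] = left_feat[..., d:]     # left feature at x
            volume[:, f:, d, :, d:] = right_feat[..., :-d]   # right feature at x - d
    return volume
```

Replacing the concatenation with, for example, a per-disparity dot product would collapse the 2F feature dimension to a scalar, which is exactly the decimation the text argues against.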
Networks which are designed for dense prediction tasks get around their computational burden by encoding sub-sampled feature maps, followed by up-sampling in a decoder [3]. We extend this idea to three dimensions. By sub-sampling the input with stride two, we also reduce the 3-D cost volume size by a factor of eight. We form our 3-D regularization network with four levels of sub-sampling. As the unaries are already sub-sampled by a factor of two, the features are sub-sampled by a total factor of 32. This allows us to explicitly leverage context with a wide field of view. We apply two 3×3×3 convolutions in series for each encoder level. To make dense predictions at the original input resolution, we employ a 3-D transposed convolution to up-sample the volume in the decoder. The full architecture is described in Table 1.

Sub-sampling is useful to increase each feature's receptive field while reducing computation. However, it also reduces spatial accuracy and fine-grained details through the loss of resolution. For this reason, we add each higher resolution feature map before up-sampling. These residual layers have the benefit of retaining higher frequency information, while the up-sampled features provide an attentive feature map with a larger field of view.

Finally, we apply a single 3-D transposed convolution (deconvolution), with stride two and a single feature output. This layer is necessary to make dense predictions in the original input dimensions because the feature unaries were sub-sampled by a factor of two. This results in the final, regularized cost volume with size H×W×D.
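The encoder-decoder pattern described above might be sketched in PyTorch as follows: stride-two 3-D convolutions sub-sample the cost volume, transposed 3-D convolutions up-sample it, and each decoder level adds the corresponding higher-resolution encoder map. This is a reduced illustration with two levels rather than the paper's four; layer names, feature counts, and ReLU placement are our assumptions.

```python
import torch
import torch.nn as nn

class CostRegularizer3D(nn.Module):
    """Reduced sketch of the multi-scale 3-D regularization network."""
    def __init__(self, f=32):
        super().__init__()
        self.enc1 = nn.Sequential(
            nn.Conv3d(f, f, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(f, f, 3, padding=1), nn.ReLU())   # two 3x3x3 convs per level
        self.enc2 = nn.Sequential(
            nn.Conv3d(f, f, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(f, f, 3, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose3d(f, f, 3, stride=2, padding=1, output_padding=1)
        self.up1 = nn.ConvTranspose3d(f, f, 3, stride=2, padding=1, output_padding=1)

    def forward(self, volume):                # volume: N x F x D x H x W
        e1 = self.enc1(volume)                # 1/2 D x 1/2 H x 1/2 W
        e2 = self.enc2(e1)                    # 1/4 D x 1/4 H x 1/4 W
        d2 = torch.relu(self.up2(e2) + e1)    # up-sample + residual add
        return self.up1(d2)                   # back to D x H x W
```

Each stride-two level halves D, H, and W, shrinking the volume by a factor of eight, which matches the arithmetic in the text.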
3.4. Differentiable ArgMin

We regress disparity from the regularized cost volume with a soft argmin operation: the costs c_d along each disparity line are converted to probabilities by taking the softmax, σ(·), over the negated costs, and the result is the probability-weighted sum of the disparity indices:¹

soft argmin := \sum_{d=0}^{D_{\max}} d \times \sigma(-c_d)    (1)

This operation is fully differentiable and allows us to train and regress disparity estimates. We note that a similar function was first introduced by [4] and referred to as a soft-attention mechanism. Here, we show how to apply it to the stereo regression problem.

However, compared to the argmin operation, its output is influenced by all values. This leaves it susceptible to multi-modal distributions, as the output will not take the most likely disparity. Rather, it will estimate a weighted average of all modes. To overcome this limitation, we rely on the network's regularization to produce a disparity probability distribution which is predominantly unimodal.

¹ Note that if we wished for our network to learn probabilities, rather than costs, this function could easily be adapted to a soft argmax operation.
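A direct implementation of Eq. (1) takes only a few lines; the sketch below assumes a cost volume laid out as N × D × H × W (the layout and function name are ours).

```python
import torch

def soft_argmin(costs):
    """Eq. (1): softmax over the negated costs along the disparity
    dimension, then a probability-weighted sum of disparity indices."""
    prob = torch.softmax(-costs, dim=1)                  # sigma(-c_d)
    d = torch.arange(costs.shape[1], dtype=prob.dtype,
                     device=prob.device).view(1, -1, 1, 1)
    return (prob * d).sum(dim=1)                         # sum_d d * sigma(-c_d)
```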
[Figure 2 plots: three columns of cost, softmax probability, and softmax-weighted index curves against disparities 0-60 px, each comparing the True ArgMin against the Soft ArgMin. Panels: (a) Soft ArgMin, (b) Multi-modal distribution, (c) Multi-modal distribution with prescaling.]
Figure 2: A graphical depiction of the soft argmin operation (Section 3.4) which we propose in this work. It is able to
take a cost curve along each disparity line and output an estimate of the argmin by summing the product of each disparity’s
softmax probability and its disparity index. (a) demonstrates that this very accurately captures the true argmin when the curve
is uni-modal. (b) demonstrates a failure case when the data is bi-modal with one peak and one flat region. (c) demonstrates
that this failure may be avoided if the network learns to pre-scale the cost curve, because the softmax probabilities will tend
to be more extreme, producing a uni-modal result.
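The failure mode in panel (b) and the fix in panel (c) can be reproduced numerically. The sketch below uses an artificial bi-modal cost curve; the curve shape, the 64-level disparity range, and the scale factor are illustrative choices, not values from the paper.

```python
import torch

costs = torch.full((64,), 10.0)       # flat background cost
costs[10], costs[40] = 0.0, 1.0       # two competing minima; true argmin at 10

def soft_argmin_1d(c, scale=1.0):
    p = torch.softmax(-scale * c, dim=0)
    return (p * torch.arange(64.0)).sum()

print(soft_argmin_1d(costs))             # ~18: a blend of the two modes
print(soft_argmin_1d(costs, scale=20.0)) # ~10: pre-scaled costs give a peakier
                                         # softmax and recover the true argmin
```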
Model | >1 px | >3 px | >5 px | MAE (px) | RMS (px) | Param. | Time (ms)
1. Comparison of architectures
Unaries only (omitting all 3-D conv layers 19-36) w/ Regression Loss | 97.9 | 93.7 | 89.4 | 36.6 | 47.6 | 0.16M | 0.29
Unaries only (omitting all 3-D conv layers 19-36) w/ Classification Loss | 51.9 | 24.3 | 21.7 | 13.1 | 36.0 | 0.16M | 0.29
Single scale 3-D context (omitting 3-D conv layers 21-36) | 34.6 | 24.2 | 21.2 | 7.27 | 20.4 | 0.24M | 0.84
Hierarchical 3-D context (all 3-D conv layers) | 16.9 | 9.34 | 7.22 | 2.51 | 12.4 | 3.5M | 0.95
2. Comparison of loss functions
GC-Net + Classification loss | 19.2 | 12.2 | 10.4 | 5.01 | 20.3 | 3.5M | 0.95
GC-Net + Soft classification loss [32] | 20.6 | 12.3 | 10.4 | 5.40 | 25.1 | 3.5M | 0.95
GC-Net + Regression loss | 16.9 | 9.34 | 7.22 | 2.51 | 12.4 | 3.5M | 0.95
GC-Net (final architecture with regression loss) | 16.9 | 9.34 | 7.22 | 2.51 | 12.4 | 3.5M | 0.95

Table 2: Results on the Scene Flow dataset [36], which contains 35,454 training and 4,370 testing images of size 960×540 px from an array of synthetic scenes. We compare different architecture variants to justify our design choices. The first experiment shows the importance of the 3-D convolutional architecture. The second experiment shows the gain in performance we get from using a regression loss.
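For reference, a regression loss of the kind compared in Table 2 can be as simple as a masked mean absolute error on the soft-argmin disparities. This is a sketch of one plausible form, not necessarily the exact objective used in the paper.

```python
import torch

def regression_loss(pred_disp, gt_disp, valid):
    """L1 regression objective: mean absolute disparity error over
    pixels with valid ground truth (`valid` is a boolean mask)."""
    return (pred_disp - gt_disp).abs()[valid].mean()

# Usage with the soft argmin output:
# loss = regression_loss(soft_argmin(costs), gt_disp, gt_disp > 0)
```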
In this section we evaluate two key ideas in this paper: using a regression loss over a classification loss, and learning 3-D convolutional filters for cost volume regularization. We use the synthetic Scene Flow dataset [36] for these experiments, which contains 35,454 training and 4,370 testing images. We use this dataset for two reasons. Firstly, we know perfect, dense ground truth from the synthetic scenes, which removes any discrepancies due to erroneous labels. Secondly, the dataset is large enough to train the model without over-fitting. In contrast, the KITTI dataset only contains 200 training images, and we observe that the model is susceptible to over-fitting to this very small dataset. With tens of thousands of training images we do not have to consider over-fitting in our evaluation.

[Training plot: % < 1 px disparity error against training iterations (0-120,000), comparing GC-Net with soft classification and hard classification variants.]
(a) KITTI 2012 test data qualitative results. From left: left stereo input image, disparity prediction, error map.
(b) KITTI 2015 test data qualitative results. From left: left stereo input image, disparity prediction, error map.
(c) Scene Flow test set qualitative results. From left: left stereo input image, disparity prediction, ground truth.
Figure 4: Qualitative results. By learning to incorporate wider context, our method is often able to handle challenging
scenarios, such as reflective, thin or texture-less surfaces. By explicitly learning geometry in a cost volume, our method
produces sharp results and can also handle large occlusions.
Method | >2 px Non-Occ | >2 px All | >3 px Non-Occ | >3 px All | >5 px Non-Occ | >5 px All | Mean Error Non-Occ | Mean Error All | Runtime (s)
SPS-st [44] | 4.98 | 6.28 | 3.39 | 4.41 | 2.33 | 3.00 | 0.9 px | 1.0 px | 2
Deep Embed [8] | 5.05 | 6.47 | 3.10 | 4.24 | 1.92 | 2.68 | 0.9 px | 1.1 px | 3
Content-CNN [32] | 4.98 | 6.51 | 3.07 | 4.29 | 2.03 | 2.82 | 0.8 px | 1.0 px | 0.7
MC-CNN [50] | 3.90 | 5.45 | 2.43 | 3.63 | 1.64 | 2.39 | 0.7 px | 0.9 px | 67
PBCP [40] | 3.62 | 5.01 | 2.36 | 3.45 | 1.62 | 2.32 | 0.7 px | 0.9 px | 68
Displets v2 [18] | 3.43 | 4.46 | 2.37 | 3.09 | 1.72 | 2.17 | 0.7 px | 0.8 px | 265
GC-Net (this work) | 2.71 | 3.46 | 1.77 | 2.30 | 1.12 | 1.46 | 0.6 px | 0.7 px | 0.9

(a) KITTI 2012 test set results [14]. This benchmark contains 194 train and 195 test gray-scale image pairs.
Table 3: Comparison to other stereo methods on the test sets of the KITTI 2012 and 2015 benchmarks [14, 35]. Our method sets a new state of the art on these two competitive benchmarks, outperforming all other approaches.
For this reason, we pre-train our model on the large synthetic dataset, Scene Flow [36]. This helps to prevent our model from over-fitting the very small KITTI training dataset. We hold out 40 image pairs as our validation set.

Tables 3a and 3b compare our method, GC-Net (Geometry and Context Network), to other approaches on the KITTI 2012 and 2015 datasets, respectively². Our method achieves state-of-the-art results for both KITTI benchmarks, by a notable margin. We improve on the state of the art by 9% and 22% for KITTI 2015 and 2012, respectively. Our method is also notably faster than most competing approaches, which often require expensive post-processing. In Figure 4 we show qualitative results of our method on KITTI 2012, KITTI 2015 and Scene Flow.

Our approach outperforms previous deep learning patch-based methods [48, 32] which produce noisy unary potentials and are unable to predict with sub-pixel accuracy. For this reason, these algorithms do not use end-to-end learning and typically post-process the unary output with SGM regularization [11] to produce the final disparity maps.

The closest method to our architecture is DispNetC [34], which is an end-to-end regression network pre-trained on SceneFlow. However, our method outperforms this architecture by a notable margin for all test pixels. DispNetC uses a 1-D correlation layer along the disparity line as an approximation to the stereo cost volume. In contrast, our architecture more explicitly leverages geometry by formulating a full cost volume, using 3-D convolutions and a soft argmin layer, resulting in an improvement in performance.

4.3. Model Saliency

In this section we present evidence which shows that our model can reason about local geometry using wider contextual information. In Figure 5 we show some examples of the model's saliency with respect to a predicted pixel's disparity. Saliency maps [41] show the sensitivity of the output with respect to each input pixel. We use the method from [51], which plots the predicted disparity as a function of systematically occluding the input images. We offset the occlusion in each stereo image by the point's disparity.

These results show that the disparity prediction for a given point is dependent on a wide contextual field of view. For example, the disparity on the front of the car depends on the input pixels of the car and the road surface below.

² Full leaderboard: www.cvlibs.net/datasets/kitti/
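The occlusion probe described above might look like the following sketch: slide a grey patch over both images, shifting it by the point's disparity in the right image, and record how the disparity predicted at one pixel changes. The function name, patch size, and grey fill value are our assumptions.

```python
import torch

def occlusion_saliency(model, left, right, py, px, disp, patch=16):
    """Predicted disparity at (py, px) as a function of systematically
    occluding the inputs, with the occluder offset by the point's
    disparity in the right image. `model` maps a stereo pair
    (N x C x H x W each) to an N x H x W disparity map."""
    _, _, h, w = left.shape
    base = model(left, right)[0, py, px]
    saliency = torch.zeros(h // patch, w // patch)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occ_l, occ_r = left.clone(), right.clone()
            occ_l[..., i:i + patch, j:j + patch] = 0.5      # grey patch on left image
            jr = max(j - int(disp), 0)                      # offset by the disparity
            occ_r[..., i:i + patch, jr:jr + patch] = 0.5    # grey patch on right image
            delta = model(occ_l, occ_r)[0, py, px] - base
            saliency[i // patch, j // patch] = delta.abs()  # sensitivity to this region
    return saliency
```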
For future work we are interested in exploring a more
explicit representation of semantics to improve our disparity
estimation, and reasoning under uncertainty with Bayesian
convolutional neural networks.
[18] F. Guney and A. Geiger. Displets: Resolving stereo ambiguities using object knowledge. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4165-4175, 2015.
[19] R. Haeusler, R. Nair, and D. Kondermann. Ensemble learning for confidence measures in stereo vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 305-312, 2013.
[20] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
[21] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[22] P. Heise, B. Jensen, S. Klose, and A. Knoll. Fast dense stereo correspondences by binary locality sensitive hashing. In ICRA, pages 1-6, 2015.
[23] H. Hirschmüller. Accurate and efficient stereo processing by semi-global matching and mutual information. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 2, pages 807-814. IEEE, 2005.
[24] H. Hirschmüller. Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):328-341, 2008.
[25] H. Hirschmüller and D. Scharstein. Evaluation of cost functions for stereo matching. In IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[26] A. Klaus, M. Sormann, and K. Karner. Segment-based stereo matching using belief propagation and a self-adapting dissimilarity measure. In Proceedings of the International Conference on Pattern Recognition, volume 3, pages 15-18, 2006.
[27] V. Kolmogorov and R. Zabih. Computing visual correspondences with occlusions using graph cuts. In International Conference on Computer Vision (ICCV), 2001.
[28] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[29] Y. Li and D. P. Huttenlocher. Learning for stereo vision using the structured support vector machine. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[30] F. Liu, C. Shen, G. Lin, and I. Reid. Learning depth from single monocular images using deep convolutional neural fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
[31] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
[32] W. Luo, A. G. Schwing, and R. Urtasun. Efficient deep learning for stereo matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5695-5703, 2016.
[33] W. Luo, A. G. Schwing, and R. Urtasun. Efficient deep learning for stereo matching. In CVPR, 2016.
[34] N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. CoRR, abs/1512.02134, 2015.
[35] M. Menze and A. Geiger. Object scene flow for autonomous vehicles. In Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[36] N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. arXiv:1512.02134.
[37] M. G. Park and K. J. Yoon. Leveraging stereo matching with learning-based confidence measures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 101-109, 2015.
[38] D. Scharstein and C. Pal. Learning conditional random fields for stereo. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[39] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47(1):7-42, 2002.
[40] A. Seki and M. Pollefeys. Patch based confidence prediction for dense disparity map. In British Machine Vision Conference (BMVC), 2016.
[41] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
[42] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
[43] F. Tombari, S. Mattoccia, L. Di Stefano, and E. Addimanda. Classification and evaluation of cost aggregation methods for stereo correspondence. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
[44] K. Yamaguchi, D. McAllester, and R. Urtasun. Efficient joint segmentation, occlusion labeling, stereo and flow estimation. In European Conference on Computer Vision, pages 756-771. Springer, 2014.
[45] R. Zabih and J. Woodfill. Non-parametric local transforms for computing visual correspondence. In Proceedings of the European Conference on Computer Vision, pages 151-158, 1994.
[46] S. Zagoruyko and N. Komodakis. Learning to compare image patches via convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4353-4361, 2015.
[47] J. Žbontar and Y. LeCun. Computing the stereo matching cost with a convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1592-1599, 2015.
[48] J. Žbontar and Y. LeCun. Computing the stereo matching cost with a convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1592-1599, 2015.
[49] J. Žbontar and Y. LeCun. Stereo matching by training a convolutional neural network to compare image patches. CoRR, abs/1510.05970, 2015.
[50] J. Žbontar and Y. LeCun. Stereo matching by training a convolutional neural network to compare image patches. Journal of Machine Learning Research, 17:1-32, 2016.
[51] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818-833. Springer, 2014.
[52] L. Zhang and S. M. Seitz. Estimating optimal parameters for MRF stereo from a single image pair. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(2):331-342, 2007.