Deep Geometric Prior For Surface Reconstruction
Francis Williams1, Teseo Schneider1, Claudio Silva1, Denis Zorin1, Joan Bruna1, and Daniele Panozzo1
1 New York University
francis.williams@nyu.edu, teseo.schneider@nyu.edu, csilva@nyu.edu, dzorin@cs.nyu.edu, bruna@cims.nyu.edu, panozzo@nyu.edu
arXiv:1811.10943v2 [cs.CV] 5 Apr 2019
2. Related work

Geometric Deep Learning. A variety of architectures have been proposed for geometric applications. A few work with point clouds as input; in most cases, however, these methods are designed for classification or segmentation tasks. Among the first examples are PointNet [33] and PointNet++ [34], originally designed for classification and segmentation using a set-based neural architecture [39, 36]. PCPNet [14] is a version of the PointNet architecture for estimating local shape properties. A number of learning architectures for 3D geometry work with voxel data, converting input point clouds to this form (e.g., [38]). The closest problems these types of networks solve are shape completion and point-cloud upsampling.

Shape completion is considered, e.g., in [9], where a volumetric CNN is used to predict a very coarse shape completion, which is then refined using data-driven local shape synthesis on small volumetric patches. [16] follows a somewhat similar approach, combining multiview and volumetric low-resolution global data at a first stage, and using a volumetric network to synthesize local patches to obtain higher resolution. Neither of these methods aims to achieve high-quality surface reconstruction.

PU-Net, described in [41], is, to the best of our knowledge, the only learning-based work addressing point cloud upsampling directly. The method proceeds by splitting input shapes into patches and learning hierarchical features using a PointNet++-type architecture. Feature aggregation and expansion are then used to perform point set expansion in feature space, followed by the (upsampled) point set reconstruction.

In contrast to other methods, the untrained networks in our method take parametric coordinates in square parametric domains as inputs and produce surface points as output. An important exception is the recent work [12] defining an architecture, AtlasNet, in which the decoder part is similar to ours, but with some important distinctions discussed in Section 3. Finally, [4] studied the ability of neural networks to approximate low-dimensional manifolds, showing that even two-layer ReLU networks have a remarkable ability to encode smooth structures with a near-optimal number of parameters. In our setting, we rely on overparametrisation and leverage the implicit optimization bias of gradient descent.

Surface Reconstruction is an established research area dating back at least to the early 1990s (e.g., [17]); [6] is a recent comprehensive survey of the area. We focus our discussion on the techniques that we use for comparison, which are a superset of those included in the surface reconstruction benchmark of Berger et al. [5]. Berger tested 10 different techniques in their paper; we will follow their nomenclature. They separate techniques into four main categories: indicator function, point-set surfaces, multi-level partition of unity, and scattered point meshing.

Indicator function techniques define a scalar function in space that can be used to test whether a given point is inside or outside the surface. There are multiple ways to define such a function, from which a surface is generated by isocontouring. Poisson surface reconstruction (Poisson) [21] inverts the gradient operator by solving a Poisson equation to define the indicator function. Fourier surface reconstruction (Fourier) [20] represents the indicator function in the Fourier basis, while Wavelet surface reconstruction (Wavelet) [27] employs a Haar or a Daubechies (4-tap) basis. Screened Poisson surface reconstruction (Screened) [22] is an extension of [21] that incorporates point constraints to avoid the over-smoothing of the earlier technique. This technique is not considered in [5], but is considered by us.

Point-set surfaces [2] define a projection operator that moves points in space to a point on the surface, where the surface is defined to be the collection of stationary points of the projection operator. A definition of these projection operators is beyond the scope of our paper (see [5]). In our experiments, we have used simple point set surfaces (SPSS) [1], implicit moving least squares (IMLS) [24], and algebraic point set surfaces (APSS) [13].

Edge-Aware Point Set Resampling (EAR) [18] (also not considered in [5], but considered by us) works by first computing reliable normals away from potential singularities, followed by a resampling step with a novel bilateral filter towards surface singularities. Reconstruction can be done using different techniques on the resulting augmented point set with normals.

Multi-level Partition of Unity defines an implicit function by integrating weight functions of a set of input points. The original approach of [29] (MPU) uses linear functions as low-order implicits, while [28] (MPUSm) defines differential operators directly on the MPU function. The method of [31] (RBF) uses compactly-supported radial basis functions.

Scattered Point Meshing [30] (Scattered) grows weighted spheres around points in order to determine the connectivity in the output triangle mesh.

The work in [32] uses a manifold-based approach to a direct construction of a global parametrization from a set of range images (a point cloud, or any other surface representation, can be converted to such a set by projection to multiple planes). It uses range images as charts with projections as chart maps; our method computes chart maps by fitting. [40] jointly optimizes for connectivity and geometry to produce a single mesh for an entire input point cloud. In contrast, our method produces a global chart map using only a local optimization procedure.
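To make the point-set-surface family above concrete, here is a minimal implicit moving least squares (IMLS)-style evaluator. This is an illustrative sketch, not the implementation of [24]: the Gaussian weight, the bandwidth `h`, and the toy planar data are our own assumptions.

```python
import numpy as np

def imls(x, points, normals, h=0.5):
    """Evaluate a simple IMLS-style implicit function at query point x.

    f(x) = sum_i w_i(x) <n_i, x - p_i> / sum_i w_i(x),
    with Gaussian weights w_i(x) = exp(-||x - p_i||^2 / h^2).
    The surface is the zero level set of f.
    """
    d = x - points                              # (n, 3) offsets to the samples
    w = np.exp(-np.sum(d * d, axis=1) / h**2)   # Gaussian weights
    return np.dot(w, np.sum(d * normals, axis=1)) / np.sum(w)

# Toy data: samples on the plane z = 0 with upward normals.
pts = np.column_stack([np.random.rand(100), np.random.rand(100), np.zeros(100)])
nrm = np.tile([0.0, 0.0, 1.0], (100, 1))

# A query 0.5 above the plane gets signed value ~0.5; a query on the plane gets ~0.
print(imls(np.array([0.5, 0.5, 0.5]), pts, nrm))  # ~0.5
print(imls(np.array([0.2, 0.7, 0.0]), pts, nrm))  # ~0.0
```

Isocontouring the zero level set of such a function (e.g., with marching cubes) then yields the reconstructed surface.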
[Figure: overview of the method — input point subsets X1, X2, X3 are each fitted by a chart network φ(v; µp), minimizing the losses L̃(µ1), L̃(µ2), L̃(µ3).]

Deep Image Prior. Our approach is inspired, in part, by the deep image prior. [37] demonstrated that an untrained deep network can be overfitted to input data, producing remarkably high-quality upsampling and even hole filling without training, with the convolutional neural network structure acting as a regularizer. Our approach to surface reconstruction is similar, in that we use untrained networks to represent individual chart embedding functions. However, an important difference is that our loss function measures geometric similarity.

We propose to approximate ϕ using a neural network, φ(·; θp) : V → R³, where θp is a vector of parameters that we fit so that the image of φ approximates Up ∩ S. For that purpose, we first consider a sample {v1, . . . , vn} of n = |Xp| points in V using a Poisson disk distribution, and the associated Earth Mover's or 2-Wasserstein distance (EMD) between {φ(vi; θp) : i = 1 . . . n} and Xp = {x1, . . . , xn}:

L(θ) = inf_{π ∈ Πn} Σ_{i ≤ n} ‖φ(vi; θ) − xπ(i)‖². (1)

Here Πn denotes the set of permutations of n points. In practice we minimize an entropy-regularized (Sinkhorn) relaxation of (1) with parameter λ, where Pn is the set of n × n bi-stochastic matrices over which the relaxed matching is optimized and H(P) is the entropy, H(P) = −Σ_{i,j} Pij log Pij. This distance provably approximates the Wasserstein metric as λ → ∞ and can be computed in near-linear time [3]. Figure 13 in the supplemental material shows the effect of varying the regularization parameter λ on the results.

We choose φ to be an MLP with the half-rectified (ReLU) activation function:

φ(v; θ) = θK ReLU(θK−1 ReLU(· · · ReLU(θ1 v))),
[Figure: fitting loss (log scale, from ≈3.2e−1 down to ≈1.0e−3) versus iteration (0–500).]
[Figure 4 color scale: 0% to 1.5% error. Models: Anchor, Dancing Children, Daratech, Gargoyle, Lord Quas.]

Global Consistency. As described in Section 3.2, reconstructing an entire surface from local charts requires consistent transitions, leading to the formulation in (3) and (4), which reinforces consistency across overlapping patches.
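Since Equations (3) and (4) are not reproduced in this excerpt, the following is only an illustrative sketch of the underlying idea: two charts evaluated at parametric points that correspond to the same surface sample are penalized for disagreeing. The function name, the mean-squared penalty, and the toy affine charts are our own assumptions, not the paper's formulation.

```python
import numpy as np

def consistency_penalty(phi_p, phi_q, v_p, v_q):
    """Mean squared disagreement of two charts on corresponding
    parametric points (v_p[i] and v_q[i] map to the same sample)."""
    diff = phi_p(v_p) - phi_q(v_q)
    return np.mean(np.sum(diff ** 2, axis=1))

# Toy charts: two affine maps of the unit square into R^3.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
chart_p = lambda v: v @ A.T
chart_q = lambda v: (v + 0.1) @ A.T     # slightly disagreeing chart

v = np.random.rand(32, 2)
print(consistency_penalty(chart_p, chart_q, v, v))  # ~0.03 for these charts
print(consistency_penalty(chart_p, chart_p, v, v))  # 0.0 for identical charts
```

Driving such a penalty to zero on overlap regions is what makes the union of charts behave like a single consistent surface.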
Figure 4. Fitted points on the different models of the benchmark; the color illustrates the error with respect to the ground truth.
[Figure 5 panels: First patches (left), Final patches (right).]
Figure 5. Initial patch fitting (left): we clearly see the "un-mixed" colors due to the disagreeing patches. After optimization (right), all patches overlap, producing a mix of colors.

[Figure 8 panels: error histograms (Frequency vs. Error, 0–0.005) for Screened Poisson, RBF, Scattered, SPSS, Wavelet, and Our on the Anchor and Gargoyle models.]
Figure 8. Percentage of fitted vertices (y-axis) reaching a given error (drec→GT, x-axis) for different methods. The errors are measured as distance from the input data to the fitted surface. The plots for the remaining models of the dataset are provided in the supplementary document.
[Figure: error histograms (Frequency vs. Error, 0–0.1) of dinp→rec (left) and drec→GT (right) for EAR, Fourier, IMLS, MPU, MPUSm, Poisson, and Our on the Anchor and Gargoyle models.]

Figure 9. EAR (left) versus Ours (right). The input point cloud contains noise, which is removed by our reconstruction (while preserving sharp features quite well) but interpolated by EAR. The result is that EAR produces spurious points and visible artifacts.

Figure 10. Example of reconstruction under extreme conditions for our method versus the most representative traditional methods. The red dots in the Reference are the input points for the reconstruction.

Figure 11. Surface reconstruction of stacked cubes using a single chart, with two different choices of metric (Chamfer versus Wasserstein with Sinkhorn regularization). We verify that the Wasserstein metric, even with the Sinkhorn entropy-regularization, provides a more uniform parametrisation, as well as bijective correspondences between overlapping charts.
Noise and Sharp Features. As discussed above, the behavior of surface reconstruction methods is particularly challenging in the presence of sharp features and noise in the input measurements. We performed two additional experiments to compare the behaviour of our architecture with the most representative traditional methods on both noisy point clouds and models with sharp features. Screened Poisson Surface Reconstruction [22] (Figure 10, left) is very robust to noise, but fails at capturing sharp features. EAR (Figure 10, middle) is the opposite: it captures sharp features accurately, but, being an interpolatory method, fails at reconstructing noisy inputs, thus introducing spurious points and regions during the reconstruction. Our method (Figure 10, right) does not suffer from these limitations, robustly fitting noisy inputs and capturing sharp features. Figure 9 illustrates that our deep geometric prior preserves the sharp structures while successfully denoising the input point cloud. The mathematical analysis of such implicit regularisation is a fascinating direction of future work.

Generating a Triangle Mesh. Our method generates a collection of local charts, which can be sampled at an arbitrary resolution. We can generate a triangle mesh by using off-the-shelf techniques such as Poisson Surface Reconstruction [22] on our dense point clouds. We provide meshes reconstructed in this way for all the benchmark models at https://github.com/fwilliams/deep-geometric-prior.

Comparison with AtlasNet [12]. Our atlas construction is related to the AtlasNet model introduced in [12]. AtlasNet is a data-driven reconstruction method using an autoencoder architecture. While their emphasis was on leveraging semantic similarity of shapes and images on several 3D tasks, we focus on high-fidelity point cloud reconstruction in data-sparse regimes, i.e., in the absence of any training data. Our main contribution is to show that in such regimes, an even simpler neural network yields state-of-the-art results on surface reconstruction. We also note the following essential differences between our method and AtlasNet.

• No Learning: Our model does not require any training data, and, as a result, we do not need to consider an autoencoder architecture with specific encoders.

• Transition Functions: Since we have pointwise correspondences, we can define a transition function between overlapping patches Vp and Vq by consistently triangulating corresponding parametric points in Vp and Vq. In contrast, AtlasNet does not have such correspondences and thus does not produce a true manifold atlas.

• Patch Selection: We partition the input into point-sets Xp that we fit separately. While it is theoretically attractive to attempt to fit each patch to the whole set as is done in AtlasNet, and let the algorithm figure out the patch partition automatically, in practice the difficulty of the optimization problem leads to unsatisfactory results. In other words, AtlasNet approximates a global matching, whereas our model only requires local matchings within each patch.
[Figure: error histograms (Frequency vs. Error, log scale) comparing AtlasNet and Our.]

5. Discussion

Neural networks – particularly in the overparametrised regime – are remarkably efficient at curve fitting in high-dimensional spaces. Despite recent progress in understanding the dynamics of gradient descent in such regimes, their
Figure 13. Effect of the parameter λ on the reconstruction and the loss W2 .
A. Supplementary Experiments
A.1. Effect of the parameter λ
In Figure 13, we demonstrate the effect of varying the
Sinkhorn regularization parameter on the final reconstruc-
tion of a surface. Smaller values of λ yield a better approxi-
mation of the Wasserstein distance, and thus, produce better
reconstructions of the original points.
A.2. Kinect reconstruction
To demonstrate the effectiveness of our technique on re-
constructing point clouds with large quantities of noise and
highly non-uniform sampling, we reconstruct a raw point
cloud acquired with a Kinect V2 (Figure 14). In spite of
the challenging input, we are still able to produce a smooth
reconstruction approximating the geometry of the original
object.
[Supplementary figures: per-method error histograms (Frequency vs. Error) for APSS, EAR, Fourier, IMLS, MPU, MPUSm, Poisson, Screened Poisson, RBF, Scattered, SPSS, Wavelet, and Our on the Anchor and Gargoyle models, at both error scales (0–0.1 and 0–0.005).]
Anchor
Mpusmooth 9.50e-06 9.13e-04 9.13e-04 1.06e-02 Mpusmooth 4.81e-06 1.87e-03 1.87e-03 3.66e-02
Poisson 1.09e-05 1.63e-03 1.63e-03 1.17e-01 Poisson 8.75e-06 2.27e-03 2.27e-03 6.59e-02
Screen Poisson 6.99e-06 7.45e-04 7.45e-04 1.79e-02 Screen Poisson 7.12e-06 2.15e-03 2.15e-03 6.59e-02
Rbf 1.46e-05 8.61e-04 8.61e-04 9.89e-03 Rbf 1.62e-06 2.98e-03 2.98e-03 6.60e-02
Scattered 1.33e-05 8.18e-04 8.18e-04 1.06e-02 Scattered 4.30e-06 2.16e-03 2.16e-03 6.57e-02
Spss 8.67e-06 1.03e-03 1.03e-03 1.06e-02 Spss 4.15e-06 4.39e-03 4.39e-03 9.00e-02
Wavelet 8.30e-06 2.19e-03 2.19e-03 6.27e-02 Wavelet 5.36e-06 3.01e-03 3.01e-03 6.59e-02
Our 4.91e-06 7.21e-04 7.21e-04 2.55e-02 Our 3.82e-06 1.53e-03 1.53e-03 9.69e-03
Apss 9.98e-06 7.87e-04 7.87e-04 1.05e-02 Apss 2.18e-06 1.68e-03 1.68e-03 2.16e-02
Ear 1.34e-09 2.79e-07 2.79e-07 8.18e-07 Ear 3.15e-06 1.50e-03 1.50e-03 1.68e-02
Fourier 7.86e-06 1.06e-03 1.06e-03 1.94e-02 Fourier 4.68e-06 2.39e-03 2.39e-03 5.98e-02
Imls 5.73e-06 8.35e-04 8.35e-04 1.05e-02 Imls 3.46e-06 1.79e-03 1.79e-03 2.36e-02
Mpu 5.33e-06 8.47e-04 8.47e-04 8.55e-03 Mpu 5.25e-06 2.06e-03 2.06e-03 4.11e-02
Daratech
Mpusmooth 9.87e-06 9.31e-04 9.31e-04 1.87e-02 Mpusmooth 5.98e-06 2.14e-03 2.14e-03 4.92e-02
Poisson 1.28e-05 1.58e-03 1.58e-03 3.18e-02 Poisson 3.25e-06 2.98e-03 2.98e-03 6.83e-02
Screen Poisson 3.80e-06 6.98e-04 6.98e-04 1.72e-02 Screen Poisson 4.30e-06 2.42e-03 2.42e-03 6.47e-02
Rbf 2.12e-06 7.52e-04 7.52e-04 1.08e-02 Rbf 2.28e-06 3.59e-03 3.59e-03 6.44e-02
Scattered 7.48e-06 6.97e-04 6.97e-04 1.70e-02 Scattered 5.40e-06 1.60e-03 1.60e-03 1.81e-02
Spss 8.36e-06 1.12e-03 1.12e-03 1.12e-02 Spss 5.99e-06 2.73e-03 2.73e-03 7.66e-02
Wavelet 6.13e-06 1.88e-03 1.88e-03 2.27e-02 Wavelet 4.04e-06 3.29e-03 3.29e-03 5.04e-02
Our 5.30e-06 4.23e-04 4.23e-04 1.79e-02 Our 1.96e-06 1.51e-03 1.51e-03 1.67e-02
Apss 6.20e-06 4.98e-04 4.98e-04 1.45e-02 Apss 5.53e-06 1.68e-03 1.68e-03 2.87e-02
Ear 9.81e-10 3.18e-07 3.18e-07 8.28e-07 Ear 2.54e-06 1.32e-03 1.32e-03 2.46e-02
Fourier 7.69e-06 6.23e-04 6.23e-04 2.65e-02 Fourier 3.51e-06 1.60e-03 1.60e-03 2.53e-02
Imls 9.09e-06 5.88e-04 5.88e-04 1.43e-02 Imls 6.12e-06 1.75e-03 1.75e-03 2.93e-02
Mpu 1.07e-05 5.54e-04 5.54e-04 7.08e-03 Mpu 3.98e-06 1.53e-03 1.53e-03 3.84e-02
Dc
Mpusmooth 8.10e-06 6.14e-04 6.14e-04 2.75e-02 Mpusmooth 3.51e-06 1.47e-03 1.47e-03 1.64e-02
Poisson 6.76e-06 1.02e-03 1.02e-03 2.63e-02 Poisson 5.86e-06 1.87e-03 1.87e-03 2.53e-02
Screen Poisson 7.12e-06 4.34e-04 4.34e-04 2.70e-02 Screen Poisson 3.96e-06 1.49e-03 1.49e-03 2.25e-02
Rbf 9.15e-06 6.40e-04 6.40e-04 2.77e-02 Rbf 1.15e-06 1.55e-03 1.55e-03 3.01e-02
Scattered 4.45e-06 5.20e-04 5.20e-04 2.69e-02 Scattered 5.26e-06 1.89e-03 1.89e-03 5.27e-02
Spss 3.76e-06 7.66e-04 7.66e-04 1.55e-02 Spss 5.12e-06 1.96e-03 1.96e-03 2.93e-02
Wavelet 1.76e-05 1.82e-03 1.82e-03 2.68e-02 Wavelet 4.70e-06 2.63e-03 2.63e-03 2.56e-02
Our 6.10e-06 3.98e-04 3.98e-04 2.48e-02 Our 3.10e-06 1.31e-03 1.31e-03 1.51e-02
Apss 7.80e-06 5.62e-04 5.62e-04 6.92e-03 Apss 2.64e-06 1.50e-03 1.50e-03 3.41e-02
Ear 1.73e-09 3.18e-07 3.18e-07 7.52e-07 Ear 4.04e-06 1.22e-03 1.22e-03 8.81e-03
Fourier 3.94e-06 7.02e-04 7.02e-04 2.55e-02 Fourier 2.29e-06 1.37e-03 1.37e-03 2.15e-02
Imls 8.39e-06 6.09e-04 6.09e-04 6.49e-03 Imls 2.11e-06 1.71e-03 1.71e-03 4.37e-02
Mpu 1.10e-05 6.27e-04 6.27e-04 6.75e-03 Mpu 4.60e-06 1.57e-03 1.57e-03 2.98e-02
Gargoyle
Mpusmooth 4.57e-06 7.25e-04 7.25e-04 8.81e-03 Mpusmooth 1.20e-06 1.37e-03 1.37e-03 2.17e-02
Poisson 1.20e-05 1.05e-03 1.05e-03 2.73e-02 Poisson 6.48e-06 1.57e-03 1.57e-03 2.17e-02
Screen Poisson 1.04e-05 4.87e-04 4.87e-04 1.81e-02 Screen Poisson 2.82e-06 1.30e-03 1.30e-03 2.09e-02
Rbf 6.44e-06 7.30e-04 7.30e-04 5.86e-03 Rbf 3.19e-06 7.66e-03 7.66e-03 9.17e-02
Scattered 7.73e-06 5.78e-04 5.78e-04 1.03e-02 Scattered 2.48e-06 1.36e-03 1.36e-03 2.17e-02
Spss 5.30e-06 7.74e-04 7.74e-04 1.34e-02 Spss 5.03e-06 1.85e-03 1.85e-03 4.65e-02
Wavelet 1.14e-05 1.56e-03 1.56e-03 2.73e-02 Wavelet 4.79e-06 1.89e-03 1.89e-03 2.11e-02
Our 5.40e-06 4.50e-04 4.50e-04 1.81e-02 Our 2.34e-06 1.19e-03 1.19e-03 1.45e-02
Apss 8.64e-06 4.76e-04 4.76e-04 7.55e-03 Apss 3.90e-06 1.24e-03 1.24e-03 2.14e-02
Ear 1.05e-09 3.22e-07 3.22e-07 8.91e-07 Ear 2.26e-06 1.14e-03 1.14e-03 7.49e-03
Fourier 1.15e-05 5.64e-04 5.64e-04 1.79e-02 Fourier 5.01e-06 1.30e-03 1.30e-03 1.96e-02
Imls 8.29e-06 5.29e-04 5.29e-04 8.60e-03 Imls 3.30e-06 1.31e-03 1.31e-03 2.36e-02
Mpu 7.94e-06 5.44e-04 5.44e-04 5.01e-03 Mpu 2.30e-06 1.35e-03 1.35e-03 2.94e-02
Lord Quas
Mpusmooth 1.07e-05 5.70e-04 5.70e-04 8.18e-03 Mpusmooth 4.70e-06 1.28e-03 1.28e-03 1.54e-02
Poisson 5.70e-06 8.24e-04 8.24e-04 4.38e-02 Poisson 1.91e-06 1.39e-03 1.39e-03 1.94e-02
Screen Poisson 4.74e-06 4.29e-04 4.29e-04 1.08e-02 Screen Poisson 2.07e-06 1.24e-03 1.24e-03 1.62e-02
Rbf 1.01e-05 6.48e-04 6.48e-04 7.27e-03 Rbf 2.26e-06 1.29e-03 1.29e-03 1.80e-02
Scattered 6.72e-06 4.88e-04 4.88e-04 1.69e-02 Scattered 4.48e-06 1.17e-03 1.17e-03 1.32e-02
Spss 6.66e-06 6.03e-04 6.03e-04 7.66e-03 Spss 1.49e-06 1.38e-03 1.38e-03 2.28e-02
Wavelet 9.27e-06 1.49e-03 1.49e-03 4.71e-02 Wavelet 4.42e-06 1.87e-03 1.87e-03 1.52e-02
Our 1.38e-06 4.15e-04 4.15e-04 2.14e-02 Our 3.94e-06 1.14e-03 1.14e-03 8.74e-03
Table 1 (left block of each row). Distance from the input to the reconstruction. Table 2 (right block of each row). Distance from the reconstruction to the input.