
PoissonNet: Resolution-Agnostic 3D Shape Reconstruction
using Fourier Neural Operators

Hector Andrade-Loarca, Aras Bacho, Julius Hege, Gitta Kutyniok
Ludwig-Maximilians-Universität München, Akademiestraße 7, 80799 München
andrade@math.lmu.de, bacho@math.lmu.de, hege@math.lmu.de, kutyniok@math.lmu.de
(Gitta Kutyniok is also affiliated with the University of Tromsø, Hansine Hansens veg 18, 9019 Tromsø)

arXiv:2308.01766v1 [cs.CV] 3 Aug 2023
Abstract

We introduce PoissonNet, an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points. Traditional deep neural networks face challenges with common 3D shape discretization techniques due to their computational complexity at higher resolutions. To overcome this, we leverage Fourier Neural Operators (FNOs) to solve the Poisson equation and reconstruct a mesh from oriented point cloud measurements. PoissonNet exhibits two main advantages. First, it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation, thanks to the resolution-agnostic nature of FNOs. This feature allows for one-shot super-resolution. Second, our method surpasses existing approaches in reconstruction quality while being differentiable. Overall, our proposed method not only improves upon the limitations of classical deep neural networks in shape reconstruction but also achieves superior results in terms of reconstruction quality, running time, and resolution flexibility. Furthermore, we demonstrate that the Poisson surface reconstruction problem is well-posed in the limit case by showing a universal approximation theorem for the solution operator of the Poisson equation with distributional data utilizing the Fourier Neural Operator, which provides a theoretical foundation for our numerical results. The code to reproduce the experiments is available at https://github.com/arsenal9971/PoissonNet.

1. Introduction

Shape reconstruction, the process of creating a three-dimensional mesh of an object from 2D images, plays a crucial role in computer vision. It is an active research area with significant applications in robotics, virtual reality, and autonomous driving. A common subproblem in this field is the task of reconstructing a surface from noisy point clouds, which are typically acquired using techniques such as photogrammetry (multiple images) or depth sensors.

A plethora of shape reconstruction algorithms have been developed, as surveyed in [1]. Recently, deep learning approaches trained on large datasets of 3D objects, such as Convolutional Occupancy Networks [19] and POCO [2], have emerged, promising improved reconstruction quality. However, these methods often face challenges in generalization when confronted with models or noise outside their training distribution [21].

Another challenge faced by deep learning approaches is scalability, especially when dealing with complex models at high resolutions. Convolutional neural networks have achieved widespread success in computer vision and can straightforwardly generalize to three dimensions. However, they operate on regular grids, which limits their ability to represent fine details accurately.

To overcome this limitation, various strategies have been proposed. Some solutions involve working with irregular grids [19], representing shapes as implicit functions [4], or employing multi-grid approaches in a coarse-to-fine manner [20] or in a sliding-window fashion [15].

Our aim in this work is to enhance reconstruction quality by integrating deep learning into the well-established Poisson surface reconstruction method, thereby leveraging its robustness and generalization capabilities.
In order to achieve that, we introduce a neural network architecture whose backbone is a Fourier Neural Operator [11], along with custom point cloud pre-processing and mesh post-processing. The choice is motivated by Fourier Neural Operators' ability to approximate partial differential equation solutions (see Sec. 3). The next sections introduce Poisson surface reconstruction and Fourier Neural Operators in detail.

1.1. Poisson Surface Reconstruction

We start by introducing the Poisson surface reconstruction problem. In this setting, we are given a point cloud sampled from the surface of an object, together with the corresponding normal vectors, and aim to recover the indicator function that implicitly represents the shape of that (three-dimensional) object. Assuming that we have access to the full normal vector field V : Ω ⊂ R^3 → R^3, our goal is to reconstruct/learn a function χ : Ω → {0, 1} such that

    ∇χ(x) = V(x)   for all x ∈ Ω,    (1.1)

where χ denotes the indicator function representing the object. Taking the divergence of Eq. (1.1), we obtain the Poisson equation

    ∆χ(x) = ∇ · V(x)   for all x ∈ Ω.    (1.2)

Notice that the indicator function is unique up to a constant. The uniqueness of a solution can be ensured by imposing sparse constraints. Here, the domain Ω can be chosen to be Lipschitz and large enough so that the object is fully contained in Ω, i.e., χ(x) = 0 on ∂Ω. Thus, we face a boundary value problem given by Eq. (1.2) supplemented with homogeneous Dirichlet boundary conditions. This is an ill-posed inverse problem, since for any given finite point cloud there exist multiple possible solutions. However, in the limit case (infinitely dense samples), the problem becomes well-posed, as shown in Sec. 3.1.
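For intuition, Eq. (1.2) can also be solved with a classical, non-learned spectral method. The minimal sketch below is not PoissonNet (which learns the solution operator with an FNO); it assumes a rasterized divergence field on a uniform grid and periodic boundary conditions instead of the homogeneous Dirichlet conditions used above, and all function names are illustrative.

```python
import numpy as np

def poisson_solve_fft(div_v, box_size=1.0):
    """Solve  Delta chi = div_v  on a periodic N^3 grid via the FFT.

    div_v : (N, N, N) array holding a rasterized approximation of div V
            from Eq. (1.2).
    Returns an approximate indicator field chi (defined up to a constant).
    """
    n = div_v.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)   # angular wave numbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2                             # |k|^2

    rhs_hat = np.fft.fftn(div_v)
    chi_hat = np.zeros_like(rhs_hat)
    nonzero = k2 > 0
    # In Fourier space the Laplacian acts as multiplication by -|k|^2.
    chi_hat[nonzero] = rhs_hat[nonzero] / (-k2[nonzero])
    chi_hat[~nonzero] = 0.0        # fix the free constant (zero-mean solution)
    return np.real(np.fft.ifftn(chi_hat))
```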
1.2. Fourier Neural Operator

Fourier Neural Operators (FNOs) were first introduced by Li et al. in [11] as a novel technique to learn the solution of PDEs in the form of operators. In general terms, the aim of an FNO is to learn a mapping N : A → U between two infinite-dimensional function spaces by using a finite collection of observations {(a_i, u_i)}_{i=1}^N, where N(a_i) = u_i for all i.

Definition 1.1 (Fourier Neural Operator, [11]). A Fourier Neural Operator is given by the iterative architecture

    a ↦ Pa =: v_0 ↦ · · · ↦ v_T ↦ Qv_T =: u,
    v_{t+1}(x) := σ( W v_t(x) + F^{-1}(R · (F v_t))(x) ),    (1.3)

for t = 0, ..., T−1, where F denotes the Fourier transform, R ∈ C^{k_max × d_v × d_v} represents the weight tensor, P : A → R^{d_v} is a lifting operator, Q : R^{d_v} → U is a projection operator, W : R^{d_v} → R^{d_v} is a linear transformation, and σ : R → R is a nonlinear activation function applied componentwise.

Fig. 1 shows the schematic representation of the architecture of a Fourier Neural Operator with four Fourier blocks. The aim of the FNO is to learn the weights R. One can interpret R as the Fourier representation of a convolutional kernel, where the formula F^{-1}(R(F ·)) in Eq. (1.3) follows from the Fourier convolution theorem. In order to implement this method, we need to discretize the layer in Eq. (1.3).

1.2.1 Discretization

We assume that our domain Ω ⊂ R^3, on which the input function a ∈ A is evaluated, is discretized into d_a ∈ N points, and that we wish to evaluate the output function at a higher resolution with d_u ∈ N points. Thus, the lifting and projection operators can be represented by the matrices P ∈ R^{d_v × d_a} and Q ∈ R^{d_u × d_v}, respectively. Furthermore, we have that v_t ∈ R^{d_a × d_v} and F(v_t) ∈ C^{d_a × d_v}. One may notice that Fourier layers are discretization-invariant: since the parameters are learned directly in Fourier space, resolving the functions in physical space simply amounts to projecting onto the basis of wave functions, which are well-defined everywhere on the space. This allows us to transfer among discretizations. If implemented with the standard FFT, the method is restricted to uniform meshes but remains resolution-invariant. This feature of the Fourier Neural Operator is crucial in the context of shape reconstruction, since 3D data requires a lot of memory.

In this work, we also assume the discretization to be uniform, so that F can be replaced by the Fast Fourier Transform (FFT). Furthermore, following [11], we truncate the higher Fourier modes of the FFT by keeping only the first k_max Fourier modes. The multiplication by the weight tensor R ∈ C^{k_max × d_v × d_v} is then given by

    (R · (F v_t))_{k,l} = Σ_{j=1}^{d_v} R_{k,l,j} (F v_t)_{k,j},   k = 1, ..., k_max,  l = 1, ..., d_v.    (1.4)
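A minimal PyTorch sketch of this discretized Fourier layer is given below. It is not the released PoissonNet implementation; the class and parameter names (SpectralConv3d, FourierBlock, modes) are illustrative, and a full implementation would also retain the mirrored negative-frequency corners of the spectrum, which this sketch omits for brevity. Because the weights live in Fourier space, the same module can be evaluated on a finer grid than it was trained on, which is the source of the one-shot super-resolution discussed later.

```python
import torch
import torch.nn as nn

class SpectralConv3d(nn.Module):
    """Truncated spectral multiplication R · (F v_t) from Eq. (1.4), in 3D."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes  # k_max kept along each axis
        scale = 1.0 / (channels * channels)
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, modes,
                                dtype=torch.cfloat))

    def forward(self, v):                      # v: (batch, channels, X, Y, Z)
        v_hat = torch.fft.rfftn(v, dim=(-3, -2, -1))          # F v_t
        out_hat = torch.zeros_like(v_hat)
        m = self.modes
        # Keep only the lowest m modes per axis and contract over input channels
        # (a full FNO also keeps the mirrored negative-frequency corners).
        out_hat[..., :m, :m, :m] = torch.einsum(
            "bixyz,ioxyz->boxyz", v_hat[..., :m, :m, :m], self.weight)
        return torch.fft.irfftn(out_hat, s=v.shape[-3:], dim=(-3, -2, -1))

class FourierBlock(nn.Module):
    """One Fourier block: sigma(W v + F^{-1}(R · F v)), cf. Eq. (1.3)."""
    def __init__(self, channels, modes):
        super().__init__()
        self.spectral = SpectralConv3d(channels, modes)
        self.w = nn.Conv3d(channels, channels, kernel_size=1)  # pointwise W
        self.act = nn.GELU()

    def forward(self, v):
        return self.act(self.w(v) + self.spectral(v))
```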
1.3. Main contribution

We present a new approach for solving the problem of 3D shape Poisson surface reconstruction. Our method significantly outperforms existing state-of-the-art methods, particularly in low-sampling scenarios where the input point cloud consists of approximately 3,000 to 25,000 points. In addition, in high-sampling regimes, it performs as well as the classical method, which achieves close-to-perfect reconstruction given sufficiently many points (around 250,000 points in our case).

Our approach is based on a deep neural network architecture formed by Fourier blocks and can thus be considered a Fourier Neural Operator, which is used to solve the underlying Poisson equation. It takes a directed point cloud as input and outputs a 3D reconstruction in the form of a voxel grid. We further threshold the output using Otsu's method [17] and finally perform marching cubes [13] to extract a mesh.

In addition to outperforming existing approaches, our method also inherits the one-shot super-resolution capabilities of Fourier Neural Operators. We are able to train the method on voxel grids of size 64 × 64 × 64 while evaluating it on grids of size 128 × 128 × 128 with similar performance, in contrast to classical CNNs, where the error grows exponentially with the resolution of the input.

Finally, we show that in the continuous setting, the Poisson surface reconstruction problem can be exactly solved using Fourier Neural Operators. This result was established by extending the universal approximation theorem proven in [10] to operators defined on a set of distributions. This demonstrates that Fourier Neural Operators can effectively approximate operators defined on distributional data, thereby explaining their high performance in Poisson surface reconstruction. This applies not only to the solution operator of the Poisson equation but also to any other operator defined on distributions.

1.4. Related work

The use of the Poisson equation to solve a 3D shape reconstruction problem was first introduced by Kazhdan et al. in 2006 [8]. Later, the same authors proposed an improvement by adding additional boundary conditions, named screened Poisson surface reconstruction (SPSR) [9].

More recently, Poisson surface reconstruction has been further explored, first in the form of a differentiable Poisson solver (Peng et al. [18]), which can be used to either optimize a Poisson surface reconstruction or learn it. In addition, Hou et al. introduced in 2022 an iterative method that can perform the reconstruction without the need for normals, the iterative Poisson surface reconstruction (iPSR) [7]. Their method initializes the normals of a given point cloud randomly and denoises them by iteratively applying Poisson surface reconstruction.

In the past years, Fourier Neural Operators (FNOs) have emerged as a potent tool for differentiably solving complex, high-dimensional PDEs [11], thanks to their training convenience and resolution-agnostic attributes. Here, we leverage these qualities to address the Poisson equation within the realm of surface reconstruction. Notably, this study pioneers the application of a Fourier Neural Operator to computer vision tasks, highlighting its prospective significance in this field.

2. Method

Although the backbone of our method is a neural network architecture based on Fourier Neural Operators, we also perform a set of pre-processing and post-processing steps that play an important role in the quality of our results. In this section, we explain in detail all the steps that our method follows to learn how to compute a watertight mesh from a given oriented point cloud.

2.1. Data Pre-Processing

As discussed in the previous section, the input of our method is a directed point cloud, consisting of point coordinates and normal directions associated with each point. The Fourier block (see Fig. 1) we use to construct our Fourier Neural Operator architecture, PoissonNet, assumes the input to be defined on a uniform grid. In order to transform our input point cloud into a uniform grid representation, we perform point rasterization [16] of the divergence field ∇ · V from Eq. (1.2). The point rasterization is done in two steps: we first rasterize the point locations into a voxel grid of a given resolution (e.g. 64 × 64 × 64) and perform a smoothing step using a Gaussian filter of standard deviation σ = 2. The second step consists of incorporating the normal information of every given point to compute the divergence field with finite differences. Finally, we perform a final smoothing step on the resulting divergence field using the filter from the first step. A sketch of this procedure is given below.
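The following minimal sketch illustrates this pre-processing under simplifying assumptions: points are scaled to the unit cube, normals are splatted to the nearest voxel, and scipy's Gaussian filter is used for the smoothing; the released code may differ in the exact splatting and smoothing details.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rasterize_divergence(points, normals, res=64, sigma=2.0):
    """Rasterize an oriented point cloud into a smoothed divergence field.

    points  : (N, 3) coordinates scaled to [0, 1)^3
    normals : (N, 3) unit normals
    Returns a (res, res, res) approximation of div V from Eq. (1.2).
    """
    idx = np.clip((points * res).astype(int), 0, res - 1)
    field = np.zeros((3, res, res, res))
    for axis in range(3):                       # splat each normal component
        np.add.at(field[axis], (idx[:, 0], idx[:, 1], idx[:, 2]), normals[:, axis])
    # Step 1: smooth the rasterized vector field.
    field = np.stack([gaussian_filter(f, sigma=sigma) for f in field])
    # Step 2: divergence via finite differences.
    h = 1.0 / res
    div = sum(np.gradient(field[axis], h, axis=axis) for axis in range(3))
    # Final smoothing of the divergence field with the same filter.
    return gaussian_filter(div, sigma=sigma)
```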
2.2. Preparation of the Training Data

To train our model, we utilize the ShapeNet dataset [3] (see https://shapenet.org/). Since the standard output of a Poisson surface reconstruction algorithm is a voxel grid of a given dimension, we use the voxelized meshes provided in ShapeNet to obtain clean ground truths. Additionally, we generate ground truth data for the oriented point cloud samples by performing marching cubes [13] on the voxel grids and sampling points on the resulting surface mesh using the Poisson disk sampling technique proposed by Yuksel in [23].

Since we extract the meshes using marching cubes, we have access to the normals of their faces. All sampled points lie on some triangle of the ground truth mesh. We calculate the normals of the sampled points as the average over the neighboring faces, weighted by the barycentric coordinates of the sample.

We trained our method using four different sampling scenarios, with 3,000, 10,000, 25,000, and 250,000 sampled points. These experiments demonstrate how our method outperforms other methods in the low sampling regime (3,000-25,000) while producing close-to-perfect reconstructions in the high sampling regime (250,000), comparable to other methods, including classical Poisson surface reconstruction [8].
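As an illustration, oriented point samples can be generated from a ground-truth voxel grid along these lines. Note that this sketch uses trimesh's uniform surface sampling and per-face normals as a simplification of the Poisson disk sampling of [23] and the barycentric normal weighting described above.

```python
import numpy as np
import trimesh
from skimage import measure

def sample_oriented_points(voxels, n_points=25_000):
    """Extract a mesh from a ground-truth voxel grid and sample oriented points."""
    verts, faces, _, _ = measure.marching_cubes(voxels.astype(float), level=0.5)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces)
    # Uniform surface sampling; returns the face index of every sample.
    points, face_idx = trimesh.sample.sample_surface(mesh, n_points)
    normals = mesh.face_normals[face_idx]
    return np.asarray(points), normals
```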

2.3. Architecture and Training Procedure

The full pipeline of our approach is composed of a Fourier Neural Operator with four 3D Fourier blocks (see Fig. 1) as well as a pre-processing and a post-processing step. Following each Fourier block, we incorporate GELU (Gaussian Error Linear Unit, [6]) non-linearities, and we add dropout layers after the first three Fourier blocks. Furthermore, we employ 20 frequency modes in the Fourier transform.

PoissonNet is trained on a representative subset of 10,000 randomly selected examples from the ShapeNet dataset, which accounts for a fifth of the full dataset. The preparation of the data is detailed in Sec. 2.2. The complete pipeline is visualized in Fig. 2.

To assess our model's performance, we use the mean squared error on the voxel grid as the loss function, consistent with prior works on 3D FNOs [11, 22]. Furthermore, we evaluate and test the network on two distinct sets, each containing 2,000 examples. Throughout training, we employ batches of 20 data points for approximately 10,000 training steps.
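The training setup described above could be assembled roughly as follows, reusing the FourierBlock sketched in Sec. 1.2.1. This is a hypothetical illustration only: the channel width, dropout rate, optimizer, and learning rate are assumptions not stated in the paper, and PoissonNetSketch is not the released model.

```python
import torch
import torch.nn as nn

class PoissonNetSketch(nn.Module):
    def __init__(self, width=32, modes=20):
        super().__init__()
        self.lift = nn.Conv3d(1, width, kernel_size=1)        # lifting operator P
        blocks = []
        for i in range(4):                                    # four Fourier blocks
            blocks.append(FourierBlock(width, modes))
            if i < 3:
                blocks.append(nn.Dropout(0.1))                # dropout after first three
        self.blocks = nn.Sequential(*blocks)
        self.project = nn.Conv3d(width, 1, kernel_size=1)     # projection operator Q

    def forward(self, divergence):      # (batch, 1, 64, 64, 64) divergence field
        return self.project(self.blocks(self.lift(divergence)))

model = PoissonNetSketch()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)   # optimizer is an assumption
loss_fn = nn.MSELoss()                                   # MSE on the output voxel grid

def train_step(div_batch, voxel_gt):    # batches of 20 examples
    optim.zero_grad()
    loss = loss_fn(model(div_batch), voxel_gt)
    loss.backward()
    optim.step()
    return loss.item()
```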
Figure 1. Fourier Neural Operator with four Fourier blocks, the backbone of PoissonNet, where F represents the Fourier transform.

Figure 2. PoissonNet, full pipeline.

2.4. Prediction Post-Processing

To refine PoissonNet's output, we employ a thresholding step as a post-processing technique. The threshold value is computed through Otsu's thresholding method [17]. Next, we generate a surface mesh through the marching cubes algorithm [13]. The complete pipeline is illustrated in Fig. 2.
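This post-processing step maps directly onto standard scikit-image calls; a minimal sketch follows. The variable `volume` is assumed to be the FNO's predicted voxel grid, and the voxel spacing argument is an illustrative assumption.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import marching_cubes

def extract_mesh(volume, voxel_size=1.0):
    """Threshold the predicted voxel grid with Otsu's method [17] and
    extract a triangle mesh with marching cubes [13]."""
    volume = np.asarray(volume, dtype=float)
    level = threshold_otsu(volume)                  # data-driven iso-level
    verts, faces, normals, _ = marching_cubes(
        volume, level=level, spacing=(voxel_size,) * 3)
    return verts, faces, normals
```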
Our method involves solving a Poisson equation using a Fourier Neural Operator. In the following section, we utilize the theoretical machinery of Fourier Neural Operators to demonstrate that the given equation is well-posed, and we establish that these operators can universally approximate its solutions. These essential results provide formal support for the effective utilization of FNOs in our task.

3. Theoretical Foundation

In this section, we provide a comprehensive theoretical foundation for our approach, supporting our numerical results. Specifically, we demonstrate that our (distributional) Poisson equation problem has a unique solution, and that the Fourier Neural Operator architecture we employ can universally approximate these solutions. Notably, our setting differs from the original paper [10], as our solution χ is non-smooth and our data ∇ · V is a distribution (since V is not differentiable). Despite these differences, we establish that our method's solution converges to the ground truth in the sampling limit, meaning that using more densely sampled point clouds results in improved reconstructions. This theoretical analysis is a crucial aspect of our work, as it justifies the observed effectiveness of Fourier Neural Operators (FNOs) in solving the Poisson equation within the context of shape reconstruction. By demonstrating the uniqueness of the solution and the universal approximation property of FNOs, we confirm the robustness and versatility of our approach.

3.1. Well-Posedness of the Poisson Equation

In the following, we prove the existence of very weak solutions to this problem. The problem has only very weak solutions since it involves (weak) derivatives of distributions. In order to find the correct framework, we note that ∆χ lies in the space (H_0^1(Ω) ∩ H^2(Ω))^*. This can be proved as follows: let v ∈ H_0^1(Ω) ∩ H^2(Ω). Then, employing Green's first identity and taking into account the zero boundary values, we obtain

    ∫_Ω ∆χ(x) v(x) dx = − ∫_Ω ∇χ(x) · ∇v(x) dx = ∫_Ω χ(x) ∆v(x) dx ≤ ∥χ∥_{L^2} ∥∆v∥_{L^2} = ∥χ∥_{L^2} |v|_{H_0^1 ∩ H^2},    (3.1)

where we used the fact that ∥∆ · ∥_{L^2} defines an equivalent norm on the space H_0^1 ∩ H^2 := H_0^1(Ω) ∩ H^2(Ω). Note that the derivatives appearing in Eq. (3.1) are weak derivatives; this applies to the rest of the derivatives in this section. Eq. (3.1) shows that ∆χ ∈ (H_0^1(Ω) ∩ H^2(Ω))^*. Since H_0^2 ⊂ H_0^1(Ω) ∩ H^2(Ω), this also implies ∆χ ∈ H^{-2}. Therefore, the solution space is naturally given by the Sobolev space H_0^1(Ω) ∩ H^2(Ω).

Next, we denote V := H_0^1(Ω) ∩ H^2(Ω) and H := {u ∈ L^2(Ω) : Γu = 0}, equipped with the norm ∥∆ · ∥_{L^2} and the standard L^2 norm, respectively. Here, Γ : L^2(Ω) → H^{-1/2}(∂Ω) denotes the unique continuous linear trace operator, see, e.g., [12]. We note that the spaces V, H and V^* form a Gelfand triplet, i.e., we have V ↪ H ↪ V^* with dense and compact embeddings.

Now we want to study the well-posedness, i.e., the existence of very weak solutions, of the boundary value problem

    ∆u = f in Ω,   Γu = 0.    (3.2)

The weak formulation of the boundary value problem in Eq. (1.2) reads as follows:

Problem: For f ∈ V^*, find u ∈ H such that

    ∫_Ω u(x) ∆v(x) dx = ⟨f, v⟩_{V^*×V}   for all v ∈ C_c^∞(Ω).    (3.3)

In fact, we are not asking for weak solutions but for very weak solutions, as no derivatives are applied to the solution in Eq. (3.3). In our next result, we show that the weak formulation is well-posed in the sense of Hadamard, i.e., there exists a unique solution and the solution depends continuously on the data.

Theorem 3.1. Let f ∈ V^*. Then, there exists a unique solution u ∈ H such that Eq. (3.3) holds. Furthermore, the solution operator S : V^* → H, which maps the right-hand side f to the unique solution u, is linear and bounded, hence a Lipschitz continuous operator.

Notice that in this case we assume that we have access to the full data f. If we replace f with a sampled version of it (e.g. a sampled point cloud from the normal vector field), the problem is, however, not well-posed.

3.1.1 Regularization of the Data

Since, in general, distributions are not functions, we need to approximate the distributions by functions in order to represent them on a digital computer. This is done by convolving the distributions with a smooth function which, by distribution theory, yields a smooth function. In our case, we choose a Gaussian filter

    g_σ(x) = (1 / (2πσ^2)) e^{−|x|_2^2 / (2σ)},   x ∈ R^n,    (3.4)

to regularize the distribution, where the standard deviation σ is the regularization parameter. From now on, let, for a family I of indices, K^* ⊂ V^* be a set of distributions given by

    K^* = {∆χ_{K_i} : K_i ⊊ Ω, i ∈ I},    (3.5)

where χ_K denotes the indicator function of the set K. It is readily seen that the set K^* of distributions has compact support. Then, the set of regularized distributions K_σ^* is defined by the functions

    f_σ(x) = (f ∗ g_σ)(x) = (∆χ_K ∗ g_σ)(x) = (χ_K ∗ ∆g_σ)(x)   for all x ∈ Ω and K ⊊ Ω.

We note that the regularization f_σ is defined on the entire Euclidean space R^n. However, in order to avoid technical difficulties, we restrict the regularization to the domain Ω. In the next result, we show that the solutions of the Poisson equation for the right-hand sides f_σ converge to the solution of the Poisson equation for f as the regularization parameter vanishes.

Lemma 3.2. Let K^* ⊂ V^* be a compact set defined by Eq. (3.5). Furthermore, denote by S the solution operator that maps f ∈ V^* = (H_0^1(Ω) ∩ H^2(Ω))^* to the unique solution of Eq. (3.2). Then, we have

    lim_{σ→0} sup_{f ∈ K^*} ∥Sf − Sf_σ∥_{L^2(Ω)} = 0.
3.2. Finite Differences

In practice, it is possible to further approximate the Laplacian of the Gaussian filter by approximating the partial derivatives with finite differences,

    ∂^2 g_σ(x) / ∂x_i^2 ≈ (g_σ(x + h e_i) − 2 g_σ(x) + g_σ(x − h e_i)) / h^2,    (3.6)

where e_i is the i-th canonical basis vector. Similarly, one can show that the solution corresponding to the step size h and the parameter σ converges to the solution of the unregularized Poisson equation (3.2) as σ, h → 0. In fact, defining the discrete Laplacian ∆_h in terms of Eq. (3.6), it is easy to see that f_{h,σ} = χ_K ∗ ∆_h g_σ converges uniformly to f_σ = χ_K ∗ ∆g_σ as h → 0, where ∆χ_K ∈ K^*. Then, one can prove the following result:

Lemma 3.3. Let K^* ⊂ V^* be a compact set defined by Eq. (3.5). Furthermore, denote by S the solution operator that maps f ∈ V^* = (H_0^1(Ω) ∩ H^2(Ω))^* to the unique solution of Eq. (3.2). Then, we have

    lim_{σ,h→0} sup_{f ∈ K^*} ∥Sf − Sf_{h,σ}∥_{L^2(Ω)} = 0.    (3.7)

The two-parameter limit in Eq. (3.7) exists due to the uniform convergence of f_{h,σ} to f_σ as h → 0.
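Concretely, the regularized and discretized right-hand side f_{h,σ} = χ_K ∗ ∆_h g_σ can be assembled on a voxel grid roughly as follows. This is a sketch under stated assumptions: it uses scipy's gaussian_filter for the convolution with g_σ and np.roll (periodic boundaries at the box edge) for the discrete Laplacian ∆_h; grid size and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def discrete_laplacian(field, h=1.0):
    """Seven-point stencil Delta_h built from the finite differences of Eq. (3.6)."""
    lap = -6.0 * field
    for axis in range(field.ndim):
        lap += np.roll(field, +1, axis=axis) + np.roll(field, -1, axis=axis)
    return lap / h**2

def regularized_rhs(indicator, sigma=2.0, h=1.0):
    """Approximate  f_{h,sigma} = chi_K * Delta_h g_sigma  on a voxel grid.

    indicator : (N, N, N) binary array, the indicator chi_K of the shape.
    Smoothing with g_sigma and then applying Delta_h is equivalent to
    convolving with Delta_h g_sigma, since both operations are linear
    and translation-invariant.
    """
    smoothed = gaussian_filter(indicator.astype(float), sigma=sigma)  # chi_K * g_sigma
    return discrete_laplacian(smoothed, h=h)
```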
3.2.1 Universal Approximation Theorem for Distributional Data

The following result from [10, Theorem 9] is a universal approximation theorem for Fourier Neural Operators. In contrast to classical approximation results, the Fourier Neural Operator aims to approximate infinite-dimensional operators.


Theorem 3.4. Let s, s' ≥ 0 and let Ω ⊂ R^d be a domain with Lipschitz boundary such that Ω ⊂ (0, 2π)^d. Let G : H^s(T^d; R^{d_a}) → H^{s'}(T^d; R^{d_u}) be a continuous operator, and let K ⊂ H^s(T^d; R^{d_a}) be a compact subset. Then, there exists a continuous linear operator E : H^s(Ω; R^{d_a}) → H^s(T^d; R^{d_a}) such that E(a)|_Ω = a for all a ∈ H^s(Ω; R^{d_a}). Furthermore, for any ε > 0, there exists an FNO N : H^s(T^d; R^{d_a}) → H^{s'}(T^d; R^{d_u}) of the form Eq. (1.3), continuous as an operator H^s → H^{s'}, such that

    sup_{a ∈ K} ∥G(a) − (N ◦ E)(a)∥_{H^{s'}} < ε.

Unfortunately, this approximation theorem cannot be applied to the solution operator of the Poisson equation for very weak solutions from Theorem 3.1, since this operator maps the space (H_0^1(Ω) ∩ H^2(Ω))^* ⊂ H^{-2}(Ω) to L^2(Ω). For the image space, we can choose s' = 0, but the result does not cover the case s < 0. Still, combining Theorem 3.4 and Lemma 3.2, we obtain:

Theorem 3.5. Let S : (H_0^1(Ω) ∩ H^2(Ω))^* → L^2(Ω) be the solution operator from Theorem 3.1. Let K ⊂ (H_0^1(Ω) ∩ H^2(Ω))^* be a compact subset as given by Lemma 3.2. Then, there exists a continuous linear operator E : L^2(Ω) → L^2(T^d) such that E(a)|_Ω = a for all a ∈ L^2(Ω). Furthermore, for any ε > 0, there exist σ > 0 and an FNO N : L^2(Ω) → L^2(Ω), continuous, such that

    sup_{f ∈ K} ∥S(f) − (N ◦ E ◦ C_σ)(f)∥_{L^2} < ε,    (3.8)

where C_σ denotes the convolution operator.

Remark 3.6. Even though we cannot feed the FNO with distributions, Theorem 3.5 implies that we can learn the solution operator S arbitrarily well by regularizing the input data with a sufficiently small regularization parameter. The empirical results show that choosing the regularization parameter sufficiently small is already enough to obtain very good results.

The proofs for this section's results are provided in the Appendix.

4. Experiments

In this section, we present the numerical results as well as some reconstruction examples. In addition, we also performed a number of ablation studies, which are presented in this section as well. All the code and necessary instructions to reproduce our experiments can be found at https://github.com/arsenal9971/PoissonNet.
4.1. Results

We conducted experiments with four different sampling rates: 3,000, 10,000, 25,000, and 250,000 sampled points (and normals). In addition, the performance of our method was compared on the same data with four other existing methods: Screened Poisson Surface Reconstruction (SPSR) [9], Iterative Poisson Surface Reconstruction (iPSR) [7], Point2Mesh [5], and Shape as Points (SAP) [18]. To assess the quality of the reconstructions, we utilized the metrics introduced by Peng et al. [18], which include the Chamfer-L1 loss (measuring point cloud distances), the F-Score (evaluating point estimation classification accuracy), and Normal Consistency (accounting for normal orientation). All experiments were performed on the ShapeNet Core dataset (https://shapenet.org/).

Our method was trained for approximately 1000 epochs. The numerical results for the different sampling rates are presented in Tables 1 to 8. Notably, PoissonNet consistently outperformed the alternatives at all sampling rates, with particularly significant improvements observed in the very low sampling regime (3,000 points). In contrast, other methods exhibited poorer performance in such scenarios.

While our approach may not be the fastest (classical SPSR holds that distinction), it maintains a constant running time with respect to the sampling rate. This is because most of the point cloud processing, such as rasterization, occurs during pre-processing at the training stage, enabling straightforward scalability.

Finally, Figs. 3 to 5 showcase three examples of reconstructions at different sampling rates, visually confirming the superior performance of our method over the others.

Figure 3. Result on a ShapeNet example with different sampling rates; the first row corresponds to 3,000 points.
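For reference, the point-based metrics above can be approximated with a few lines of numpy/scipy. This is a generic sketch, not the exact evaluation code; the distance threshold used for the F-Score here is an illustrative assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_l1(pred_pts, gt_pts):
    """Symmetric mean nearest-neighbour distance between two point sets."""
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]    # predicted -> ground truth
    d_gt = cKDTree(pred_pts).query(gt_pts)[0]      # ground truth -> predicted
    return 0.5 * (d_pred.mean() + d_gt.mean())

def f_score(pred_pts, gt_pts, threshold=0.01):
    """F-Score: harmonic mean of precision and recall at a distance threshold."""
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]
    d_gt = cKDTree(pred_pts).query(gt_pts)[0]
    precision = np.mean(d_pred < threshold)
    recall = np.mean(d_gt < threshold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```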

Methods          Chamfer-L1 (↓)   F-Score (↑)
SPSR [9]         0.430            0.552
iPSR [7]         0.554            0.488
Point2Mesh [5]   0.644            0.392
SAP [18]         0.492            0.516
PoissonNet       0.081            0.895

Table 1. Reconstruction metrics for 3,000 sampled points, averaged over all ShapeNet voxel models.

Methods          Norm. Consist. (↑)   Runtime (s)
SPSR [9]         0.525                0.31
iPSR [7]         0.465                325.25
Point2Mesh [5]   0.381                2215.64
SAP [18]         0.495                0.45
PoissonNet       0.875                0.52

Table 2. Reconstruction metrics for 3,000 sampled points, averaged over all ShapeNet voxel models.

Methods          Chamfer-L1 (↓)   F-Score (↑)
SPSR [9]         0.265            0.722
iPSR [7]         0.372            0.548
Point2Mesh [5]   0.485            0.522
SAP [18]         0.338            0.852
PoissonNet       0.042            0.954

Table 3. Reconstruction metrics for 10,000 sampled points, averaged over all ShapeNet voxel models.

Methods          Norm. Consist. (↑)   Runtime (s)
SPSR [9]         0.715                0.31
iPSR [7]         0.625                410.41
Point2Mesh [5]   0.578                3015.91
SAP [18]         0.681                0.45
PoissonNet       0.912                0.52

Table 4. Reconstruction metrics for 10,000 sampled points, averaged over all ShapeNet voxel models.

Methods          Chamfer-L1 (↓)   F-Score (↑)
SPSR [9]         0.230            0.751
iPSR [7]         0.348            0.583
Point2Mesh [5]   0.412            0.512
SAP [18]         0.315            0.88
PoissonNet       0.029            0.983

Table 5. Reconstruction metrics for 25,000 sampled points, averaged over all ShapeNet voxel models.

Methods          Norm. Consist. (↑)   Runtime (s)
SPSR [9]         0.753                0.31
iPSR [7]         0.688                452.33
Point2Mesh [5]   0.623                3890.82
SAP [18]         0.715                0.45
PoissonNet       0.976                0.52

Table 6. Reconstruction metrics for 25,000 sampled points, averaged over all ShapeNet voxel models.

Methods          Chamfer-L1 (↓)   F-Score (↑)
SPSR [9]         0.022            0.954
iPSR [7]         0.031            0.961
Point2Mesh [5]   0.649            0.614
SAP [18]         0.015            0.974
PoissonNet       0.008            0.99

Table 7. Reconstruction metrics for 250,000 sampled points, averaged over all ShapeNet voxel models.

Methods          Norm. Consist. (↑)   Runtime (s)
SPSR [9]         0.925                0.31
iPSR [7]         0.930                11015.84
Point2Mesh [5]   0.622                43850.11
SAP [18]         0.971                0.45
PoissonNet       0.991                0.52

Table 8. Reconstruction metrics for 250,000 sampled points, averaged over all ShapeNet voxel models.

Figure 4. Result on a ShapeNet example with different sampling rates; the first row corresponds to 3,000 points.

4.2. Ablation Studies

We conducted various ablation studies to optimize our pipeline and model setup. Initially, the input was computed without incorporating the normal information, relying solely on rasterization and smoothing of the point cloud, which led to unsatisfactory performance. This highlighted the crucial role of normal field information in our model. Subsequently, we experimented with different parameters for the smoothing Gaussian applied to the input, ultimately discovering that a standard deviation of σ = 2 yielded the best results across all sampling rates.

In pursuit of identifying the optimal number of Fourier modes, different values were tested, and we found that 20 Fourier modes achieved the best performance. Regarding the model architecture design, we trained models with varying numbers of layers. Notably, the configuration with four Fourier blocks and the GELU activation function demonstrated superior performance.

To explore alternatives, we also attempted to use a U-FNO [22], a U-Net architecture with Fourier blocks replacing conventional convolutional layers. Although we initially considered U-FNO as a substitute for PoissonNet, it faced limitations due to its model size, restricting the number of Fourier modes to a maximum of 10 and allowing training batches of at most 10 data points. As a result, the performance suffered. Furthermore, inspired by the success of U-Nets in denoising tasks, we tried to denoise PoissonNet's outputs by jointly training both architectures. However, the significantly higher computational costs associated with this approach prevented successful denoising.

Overall, our ablation studies confirmed the relative optimality of our pipeline with respect to hyperparameter changes, emphasizing the significance of incorporating normal field information and of the specific parameter settings for the smoothing Gaussian and the Fourier modes. The chosen architecture with four Fourier blocks and the GELU activation function exhibited the most favorable performance, contributing to the effectiveness of our approach.

Figure 5. Result on a ShapeNet example with different sampling rates; the first row corresponds to 3,000 points.

5. Conclusion

This paper introduces PoissonNet, a novel approach for 3D shape reconstruction that exhibits remarkable performance compared to state-of-the-art techniques, especially in low-sampling scenarios involving point clouds containing approximately 3,000 to 25,000 points. Additionally, in high-sampling regimes, PoissonNet achieves results comparable to classical methods, excelling at generating nearly perfect reconstructions when a substantial number of points (around 250,000) is available.

PoissonNet's strength lies in its deep neural network architecture, incorporating Fourier layers, known as Fourier Neural Operators. These layers effectively address the Poisson equation, generating a voxel grid as the 3D reconstruction output. The method also employs Otsu's thresholding on the reconstruction and utilizes marching cubes to extract a mesh representation.

An essential advantage of PoissonNet is its ability to leverage the one-shot super-resolution capabilities inherited from Fourier Neural Operators, allowing for training on low-resolution voxel grids (64 × 64 × 64) while achieving comparable performance during evaluation on higher-resolution grids (128 × 128 × 128). This stands in contrast to classical convolutional neural networks, where errors escalate exponentially with the input dimension.

6. Outlook

The research presented in this paper lays the groundwork for various promising avenues of future exploration. While our approach offers evident advantages, such as resolution agnosticism, a key drawback, common to similar methods for solving partial differential equations, lies in the need to determine the resolution of the training data beforehand. This requirement results in reducing all training data to the predetermined resolution, causing the loss of fine details. However, if computational resources permit, one can consider training at a higher resolution using larger GPUs.

Future investigations can concentrate on optimizing and refining PoissonNet to enhance its performance and generalization. Exploring alternative architectures, incorporating additional regularization techniques, or investigating different loss functions could address specific challenges in shape reconstruction. Particularly intriguing is extending our approach beyond regular grid representations by employing graph neural operators to handle highly detailed meshes.

Finally, we are keen on investigating the use of Fourier Neural Operators to enhance radiance field approximations. For instance, we aim to replace the traditional MLP in NeRF [14] with an FNO. This shift to learning weights in the Fourier domain has the potential to enhance NeRF's positional encoding, boosting its capability to learn high-frequency features.

References

[1] Matthew Berger, Andrea Tagliasacchi, Lee M. Seversky, Pierre Alliez, Gael Guennebaud, Joshua A. Levine, Andrei Sharf, and Claudio T. Silva. A survey of surface reconstruction from point clouds. In Computer Graphics Forum, volume 36, pages 301–329. Wiley Online Library, 2017.
[2] Alexandre Boulch and Renaud Marlet. POCO: Point convolution for surface reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6302–6314, 2022.
[3] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3D model repository. Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago, 2015.
[4] Julian Chibane, Thiemo Alldieck, and Gerard Pons-Moll. Implicit functions in feature space for 3D shape reconstruction and completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6970–6981, 2020.
[5] Rana Hanocka, Gal Metzer, Raja Giryes, and Daniel Cohen-Or. Point2Mesh: A self-prior for deformable meshes. ACM Transactions on Graphics, 39(4), 2020.
[6] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.
[7] Fei Hou, Chiyu Wang, Wencheng Wang, Hong Qin, Chen Qian, and Ying He. Iterative Poisson surface reconstruction (iPSR) for unoriented points. ACM Transactions on Graphics, 41(4), 2022.
[8] Michael Kazhdan, Matthew Bolitho, and Hugues Hoppe. Poisson surface reconstruction. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing, SGP '06, pages 61–70, Goslar, DEU, 2006. Eurographics Association.
[9] Michael Kazhdan and Hugues Hoppe. Screened Poisson surface reconstruction. ACM Transactions on Graphics, 32(3), 2013.
[10] Nikola Kovachki, Samuel Lanthaler, and Siddhartha Mishra. On universal approximation and error bounds for Fourier neural operators. Journal of Machine Learning Research, 22, Paper No. 290, 2021.
[11] Zongyi Li, Nikola Borislavov Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. In International Conference on Learning Representations, 2021.
[12] J.-L. Lions and E. Magenes. Non-Homogeneous Boundary Value Problems and Applications, Vol. I. Die Grundlehren der mathematischen Wissenschaften, Band 181. Springer-Verlag, New York-Heidelberg, 1972. Translated from the French by P. Kenneth.
[13] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. SIGGRAPH Computer Graphics, 21(4):163–169, 1987.
[14] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
[15] Andrew-Hieu Nguyen and Zhaoyang Wang. Accurate 3D shape reconstruction from single structured-light image via fringe-to-fringe network. Photonics, 8:459, 2021.
[16] A. Michael Noll. Scanned-display computer graphics. Communications of the ACM, 14:143–150, 1971.
[17] Nobuyuki Otsu. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1):62–66, 1979.
[18] Songyou Peng, Chiyu Jiang, Yiyi Liao, Michael Niemeyer, Marc Pollefeys, and Andreas Geiger. Shape as Points: A differentiable Poisson solver. In Advances in Neural Information Processing Systems, volume 34, pages 13032–13044, 2021.
[19] Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III, pages 523–540. Springer, 2020.
[20] Andrei Sharf, Thomas Lewiner, Ariel Shamir, Leif Kobbelt, and Daniel Cohen-Or. Competing fronts for coarse-to-fine surface reconstruction. Computer Graphics Forum, 25:389–398, 2006.
[21] Raphael Sulzer, Loic Landrieu, Renaud Marlet, and Bruno Vallet. A survey and benchmark of automatic surface reconstruction from point clouds, 2023.
[22] Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M. Benson. U-FNO — an enhanced Fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources, 163:104180, 2022.
[23] Cem Yuksel. Sample elimination for generating Poisson disk sample sets. Computer Graphics Forum, 34, 2015.

PoissonNet: Resolution-Agnostic 3D Shape Reconstruction using Fourier Neural Operators
Supplementary Material

7. Appendix

7.1. Proofs

In this section, we provide the proofs of the theoretical results.

Proof of Theorem 3.1. Let f ∈ V^*. Then, using the dense embedding given by the Gelfand triplet (see Sec. 3.1), we can approximate f by a sequence of functions f_ε ∈ L^2(Ω) such that

    ∥f − f_ε∥_{V^*} < ε.    (7.1)

Since C_0^∞(Ω) is a dense subset of L^2(Ω), the functions f_ε can be chosen arbitrarily smooth. Then, we consider the boundary value problem

    ∆u_ε = f_ε in Ω,   u_ε = 0 on ∂Ω.    (7.2)

By the Lax-Milgram theorem, there exists a unique weak solution u_ε ∈ H_0^1(Ω) such that

    − ∫_Ω ∇u_ε(x) · ∇v(x) dx = ∫_Ω f_ε(x) v(x) dx    (7.3)

for all v ∈ H_0^1. By a regularity result for strongly elliptic operators (Theorem 9.15 in Gilbarg and Trudinger), the solution is even in H_0^1(Ω) ∩ H^3(Ω). Restricting the set of test functions to functions in V = H_0^1(Ω) ∩ H^2(Ω), we see that u_ε is also a solution of Eq. (3.3), i.e., a very weak solution of Eq. (7.2). Since the Laplace operator is, by Lax-Milgram, an isometric isomorphism from H_0^1(Ω) ∩ H^2(Ω) to L^2(Ω), we can define the inverse (∆)^{-1} : L^2(Ω) → H_0^1(Ω) ∩ H^2(Ω). Then, by the definition of a very weak solution, there holds

    ∫_Ω u_ε(x) v(x) dx = ⟨f_ε, (∆)^{-1} v⟩_{V^*×V}

for all v ∈ L^2(Ω). Choosing v = u_ε, so that ∫_Ω u_ε^2(x) dx = ⟨f_ε, (∆)^{-1} u_ε⟩_{V^*×V}, we obtain

    ∫_Ω u_ε^2(x) dx ≤ ∥f_ε∥_{V^*} ∥(∆)^{-1} u_ε∥_V = ∥f_ε∥_{V^*} ∥∆(∆)^{-1} u_ε∥_{L^2} = ∥f_ε∥_{V^*} ∥u_ε∥_{L^2}.

Hence, we obtain

    ∥u_ε∥_{L^2} ≤ ∥f_ε∥_{V^*}.

It is easily seen that the solution operator is linear, and by linearity of the solution operator we obtain

    ∥u_ε − u_{ε'}∥_{L^2} ≤ ∥f_ε − f_{ε'}∥_{V^*}.    (7.4)

Since f_ε is convergent, and thus a Cauchy sequence in V^*, Eq. (7.4) implies that u_ε is also a Cauchy sequence in L^2(Ω) and therefore convergent, with a unique limit denoted by u ∈ L^2(Ω). Passing to the limit in the weak formulation Eq. (3.3) as ε → 0, we obtain that u is a very weak solution of the Poisson equation. By the continuity of the trace operator, we obtain

    Γu = lim_{ε→0} Γu_ε = Γ lim_{ε→0} u_ε = 0,

thus u satisfies the boundary condition in the sense of the trace operator, which shows that u ∈ H. With the same arguments, one can then show

    ∥Sf∥_H = ∥u∥_{L^2} ≤ ∥f∥_{V^*},

i.e., the continuity of the solution operator. This completes the proof.

Proof of Lemma 3.2. Since S is Lipschitz continuous, it remains to show that

    lim_{σ→0} sup_{f ∈ K^*} ∥f − f_σ∥_{V^*} = 0.

In order to prove this convergence, we note that, by continuity of the solution operator, the set K := S(K^*) ⊂ L_0^2(Ω) is compact and consists of characteristic functions of compact sets. The compactness of K implies that it is totally bounded: for every η > 0, there exist a number N_η ∈ N and compact sets K_i ⊊ Ω, i = 1, ..., N_η, such that

    K ⊂ ∪_{i=1}^{N_η} B_{L^2}(χ_{K_i}, η),

where B_{L^2}(χ_{K_i}, η) denotes the ball in L^2(Ω) with radius η and center χ_{K_i}. Let f ∈ K^* and v ∈ V with ∥v∥_V = 1. Furthermore, denote B_i := B_{L^2}(χ_{K_i}, η) and I_η := {1, ..., N_η}. Then, it holds

    sup_{f ∈ K^*} ⟨f − f_σ, v⟩_{V^*×V}
      = sup_{f ∈ K^*} ∫_Ω (χ_K(x) − (χ_K ∗ g_σ)(x)) ∆v(x) dx
      = sup_{χ_K ∈ K} ∫_Ω (χ_K(x) − ∫_{R^3} χ_K(x − y) g_σ(y) dy) ∆v(x) dx
      = sup_{χ_K ∈ K} ∫_Ω ∫_{R^3} (χ_K(x) − χ_K(x − y)) g_σ(y) dy ∆v(x) dx
      = sup_{χ_K ∈ B_i, i ∈ I_η} ∫_Ω ∫_{R^3} (χ_K(x) − χ_K(x − y)) g_σ(y) dy ∆v(x) dx
      = sup_{χ_K ∈ B_i, i ∈ I_η} ∫_Ω ∫_{R^3} (χ_K(x) − χ_{K_i}(x)) g_σ(y) dy ∆v(x) dx
        + sup_{i ∈ I_η} ∫_Ω ∫_{R^3} (χ_{K_i}(x) − χ_{K_i}(x − y)) g_σ(y) dy ∆v(x) dx
        + sup_{χ_K ∈ B_i, i ∈ I_η} ∫_Ω ∫_{R^3} (χ_{K_i}(y) − χ_K(y)) g_σ(x − y) dy ∆v(x) dx
      = sup_{χ_K ∈ B_i, i ∈ I_η} ∫_Ω (χ_K(x) − χ_{K_i}(x)) ∆v(x) dx
        + sup_{i ∈ I_η} ∫_Ω ∫_{R^3} (χ_{K_i}(x) − χ_{K_i}(x − y)) g_σ(y) dy ∆v(x) dx
        + sup_{χ_K ∈ B_i, i ∈ I_η} ∫_Ω ∫_{R^3} (χ_{K_i}(y) − χ_K(y)) g_σ(x − y) dy ∆v(x) dx
      =: I_1 + I_2 + I_3,

where we also performed a change of variables y ↦ x − y in the third integral. Now, we estimate each term separately. For I_1, we observe that

    |I_1| = |sup_{χ_K ∈ B_i, i ∈ I_η} ∫_Ω (χ_K(x) − χ_{K_i}(x)) ∆v(x) dx|
          ≤ sup_{χ_K ∈ B_i, i ∈ I_η} ∥χ_K − χ_{K_i}∥_{L^2(Ω)} ∥v∥_V
          ≤ sup_{χ_K ∈ B_i, i ∈ I_η} ∥χ_K − χ_{K_i}∥_{L^2(Ω)}
          < η   for all σ > 0.

For the second term I_2, we obtain with Hölder's inequality

    |I_2|^2 = |sup_{i ∈ I_η} ∫_Ω ∫_{R^3} (χ_{K_i}(x) − χ_{K_i}(x − y)) g_σ(y) dy ∆v(x) dx|^2
            ≤ sup_{i ∈ I_η} ∫_Ω ( ∫_{R^3} (χ_{K_i}(x) − χ_{K_i}(x − y)) g_σ(y) dy )^2 dx
            = sup_{i ∈ I_η} ∫_Ω ( ∫_{R^3} (χ_{K_i}(x) − χ_{K_i}(x − √(2σ) z)) e^{−|z|^2} dz )^2 dx,

where we again made use of the fact that ∥v∥_V = 1. Then, by the dominated convergence theorem, we obtain

    lim_{σ→0} |I_2|^2 ≤ lim_{σ→0} ∫_Ω ( ∫_{R^3} (χ_{K_i}(x) − χ_{K_i}(x − √(2σ) z)) e^{−|z|^2} dz )^2 dx
                     = ∫_Ω lim_{σ→0} ( ∫_{R^3} (χ_{K_i}(x) − χ_{K_i}(x − √(2σ) z)) e^{−|z|^2} dz )^2 dx
                     = 0.

For the last term I_3, we make use of the following general estimate: denote by h_σ the regularization of a function h ∈ L^2(Ω). Setting h to zero outside Ω, we have with Fubini's theorem

    ∫_Ω |h_σ(x)|^2 dx ≤ ∫_{R^3} |h_σ(x)|^2 dx
      = ∫_{R^3} ( ∫_{R^3} |h(x − y)| g_σ(y) dy )^2 dx
      = ∫_{R^3} ( ∫_{R^3} |h(x − y)| g_σ^{1/2}(y) g_σ^{1/2}(y) dy )^2 dx
      ≤ ∫_{R^3} ∫_{R^3} |h(x − y)|^2 g_σ(y) dy dx
      = ∫_{R^3} ∫_{R^3} |h(y)|^2 g_σ(x − y) dy dx
      = ∫_{R^3} ∫_{R^3} |h(y)|^2 g_σ(x − y) dx dy
      ≤ ∫_{R^3} |h(x)|^2 dx.    (7.5)

We note that inequality (7.5) shows that the convolution operator is a linear bounded operator from L^2 to L^2. Thus, it maps compact sets to compact sets for each σ > 0. Employing inequality (7.5) and the fact that ∥v∥_V = 1, we obtain

again with Hölder's inequality

    |I_3|^2 = |sup_{χ_K ∈ B_i, i ∈ I_η} ∫_Ω ∫_{R^3} (χ_{K_i}(y) − χ_K(y)) g_σ(x − y) dy ∆v(x) dx|^2
            ≤ sup_{χ_K ∈ B_i, i ∈ I_η} ∫_Ω ( ∫_{R^3} (χ_{K_i}(y) − χ_K(y)) g_σ(x − y) dy )^2 dx
            ≤ sup_{χ_K ∈ B_i, i ∈ I_η} ∥χ_{K_i} − χ_K∥_{L^2(Ω)}^2
            < η^2.

Altogether, we obtain

    lim_{σ→0} sup_{f ∈ K^*} ∥Sf − Sf_σ∥_{L^2(Ω)} < 2η.

Since η > 0 can be chosen arbitrarily small, this shows the desired result.

Proof of Theorem 3.5. Let σ > 0 be sufficiently small such that, by Lemma 3.2, there holds

    sup_{f ∈ K} ∥S(f) − S(f_σ)∥_{L^2(Ω)} < ε/2.    (7.7)

Next, we recall that the set of regularized distributions K_σ consists of C^∞ functions. Then, by Theorem 3.1, the solution operator S maps K_σ ⊂ H^2(Ω) to L_0^2(Ω). Furthermore, the solution operator S maps the compact set K to the compact set S(K) ⊂ L^2(Ω), which contains a family of characteristic functions. Since K_σ ⊂ L^2(Ω) is given by characteristic functions convolved with ∆g_σ, it can be shown that it is also compact in L^2(Ω) for fixed σ > 0. Next, by Theorem 3.4, there exists a continuous linear operator E : L^2(Ω) → L^2(T^d) such that E(a)|_Ω = a for all a ∈ H^2(Ω). Furthermore, for any ε > 0, there exist σ > 0 and an FNO N : H^2(T^d) → L^2(T^d) of the form Eq. (1.3), continuous as an operator H^2 → L^2, such that

    sup_{f ∈ K_σ} ∥S(f) − (N ◦ E)(f)∥_{L^2} < ε/2.    (7.8)

Combining the estimates Eq. (7.7) and Eq. (7.8), we obtain

    sup_{f ∈ K} ∥S(f) − (N ◦ E)(f_σ)∥_{L^2(Ω)}
      ≤ sup_{f ∈ K} ∥S(f) − S(f_σ)∥_{L^2(Ω)} + sup_{f ∈ K} ∥S(f_σ) − (N ◦ E)(f_σ)∥_{L^2(Ω)}
      < ε/2 + ε/2 = ε.
