ReportMinimalGraph

This document discusses the mathematical formulation and numerical methods for solving minimal surface problems, which are prevalent in various fields of mathematics and physics. It presents two primary formulations of the problem (graph and parametric) and explores finite element methods (FEM) for their numerical approximation, focusing on techniques such as Picard iteration and Newton's method. The paper outlines the process of discretizing the domain, assembling matrices, and applying boundary conditions to find solutions to the nonlinear partial differential equations involved.

M2 Mathématiques de la Modélisation

Finite Element Methods

A (convex) nonlinear elliptic problem

Author:
Lucio Antonio Rosi

February 21, 2025


1 Introduction
Minimal surface problems arise in many areas of mathematics and physics, where one seeks to determine a
surface that minimizes the area under given constraints.
From a mathematical standpoint, minimal surfaces have been extensively studied in the calculus of variations,
where one seeks to find a function that minimizes a certain energy functional.
There are two primary ways to formulate the minimal surface problem:

• The graph formulation: In this case, the minimal surface is described as the graph of a function
𝑓 : Ω → R over a domain Ω ⊂ R2 . The goal is to find 𝑓 that minimizes the surface area functional
under certain given boundary conditions.

• The parametric formulation: The minimal surface is given as a parametric mapping


𝑢 : 𝐵 → R3 , where 𝐵 is a reference domain (typically a disk or another simple shape). The objective
is to find 𝑢 such that the corresponding surface minimizes the area while mapping the boundary of 𝐵
to a prescribed Jordan curve in R3 .

Both formulations lead to nonlinear partial differential equations (PDEs), which generally do not have closed-
form solutions and must be solved numerically. An approach to solving such problems is the finite element
method (FEM), which discretizes the domain and converts the continuous problem into a solvable algebraic
system. In this work, some FEM-based techniques to approximate minimal surfaces in both the graph and
parametric settings will be explored.
This article is structured as follows: first, the mathematical formulations of the minimal surface problem in
the graph setting are presented using the FEM-based discretization techniques, and later the problem in the
parametric settings is discussed using a solver given the unitary disk as domain. At last, some numerical
examples and remarks for further possible implementations are discussed.


2 Minimal Surface as a Graph


Given a domain Ω ⊂ R2 and a function 𝑢 : Ω → R, the surface defined as 𝑧 = 𝑢(𝑥, 𝑦) has area given by:
A(u) = ∫_Ω √(1 + |∇u|²) dx (1)

The graph of u is said to be a minimal surface / minimal graph if:

𝐴(𝑢) ≤ 𝐴( 𝑓 ) ∀𝑓 : Ω → R s.t. 𝑢 = 𝑓 on 𝜕Ω (2)

Minimizing this functional leads to the Euler-Lagrange equation:


F(u) = ∇ · ( ∇u / √(1 + |∇u|²) ) = 0  on Ω (3)

This equation can be written as:

∇ · (a(u)∇u) = 0  on Ω (4)

a(u) = 1 / √(1 + |∇u|²) (5)

This equivalent formulation will be useful when talking about the methods used to solve this problem.

2.1 Finite Element Approximation


To approximate solutions, the domain Ω is discretized using a triangular mesh τ_h, and a finite element space is defined using piecewise linear (P1) basis functions. In this paper, V_h = { f ∈ C⁰(Ω) | f|_{T_h} ∈ P1(T_h) ∀T_h ∈ τ_h, f = 0 on ∂Ω }, which also serves as the space of test functions.
The following notation will be assumed throughout the paper:

• N: Number of nodes in the mesh.

• Nb : Number of boundary nodes in the mesh.

Thus the approximate solution can be written as:


u_h = Σ_{j=1}^{N} u_j φ_j (6)

Multiplying Equation 3 by a test function, integrating over Ω, and then integrating by parts, it is possible to obtain the following equation:

∫_Ω (∇u_h · ∇v_h) / √(1 + |∇u_h|²) dx = 0  ∀v_h ∈ V_h. (7)

This equation can be solved in many different ways, but in this paper the discussion will focus on two of them in particular: the Picard method and Newton's method.


2.2 Picard Iteration Method


The Picard iteration method [PSL18] is an approach for solving nonlinear problems by linearizing the
equation using the solution from the previous iteration in the nonlinear term. The iterative step is given by:

∫_Ω a(u_h^k) ∇u_h^{k+1} · ∇v_h dx = 0  ∀v_h ∈ V_h. (8)

which, in matrix formulation, becomes:


S(u_h^k) u_h^{k+1} = b (9)
where b is the vector that takes into account the boundary conditions. The iterations continue until the norm
of the difference between two consecutive iterations is small enough.
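The loop just described can be sketched in a few lines of Python. This is a minimal sketch, not the report's actual code: `assemble_S` is a hypothetical helper assumed to return the a(u)-weighted stiffness matrix of Equation 9, and `b` is the boundary-aware right-hand side.

```python
import numpy as np
from scipy.sparse.linalg import cg

def picard_solve(u0, assemble_S, b, tol=1e-12, max_iter=500):
    """Picard iteration sketch: freeze a(u) at the previous iterate,
    solve the resulting linear SPD system, and repeat until two
    consecutive iterates are close enough."""
    u = u0.copy()
    for _ in range(max_iter):
        S = assemble_S(u)              # S(u^k): reassembled at every iteration
        u_new, info = cg(S, b, x0=u)   # S is SPD, so conjugate gradient applies
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u
```

Passing the identity matrix as `assemble_S` reduces the loop to a single linear solve, which is a convenient sanity check.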

2.2.1 Assembly of the Matrix


To understand the numerical implementation, Eq. (8) is rewritten explicitly for assembling the stiffness
matrix and right-hand side vector. Recalling the finite element discretization of 𝑢 ℎ :

∫_Ω a(u_h^k) ∇u_h^{k+1} · ∇φ_i dx = 0,  ∀i ∈ {1, . . . , N}. (10)

Expanding 𝑢 ℎ𝑘+1 in the finite element basis:

u_h^{k+1} = Σ_{j=1}^{N} u_j^{k+1} φ_j, (11)

substituting into the integral:


Σ_{j=1}^{N} u_j^{k+1} ∫_Ω a(u_h^k) ∇φ_j · ∇φ_i dx = 0,  ∀i ∈ {1, . . . , N}. (12)

Since a(u_h^k) is known at iteration k + 1, it remains inside the integral as a known, spatially varying coefficient, implying that a modified stiffness matrix S must be constructed at each iteration.
To efficiently assemble S, the fact that P1 elements are being used can be exploited: the gradient ∇u_h is piecewise constant over each element (triangle). Thus, the gradients are evaluated over each triangle rather than at nodes, introducing the notation ∇u_t, where u_t represents u_h on a specific element T_h.
By using the Lagrange basis functions, the gradient in each triangle can be approximated as:
∇u_t = Σ_{j ∈ T_h} u_j ∇φ_j,  ∀T_h ∈ τ_h. (13)

Taking the squared norm:

|∇u_t|² = Σ_{i,j ∈ T_h} u_i u_j ∇φ_i · ∇φ_j,  ∀T_h ∈ τ_h. (14)

From here, computing 𝑎(𝑢 𝑡 ) is straightforward. Consequently, when assembling S, 𝑎(𝑢 𝑡 ) has to be accounted
for on each element.
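The per-element evaluation of ∇u_t and a(u_t) in Equations 13–14 can be sketched as follows. This is a simplified assembly under assumed mesh arrays `nodes` (vertex coordinates) and `tris` (triangle connectivity); boundary handling is deliberately omitted here.

```python
import numpy as np
from scipy.sparse import lil_matrix

def assemble_weighted_stiffness(nodes, tris, u):
    """Assemble S with local contributions a(u_t) * area_T * (grad phi_i . grad phi_j),
    where a(u_t) = 1/sqrt(1 + |grad u_t|^2) is constant on each triangle."""
    N = nodes.shape[0]
    S = lil_matrix((N, N))
    ref = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])  # reference P1 gradients
    for tri in tris:
        p = nodes[tri]                                   # 3x2 vertex coordinates
        J = np.column_stack([p[1] - p[0], p[2] - p[0]])  # Jacobian of the affine map
        area = 0.5 * abs(np.linalg.det(J))
        grads = ref @ np.linalg.inv(J)                   # rows: grad(phi_i) on this triangle
        g = u[tri] @ grads                               # piecewise-constant grad u_t, Eq. (13)
        a_t = 1.0 / np.sqrt(1.0 + g @ g)                 # coefficient a(u_t), Eq. (5)
        S[np.ix_(tri, tri)] += a_t * area * (grads @ grads.T)
    return S.tocsr()
```

On a single unit right triangle with u = 0, this reproduces the standard P1 stiffness matrix, since a(u_t) = 1.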


2.2.2 Boundary Conditions


After assembling the system matrix, boundary conditions must be applied.
Right-Hand Side Modification: The boundary conditions are enforced at the beginning of the algorithm by setting:

b(i) = M · f_i,  ∀i ∈ N_b, (15)

where N_b represents the set of boundary nodes, and M is a large penalty value (e.g., 10³⁰) used to enforce the prescribed boundary values.
Stiffness Matrix Modification: At each iteration, the boundary conditions must also be enforced in the modified matrix S:

S(i, i) = M,  ∀i ∈ N_b. (16)
This ensures that the boundary values remain fixed throughout the iterations.
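A minimal sketch of the penalty enforcement of Equations 15–16, assuming a SciPy sparse matrix S and hypothetical arrays of boundary indices and boundary values:

```python
import numpy as np

def apply_penalty_bc(S, b, boundary_idx, f_vals, M=1e30):
    """Large-penalty trick: after setting S[i,i] = M and b[i] = M*f_i, the
    i-th equation reads M*u_i + (small terms) = M*f_i, forcing u_i ~ f_i."""
    S = S.tolil()                      # lil format allows cheap diagonal edits
    for i, f in zip(boundary_idx, f_vals):
        S[i, i] = M
        b[i] = M * f
    return S.tocsr(), b
```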

2.2.3 Linear system


To actually solve the linear system, an iterative method is used. Since S is symmetric positive definite (SPD), one of the best choices is the conjugate gradient method. Thus, at each iteration the system is solved and the solution u_h^k is updated.


2.3 Newton’s Method for Minimal Surfaces


The second method proposed for solving the minimization problem is Newton's method. It requires the computation of the Jacobian of F (defined in Equation 3), and the general update step follows:

u_h^{k+1} = u_h^k − α J(u_h^k)⁻¹ F(u_h^k) (17)

where J(u^k) is the Jacobian of the functional F evaluated at iteration k, and α ∈ (0, 1) is a relaxation parameter that improves the stability of the method. There are various ways to choose α, but in this project the Armijo rule [Wik24] is used. Further studies could adopt a better line-search algorithm, or tune the parameters used for the Armijo rule, for faster convergence.
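One common backtracking variant of the Armijo rule, applied here to the nonlinear residual norm, can be sketched as follows; the exact rule and parameter values used in the project may differ.

```python
import numpy as np

def armijo_step(F, u, w, alpha0=1.0, c=1e-4, rho=0.5, alpha_min=0.25):
    """Shrink alpha until the step gives sufficient decrease of ||F||:
    ||F(u + alpha*w)|| <= (1 - c*alpha) * ||F(u)||, or alpha hits alpha_min."""
    r0 = np.linalg.norm(F(u))
    alpha = alpha0
    while alpha > alpha_min:
        if np.linalg.norm(F(u + alpha * w)) <= (1.0 - c * alpha) * r0:
            return alpha
        alpha *= rho
    return alpha_min
```

For an exact Newton direction the full step alpha = 1 is accepted immediately, while a non-descent direction falls back to alpha_min, which matches the robustness-versus-speed trade-off observed later in the experiments.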

2.3.1 Computing the Jacobian


To compute the Jacobian, the Fréchet derivative of the functional F is computed. To do so, it is possible to start from Equation 3 and only afterwards pass to the weak form.
δF(u)/δu (w) = d/dε F(u + εw)|_{ε=0} = d/dε [ −∇ · ( ∇(u + εw) / (1 + |∇(u + εw)|²)^{1/2} ) ]|_{ε=0} =

= −∇ · ( ∇w / (1 + |∇(u + εw)|²)^{1/2} − (1/2)(2∇u · ∇w + 2ε|∇w|²) ∇(u + εw) / (1 + |∇(u + εw)|²)^{3/2} )|_{ε=0} =

= −∇ · ( ∇w / (1 + |∇u|²)^{1/2} − (∇u · ∇w) ∇u / (1 + |∇u|²)^{3/2} ) =

= −∇ · ( ∇w / (1 + |∇u|²)^{1/2} − A(u)∇w / (1 + |∇u|²)^{3/2} )
where 𝐴(𝑢) = ∇𝑢 ⊗ ∇𝑢 is the dyadic product of ∇𝑢 with itself. Passing to the weak formulation, the Fréchet
derivative of the minimal surface operator assumes the form:


⟨ δF(u)/δu (w), v ⟩ = ∫_Ω ( ∇w / (1 + |∇u|²)^{1/2} − A(u)∇w / (1 + |∇u|²)^{3/2} ) · ∇v dx  ∀v ∈ H¹₀(Ω)
while the operator itself, written in weak form, is the same as in Equation 7, but for the moment still in the continuous form:

⟨F(u), v⟩ = ∫_Ω (∇u · ∇v) / (1 + |∇u|²)^{1/2} dx  ∀v ∈ H¹₀(Ω)

2.3.2 Boundary conditions


After having defined the matrix J, it is possible to solve the linear system in Equation 17, that is:

J(u_h^k) w = −F(u_h^k) (18)


so that u_h^{k+1} = u_h^k + αw. The boundary conditions have to be taken into consideration: supposing to start with a u_h^0 s.t. u_h^0 = f on ∂Ω, the condition w = 0 on ∂Ω has to be imposed. Thus, J and F can be modified in a way similar to Section 2.2.2, and an initial condition consistent with the considerations above has to be imposed. An example could be to set:

u_h^0 = f on ∂Ω,  u_h^0 = 0 otherwise. (19)

2.3.3 Symmetry and positive definiteness of J


Now, it is possible to show that J is also symmetric and positive definite, so that the conjugate gradient (CG) method can be used here as well for faster convergence. The Newton step matrix in the finite element formulation can be rewritten as:

A_ij = (∇φ_i, B∇φ_j), (20)

where B is a d × d matrix given by:

B = a(u) ( I − a(u)² ∇u ⊗ ∇u ). (21)

From this formulation, it is clear that 𝐵 is symmetric since it consists of the identity matrix 𝐼 and a symmetric
term ∇𝑢 ⊗ ∇𝑢 scaled by a scalar factor. To show that 𝐵 is positive definite, consider its eigenvalues. Define:

v₁ = ∇u / |∇u|, (22)

which is an eigenvector of 𝐵 with eigenvalue:

λ₁ = a(u) ( 1 − |∇u|² / (1 + |∇u|²) ) = a(u) / (1 + |∇u|²) > 0. (23)

All other eigenvectors 𝑣 2 , . . . , 𝑣 𝑑 , which are perpendicular to 𝑣 1 and to each other (because the matrix is
symmetric), correspond to the eigenvalue:
𝜆2 = 𝑎(𝑢). (24)
Since all eigenvalues are strictly positive, 𝐵 is positive definite, implying that the full matrix 𝐴 is also positive
definite. It is then possible to use the CG algorithm to solve the linear system.
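The eigenvalue computation above is easy to verify numerically for a sample gradient. The snippet below is a standalone check, not part of the solver; it confirms λ₁ = a(u)/(1 + |∇u|²) and λ₂ = a(u):

```python
import numpy as np

grad_u = np.array([1.5, -0.7])                # a sample piecewise-constant gradient
a = 1.0 / np.sqrt(1.0 + grad_u @ grad_u)      # a(u), Eq. (5)
B = a * (np.eye(2) - a**2 * np.outer(grad_u, grad_u))   # Eq. (21)

eigvals = np.sort(np.linalg.eigvalsh(B))
expected = np.sort([a / (1.0 + grad_u @ grad_u), a])
assert np.allclose(eigvals, expected)         # matches Eqs. (23) and (24)
assert (eigvals > 0).all()                    # B is positive definite
```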


3 Parametric Minimal Surface


In the parametric formulation of the Plateau problem (for further reference, see [DGHE94]), one seeks a mapping

u : B → R³,

with B ⊂ R² (often the unit disk), such that the boundary condition

u|_∂B = γ(s)

traces a prescribed Jordan curve Γ in R³. Here, γ : ∂B → Γ is a fixed smooth parametrization of the curve, and s is a reparametrization function defined as

s = id + ξ,  ξ ∈ T (25)
where T is a subspace of functions on ∂B satisfying normalization conditions that eliminate rigid motions:

∫₀^{2π} ξ(φ) dφ = 0,
∫₀^{2π} ξ(φ) cos φ dφ = 0,
∫₀^{2π} ξ(φ) sin φ dφ = 0. (26)

The minimal surface is obtained by minimizing the Dirichlet energy functional of the harmonic extension Φ(γ ∘ s) of γ ∘ s, given by:

E(s) = (1/2) ∫_B |∇Φ(γ ∘ s)|² dx.
This energy functional is closely related to the Douglas integral and provides a variational formulation of
the problem.

In the discrete setting, following the finite element method (FEM) described in [DGHE94], the discrete energy is given by

E_h(s) = (1/2) Σ_{k=1}^{3} ⟨S u_k(s), u_k(s)⟩, (27)
where:

• 𝑆 is the FEM stiffness matrix (discrete Laplacian) on the triangulated domain 𝐵 ℎ ,

• 𝑢 𝑘 (𝑠) are the discrete harmonic extensions for each coordinate component, subject to the boundary
condition 𝛾 ◦ 𝑠.

A discrete reparametrization 𝑠 is optimal if it is monotone and satisfies the stationarity condition, i.e.

⟨𝐸 ℎ′ (𝑠 ℎ ), 𝜉 ℎ ⟩ = 0 ∀ 𝜉 ℎ ∈ 𝑇ℎ (28)

where 𝑇ℎ is the finite-dimensional subspace of 𝑇. The FEM formulation ensures stability and convergence
of the numerical solution to the continuous minimal surface.


3.1 FEM Implementation


In this section, the algorithm implemented for finding the minimal surface is described. As for the graph minimal surface, the Newton algorithm will be used.
The notation will be the same as in [DGHE94]. In particular:

• S ∈ R^{N×N}: stiffness matrix representing the Laplace operator.

• S₀ ∈ R^{N×N}: stiffness matrix representing the Laplace operator with homogeneous Dirichlet boundary conditions.

• e_i ∈ R^{N_b}: i-th canonical basis vector.

The equation that needs to be solved is:

s_h^{k+1} = s_h^k + α ds_h^k (29)
J(s_h^k) ds_h^k = −b(s_h^k) (30)

with b and J s.t.

b_i = ∂E_h/∂s_i (s_h) = Σ_{k=1}^{3} ⟨S u_k(s), (I − S₀⁻¹S) e_i⟩ γ_k′(s_i)  ∀i ∈ {1, . . . , N_b} (31)

J_ij = ∂²E_h/(∂s_i ∂s_j) = Σ_{k=1}^{3} ⟨S (I − S₀⁻¹S) e_i, (I − S₀⁻¹S) e_j⟩ γ_k′(s_i) γ_k′(s_j) + (32)
      + δ_ij Σ_{k=1}^{3} ⟨S u_k(s), (I − S₀⁻¹S) e_i⟩ γ_k′′(s_i)  ∀i, j ∈ {1, . . . , N_b} (33)

Moreover, by Lemma 7 in [DGHE94], it is possible to retrieve u_h via:

u_h(s) = −S₀⁻¹ S γ(s) + γ(s) (34)

Since the vectors (I − S₀⁻¹S) e_i and S (I − S₀⁻¹S) e_i do not change between iterations, they can be computed only once and stored.
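The same precomputation idea applies to S₀ itself: factorizing it once makes every evaluation of Equation 34 a cheap pair of triangular solves. A sketch, assuming SciPy sparse matrices S and S₀:

```python
import numpy as np
from scipy.sparse.linalg import splu

def harmonic_extension_factory(S, S0):
    """Factorize S0 once; each call then evaluates the discrete harmonic
    extension u_h(s) = gamma(s) - S0^{-1} (S @ gamma(s)) of Eq. (34)."""
    lu = splu(S0.tocsc())              # sparse LU, reused across all iterations
    def extend(gamma_s):
        return gamma_s - lu.solve(S @ gamma_s)
    return extend
```

The factory is called once per mesh, and the returned closure once per coordinate component and iteration.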

3.2 Boundary condition


At each iteration, it is not enough to solve the linear system in Equation 30, since by doing only this, the conditions in (26) are not accounted for. Thus, it is necessary to discretize them so that, at each iteration, the variables s_h are still of the form (25). To numerically solve the problem, the boundary ∂B is discretized by introducing a set of fixed points

0 = φ₁ < φ₂ < · · · < φ_M < 2π.

ξ(φ) is approximated using piecewise linear functions between these points. That is, on each subinterval [φ_i, φ_{i+1}], ξ(φ) is approximated as:

ξ(φ) ≈ (ξ_{i+1} − ξ_i)/(φ_{i+1} − φ_i) · (φ − φ_i) + ξ_i.


3.2.1 Discretizing the First Condition


The first condition in (26) is:

∫₀^{2π} ξ(φ) dφ = 0.

Using the trapezoidal rule on each subinterval:

∫_{φ_i}^{φ_{i+1}} ξ(φ) dφ ≈ (ξ_{i+1} + ξ_i)/2 · (φ_{i+1} − φ_i).

Summing over all intervals:

Σ_{i=1}^{M} (ξ_{i+1} + ξ_i)/2 · (φ_{i+1} − φ_i) = 0.
This ensures that the total shift introduced by 𝜉 is zero in the discrete setting.

3.2.2 Discretizing the Second Condition


The second condition in (26) is:

∫₀^{2π} ξ(φ) cos φ dφ = 0.

Using the piecewise linear approximation, the integral over each interval becomes:

∫_{φ_i}^{φ_{i+1}} ξ(φ) cos φ dφ.

Expanding ξ(φ):

∫_{φ_i}^{φ_{i+1}} ( (ξ_{i+1} − ξ_i)/(φ_{i+1} − φ_i) · (φ − φ_i) + ξ_i ) cos φ dφ.

Splitting the terms:

(ξ_{i+1} − ξ_i)/(φ_{i+1} − φ_i) ∫_{φ_i}^{φ_{i+1}} (φ − φ_i) cos φ dφ + ξ_i ∫_{φ_i}^{φ_{i+1}} cos φ dφ.

Approximating these integrals leads to:

Σ_{i=1}^{M} (ξ_{i+1} − ξ_i)/(φ_{i+1} − φ_i) · (cos φ_{i+1} − cos φ_i) = 0.

This ensures that the re-parametrization does not introduce a net shift in the 𝑥-direction.

3.2.3 Discretizing the Third Condition


Similarly, for the sine integral:

∫₀^{2π} ξ(φ) sin φ dφ = 0.

Using the same piecewise approximation:

Σ_{i=1}^{M} (ξ_{i+1} − ξ_i)/(φ_{i+1} − φ_i) · (sin φ_{i+1} − sin φ_i) = 0.

This ensures that the reparametrization does not introduce a net shift in the 𝑦-direction.

3.3 Final Discrete Equations


Thus, the three discrete normalization conditions become:

Σ_{i=1}^{M} (ξ_{i+1} + ξ_i)/2 · (φ_{i+1} − φ_i) = 0,

Σ_{i=1}^{M} (ξ_{i+1} − ξ_i)/(φ_{i+1} − φ_i) · (cos φ_{i+1} − cos φ_i) = 0,

Σ_{i=1}^{M} (ξ_{i+1} − ξ_i)/(φ_{i+1} − φ_i) · (sin φ_{i+1} − sin φ_i) = 0. (35)

To account for these three equations, one way is to use Lagrange multipliers coupled with Equation 30. Defining C ∈ R^{3×N_b} as the matrix corresponding to (35), the new system becomes:

[ J  Cᵀ ] [ ds ]   [ −b ]
[ C  0  ] [ λ  ] = [  0 ] (36)

This also keeps the matrix symmetric, since J is symmetric by construction.
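The constraint matrix C can be assembled directly from the sums in (35). The sketch below assumes a periodic wrap-around ξ_{M+1} = ξ₁ on the boundary grid, an assumption not stated explicitly in the text:

```python
import numpy as np

def normalization_constraints(phi):
    """Assemble C in R^{3 x M} for the discrete conditions (35); column j
    holds the coefficients multiplying xi_j in each of the three sums."""
    M = len(phi)
    nxt = np.roll(np.arange(M), -1)             # index of phi_{i+1}, periodic
    h = (phi[nxt] - phi) % (2 * np.pi)          # subinterval lengths
    C = np.zeros((3, M))
    for i in range(M):
        j = nxt[i]
        C[0, i] += h[i] / 2.0                   # trapezoid weights for the integral of xi
        C[0, j] += h[i] / 2.0
        dc = (np.cos(phi[j]) - np.cos(phi[i])) / h[i]
        ds = (np.sin(phi[j]) - np.sin(phi[i])) / h[i]
        C[1, j] += dc; C[1, i] -= dc            # coefficients of (xi_{i+1} - xi_i)
        C[2, j] += ds; C[2, i] -= ds
    return C
```

A constant ξ satisfies the second and third conditions exactly (the differences telescope), while the first row applied to a constant returns the total boundary measure, which is a quick consistency check.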

3.4 Iterative solver


Since, in general, the matrix in (36) is symmetric but not positive definite, the conjugate gradient method cannot be used. Instead, an implementation of the MINRES [Wik25] algorithm is used to solve the linear system. Moreover, since it may happen that the values of s are not strictly increasing, a sorting operation is performed every few iterations. The number of steps to wait before sorting the vector s again may vary; by numerical observation, if the sorting is done at every iteration, there are cases in which the solution keeps bouncing back and forth between two consecutive iterations.
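The saddle-point solve itself can be sketched with SciPy's MINRES; J, C and b are assumed to be already assembled as in Section 3.3:

```python
import numpy as np
from scipy.sparse import bmat
from scipy.sparse.linalg import minres

def constrained_newton_step(J, b, C):
    """Solve the symmetric indefinite system (36) [[J, C^T],[C, 0]] [ds; lam] = [-b; 0]
    with MINRES and return the constrained update ds."""
    n, m = J.shape[0], C.shape[0]
    K = bmat([[J, C.T], [C, None]], format="csr")   # saddle-point (KKT) matrix
    rhs = np.concatenate([-b, np.zeros(m)])
    sol, info = minres(K, rhs)
    return sol[:n]
```

By construction C @ ds is (approximately) zero, so the update stays in the discretized constraint space (26).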


4 Numerical results
In this section, some numerical examples will be briefly presented and discussed, to show the efficiency of
the solvers.

4.1 Graph minimal surface


The maximum number of iterations is set to 500, the tolerance is set to 1e−12, and the α_min for the Armijo rule is set to 0.25.
In general, it is possible to observe that both methods reach convergence, starting from u_h^0 defined as in (19). The figures show the initial graph and the final graph obtained with the two methods, together with a plot of the residuals.
Something worth noting is that in some cases Newton's method seems to reach convergence very fast but then slows down considerably. This is almost surely due to the Armijo rule: it makes the method more robust (without it, convergence was almost never obtained), but the fast convergence rate is lost. An example of this is Figure 1.
Moreover, from Figure 4, it is possible to observe that if the initial condition is discontinuous, Newton's method has more trouble reaching the set tolerance than in the other cases.
Finally, the Picard method seems very robust in general, with a convergence rate that depends strongly on the area of the surface (as can be seen in Figure 3, both methods fail to converge within the set number of iterations, and the area is the largest considered).


Figure 1: (a) Plot of the initial values of the function, and of the solutions of the minimal problem for the two methods; (b) plot of the residuals of the methods.
Boundary conditions: (1/3)(x − 2)³ + 3(x − 2)(y − 2)².
The domain is the square (0, 4) × (0, 4).
Ndof = 961.
Initial area: 13.873, Final area: 8.9341.


Figure 2: (a) Plot of the initial values of the function, and of the solutions of the minimal problem for the two methods; (b) plot of the residuals of the methods.
Boundary conditions: sin(4πx) sin(4πy).
The domain is the square (0, 1) × (0, 1).
Ndof = 2601.
Initial area: 1.00951, Final area: 1.00251.


Figure 3: (a) Plot of the initial values of the function, and of the solutions of the minimal problem for the two methods; (b) plot of the residuals of the methods.
Boundary conditions: 5eˣ + 1/(1 + y²) + 2xy + (2/5)x.
The domain is the disk centered at 0 with radius 2.
Ndof = 2791.
Initial area: 48.6529, Final area: 26.5665.


Figure 4: (a) Plot of the initial values of the function, and of the solutions of the minimal problem for the two methods; (b) plot of the residuals of the methods.
Boundary conditions: 2·I_{sin(πx) + sin(πy) ≥ 0} − 1, with I the indicator function.
The domain is the disk centered at 0 with radius 1.
Ndof = 2791.
Initial area: 3.19105, Final area: 3.15119.


4.2 Parametric minimal surface


The maximum number of iterations is set to 500, the tolerance is set to 1e−6, and the α_min for the Armijo rule is set to 0.1.
From the first two figures, it is clear that, starting with an initial guess of s given by the identity, the results are very similar. However, in Figure 7 a big difference can be observed. The results for this last figure are correct, since they have been compared with those in [Tsu86].

Figure 5: (a) Plot of the initial values of the function, and of the solution of the minimal problem for the Newton method; (b) plot of the residual of the method.
Boundary conditions: ((1 + (1/4)cos(3x)) cos(2x), (1 + (1/4)cos(3x)) sin(2x), (1/4)sin(3x)), ∀x ∈ [0, 2π).
Ndof = 2791.
Initial energy: 6.87012, Final energy: 6.76825.


Figure 6: (a) Plot of the initial values of the function, and of the solution of the minimal problem for the Newton method; (b) plot of the residual of the method.
Boundary conditions: (R³cos(3x) + 4R⁵cos(5x), R³sin(3x) − 4R⁵sin(5x), −(1/5)R⁴sin(4x)), R = 0.8, ∀x ∈ [0, 2π).
Ndof = 4921.
Initial energy: 45.2633, Final energy: 45.2633.


Figure 7: (a) Plot of the initial values of the function, and of the solution of the minimal problem for the Newton method; (b) plot of the residual of the method.
Boundary conditions: ((1 + (1/2)cos(3x)) cos(x), (1 + (1/2)cos(3x)) sin(x)), ∀x ∈ [0, 2π).
Ndof = 4921.
Initial energy: 3.8768, Final energy: 3.54629.


5 Conclusion
In this work, FEM-based approaches have been developed and analyzed for solving minimal surface problems in both the graph and parametric formulations. For the graph formulation, two iterative methods (Picard iteration and Newton's method) have been explored. The Picard method demonstrated robustness across a range of problems, while Newton's method, despite exhibiting rapid initial convergence, sometimes experienced a slowdown due to the stabilizing effect of the Armijo rule. These observations suggest a trade-off between convergence speed and robustness that must be carefully managed depending on the problem characteristics. Further work could focus on a better strategy for improving the convergence of the Newton method.
For the parametric minimal surface problem, a Newton-based solver was implemented with a careful treatment of the boundary reparametrization conditions using Lagrange multipliers. The discrete formulation
successfully captured the essential features of the continuous problem, as evidenced by the numerical
experiments, and produced results that are in good agreement with those available in the literature.
Overall, the numerical results confirm that FEM is an effective and flexible tool for approximating minimal
surfaces. Future work may involve refining the solver strategies, such as by incorporating adaptive mesh
refinement or alternative linear solvers, to further enhance computational efficiency and accuracy.


References
[DGHE94] Gerhard Dziuk and John E. Hutchinson. A finite element method for the computation of parametric minimal surfaces. DML-CZ, 1994.

[PSL18] D. Peñaloza and A. Sáenz-Ludlow. On Picard's iteration method to solve differential equations and a pedagogical space for otherness. ResearchGate, 2018.

[Tsu86] Takuya Tsuchiya. On two methods for approximating minimal surfaces in parametric form.
AMS, 1986.

[Wik24] Wikipedia contributors. Backtracking line search — Wikipedia, the free encyclopedia, 2024.
[Online; accessed February 5, 2025].

[Wik25] Wikipedia contributors. Minimal residual method — Wikipedia, the free encyclopedia, 2025.
[Online; accessed February 5, 2025].
