
On the one dimensional

Stefan problem
with some numerical analysis

Tobias Jonsson

Spring 2013
Thesis, 15hp
Bachelor of Mathematics, 180hp
Department of Mathematics and Mathematical Statistics
Sammanfattning
In this thesis the Stefan problem is presented with two different boundary conditions, one constant and one time-dependent. This problem is a classical example of a free boundary problem in partial differential equations, where the boundary moves with time. For the one-dimensional case specific properties are shown and the important Stefan condition is derived. The significance of maximum principles and the existence of a unique solution are discussed. To solve the problem numerically, an analysis in the limit t → 0 is carried out. Approximate solutions to the Stefan problem are obtained and shown graphically with accompanying error estimates.

Abstract
In this thesis we present the Stefan problem with two boundary conditions, one constant and one time-dependent. This problem is a classic example of a free boundary problem in partial differential equations, with a free boundary moving in time. Some properties are proved for the one-dimensional case and the important Stefan condition is derived. The role of the maximum principle and the existence of a unique solution are discussed. To solve the problem numerically, an analysis of the limit t → 0 is carried out. The approximate solutions are shown graphically together with error estimates.
Contents
1 Introduction 4
1.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Historical background . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Physical background . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 One dimensional Stefan problem 8


2.1 The Stefan condition . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 The melting problem in one dimension . . . . . . . . . . . . . . . 11
2.2.1 Rescaling the problem into a dimensionless form . . . . . 12
2.2.2 Similarity solution . . . . . . . . . . . . . . . . . . . . . . 13
2.2.3 Nonlinearity . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 The maximum principle . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 Existence and uniqueness . . . . . . . . . . . . . . . . . . . . . . 20

3 Numerical analysis 21
3.1 Finite difference method . . . . . . . . . . . . . . . . . . . . . . . 21
3.1.1 Forward Euler scheme . . . . . . . . . . . . . . . . . . . . 21
3.1.3 Crank-Nicholson scheme . . . . . . . . . . . . . . . . . . . 24
3.2 Analysis when t → 0 . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.1 Constant boundary condition . . . . . . . . . . . . . . . . 26
3.2.2 Time-dependent boundary condition . . . . . . . . . . . . 28
3.3 Numerical method for solving the Stefan problem . . . . . . . . . 29
3.3.1 Constant boundary condition . . . . . . . . . . . . . . . . 29
3.3.2 Time-dependent boundary condition . . . . . . . . . . . . 33
3.4 Error analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4 Discussion 38
4.1 Future studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

5 List of notations 39

F Appendix 40

G Appendix 47

Bibliography 54
1 Introduction
A Stefan problem is a specific type of boundary value problem for a partial differential equation describing heat distribution in a phase-changing medium. Since the evolving interface is a priori unknown, part of the solution is to determine the boundary itself. An example is the diffusion of heat in melting ice: as the melting proceeds, the boundary of the ice changes position. Some authors call this a "free boundary value problem" because the boundary of the domain is a priori unknown [1] [5] [6]. To distinguish the case of a moving boundary (associated with a time-dependent problem) from the problem with a stationary boundary, a few authors denote the latter a "moving boundary problem" [3]. To stick with the common notation, the term "free boundary problem" will be used in this thesis for both the time-dependent and the stationary boundary.

To obtain a unique solution of a Stefan problem, two boundary conditions are needed: one to determine the a priori unknown boundary itself, and one, as usual, a suitable condition on the fixed boundary. Stefan problems occur naturally mostly in melting and solidification problems, but there also exist Stefan-like problems, for instance fluid flow in porous media or even shock waves in gas dynamics [12].

A precise formulation of a Stefan problem is not possible, but we can list some characteristic features of this type of problem. To mention a few common features: (1) the heat distribution and heat transfer are described by equations, (2) there exists a distinct interface (or phase-change boundary) between two (or more) phases, which are distinguishable from each other, and (3) the temperature of the interface is a priori known [6].

To facilitate the reading, a list of notations used throughout the text is given
at the end of the thesis.

1.1 Objectives
The objective of this thesis is to gain a mathematical understanding of the problem with an unknown free boundary in one dimension. This includes explaining the mathematical model, deriving the analytic solution and discussing the validity of the solution. To give a fuller picture we also include two different kinds of boundary conditions as well as a numerical analysis with simulated solutions. Since the solutions are numerical approximations, an error analysis is of course required.

4
1.2 Historical background
The first known paper about diffusion of heat in a medium with a change of phase state was published by the French mathematicians Gabriel Lamé and Benoît Paul Émile Clapeyron in 1831 [7]. The stated problem was to cool a liquid filling the half-space x > 0 and determine the thickness of the generated solid crust, with a constant boundary condition at x = 0. They discovered that the thickness of the crust is proportional to the square root of time, but no determination of the coefficient of proportionality was given.

Almost 60 years later, in 1889, this question was picked up and stated in a more general form by the Austrian physicist and mathematician Joseph Stefan [12]. Stefan published four papers describing mathematical models for real physical problems with a change of phase state. This was the first general study of this type of problem, and since then such free boundary problems have been called Stefan problems. Of the four papers it is the one about ice formation in the polar seas that has drawn the most attention; it can be found in [14]. The mathematical solution given there was actually found earlier, by the German physicist and mathematician Franz Ernst Neumann in 1860, and is called the Neumann solution [15].

1.3 Physical background


The melting of ice and the solidification of water are two examples of a phase transformation, which is a discontinuous change of the properties of the substance. The different states of aggregation in the transition are called phases, and within a phase the physical properties (for instance density and chemical composition) are the same; a phase is therefore more specific than a state of matter. As a phase transition occurs, a latent heat appears, which is either absorbed or released by the body (the thermodynamic system) without changing its temperature. Heat itself is a mechanism of energy transport between objects due to a difference in temperature. It is synonymous with heat flow, and we say it is the flow of energy from a hotter body to a colder one. Temperature can be seen as a measure of an object's "willingness", or more precisely its probability, to give up energy. Heat flow may be understood with statistical mechanics, which applies probability theory to thermodynamics. For further reading, a good and gentle introduction to thermodynamics and statistical physics is [13]. In the case of heat conduction, i.e., heat transfer by direct contact, one might wonder at what rate heat flows between a hot object and a cold one.
To obtain an equation describing the heat distribution, consider any open and
piece-wise smooth region Ω where Ω ⊂ Rn , with a boundary ∂Ω. By natural
assumption the rate of change of a total quantity should be equal to the net
flux through the boundary ∂Ω:
\[
\frac{d}{dt}\int_{\Omega} u \, dV = -\int_{\partial\Omega} F \cdot \nu \, dS = -\int_{\Omega} \operatorname{div} F \, dV, \tag{1.1}
\]

where u is the density of some quantity (such as heat), ν being the unit normal
pointing inwards and F being the flux density. In the last equality of equation
(1.1) we used Gauss’s theorem under the condition that the flux function F
defined on the domain Ω is continuously differentiable. Assuming that u and ut

5
exist and are continuous, we can use Leibniz’s rule for differentiation under the
integral sign on the first integral in equation (1.1),

\[
\int_{\Omega} \frac{\partial u}{\partial t}\, dV = -\int_{\Omega} \operatorname{div} F \, dV. \tag{1.2}
\]
We present here a classical result:

Theorem 1.3.1. Let f be any function such that f ∈ C(Ω), where Ω ⊂ R^n. If
\[
\int_V f \, dV(x) = 0 \quad \text{for all test volumes } V \subset \Omega,
\]
then
\[
f \equiv 0 \text{ on } \Omega. \tag{1.3}
\]

Proof. Assume f ≢ 0; then at some point x₀ ∈ Ω we have f(x₀) = λ > 0 (the proof for the other case is similar). Since f is continuous, for any ε > 0, for instance ε = λ/2 > 0, there is a δ > 0 such that
\[
|x - x_0| < \delta,\ x \in \Omega \quad\Longrightarrow\quad |f(x) - f(x_0)| < \varepsilon = \frac{\lambda}{2}, \tag{1.4}
\]
which implies that
\[
\frac{\lambda}{2} < f(x) < \frac{3\lambda}{2}, \tag{1.5}
\]
and by integrating the inequality (1.5) over a test volume V contained in {|x − x₀| < δ},
\[
\int_V f(x)\,dx > \int_V \frac{\lambda}{2}\,dx = \frac{\lambda}{2}\times\mathrm{volume}(V) > 0. \tag{1.6}
\]
Therefore
\[
\int_V f(x)\,dx > 0, \tag{1.7}
\]
which contradicts the assumption that the integral vanishes for all test volumes, and thus f ≡ 0.

By writing equation (1.2) as
\[
\int_\Omega \left( \frac{\partial u}{\partial t} + \operatorname{div} F \right) dV = 0 \tag{1.8}
\]
we must conclude from Theorem 1.3.1 that
\[
u_t = -\operatorname{div} F. \tag{1.9}
\]

6
The flux density F is often proportional to the negative gradient of u (minus since the flow is from higher heat to lower heat):
\[
F = -\alpha D u \quad (\alpha > 0), \tag{1.10}
\]
where α is the thermal diffusivity. Assuming that α is constant, we get from equations (1.10) and (1.9) the heat equation:
\[
u_t = \alpha \operatorname{div}(Du) = \alpha \Delta u. \tag{1.11}
\]

7
2 One dimensional Stefan problem
We will now formulate the most simple form of a mathematical model describing
phase transitions. The classical Stefan problem is a solidification and a melt-
ing problem, for example the transition between ice and water. To acquire a
solution for the classical Stefan problem, the heat equation needs to be solved.
As mentioned before, a boundary condition on the evolving boundary is needed
to get a unique solution. It is called "the Stefan condition" and will be derived
below. In this chapter we will follow the ideas from [2].

2.1 The Stefan condition


The evolving unknown interface is denoted by x = s(t), where x is the position in space, s(t) is the free boundary and t is the time. To derive the Stefan condition we need to make some assumptions. As the transition occurs there will be a small volume change, but here we ignore this property for simplicity. On physical grounds the temperature should be continuous at the interface x = s(t) between the phases:
\[
\lim_{x \to s(t)^+} u_S(x,t) = \lim_{x \to s(t)^-} u_L(x,t) = u_m \quad \text{for all } t. \tag{2.1}
\]

The phase-change temperature between the two phases is assumed to have the constant value u_m. At a fixed time t = t₀, consider a domain Ω with two different phases separated at x = s(t₀), as shown in Figure 1. We assume plane symmetry, so that the temperature u depends only on x and t.

Figure 1: Domain Ω separated into two phases at x = s(t): Ω₁ = Ω ∩ {x < s(t)} (liquid phase) and Ω₂ = Ω ∩ {x > s(t)} (solid phase).

Assume that the interface evolves to the right, i.e., that the solid is melting. We should then expect u ≥ u_m in the liquid phase and u ≤ u_m in the solid phase. At time t = t₀, consider a portion of the interface, for simplicity in the shape of a disk A with area S. At a later time t₁ > t₀ the position of the interface has changed to s(t₁) > s(t₀). In the meantime a cylinder of volume S × (s(t₁) − s(t₀)) has melted and therefore absorbed a quantity of heat Q:
\[
Q = S\,(s(t_1) - s(t_0))\,\rho l, \tag{2.2}
\]
where l is the specific latent heat and ρ is the density. The heat fluxes in the two phases are

8
\[
\phi_L = -K_L D u_L, \tag{2.3}
\]
\[
\phi_S = -K_S D u_S, \tag{2.4}
\]
where K_i is the material's thermal conductivity, with i = L for the liquid and i = S for the solid, and we assume u ∈ C¹. By energy conservation it is natural to assume that the total heat absorbed in equation (2.2) is equal to
\[
Q = \int_{t_0}^{t_1}\!\!\int_A \left[\phi_L \cdot \hat{x} + \phi_S \cdot (-\hat{x})\right] dA\, d\tau
  = \int_{t_0}^{t_1}\!\!\int_A \left[-K_L D u_L(s(\tau),\tau)\cdot \hat{x} - K_S D u_S(s(\tau),\tau)\cdot(-\hat{x})\right] dA\, d\tau, \tag{2.5}
\]

where x̂ is the unit vector in the x-direction. Integrating expression (2.5) over the spatial coordinates gives
\[
Q = S\int_{t_0}^{t_1} \left[ -K_L \frac{\partial u_L}{\partial x}(s(\tau),\tau) + K_S \frac{\partial u_S}{\partial x}(s(\tau),\tau) \right] d\tau, \tag{2.6}
\]
which we assume to be equal to expression (2.2). Equating equations (2.2) and (2.6), we get
\[
\left(s(t_1) - s(t_0)\right)\rho l = \int_{t_0}^{t_1} \left[ -K_L \frac{\partial u_L}{\partial x}(s(\tau),\tau) + K_S \frac{\partial u_S}{\partial x}(s(\tau),\tau) \right] d\tau; \tag{2.7}
\]

dividing by (t₁ − t₀) and taking the limit t₁ → t₀ ends up with:
\[
l\rho \lim_{t_1 \to t_0} \frac{s(t_1)-s(t_0)}{t_1 - t_0} = \lim_{t_1 \to t_0} \frac{1}{t_1 - t_0}\int_{t_0}^{t_1} \left[ -K_L \frac{\partial u_L}{\partial x}(s(\tau),\tau) + K_S \frac{\partial u_S}{\partial x}(s(\tau),\tau) \right] d\tau. \tag{2.8}
\]
Theorem 2.1.1 (The intermediate value theorem). Let f : [a, b] → R with f ∈ C. Then f attains all values between the endpoint values f(a) and f(b).

Proof. Given in, for instance, [10].

Theorem 2.1.2 (The extreme value theorem). A continuous function f on a bounded and closed interval [a, b] attains both a maximum and a minimum value at least once.

Proof. Given in, for instance, [10].

Theorem 2.1.3 (The mean value theorem for integrals). If f : [a, b] → R and f ∈ C on [a, b], then there exists a number c ∈ [a, b] such that
\[
\int_a^b f(x)\,dx = (b-a)f(c). \tag{2.9}
\]

9
Proof. It follows from Theorem 2.1.2 that a continuous function f(x) has a minimum value m and a maximum value M on an interval [a, b]. From the monotonicity of integrals and m ≤ f(x) ≤ M, it follows that
\[
mI = \int_a^b m\,dx \le \int_a^b f(x)\,dx \le \int_a^b M\,dx = MI, \tag{2.10}
\]
where
\[
I \equiv \int_a^b dx = b - a. \tag{2.11}
\]
Using equation (2.11) and dividing by I (assuming I > 0) in equation (2.10) gives
\[
m \le \frac{1}{b-a}\int_a^b f(x)\,dx \le M. \tag{2.12}
\]
From the extreme value theorem we know that both m and M are attained at least once by f. Therefore the intermediate value theorem says that f attains all values in [m, M]; more specifically, there exists a c ∈ [a, b] such that
\[
f(c) = \frac{1}{b-a}\int_a^b f(x)\,dx. \tag{2.13}
\]

With the help of Theorem 2.1.3 we can write equation (2.8) as
\[
l\rho s'(t_1) = \lim_{t_1 \to t_0} \frac{1}{t_1 - t_0}\,(t_1 - t_0)\,f(c), \tag{2.14}
\]
where we introduced, for simplicity, the function
\[
f(c) \equiv -K_L u_x(s(c), c) + K_S u_x(s(c), c), \tag{2.15}
\]
with c ∈ [t₀, t₁]. But as t₁ → t₀, since f is continuous (u ∈ C¹), we get
\[
l\rho s'(t_1) = f(t_1). \tag{2.16}
\]
However, since the same procedure can be carried out at any time t instead of t₁, we can instead write
\[
l\rho s'(t) = f(t), \tag{2.17}
\]
and hence with our expression for f we reach
\[
l\rho \frac{ds}{dt} = K_S u_x(s(t), t) - K_L u_x(s(t), t), \tag{2.18}
\]
which is called the Stefan condition and is a boundary condition on the free boundary [2].
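In the one-phase melting problem considered below, the solid is kept at the melting temperature, so u_S ≡ u_m and the term K_S u_x vanishes; as a quick illustration (our own remark, consistent with the system (2.19) stated in the next section), the Stefan condition then reduces to
\[
l\rho\frac{ds}{dt} = -K_L u_x(s(t),t).
\]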

10
2.2 The melting problem in one dimension
The one-dimensional one-phase problem can be represented by a semi-infinite solid, for instance a block of ice occupying 0 ≤ x < ∞ at the solidification temperature u = 0. A necessary assumption is that we ignore any volume change in the solidification. At the fixed boundary x = 0 of the block of ice there could be many different types of "flux functions" f(t); for instance a constant temperature above the solidification temperature, i.e., u₀ > 0, or a function depending on time. We assume that the temperature in the solid phase remains constant. The problem is thus to find the temperature distribution in the liquid phase and the location of the free boundary s(t). Even though two phases are present, the problem is called a one-phase problem since only the liquid phase is unknown.

The liquid region, 0 ≤ x < s(t):
\[
\frac{\partial u}{\partial t} = \frac{K_L}{C_L \rho}\frac{\partial^2 u}{\partial x^2} = \alpha_L \frac{\partial^2 u}{\partial x^2}, \quad \text{the heat equation}, \quad 0 < x < s(t),\ t > 0,
\]
\[
u(0,t) = f(t), \quad \text{boundary condition}, \quad t > 0,
\]
\[
u(x,0) = 0, \quad \text{initial condition}.
\]
The free boundary, x = s(t):
\[
l\rho \frac{ds}{dt} = -K_L \frac{\partial u}{\partial x}, \quad \text{Stefan condition},
\]
\[
s(0) = 0, \quad \text{initial position of the melting interface},
\]
\[
u(s(t), t) = 0, \quad \text{Dirichlet condition at the interface, i.e., freezing temperature}.
\]
Phase 2, the solid region, s(t) < x < ∞:
\[
u(x,t) = 0 \quad \text{for all } t,\ x \ge s(t). \tag{2.19}
\]
The boundary condition at x = 0 could be anything, but in this thesis we will follow [9] and consider the following cases:
\[
\text{(i)}\ f(t) = 1, \qquad \text{(ii)}\ f(t) = e^t - 1. \tag{2.20}
\]
In fact, both of these problems can be solved exactly according to [9].

11
2.2.1 Rescaling the problem into a dimensionless form
The goal here is to rescale the problem into a more convenient form, and since we take a mathematical approach we only need to make sure that the new expressions satisfy the necessary equations. Consider the following change of variables
\[
u \to a v, \qquad t \to \gamma\tau, \tag{2.21}
\]
where γ and a are constants. Put
\[
a = \gamma^2 = \alpha, \tag{2.22}
\]
where α ≠ 0. With the help of (2.21) and (2.22) one can show that the system of equations in (2.19) is transformed into
\[
v_\tau = v_{xx}, \tag{2.23}
\]
\[
\beta \frac{d}{d\tau} s(\gamma\tau) = -v_x(s(\gamma\tau), \gamma\tau), \tag{2.24}
\]
\[
v(x, 0) = 0, \tag{2.25}
\]
\[
v(0, \gamma\tau) = f(\gamma\tau)\gamma, \tag{2.26}
\]
where β is K_L. Replace the functions by
\[
s(\gamma\tau) \to \sigma(\tau), \qquad f(\gamma\tau) \to F(\tau)/\gamma, \qquad v(x, \gamma\tau) \to \lambda(x, \tau), \tag{2.27}
\]
so that
\[
\lambda_\tau(x,\tau) = \lambda_{xx}(x,\tau), \qquad \beta\frac{d}{d\tau}\sigma(\tau) = -\lambda_x(\sigma(\tau), \tau); \tag{2.28}
\]
however, since in this thesis we take a mathematical approach, we change the letters back,
\[
\sigma \to s, \qquad \tau \to t, \qquad \lambda \to u, \qquad F \to f, \tag{2.29}
\]
and we get the rescaled problem as
\[
u_t = u_{xx}, \qquad u(0,t) = f(t), \qquad u(x,0) = 0, \qquad \beta\frac{ds}{dt} = -u_x(s(t), t). \tag{2.30}
\]

12
2.2.2 Similarity solution
Here we will consider the Stefan problem defined in system (2.19) with condition (i) at the boundary x = 0 and derive an explicit expression for the solution,
\[
u_t = \alpha_L u_{xx}, \quad 0 < x < s(t),\ t > 0, \tag{2.31}
\]
\[
u(s(t), t) = 0, \quad t \ge 0, \tag{2.32}
\]
\[
l\rho\frac{ds}{dt} = -K_L u_x(s(t), t), \quad t > 0, \tag{2.33}
\]
\[
s(0) = 0, \tag{2.34}
\]
\[
u(0,t) = u_0 > 0, \quad t > 0. \tag{2.35}
\]

If we first consider the ordinary rescaled heat equation u_t − u_xx = 0, a solution on the bounded domain 0 < x < s(t) is found by a change of variables with dilatation scaling. By a similar argument we introduce the similarity variable
\[
\xi = \frac{x}{\sqrt{t}} \tag{2.36}
\]
and seek a solution of the form
\[
u(x,t) = F(\xi(x,t)), \tag{2.37}
\]
where F(ξ) is an unknown function yet to be found. Substituting equation (2.37) into the heat equation (2.31) gives
\[
\frac{\partial u}{\partial t}(x,t) = \frac{\partial \xi}{\partial t}\frac{dF}{d\xi} = \frac{-x}{2t\sqrt{t}}\frac{dF}{d\xi}, \tag{2.38}
\]
\[
\frac{\partial u}{\partial x}(x,t) = \frac{\partial \xi}{\partial x}\frac{dF}{d\xi} = \frac{1}{\sqrt{t}}\frac{dF}{d\xi}, \tag{2.39}
\]
\[
\alpha_L\frac{\partial^2 u}{\partial x^2}(x,t) = \alpha_L\frac{1}{\sqrt{t}}\frac{d}{d\xi}\left(\frac{dF}{d\xi}\right)\frac{\partial\xi}{\partial x} = \alpha_L\frac{1}{t}\frac{d^2F}{d\xi^2}. \tag{2.40}
\]
Equations (2.38) and (2.40) give the second order linear homogeneous differential equation
\[
\frac{d^2F}{d\xi^2} + \frac{\xi}{2\alpha_L}\frac{dF}{d\xi} = 0, \tag{2.41}
\]
which can be solved with an integrating factor
\[
M(\xi) = e^{\int_{s_0}^{\xi}\frac{s}{2\alpha_L}\,ds} = C_1 e^{\frac{\xi^2}{4\alpha_L}}, \tag{2.42}
\]
where C₁ is an integration constant. Multiplying equation (2.41) by M(ξ) from equation (2.42) and identifying the product rule, we have
\[
M(\xi)\frac{d^2F}{d\xi^2} + M(\xi)\frac{\xi}{2\alpha_L}\frac{dF}{d\xi} = \frac{d}{d\xi}\left(M(\xi)\frac{dF}{d\xi}\right) = 0, \tag{2.43}
\]
and by integrating equation (2.43) we get

13
\[
M(\xi)\frac{dF}{d\xi} = C_2, \tag{2.44}
\]

where C₂ is an integration constant. From the fundamental theorem of calculus the solution of equation (2.44) is
\[
F(\xi) = C\int_0^{\xi} e^{-\frac{s^2}{4\alpha_L}}\,ds + D, \tag{2.45}
\]
where D is an integration constant. By using the substitution y = s/(2√α_L), equation (2.45) can be written in terms of the error function (defined in the 'List of notations'),
\[
F(\xi) = A\,\mathrm{erf}\!\left(\frac{\xi}{2\sqrt{\alpha_L}}\right) + D, \tag{2.46}
\]
and thus the solution to equation (2.31) is
\[
u(x,t) = F\!\left(\frac{x}{\sqrt{t}}\right) = A\,\mathrm{erf}\!\left(\frac{x}{2\sqrt{t\alpha_L}}\right) + D. \tag{2.47}
\]
From the boundary conditions at x = 0 and x = s(t) we get
\[
D = u_0 \tag{2.48}
\]
and
\[
A = \frac{0 - u_0}{\mathrm{erf}(\lambda)}, \tag{2.49}
\]
where
\[
\lambda \equiv \frac{s(t)}{2\sqrt{t\alpha_L}}. \tag{2.50}
\]
Since A in equation (2.49) is a constant, it follows that λ must also be constant, and thus
\[
s(t) = 2\lambda\sqrt{\alpha_L t}, \tag{2.51}
\]
and with the constants A and D, the solution is:
\[
u(x,t) = u_0 - \frac{u_0}{\mathrm{erf}(\lambda)}\,\mathrm{erf}\!\left(\frac{x}{2\sqrt{\alpha_L t}}\right). \tag{2.52}
\]
About the parameter λ

The Stefan condition at the free boundary x = s(t) is
\[
l\rho\frac{ds}{dt} = -K_L u_x(s(t), t), \tag{2.53}
\]
and the time derivative of s(t) is
\[
\frac{ds(t)}{dt} = \frac{d}{dt}\left(2\lambda\sqrt{\alpha_L t}\right) = \lambda\sqrt{\frac{\alpha_L}{t}}. \tag{2.54}
\]
For the other derivative in the Stefan condition we first take the spatial derivative of the solution u given by equation (2.52),
\[
u_x(x,t) = -\frac{u_0}{\mathrm{erf}(\lambda)}\,\frac{2}{\sqrt{\pi}}\,\frac{d}{dx}\int_0^{\frac{x}{2\sqrt{\alpha_L t}}} e^{-y^2}\,dy = -\frac{u_0}{\mathrm{erf}(\lambda)}\,\frac{e^{-\frac{x^2}{4\alpha_L t}}}{\sqrt{\pi}\sqrt{\alpha_L t}}, \tag{2.55}
\]
and at x = s(t)
\[
u_x(s(t), t) = -\frac{u_0}{\mathrm{erf}(\lambda)}\,\frac{e^{-\lambda^2}}{\sqrt{\alpha_L t}\,\sqrt{\pi}}. \tag{2.56}
\]
Inserting equations (2.56) and (2.54) into the Stefan condition (2.53) and solving for λ, we get the following transcendental equation,
\[
\lambda e^{\lambda^2}\mathrm{erf}(\lambda) = \frac{K_L u_0}{\rho l \alpha_L\sqrt{\pi}} = \frac{C_L u_0}{l\sqrt{\pi}} \equiv \frac{\mathrm{St}_L}{\sqrt{\pi}}, \tag{2.57}
\]
where St_L is the Stefan number [1].
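Equation (2.57) has no closed-form solution for λ, so it must be solved numerically; the flow chart in Section 3.3.1 uses the Newton-Raphson method for this. A minimal MATLAB sketch follows, with our own function name newton_lambda, starting guess and tolerance (the thesis's own routine is the function newton referred to in the appendix):

function lambda = newton_lambda(St, tol)
% Solve lambda*exp(lambda^2)*erf(lambda) = St/sqrt(pi), eq. (2.57),
% by Newton-Raphson. St is the Stefan number, tol the stopping tolerance.
g  = @(x) x.*exp(x.^2).*erf(x) - St/sqrt(pi);
% d/dx [ x e^{x^2} erf(x) ] = (1 + 2x^2) e^{x^2} erf(x) + 2x/sqrt(pi)
dg = @(x) (1 + 2*x.^2).*exp(x.^2).*erf(x) + 2*x/sqrt(pi);
lambda = 0.5;                    % starting guess
while abs(g(lambda)) > tol
    lambda = lambda - g(lambda)/dg(lambda);
end
end

If, as the comment in Appendix G indicates, β is the reciprocal Stefan number, the call corresponding to β = 2 would be newton_lambda(1/2, 1e-12).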

Summing up the solution u(x, t), s(t) and the condition for λ gives
\[
u(x,t) = u_0 - \frac{u_0}{\mathrm{erf}(\lambda)}\,\mathrm{erf}\!\left(\frac{x}{2\sqrt{\alpha_L t}}\right), \qquad
s(t) = 2\lambda\sqrt{\alpha_L t}, \qquad
\lambda e^{\lambda^2}\mathrm{erf}(\lambda) = \frac{\mathrm{St}_L}{\sqrt{\pi}}. \tag{2.58}
\]

15
2.2.3 Nonlinearity
Away from the free boundary s(t) we have only the heat equation, which is a linear equation. We now want to analyse how the problem behaves at the free boundary x = s(t) itself. As shown in Section 2.1 using energy conservation, the boundary condition at the free boundary is
\[
-K_L\frac{\partial u}{\partial x}(s(t), t) = l\rho\frac{ds}{dt}. \tag{2.59}
\]
The value at the free boundary is
\[
u(s(t), t) = 0 \tag{2.60}
\]
for any t, where s(t) ∝ √t; this can be seen in Figure 2.

Figure 2: The free boundary x = s(t) against time t (solid phase above the curve, liquid phase below).

The curve of the free boundary in the (x, t)-plane can be described by
\[
r(t) = (x(t), t) = (s(t), t). \tag{2.61}
\]
The rate of change of the temperature u along the free boundary is then the directional derivative of u along v = dr/dt:
\[
v \cdot Du = \frac{\partial u}{\partial x}\frac{ds}{dt} + \frac{\partial u}{\partial t} = 0, \tag{2.62}
\]
and thus
\[
\frac{\partial u}{\partial x} = -\frac{\partial u/\partial t}{ds/dt}. \tag{2.63}
\]

16
Multiplying equation (2.59) by equation (2.63) gives
\[
-K_L\left(\frac{\partial u}{\partial x}(s(t),t)\right)^2 = -l\rho\,\frac{\partial u}{\partial t}(s(t),t) = -\frac{lK}{C}\,\frac{\partial^2 u}{\partial x^2}(s(t),t), \tag{2.64}
\]
which is quadratic in the derivative of u and thus shows that the Stefan problem is a nonlinear problem.

17
2.3 The maximum principle
In our domain of interest the solution is governed by the heat equation, which we now discuss.

A very common and important tool in the study of partial differential equations is the maximum principle. The maximum principle is nothing more than a generalization of the single-variable calculus fact that the maximum of a function f with f'' > 0 on an interval [a, b] is achieved at one of the endpoints a and b. More generally, we say that functions satisfying a differential inequality in a domain Ω possess a maximum principle if their maximum is achieved on the boundary ∂Ω. The maximum principle helps us obtain information about the solution of a differential equation even without any explicit knowledge of the solution itself. For example, the maximum principle is an important tool when an approximate solution is sought. The following theory can be found in [11].

Definition 2.3.1 (The parabolic boundary). Since the heat equation is often prescribed with its temperature initially and at the endpoints, the most natural approach is to consider the region
\[
E_T = \{(x,t) : 0 < x < l(t),\ 0 < t \le T\} \tag{2.65}
\]
in the (x, t)-plane. We suppose that the temperature is known on the remaining sides of E_T:
S₁: {x = 0, 0 ≤ t ≤ T},  S₂: {0 ≤ x ≤ l(t), t = 0},  S₃: {x = l(t), 0 ≤ t ≤ T}.

Lemma 2.3.2. Assume that the function u(x, t) ∈ C²₍₁₎ satisfies the differential inequality
\[
L[u] \equiv \frac{\partial^2 u}{\partial x^2} - \frac{\partial u}{\partial t} > 0 \tag{2.66}
\]
in E_T. Then u cannot attain its maximum value at an interior point of the closure of E_T.

Proof. Assume that u attains its maximum value at an interior point P = (x₀, t₀) of E_T. As the point P is a critical point, the derivative u_t is 0, and since it is a maximum, u_xx(x₀, t₀) ≤ 0. However, this contradicts L[u] > 0, and thus u cannot have a maximum at an interior point.

Theorem 2.3.3 (The weak maximum principle). Suppose the function u(x, t) satisfies the differential inequality
\[
L[u] \equiv \frac{\partial^2 u}{\partial x^2} - \frac{\partial u}{\partial t} \ge 0 \tag{2.67}
\]
in the region E_T given by (2.65). Then the maximum value of u on the closure of E_T must occur on one of the boundaries S₁, S₂ or S₃.

18
Proof. Let M be the largest value of u that occurs on S₁, S₂ and S₃, and assume that there is an interior point P = (x₀, t₀) where u(x₀, t₀) = M₁ > M. Define the help function
\[
w(x) = \frac{M_1 - M}{2l^2}(x - x_0)^2, \tag{2.68}
\]
and from w and u define the function
\[
v(x,t) \equiv u(x,t) + w(x). \tag{2.69}
\]
On the boundaries S₁, S₂ and S₃ we have u ≤ M and 0 < x < l, and thus
\[
v(x,t) \le M + \frac{M_1 - M}{2} < M_1. \tag{2.70}
\]
At the interior point (x₀, t₀) we have
\[
v(x_0, t_0) = u(x_0, t_0) + 0 = M_1, \tag{2.71}
\]
and in E_T we have
\[
L[v] \equiv L[u] + L[w] = L[u] + \frac{M_1 - M}{l^2} > 0. \tag{2.72}
\]
From conditions (2.70) and (2.71) we can conclude that the maximum of v must be attained either in the interior of E_T or along
S₄: {0 < x < l, t = T}.
From Lemma 2.3.2 we know that the inequality (2.72) leaves no possibility for an interior maximum. If v instead had a maximum on S₄, we would get ∂²v/∂x² ≤ 0, which by (2.72) implies that ∂v/∂t at t = T is negative. Hence v must have a larger value at an earlier time t < T, and from this contradiction we see that our assumption u(x₀, t₀) > M is wrong.

Remark. Notice that Theorem 2.3.3 is only the weak maximum principle: the theorem permits the maximum to occur at interior points as well, in addition to the boundary [11].

Theorem 2.3.4 (The strong maximum principle). Suppose that there is a point (x₀, t₀) in E_T such that u(x₀, t₀) = max over E_T of u. Then u(x, t) ≡ u(x₀, t₀) for all (x, t) ∈ E_T.

Proof. The proof can be found in [11].

19
2.4 Existence and uniqueness
To be "allowed" to use any kind of numerical analysis in this thesis it is essential
to show that this problem is well-posed. A well-posed problem for a partial
differential equation is required to satisfy following criteria, e.g. [4]:

1. A solution to the problem exists


2. The solution is unique
and
3. The solution depends continuously on the given data

To give any conclusions about the solution to a partial differential equation, we


require the existence of a unique solution. If we would not have one unique
solution any prediction made, would depend on which of the solutions we have
chosen to be the "right" one. The last requirement is important in the problems
from physical applications, since it is preferable if our solution would not change
much when the initial conditions are perturbed.
We therefore need to present and give a proof of the existence and uniqueness
of the Stefan Problem defined in (2.19). In this thesis we present a special case
of the theorem stated in the book by Friedman [5, p.216], due to less general
boundary conditions in (2.19).
Theorem 2.4.1 ([5], Theorem 1, page 216). For a boundary condition u(0, t) = f(t) (continuously differentiable) and u(x, 0) = constant, there exists a unique solution {u(x, t), s(t)} of the system (2.19) for all t < ∞.

Proof. The proof is beyond the scope of this thesis and can be found in the book by Avner Friedman [5, p. 222].

Now that we know there truly exists a unique solution to (2.19), we can show the following theorem:

Theorem 2.4.2 ([5], Theorem 1, page 217). If u and s(t) give a solution to (2.19) for all t < σ, where σ is a finite number, then x = s(t) is a monotone non-decreasing function.

Proof. From the weak maximum principle given by Theorem 2.3.3 we know that u(x, t) ≥ 0 for 0 < x < s(t). Since the temperature u is 0 at the boundary x = s(t), the rate of change of the temperature with respect to x at the free boundary x = s(t) is less than or equal to 0. Thus we get from the Stefan condition that ds/dt ≥ 0, i.e., the free boundary s(t) is monotone non-decreasing.

20
3 Numerical analysis
The theory of partial differential equations is of fundamental importance for numerical analysis. At the undergraduate level the main goal in partial differential equations is to find solutions that can be expressed explicitly in terms of elementary functions. There exist several methods for finding such solutions to various problems, but for most problems an explicit solution cannot be found. When this occurs, numerical methods are useful for giving further information about the problem and its solutions. In differential equations the contribution from analysis often lies in finding conditions that ensure uniqueness and existence of the solution, not in constructing the explicit solution. Both the case where a unique solution exists and the reverse case are interesting to study in their own right. When existence and uniqueness are studied in analysis, the problems are posed on an infinite dimensional space. This is in contrast to numerical analysis, where the problems are discrete and posed on a finite dimensional space. In addition to existence and uniqueness, which need to be established for the different discretizations, error estimates are also required for the approximations.

There is no general numerical algorithm for every PDE one may encounter; in practice there are many different types of numerical schemes depending on which problem one needs to approximate. To approximate the solution of a partial differential equation with numerical methods, the procedures can be largely divided into two types. In the first, the solution is sought at a finite number of nodes in the domain of definition. The other procedure expands the solution as a sum of basis functions defined on the domain of definition and determines the coefficients of this expansion. In this thesis we only consider the first procedure, namely a finite difference method.

3.1 Finite difference method


The main idea of the finite difference method is to approximate the derivatives appearing in a PDE by sums and differences of function values. The function values are defined on a discrete set of points (nodes), which are often uniformly spaced. In this thesis the one-dimensional heat equation is considered. Two methods will be used: the forward Euler scheme and the Crank-Nicholson scheme.

3.1.1 Forward Euler scheme


Forward Euler is a numerical method for approximating solutions to differential equations. The method is explicit, since we can express the solution at time step n + 1 in terms of the values at the former step n. To demonstrate the method, let us start with a simple example:
\[
u_t = u_{xx}, \quad (x,t) \in (0,1)\times(0,\infty),
\]
\[
u(x, 0) = v(x) \quad \text{in } (0,1), \tag{3.1}
\]
\[
u(0,t) = u(1,t) = 0 \quad \text{for } t > 0.
\]
The first step of this procedure is to make space and time discrete. Put
\[
x \to x_j = x_0 + jh, \quad j = 0, 1, 2, \ldots, M, \qquad t \to t^n = t_0 + nk, \quad n = 0, 1, 2, \ldots,
\]
where x₀ = t₀ = 0, M is the number of nodes, k is the size of the time step and h is the spatial step given by
\[
h = \frac{1}{M}. \tag{3.2}
\]
We denote the numerical solution U_j^n of u by
\[
U_j^n \approx u(jh, nk), \tag{3.3}
\]
which is the approximate solution of u at time step n and spatial node j. Introduce the forward and backward difference quotients with respect to space and time,
\[
\partial_x U_j^n = \frac{U_{j+1}^n - U_j^n}{h}, \qquad \bar\partial_x U_j^n = \frac{U_j^n - U_{j-1}^n}{h}, \qquad
\partial_t U_j^n = \frac{U_j^{n+1} - U_j^n}{k}, \qquad \bar\partial_t U_j^n = \frac{U_j^n - U_j^{n-1}}{k}. \tag{3.4}
\]
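Composing the forward and backward space quotients gives the standard second difference used in the schemes below (our own explicit spelling-out of a step the thesis uses implicitly):
\[
\partial_x\bar\partial_x U_j^n = \frac{U_{j+1}^n - 2U_j^n + U_{j-1}^n}{h^2}.
\]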
The forward Euler scheme for the example stated in equation (3.1) is written as
\[
\partial_t U_j^n = \partial_x\bar\partial_x U_j^n \quad \text{for } j = 1, \ldots, M-1,\ n \ge 0,
\]
\[
U_0^n = U_M^n = 0 \quad \text{for } n > 0, \tag{3.5}
\]
\[
U_j^0 = V_j = v(x_j) \quad \text{for } j = 0, \ldots, M.
\]
By introducing the mesh ratio λ = k/h² and solving for U_j^{n+1} in equation (3.5), we get
\[
U_j^{n+1} = \lambda(U_{j-1}^n + U_{j+1}^n) + (1 - 2\lambda)U_j^n, \tag{3.6}
\]
\[
U_0^{n+1} = U_M^{n+1} = 0, \tag{3.7}
\]
\[
U_j^0 = v(x_j), \tag{3.8}
\]
and equation (3.6) in matrix form reads
\[
\bar U^{n+1} = A\bar U^n, \tag{3.9}
\]
where Ū^n denotes the (M−1)-vector related to U^n and A is a symmetric tridiagonal coefficient matrix given by:

\[
A = \begin{pmatrix}
1-2\lambda & \lambda & 0 & \cdots & 0\\
\lambda & 1-2\lambda & \lambda & \ddots & \vdots\\
0 & \ddots & \ddots & \ddots & 0\\
\vdots & \ddots & \lambda & 1-2\lambda & \lambda\\
0 & \cdots & 0 & \lambda & 1-2\lambda
\end{pmatrix}. \tag{3.10}
\]

This method is, however, only first order accurate in time and second order in space, as can be seen in Theorem 3.1.2 below. This means that the error in time will dominate unless the time step k is much smaller than the spatial step h. The forward Euler scheme is stable when the mesh ratio λ ≤ 1/2 [8, thm 9.5].

Theorem 3.1.2. Let U^n and u be the solutions of (3.5) and (3.1), with λ = k/h² ≤ 1/2 and u four times continuously differentiable. Then
\[
\|U^n - u^n\|_{\infty,h} \le C\,t^n h^2 \max_{t \le t^n}|u(\cdot, t)|_{C^4} \quad \text{for } t^n \ge 0.
\]
Proof. See [8, thm 9.5].


Theorem 3.1.2 gives us an error estimate for the numerical approximation, and for this numerical scheme we have a quite restrictive stability condition between the time step k and the spatial step h. With this method it is not possible to have the same order of magnitude in k and h and still maintain stability.
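For concreteness, here is a minimal MATLAB sketch of the forward Euler iteration (3.6)-(3.9) for problem (3.1); the initial profile v(x) = sin(πx) and the number of time steps are our own illustrative choices, not taken from the thesis.

M = 10; h = 1/M; k = 0.4*h^2;            % mesh ratio lambda = k/h^2 <= 1/2
lambda = k/h^2;
x = (0:M)'*h;
U = sin(pi*x);                            % assumed initial condition v(x)
e = ones(M-1,1);
A = spdiags([lambda*e (1-2*lambda)*e lambda*e], -1:1, M-1, M-1);
for n = 1:200                             % march forward in time
    U(2:M) = A*U(2:M);                    % interior update, eq. (3.9)
    U(1) = 0; U(M+1) = 0;                 % boundary conditions (3.7)
end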

To give an example of what happens to the numerical simulation when λ > 1/2, we plot the maximum norm of the error against t^n for the forward Euler scheme, shown in Figure 3.

Figure 3: Plot of the maximum norm of the error, ‖U^n − u^n‖_∞, against time for the forward Euler scheme with λ = 1/1.9 > 1/2.

23
3.1.3 Crank-Nicholson scheme
Another finite difference scheme is the Crank-Nicholson method, an implicit method which uses symmetry around the midpoint (x_j, t^{n+1/2}), where
\[
t^{n+1/2} = \frac{t^{n+1} + t^n}{2}. \tag{3.11}
\]
Because the scheme is implicit, an algebraic system must be solved at each time step. The example given in equation (3.1) with the Crank-Nicholson method is then defined as
\[
\bar\partial_t U_j^{n+1} = \frac12\,\partial_x\bar\partial_x\left(U_j^n + U_j^{n+1}\right) \quad \text{for } j = 1, \ldots, M-1,\ n \ge 0,
\]
\[
U_0^{n+1} = U_M^{n+1} = 0 \quad \text{for } n > 0, \tag{3.12}
\]
\[
U_j^0 = V_j = v(x_j) \quad \text{for } j = 0, \ldots, M.
\]

Rewriting the first equation in (3.12) as
\[
\left(I - \tfrac{1}{2}k\,\partial_x\bar\partial_x\right)U_j^{n+1} = \left(I + \tfrac{1}{2}k\,\partial_x\bar\partial_x\right)U_j^n, \tag{3.13}
\]
where I is the (M−1)-identity matrix, and then using the difference quotients defined above and collecting similar terms, we get
\[
(1+\lambda)U_j^{n+1} - \tfrac{1}{2}\lambda\left(U_{j-1}^{n+1} + U_{j+1}^{n+1}\right) = (1-\lambda)U_j^n + \tfrac{1}{2}\lambda\left(U_{j-1}^n + U_{j+1}^n\right). \tag{3.14}
\]
The matrix form is
\[
B\bar U^{n+1} = A\bar U^n, \tag{3.15}
\]


where both A and B are tridiagonal matrices given by
\[
B = \begin{pmatrix}
1+\lambda & -\tfrac{1}{2}\lambda & 0 & \cdots & 0\\
-\tfrac{1}{2}\lambda & 1+\lambda & -\tfrac{1}{2}\lambda & \ddots & \vdots\\
0 & \ddots & \ddots & \ddots & 0\\
\vdots & \ddots & -\tfrac{1}{2}\lambda & 1+\lambda & -\tfrac{1}{2}\lambda\\
0 & \cdots & 0 & -\tfrac{1}{2}\lambda & 1+\lambda
\end{pmatrix}
\]
and
\[
A = \begin{pmatrix}
1-\lambda & \tfrac{1}{2}\lambda & 0 & \cdots & 0\\
\tfrac{1}{2}\lambda & 1-\lambda & \tfrac{1}{2}\lambda & \ddots & \vdots\\
0 & \ddots & \ddots & \ddots & 0\\
\vdots & \ddots & \tfrac{1}{2}\lambda & 1-\lambda & \tfrac{1}{2}\lambda\\
0 & \cdots & 0 & \tfrac{1}{2}\lambda & 1-\lambda
\end{pmatrix}.
\]

24
Solving equation (3.15) we get the solution at time level (n + 1),
\[
U^{n+1} = B^{-1}A\,U^n. \tag{3.16}
\]
Previously we had the forward Euler method with only first order accuracy in time, which together with the stability criterion limited our numerical analysis. We see in Theorem 3.1.4 that the Crank-Nicholson method is second order accurate in time. This method gives us no time step restriction.

Theorem 3.1.4. Let U^n and u be the solutions of (3.12) and (3.1), with u six times continuously differentiable. Then
\[
\|U^n - u^n\|_{2,h} \le C\,t^n (h^2 + k^2)\max_{t \le t^n}|u(\cdot,t)|_{C^6} \quad \text{for } t^n \ge 0.
\]
Proof. The proof of this theorem can be found in [8].
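A minimal MATLAB sketch of the Crank-Nicholson stepping (3.14)-(3.16), with the matrices assembled from (3.14); the mesh sizes and the initial data are our own illustrative choices, not taken from the thesis.

M = 10; h = 1/M; k = h;                   % equal steps are fine: no stability restriction
lambda = k/h^2;
e = ones(M-1,1);
B = spdiags([-lambda/2*e (1+lambda)*e -lambda/2*e], -1:1, M-1, M-1);
A = spdiags([ lambda/2*e (1-lambda)*e  lambda/2*e], -1:1, M-1, M-1);
x = (1:M-1)'*h;
U = sin(pi*x);                             % assumed initial data on the interior nodes
for n = 1:20
    U = B \ (A*U);                         % solve B*U^{n+1} = A*U^n, eq. (3.15)
end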

25
3.2 Analysis when t → 0
The problem (2.19) has a degeneracy as t → 0, since the thickness of the melted region is initially zero according to equation (2.34). To handle this we make a change of variables as in the paper by S.L. Mitchell and M. Vynnycky [9]. Another goal here is to obtain an initial condition for the new variables. It is a good idea to work in the transformed coordinates suggested by the analytical solution given by (2.47), and we therefore set
\[
\xi = \frac{x}{s(t)}, \qquad u = h(t)F(\xi, t), \tag{3.17}
\]
where h(t) is chosen to ensure that we get a well-posed problem as t → 0. The space derivatives in the new coordinates are
\[
\frac{\partial u}{\partial x} = \frac{\partial}{\partial \xi}\bigl(h(t)F(\xi,t)\bigr)\frac{\partial\xi}{\partial x} = \frac{h(t)}{s(t)}\frac{\partial F}{\partial \xi} \tag{3.18}
\]
and
\[
\frac{\partial^2 u}{\partial x^2} = \frac{\partial}{\partial \xi}\left(\frac{h(t)}{s(t)}\frac{\partial F}{\partial \xi}\right)\frac{\partial\xi}{\partial x} = \frac{h}{s(t)^2}\frac{\partial^2 F}{\partial \xi^2}; \tag{3.19}
\]
moreover, the time derivative u_t takes the form
\[
\frac{\partial u}{\partial t} = \frac{dh(t)}{dt}F + h(t)\left(\frac{\partial F}{\partial t} + \frac{\partial F}{\partial \xi}\frac{\partial \xi}{\partial t}\right) = \frac{dh}{dt}F + h\left(\frac{\partial F}{\partial t} - \frac{\xi}{s}\frac{ds}{dt}\frac{\partial F}{\partial \xi}\right). \tag{3.20}
\]
By equating u_t and u_xx and multiplying both sides by s², we end up with
\[
h\frac{\partial^2 F}{\partial \xi^2} = s\left(s\frac{dh}{dt}F + sh\frac{\partial F}{\partial t} - \xi h\frac{ds}{dt}\frac{\partial F}{\partial \xi}\right). \tag{3.21}
\]

3.2.1 Constant boundary condition



Let β = K, and for the boundary condition (i) in (2.20) we have
\[
h\frac{\partial^2 F}{\partial \xi^2} = s\left(s\frac{dh}{dt}F + sh\frac{\partial F}{\partial t} - \xi h\frac{ds}{dt}\frac{\partial F}{\partial \xi}\right) \tag{3.22}
\]
subject to
\[
F = 0 \quad \text{at } \xi = 1, \tag{3.23}
\]
\[
F = \frac{1}{h(t)} \quad \text{at } \xi = 0, \tag{3.24}
\]
\[
\beta s(t)\frac{ds}{dt} = -h\frac{\partial F}{\partial \xi} \quad \text{at } \xi = 1. \tag{3.25}
\]

We need to pick a function h(t) such that F is independent of t as t → 0. Since the numerator in equation (3.24) is constant, we can choose any constant value for h(t) and still attain the necessary boundary condition, as in system (2.19). A simple and good choice is therefore h(t) = 1. By using equation (2.51) with α = 1, we get s(t) = 2λ√t. Equation (3.21) then reduces to
\[
\frac{\partial^2 F}{\partial \xi^2} = -2\lambda^2\xi\frac{\partial F}{\partial \xi}, \tag{3.26}
\]
with the boundary conditions
\[
F(1) = 0, \qquad F(0) = 1, \qquad 2\lambda^2\beta = -\left.\frac{\partial F}{\partial \xi}\right|_{\xi=1}. \tag{3.27}
\]
Equation (3.26) can be solved in a similar way as the previous differential equation (2.41), and the solution is
\[
F(\xi) = 1 - \frac{\mathrm{erf}(\lambda\xi)}{\mathrm{erf}(\lambda)}, \tag{3.28}
\]
valid in the limit t → 0, where the constant λ satisfies (2.57).
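As a check (our own one-line verification, not spelled out in the thesis), inserting (3.28) into the flux condition in (3.27) recovers the transcendental equation (2.57):
\[
\left.\frac{\partial F}{\partial \xi}\right|_{\xi=1} = -\frac{2\lambda}{\sqrt{\pi}}\,\frac{e^{-\lambda^2}}{\mathrm{erf}(\lambda)},
\qquad
2\lambda^2\beta = \frac{2\lambda}{\sqrt{\pi}}\,\frac{e^{-\lambda^2}}{\mathrm{erf}(\lambda)}
\;\Longleftrightarrow\;
\lambda e^{\lambda^2}\mathrm{erf}(\lambda) = \frac{1}{\beta\sqrt{\pi}},
\]
which agrees with (2.57) if the rescaled Stefan number equals 1/β (the code in Appendix G indeed calls β the reciprocal Stefan number).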

27
3.2.2 Time-dependent boundary condition
The case (ii) in (2.20) is a time-dependent condition, and we need to make sure that F(0, t) is independent of time in the limit t → 0. The boundary condition is
\[
F = \frac{e^t - 1}{h(t)} \quad \text{at } \xi = 0, \tag{3.29}
\]
and to choose h(t) we look at the Taylor expansion of e^t − 1 around t = 0,
\[
f(t) \sim t + \frac{t^2}{2} + \frac{t^3}{6} + O(t^4). \tag{3.30}
\]
Hence, in the limit t → 0 an appropriate choice is h(t) = t. The Stefan condition then takes the form
\[
s(t)\frac{ds}{dt} = O(t), \tag{3.31}
\]
and by integrating both sides with the initial condition s(0) = 0, we get an expression for the free boundary,
\[
s(t) = \gamma t, \tag{3.32}
\]
where γ is a proportionality constant, which according to Theorem 2.4.2 must be positive. System (3.22) with h(t) = t will in the limit t → 0 become
\[
\frac{\partial^2 F}{\partial \xi^2} = 0 \tag{3.33}
\]
subject to
\[
F(1) = 0, \qquad F(0) = 1, \qquad \gamma^2\beta = -\left.\frac{\partial F}{\partial \xi}\right|_{\xi=1}, \tag{3.34}
\]
and the solution is
\[
F(\xi) = 1 - \xi. \tag{3.35}
\]
From the Stefan condition we get
\[
\gamma = \pm\frac{1}{\sqrt{\beta}}, \tag{3.36}
\]
where the positive solution is chosen, as justified by Theorem 2.4.2.
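Indeed (our own one-line check), inserting (3.35) into (3.34) gives −∂F/∂ξ at ξ = 1 equal to 1, so γ²β = 1 and hence γ = ±1/√β.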

28
3.3 Numerical method for solving the Stefan problem
As seen previously, the Crank-Nicholson method is a sufficiently stable method for the heat equation. We will therefore use the Crank-Nicolson scheme to approximate the solution of the Stefan problem, under the assumption that the discretization of ξ and t is uniform.

3.3.1 Constant boundary condition


For the first case, equation (3.24) at ξ = 0, we have h(t) = 1, and to simplify the algebra we put z ≡ s². The system in equation (3.22) then takes the form
\[
\frac{\partial^2 F}{\partial \xi^2} = z\frac{\partial F}{\partial t} - \frac{\xi}{2}\frac{dz}{dt}\frac{\partial F}{\partial \xi},
\]
\[
F = 0 \quad \text{at } \xi = 1,
\]
\[
\frac{\beta}{2}\frac{dz}{dt} = -\frac{\partial F}{\partial \xi} \quad \text{at } \xi = 1, \tag{3.37}
\]
\[
F = 1 \quad \text{at } \xi = 0,
\]

with the condition z(0) = 0. We apply a discretization using the same method as in Section 3.1.3 to obtain the Crank-Nicolson scheme. The scheme uses a central difference at time t^{n+1/2} and a second-order central difference for the spatial derivative at position ξ_j. Applying the Crank-Nicolson method as in Section 3.1.3, we end up with
\[
\frac12\left(\frac{F_{j+1}^{n+1} - 2F_j^{n+1} + F_{j-1}^{n+1}}{h^2} + \frac{F_{j+1}^n - 2F_j^n + F_{j-1}^n}{h^2}\right)
= z^{n+1/2}\,\frac{F_j^{n+1} - F_j^n}{k}
- \frac{\xi_j}{2}\,\frac{z^{n+1} - z^n}{k}\left[\frac12\left(\frac{F_{j+1}^{n+1} - F_{j-1}^{n+1}}{2h}\right) + \frac12\left(\frac{F_{j+1}^n - F_{j-1}^n}{2h}\right)\right]. \tag{3.38}
\]
The system (3.38) is, according to (3.12), valid for j = 1, 2, ..., M − 1 and n = 0, 1, 2, .... The boundary conditions become
\[
\frac{\beta}{2}\,\frac{z^{n+1} - z^n}{k} = -\frac12\left(\frac{F_{M+1}^{n+1} - F_{M-1}^{n+1}}{2h}\right) - \frac12\left(\frac{F_{M+1}^n - F_{M-1}^n}{2h}\right), \qquad
F_0^{n+1} = 1, \qquad F_M^{n+1} = 0. \tag{3.39}
\]

To be able to approximate this numerically we need to write (3.38) in matrix form, taking into account that F = 1 at ξ = 0. Using the same notation and regrouping system (3.38), we have
\[
\tilde L = \frac{1}{2h^2}\left(F_{j+1}^{n+1} - 2F_j^{n+1} + F_{j-1}^{n+1}\right) - \frac{z^{n+1/2}}{k}F_j^{n+1} + \frac{z'}{8h}\,\xi_j\left(F_{j+1}^{n+1} - F_{j-1}^{n+1}\right) \tag{3.40}
\]

29
and
\[
\tilde R = -\frac{1}{2h^2}\left(F_{j+1}^n - 2F_j^n + F_{j-1}^n\right) - \frac{z^{n+1/2}}{k}F_j^n - \frac{z'}{8h}\,\xi_j\left(F_{j+1}^n - F_{j-1}^n\right), \tag{3.41}
\]
where
\[
z' = \frac{z^{n+1} - z^n}{k}, \qquad z^{n+1/2} = \frac{z^{n+1} + z^n}{2}, \tag{3.42}
\]

and L̃ and R̃ denote the sides at time levels (n + 1) and (n), respectively. To take care of the boundary condition F_0^{n+1} = 1 we evaluate equation (3.40) (equation (3.41) is treated in a similar way) at j = 1:
\[
\tilde L_{j=1} = \frac{1}{2h^2}\left(F_2^{n+1} - 2F_1^{n+1} + \underbrace{F_0^{n+1}}_{=1}\right) - \frac{z^{n+1/2}}{k}F_1^{n+1} + \frac{z'}{8h}\,\xi_1\left(F_2^{n+1} - \underbrace{F_0^{n+1}}_{=1}\right). \tag{3.43}
\]

Sorting out the two constant terms (due to the boundary condition), we can collect them into one constant vector
\[
C \equiv \begin{pmatrix} \dfrac{1}{2h^2} - \dfrac{z'}{8h}\,\xi_1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \tag{3.44}
\]
Furthermore, the coefficient matrix of the first parenthesis in equation (3.40) is
\[
A = \frac{1}{2h^2}\begin{pmatrix}
-2 & 1 & 0 & \cdots & 0\\
1 & -2 & 1 & \ddots & \vdots\\
0 & \ddots & \ddots & \ddots & 0\\
\vdots & \ddots & 1 & -2 & 1\\
0 & \cdots & 0 & 1 & -2
\end{pmatrix} \tag{3.45}
\]
and the second coefficient matrix in equation (3.40) is
\[
B = \frac{1}{8h}\begin{pmatrix}
0 & \xi_1 & 0 & \cdots & \cdots & 0\\
-\xi_2 & 0 & \xi_2 & \ddots & & \vdots\\
0 & -\xi_3 & 0 & \xi_3 & \cdots & 0\\
0 & \ddots & \ddots & \ddots & \ddots & 0\\
\vdots & & \ddots & -\xi_{M-2} & 0 & \xi_{M-2}\\
0 & \cdots & \cdots & 0 & -\xi_{M-1} & 0
\end{pmatrix}. \tag{3.46}
\]
Then the system (3.38) can be written in matrix form as
\[
\left(A - \frac{z^{n+1/2}}{k}I + z'B\right)F^{n+1} = \left(-\frac{z^{n+1/2}}{k}I - z'B - A\right)F^n - 2C, \tag{3.47}
\]
where I is the (M−1)-identity matrix. Denoting the two tridiagonal matrices in equation (3.47) by L and R (as in left and right), equation (3.47) can be written as
\[
L F^{n+1} = R F^n - 2C, \tag{3.48}
\]
where
\[
L = A - \frac{z^{n+1/2}}{k}I + z'B, \qquad R = -\frac{z^{n+1/2}}{k}I - z'B - A, \tag{3.49}
\]
and
\[
C = \begin{pmatrix} \dfrac{1}{2h^2} - \dfrac{z'}{8h}\,\xi_1 \\ 0 \\ \vdots \\ 0\end{pmatrix}. \tag{3.50}
\]

The approximated Stefan condition in equation (3.39) includes the extrapolated values F_{M+1}^{n+1} and F_{M+1}^n. To handle this we evaluate equation (3.38) at j = M, which allows us to eliminate the extrapolated values by some algebra. Writing r = k/h² and v = k/h (as in the code in Appendix G), this leads to the following second order polynomial equation for z^{n+1}:
\[
\frac{\beta\xi_M}{2k}\left(z^{n+1}\right)^2 + \left(-\frac{\beta\xi_M}{k}z^n + \frac{2\beta r}{v} + F_M^{n+1} - F_M^n\right)z^{n+1} + \frac{\beta\xi_M}{2k}\left(z^n\right)^2 - \frac{2\beta r}{v}z^n + \left(2r + z^n\right)F_M^n - 2r\left(F_{M-1}^{n+1} + F_{M-1}^n\right) = 0. \tag{3.51}
\]

31
Flow chart: the constant boundary condition (a condensed MATLAB sketch of this loop is given after the list)

1. Start by introducing all the constants, "allocate" memory for the vectors, and give the initial conditions their values.
2. From equation (2.57) we can use the Newton-Raphson method (found in the appendix) to approximate the physical constant λ related to the given value of β.
3. Start iterating a time loop for the whole problem. Because equation (3.38) involves z at time level (n + 1), we need to iterate that value until a given tolerance is reached.
4. Use the previous value z^n as the starting guess for the free boundary and, with the help of (3.51), update z^{n+1} until the tolerance is reached.
5. Convert the values for the temperature and the free boundary back to the original variables. Save values at each time step for the exact solutions of both the free boundary and the temperature, for later use.
6. Plot both the numerical and the exact values of the temperature and the free boundary against time. Plot the error in the numerical temperature and the numerical free boundary against time.
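The steps above can be condensed into the following MATLAB sketch (our own paraphrase of the program in Appendix G, not a verbatim copy; β, M, the number of time steps and the start from t = k with the limit profile (3.28) are illustrative choices):

M = 10; h = 1/M; k = h; NT = M;
r = k/h^2; v = k/h; beta = 2;
% lambda from eq. (2.57); here we assume St = 1/beta (beta is called the
% reciprocal Stefan number in Appendix G)
lambda = fzero(@(x) x*exp(x^2)*erf(x) - 1/(beta*sqrt(pi)), 0.5);
xi = (0:M)'*h;
e  = ones(M-1,1);
A  = spdiags([e -2*e e], -1:1, M-1, M-1)/(2*h^2);
B  = spdiags([-xi(3:M+1), zeros(M-1,1), xi(1:M-1)], -1:1, M-1, M-1)/(8*h);
I  = speye(M-1);
F  = 1 - erf(lambda*xi)/erf(lambda);   % limit profile (3.28) as initial data
Fo = F(2:M);                           % interior values F_1,...,F_{M-1}
zo = 4*lambda^2*k;                     % z = s^2 at t = k, since s = 2*lambda*sqrt(t)
for n = 1:NT
    zn = zo;                           % starting guess z^{n+1} = z^n
    for it = 1:50                      % inner iteration on z^{n+1}
        zh = (zn + zo)/2;  zp = (zn - zo)/k;
        C  = [1/(2*h^2) - zp*xi(2)/(8*h); zeros(M-2,1)];
        L  = A - zh/k*I + zp*B;
        R  = -zh/k*I - zp*B - A;
        Fn = L \ (R*Fo - 2*C);         % eq. (3.48)
        % quadratic (3.51) for z^{n+1}; here xi_M = 1 and F_M = 0
        a  = beta/(2*k);
        b  = -beta/k*zo + 2*beta*r/v;
        c  = beta/(2*k)*zo^2 - 2*beta*r/v*zo - 2*r*(Fn(M-1) + Fo(M-1));
        znew = (-b + sqrt(b^2 - 4*a*c))/(2*a);
        if abs(znew - zn) < 1e-12, zn = znew; break, end
        zn = znew;
    end
    Fo = Fn;  zo = zn;                 % accept the time step
end
s_numerical = sqrt(zo);                % free boundary position at the final time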

32
3.3.2 Time-dependent boundary condition
For the second case, equation (3.24) at ξ = 0 with h(t) = t, the system reads
\[
t\frac{\partial^2 F}{\partial \xi^2} = s\left[sF + st\frac{\partial F}{\partial t} - \xi t\frac{ds}{dt}\frac{\partial F}{\partial \xi}\right],
\]
\[
F = 0 \quad \text{at } \xi = 1,
\]
\[
\beta s\frac{ds}{dt} = -t\frac{\partial F}{\partial \xi} \quad \text{at } \xi = 1, \tag{3.52}
\]
\[
F = \frac{e^t - 1}{t} \quad \text{at } \xi = 0.
\]

By applying the Crank-Nicolson method as in Section 3.1.3, we end up with
\[
t^{n+\frac12}\,\frac12\left(\frac{F_{j+1}^{n+1} - 2F_j^{n+1} + F_{j-1}^{n+1}}{h^2} + \frac{F_{j+1}^n - 2F_j^n + F_{j-1}^n}{h^2}\right)
= \left(s^{n+\frac12}\right)^2\frac12\left(F_j^{n+1} + F_j^n\right) + t^{n+\frac12}\left(s^{n+\frac12}\right)^2\frac{F_j^{n+1} - F_j^n}{k}
- \xi_j\, t^{n+\frac12} s^{n+\frac12} s'\,\frac12\left(\frac{F_{j+1}^{n+1} - F_{j-1}^{n+1}}{2h} + \frac{F_{j+1}^n - F_{j-1}^n}{2h}\right), \tag{3.53}
\]
where
\[
t^{n+\frac12} = \frac{t^{n+1}+t^n}{2}, \qquad s^{n+\frac12} = \frac{s^{n+1}+s^n}{2}, \qquad s' = \frac{s^{n+1}-s^n}{k}. \tag{3.54}
\]

The system (3.53) is, according to (3.12), valid for j = 1, 2, ..., M − 1 and n = 0, 1, 2, .... The boundary conditions become
\[
\beta s^{n+\frac12}s' = -\frac12\,t^{n+\frac12}\left(\frac{F_{M+1}^{n+1} - F_{M-1}^{n+1}}{2h}\right) - \frac12\,t^{n+\frac12}\left(\frac{F_{M+1}^n - F_{M-1}^n}{2h}\right), \tag{3.55}
\]
\[
\frac12\left(F_0^{n+1} + F_0^n\right) = \frac12\left(\frac{e^{t^{n+1}}-1}{t^{n+1}} + \frac{e^{t^n}-1}{t^n}\right), \tag{3.56}
\]
\[
F_M^{n+1} = 0. \tag{3.57}
\]

The system (3.53) involves s at the next time level, so an iteration is needed. We use the value at level n as the starting guess for s^{n+1}. To update the value of s^{n+1} we use the Stefan condition in (3.55). However, equation (3.55) involves the two extrapolated values F_{M+1}^n and F_{M+1}^{n+1}. To remove those values we evaluate equation (3.53) at j = M, and this leads to a quartic expression in terms of s^{n+1}:
\[
2r\left(t^{n+1}+t^n\right)\left(F_{M-1}^{n+1}+F_{M-1}^n\right) - \frac{4\beta}{v}\left[r + \frac{\xi_M}{4h}\left(\left(s^{n+1}\right)^2 - \left(s^n\right)^2\right)\right]\left(\left(s^{n+1}\right)^2 - \left(s^n\right)^2\right) = 0. \tag{3.58}
\]

33
To be able to approximate this numerically we need to write (3.53) in matrix form, taking into account that ½(F_0^{n+1} + F_0^n) = ½((e^{t^{n+1}} − 1)/t^{n+1} + (e^{t^n} − 1)/t^n) at ξ = 0. Denote the left hand side of equation (3.53) by LH and evaluate at j = 1,
\[
LH_{j=1} := \frac{t^{n+\frac12}}{2h^2}\left[F_2^{n+1} - 2F_1^{n+1} + F_0^{n+1} + F_2^n - 2F_1^n + F_0^n\right], \tag{3.59}
\]
and the right hand side of (3.53) at j = 1,
\[
RH_{j=1} := \frac{\left(s^{n+\frac12}\right)^2}{2}\left(F_1^{n+1} + F_1^n\right) + t^{n+\frac12}\left(s^{n+\frac12}\right)^2\frac{F_1^{n+1} - F_1^n}{k} - \frac{\xi_1 t^{n+\frac12} s^{n+\frac12} s'}{4h}\left[F_2^{n+1} - F_0^{n+1} + F_2^n - F_0^n\right]. \tag{3.60}
\]
Then, by replacing ½(F_0^{n+1} + F_0^n) by ½((e^{t^{n+1}} − 1)/t^{n+1} + (e^{t^n} − 1)/t^n) in both RH_{j=1} and LH_{j=1}, and collecting the terms in a time-dependent vector, we get
\[
D(t) = \begin{pmatrix}\left(\dfrac{t^{n+\frac12}}{2h^2} - \dfrac{\xi_1 t^{n+\frac12} s^{n+\frac12} s'}{4h}\right)\left(\dfrac{e^{t^{n+1}}-1}{t^{n+1}} + \dfrac{e^{t^n}-1}{t^n}\right)\\ 0\\ \vdots\\ 0\end{pmatrix}. \tag{3.61}
\]

Regrouping equation (3.53) we get
\[
L F^{n+1} = R F^n + D(t), \tag{3.62}
\]
with
\[
L := t^{n+\frac12}A - \left(s^{n+\frac12}\right)^2\left(\frac12 + \frac{t^{n+\frac12}}{k}\right)I + t^{n+\frac12}s^{n+\frac12}s'B, \qquad
R := -t^{n+\frac12}A + \left(s^{n+\frac12}\right)^2\left(\frac12 - \frac{t^{n+\frac12}}{k}\right)I - t^{n+\frac12}s^{n+\frac12}s'B, \tag{3.63}
\]
where I is the (M−1)-identity matrix, and the two other coefficient matrices are
\[
A = \frac{1}{2h^2}\begin{pmatrix}
-2 & 1 & 0 & \cdots & 0\\
1 & -2 & 1 & \ddots & \vdots\\
0 & \ddots & \ddots & \ddots & 0\\
\vdots & \ddots & 1 & -2 & 1\\
0 & \cdots & 0 & 1 & -2
\end{pmatrix} \tag{3.64}
\]

34
and
\[
B = \frac{1}{4h}\begin{pmatrix}
0 & \xi_1 & 0 & \cdots & \cdots & 0\\
-\xi_2 & 0 & \xi_2 & \ddots & & \vdots\\
0 & -\xi_3 & 0 & \xi_3 & \cdots & 0\\
0 & \ddots & \ddots & \ddots & \ddots & 0\\
\vdots & & \ddots & -\xi_{M-2} & 0 & \xi_{M-2}\\
0 & \cdots & \cdots & 0 & -\xi_{M-1} & 0
\end{pmatrix}. \tag{3.65}
\]

35
Flow chart: the time-dependent boundary condition

1. Start by introducing all the constants, "allocate" memory for the vectors, and give the initial conditions their values.
2. Start iterating a time loop for the whole problem; because equation (3.53) involves s at time level (n + 1), we need to iterate that value until a given tolerance is reached at every time step.
3. Take care of the 0/0 case separately at n = 1, where the values in the limit t → 0 were analysed in Section 3.2.2.
4. Due to the boundary condition at ξ = 0 we get a time-dependent constant vector D at every time step.
5. Introduce the right and left coefficient matrices for the expression L F^{n+1} = R F^n + D and solve for F^{n+1}.
6. Solve the quartic equation with the Newton-Raphson method (found in the appendix) using the initial guess s^{n+1} = s^n (a small sketch of this step follows the list).
7. When a sufficiently good value of s^{n+1} is found, add the boundary conditions at ξ = 0 and ξ = 1. Avoid the 0/0 case at n = 1.
8. Convert the values for the temperature and the free boundary back to the original variables. Save values at each time step for the exact solutions of both the free boundary and the temperature, for later use.
9. Plot both the numerical and the exact values of the temperature and the free boundary against time. Plot the error in the numerical temperature and the numerical free boundary against time.
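A minimal MATLAB sketch of step 6, Newton-Raphson applied to the quartic (3.58); all numerical values and helper names (g, dg, Fsum, s_old, t_old, t_new) are our own illustrative placeholders, not taken from the thesis.

M = 10; h = 1/M; k = h; r = k/h^2; v = k/h; beta = 1;
t_old = k; t_new = 2*k;                 % two consecutive time levels
s_old = t_old/sqrt(beta);               % s ~ t/sqrt(beta) for small t, eqs. (3.32), (3.36)
Fsum  = 2*h;                            % placeholder for F_{M-1}^{n+1} + F_{M-1}^n (F ~ 1 - xi)
% the quartic (3.58) as a function of s = s^{n+1}, with xi_M = 1
g  = @(s) 2*r*(t_new + t_old)*Fsum ...
     - (4*beta/v)*(r + (s.^2 - s_old^2)/(4*h)).*(s.^2 - s_old^2);
dg = @(s) -(4*beta/v)*( (2*s/(4*h)).*(s.^2 - s_old^2) ...
     + (r + (s.^2 - s_old^2)/(4*h)).*(2*s) );
s = s_old;                              % initial guess s^{n+1} = s^n
for it = 1:50
    s = s - g(s)/dg(s);                 % Newton-Raphson update
end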

36
3.4 Error analysis
In this section we discuss the methods used in the numerical analysis to measure the error. To investigate the error we use, for the temperature, the discrete l²-norm denoted ‖·‖_{2,h} [8, p.133]. The error in the l²-norm at time t^n is given by
\[
E^n := \|U^n - u^n\|_{2,h} = \left(h\sum_{j=0}^{M}\left(U(x_j, t^n) - U_j^n\right)^2\right)^{1/2}, \tag{3.66}
\]
where U(x_j, t^n) is the exact solution at (x_j, t^n) and U_j^n is the numerical solution. The error for the free boundary at time t^n is given by
\[
\left|s(t^n) - s^n\right|. \tag{3.67}
\]

Systems (3.38) and (3.53) involve z^{n+1} and s^{n+1}. To obtain these values we iterate on z^{n+1} and s^{n+1}, respectively, for a given tolerance ε, using the value at level n as the starting guess. We get the new update from the Stefan condition (3.39), and keep updating until the tolerance is reached. If we denote by z_m^{n+1} and s_m^{n+1} the m:th iterated values of z^{n+1} and s^{n+1}, we can write the convergence criteria as
\[
\left|z_{m+1}^{n+1} - z_m^{n+1}\right| < \varepsilon, \qquad \left|s_{m+1}^{n+1} - s_m^{n+1}\right| < \varepsilon.
\]
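For reference, the error measure (3.66) is a one-liner in MATLAB; U_exact and U_num below are placeholder vectors of exact and numerical nodal values (our own names and illustrative data):

M = 10; h = 1/M; x = (0:M)'*h;
U_exact = sin(pi*x);                        % illustrative exact values
U_num   = U_exact + 1e-4*randn(M+1,1);      % illustrative numerical values
E_n = sqrt( h * sum( (U_exact - U_num).^2 ) );   % discrete l2-error, eq. (3.66)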

37
4 Discussion
In this thesis we have introduced the one-dimensional Stefan problem defined on a semi-infinite interval with two different types of boundary conditions. The important Stefan condition for the free boundary was derived under the assumption of equal densities for the two phases. That is not realistic: even for water the small density difference gives a volume change in the transition. Due to the change in volume, another term appears in the heat equation and the form of the Stefan condition also changes. By taking this small but realistic feature into account, our whole problem becomes more complex.

Another simplification in this thesis was the initial temperature of the solid phase. If the solid phase (in our case) had a temperature other than the melting point, a heat flux would flow back into the liquid phase, giving another term in the Stefan condition (cf. equation (2.18)). Furthermore, we would then need to solve two different heat equations, separately describing the temperature in each phase.

The numerical simulations were made with the Crank-Nicholson method, which has second order accuracy in both time and space. This method uses an implicit discretization about (n + 1/2), instead of a more straightforward explicit one. The numerical implementation is harder, but as shown previously, the explicit scheme imposes both lower accuracy and a restriction on our time step.

4.1 Future studies


There clearly exist many opportunities to continue with problems similar to the one I have been working on this semester. In this thesis only one time-dependent boundary condition was analysed, and there exist other, more realistic and exciting conditions, for instance oscillating functions that could describe the variation of temperature over a day. Going further, it would be possible to expand the model and find an adequate real-life problem; it would be very interesting to simulate nature itself.

Furthermore, this was a one-phase problem, so we could extend our interest to a problem where the temperature changes over time in both phases, a two-phase problem. Then the temperature would change in both the liquid phase and the solid phase. In all the problems discussed in this thesis we have also neglected the density difference between the phases, which causes the liquid phase to attain another velocity due to volume changes in the phase transition.

For the confident and brave, the option to go up a dimension always exists: look at some weak solutions and maybe discover something exciting. But as far as I have understood, the level of complexity increases substantially as we go up in dimensions.

38
5 List of notations
x = s(t)  The position of the free boundary.
C  The specific heat capacity.
ρ  The density of the specific material.
K  Thermal conductivity.
α  Thermal diffusivity.
l  Latent heat.
Open set  A set which does not contain any of its boundary points.
R^n  The n-dimensional real Euclidean space, where R = R^1.
Du  Gradient vector := (u_{x_1}, u_{x_2}, ..., u_{x_n}).
div F  The divergence of a vector-valued function := \sum_{i=1}^{n} \partial F_i/\partial x_i.
∂V  The boundary of V.
V̄  The closure of V, defined as ∂V ∪ V.
C  A function defined on a set is said to be of class C (or C^0) if it is continuous. Thus, C is the class of all continuous functions.
C^K  A function defined on a set is of class C^K if the first K derivatives exist and are continuous.
Smooth, or C^∞  A function is said to be smooth if it has derivatives of all orders.
Test volume  An open and connected subset of R^n whose boundary is "smooth" enough; in this essay we demand only a C^1 boundary.
C^2_{(1)}  A function u is of class C^2_{(1)} when its time derivative u_t is continuous and its spatial derivatives u_x and u_{xx} are continuous.
L  A linear operator acting on functions.
erf(x)  The error function, defined as erf(x) = (2/√π) \int_0^x e^{-t^2}\,dt.
O(h^k)  Truncation error: an approximation is of order k with respect to h, which means that the error is proportional to h^k.
‖x‖  Norm, a measure of the length or size of a vector in a vector space.
‖x‖_∞  Maximum norm, the largest absolute value of the entries of the vector x.
‖x‖_2  l²-norm, defined as (\sum_{i=1}^n |x_i|^2)^{1/2}.

39
F Appendix
To visualize the approximate solutions for our problem given in (2.19), we present the plots here. We follow the paper by Mitchell and Vynnycky [9] and put

M = 10,  k = h = 1/M,
β = {0.2, 2}  for boundary condition (i),
β = 1  for boundary condition (ii),

where β = K_L and both boundary conditions are given in the system (2.19).

Plots for the constant boundary condition with β = 2

Using the method described in Section 3.3.1 we present below the approximate solutions for the temperature u and the free boundary s(t), shown in Figure 4 and Figure 5. The errors in u and s(t), as defined in Section 3.4, are shown in Figure 6 and Figure 7.

Figure 4: Plot of the temperature U against the position x for both the numerical and the analytical solution. β = 2.

40
Figure 5: Plot of the free boundary s(t) against the time t^n for both the numerical and the analytical solution. β = 2.

Figure 6: Plot of the l²-error E^n for the temperature against time t^n, with β = 2.

41
Figure 7: Plot of the error |s(t^n) − s^n| in the free boundary against time, for β = 2.

42
Plots for the constant boundary condition with β = 0.2

Using the method described in Section 3.3.1 we present below the approximate solutions for the temperature u and the free boundary s(t), shown in Figure 8 and Figure 9. The errors in s(t) and u, as defined in Section 3.4, are shown in Figure 10 and Figure 11.

Figure 8: Plot of the temperature U against the position x for both the numerical and the analytical solution. β = 0.2.

Figure 9: Plot of the free boundary s(t) against the time t^n for both the numerical and the analytical solution. β = 0.2.

43
Figure 10: Plot of the error |s(t^n) − s^n| in the free boundary against time, for β = 0.2.

Figure 11: Plot of the l²-error E^n for the temperature against time t^n, with β = 0.2.

44
Plots for the time-dependent boundary condition with β = 1

Using the method described in Section 3.3.2 we present below the approximate solutions for the temperature u and the free boundary s(t), shown in Figure 12 and Figure 13. The errors in s(t) and u, as defined in Section 3.4, are shown in Figure 14 and Figure 15.

Figure 12: Plot of the temperature against the position for both the numerical and the analytical solution.

Figure 13: Plot of the free boundary against the time for both the numerical and the analytical solution.

Figure 14: Error in the free boundary s(t) against time.

Figure 15: Error in the temperature U against time t^n.

G Appendix
The Stefan Problem - Constant boundary condition

clear all
close all
%--------------CONSTANTS----------------%

%Number of nodes
M=10;
%Spatial step size h
h=1/M;
%Time step size k
k=h;

%The reciprocal Stefan number, found in the Stefan condition
beta=0.2;
%To simplify, denote
r=k/h^2;
v=k/h;

%Physical constant lambda, decided by the Stefan condition
lambda = newton(beta);

%Number of time steps
NT=M;

%"Allocate" memory
Error_s=zeros(M,1);
Error_F=zeros(M+1,1);
F=zeros(M+1,1); F_analytical=zeros(M+1,1); F_old=zeros(M-1,1);
F_new=F_old;
z=zeros(2,1);
z_error=1e-12;
%The constant vector, taking care of the boundary condition F_0=1
C=zeros(size(F_old));
Error_U=zeros(NT,1);
%The node values for \xi
xi_pos=(0:M)*h;
%Time vector for all time steps
t=(0:NT)*k;

%Coefficient matrices
e=ones(M-1,1);
A=1/(2*h^2)*spdiags([e -2*e e],-1:1,M-1,M-1);
xi_above=xi_pos(1:M-1)';
xi_below=xi_pos(3:M+1)';
B=spdiags([-xi_below xi_above],[-1 1],M-1,M-1);
B=1/(8*h)*B;
I=speye(M-1);

%--------- initial conditions ------------
F=1-erf(lambda*xi_pos)/erf(lambda);
F_old=F(2:M)';
%-----------------------------------------

%Constant coefficient for the equation regarding z^(n+1)
a=beta*xi_pos(M+1)/(2*k);

%Time loop: iteration for z^(n+1) with the initial guess z(2)=z(1),
%i.e. z^(n+1)=z^n
for n=1:NT
    z_error=1e-12;
    while z_error > 1e-13
        %z at n+1/2 and (dz/dt)^n
        z_half=(z(2)+z(1))/2;
        z_prim=(z(2)-z(1))/k;

        %The coefficient matrices for the modified heat equation:
        %L*U^(n+1)=R*U^n
        C(1)=-xi_pos(2)*1/(8*h)*z_prim+1/(2*h^2);
        L=(A-z_half/k*I+z_prim*B);
        R=-(z_half/k*I+A+z_prim*B);

        %Solve for U^(n+1) with our guess
        F_new=L\(R*F_old-2*C);

        %Some coefficients for the quadratic equation giving z^(n+1)
        b=-beta*xi_pos(M+1)/k*z(1)+2*beta*r/v;
        c=beta*xi_pos(M+1)/(2*k)*(z(1))^2-2*beta*r/v*z(1)...
            -2*r*(F_new(M-1)+F_old(M-1));

        %Roots of the quadratic equation regarding z^(n+1)
        z1=(-b+sqrt(b^2-4*a*c))/(2*a);
        z2=(-b-sqrt(b^2-4*a*c))/(2*a);

        %Check if we get complex roots
        if b^2<4*a*c
            disp('Complex roots for z(n+1)')
            break;
        end

        %Saves the old value of z(n+1)
        zold=z(2);

        %z should be positive, for natural reasons; update z(n+1)
        if z1>z2
            z(2)=z1;
        else
            z(2)=z2;
        end
        z_error=abs(z(2)-zold);
    end

    %Add the boundary conditions
    F(2:M)=F_new;
    F(1)=1;
    F(end)=0;

    %----convert back the value of the free boundary at t^(n+1)----%
    s_num=sqrt(z(2));

    %The exact function for the free boundary
    s_exact=2*lambda*sqrt(t(n+1));
    %The error for s at time step t^(n+1)
    Error_s(n)=abs(s_num-s_exact);

    %Convert back to the temperature U=h(t)*F, where h(t)=1
    U_num=1*F;
    %The exact temperature at t^(n+1)
    U_exact=1-erf(xi_pos*s_num/(2*sqrt(t(n+1))))/erf(lambda);

    %Error for the temperature at time step t^n
    Error_U(n)=sqrt(h*sum((U_exact-U_num).^2));

    %------------THE PLOTS------------%
    figure(1)
    plot(xi_pos*s_num,U_num,'kx',s_num*xi_pos,U_exact,'r--')
    if n==1
        legend('Numerical solution','Analytical solution')
    end
    if n==NT
        xlabel('x')
        ylabel('U')
    end
    hold on

    figure(2)
    plot(t(n+1),s_num,'kx',t(n+1),s_exact,'ro')
    if n==1
        legend('Numerical solution','Analytical solution')
    end
    if n==NT
        xlabel('t^n')
        ylabel('s')
    end
    hold on

    %Overwrite the old values for the next time step
    F_old=F_new;
    z=[z(2) z(2)];
    pause(0.1)
end
hold off

%The error in the temperature against time
figure(4)
plot(t(2:end),Error_U,'k--')
legend('Error for the temperature')
xlabel('t^n')
ylabel('||U(t^n)-U^n||_2')

%The error in the free boundary against time
figure(3)
plot(t(2:end),Error_s,'kx')
legend('Error for the free boundary')
xlabel('t^n')
ylabel('|s(t^n)-s^n|')
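A note on running the listing above: it is a plain script, so the constants at the top control which case is computed. As written it uses β = 0.2; changing the single line below would, presumably, reproduce the β = 2 case of Figures 4-7, and the script requires the function newton given at the end of this appendix to be on the MATLAB path.

    beta=2;    %reciprocal Stefan number for boundary condition (i)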
The Stefan Problem - Time-dependent boundary condition

clear all
close all
%--------------CONSTANTS----------------%

%Number of nodes
M=10;
%Spatial step size h
h=1/M;
%Time step size k
k=h;

%The reciprocal Stefan number, found in the Stefan condition
beta=1;
%To simplify, denote
r=k/h^2;
v=k/h;

%Physical constant lambda, decided by the Stefan condition
lambda = 1/sqrt(beta);

%Number of time steps
NT=M;

%"Allocate" memory
%Error_s=zeros(M,1);
Error_U=zeros(M+1,1);
F=zeros(M+1,1); F_analytical=zeros(M+1,1); F_old=zeros(M-1,1);
F_new=F_old;
s=zeros(2,1);
s_error=1e-12;
D=zeros(size(F_old));
%The node values for \xi
xi_pos=(0:M)*h;
%Time vector for all time steps
t=(0:NT)*k;

%Coefficient matrices
e=ones(M-1,1);
A=1/(2*h^2)*spdiags([e -2*e e],-1:1,M-1,M-1);
xi_above=xi_pos(1:M-1)';
xi_below=xi_pos(3:M+1)';
B=spdiags([-xi_below xi_above],[-1 1],M-1,M-1);
B=1/(4*h)*B;
I=speye(M-1);

%--------- initial conditions ------------
F=1-xi_pos;
F_old=F(2:M)';
%-----------------------------------------

for n=1:NT
    s_error=1e-12;
    itt=0;
    %Iteration for s^(n+1) with the initial guess s(2)=s(1)
    while s_error > 1e-13
        %s at time position n+1/2
        s_half=(s(2)+s(1))/2;
        %The forward time difference quotient for s
        s_prim=(s(2)-s(1))/k;
        %t at n+1/2
        t_half=(t(n+1)+t(n))/2;

        %--------Fixing the time constant D(t)--------%
        c2=xi_pos(2)*t_half*s_prim*s_half/(4*h);
        c1=t_half/(2*h^2);

        %Avoid the 0/0-case, since t(1)=0
        if n==1
            Y=(exp(t(n+1))-1)/(t(n+1))+1;
        else
            Y=(exp(t(n+1))-1)/(t(n+1))+(exp(t(n))-1)/(t(n));
        end
        D(1)=(c2-c1)*Y;
        %---------------------------------------------%

        %The coefficient matrices for F^(n+1) and F^n; L*F^(n+1)=R*F^n
        L=t_half*A-((s_half)^2/2+t_half*(s_half)^2/k)*I...
            +t_half*s_half*s_prim*B;
        R=-t_half*A+((s_half)^2/2-t_half*(s_half)^2/k)*I...
            -t_half*s_half*s_prim*B;

        %Solve for the new value at level n+1
        F_new=L\(R*F_old+D);

        %The quartic equation in terms of s^(n+1), where x := s^(n+1)
        fun=@(x) 2*r*(t(n+1)+t(n))*(F_new(M-1)+F_old(M-1))...
            -4*beta/v*(r+xi_pos(M+1)/(4*h)*(x^2-(s(1))^2))*(x^2-(s(1))^2);
        %Saves the old value of s(n+1) before the update
        s_old=s(2);
        %Newton-Raphson method with starting guess s^n
        s(2)=newton3(fun,s(1));

        %The error in the free boundary at the current iteration for s^(n+1)
        s_error=abs(s(2)-s_old);
    end

    %Adding boundary conditions
    F(2:M)=F_new;
    %Avoiding the 0/0 at n=1
    if n==1
        F(1)=1;
    else
        F(1)=(exp(t(n+1))-1)/(t(n+1))+(exp(t(n))-1)/(t(n))-Fendpoint;
    end
    F(end)=0;

    %----convert back all the values at t^(n+1)----%
    U_num=F*t(n+1);
    s_num=s(2);
    %The exact function for the free boundary and the temperature
    s_exact=1/sqrt(beta)*t(n+1);
    %The error for s at time step t^(n+1)
    Error_s(n+1)=abs(s_num-s_exact);
    %The exact temperature at time level t^(n+1)
    U_exact=exp(t(n+1)-s_exact*xi_pos)-1;
    %Error for the temperature at time step t^(n+1)
    Error_U(n+1)=sqrt(h*sum((U_exact-U_num).^2));

    %------------THE PLOTS------------%
    figure(1)
    plot(xi_pos*s_num,U_num,'kx',s_num*xi_pos,U_exact,'r--')
    if n==1
        legend('Numerical solution','Analytical solution')
    end
    if n==NT
        xlabel('x')
        ylabel('U')
    end
    hold on

    figure(2)
    plot(t(n+1),s_num,'kx',t(n+1),s_exact,'ro')
    if n==1
        legend('Numerical solution','Analytical solution')
    end
    if n==NT
        xlabel('t^n')
        ylabel('s')
    end
    hold on

    %-------- Overwrite the old values for the next time loop --------%
    F_old=F_new;
    Fendpoint=F(1);
    s=[s(2) s(2)];
end

%The error in the free boundary against time
figure(3)
plot(t,Error_s,'k--')
legend('Error for the free boundary')
xlabel('t^n')
ylabel('|s(t^n)-s^n|')

%The error in the temperature against time
figure(4)
plot(t,Error_U,'k--')
legend('Error for the temperature')
xlabel('t^n')
ylabel('||U(t^n)-U^n||_2')
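Reading off the verification lines s_exact and U_exact in the listing above: for β = 1 the comparison is made against the classical exact solution of the one-phase Stefan problem with an exponential boundary temperature,

    s(t) = t,    u(x, t) = e^{t-x} - 1,

which satisfies the heat equation u_t = u_xx, vanishes on the free boundary x = s(t), and fulfils the Stefan condition there; in the code the free boundary is written more generally as s_exact = t/√β.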
The Newton Raphson method - constant boundary condition

function [xold] = newton(beta)

%The transcendental equation coming from the Stefan condition, and its
%analytical derivative
f = @(x) sqrt(pi)*beta*x*exp(x^2)*erf(x)-1;
fprim = @(x) sqrt(pi)*beta*(exp(x^2)*erf(x)+2*x^2*exp(x^2)*erf(x)...
    +2*x/sqrt(pi));

xold=10;
while 1
    xnew = xold-f(xold)/fprim(xold);
    temp=xold;
    xold=xnew;
    if abs(temp-xold)<0.0001
        break;
    end
end
end
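Reading off the function handle f above, the root computed by newton is the constant λ of the similarity solution: the iteration solves the transcendental equation

    √π · β · λ · e^{λ²} · erf(λ) = 1

by Newton's method, x_{k+1} = x_k - f(x_k)/f'(x_k), with the derivative fprim written out analytically.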
The Newton Raphson method - time-dependent condition

function [xold] = newton3(f,xold)

%The derivative of f is approximated with a forward difference quotient
    function [fprim]=fprim(xold,h)
        fprim=(f(xold+h)-f(xold))/h;
    end

it=0;
while 1
    xnew = xold-f(xold)/fprim(xold,0.000001);
    temp=xold;
    it=it+1;
    xold=xnew;
    if abs(temp-xold)<0.0001
        break;
    end
end
end
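Unlike newton, the routine newton3 needs no analytical derivative: it replaces f'(x) by the forward difference quotient (f(x+h) - f(x))/h with h = 10^{-6}, so each step reads

    x_{k+1} = x_k - h·f(x_k) / (f(x_k + h) - f(x_k)),

and the iteration stops when two consecutive iterates differ by less than 10^{-4}.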
Bibliography
[1] V. Alexiades and A.D. Solomon. Mathematical Modeling of Melting and
Freezing Processes. Hemisphere Publishing Corporation, Washington, 1993.
[2] D. Andreucci. Lecture notes on the Stefan problem. 2002.

[3] J. Crank. Free and Moving Boundary Problems. Hemisphere Publishing


Corporation, Washington, 1993.
[4] L.C. Evans. Partial Differential Equations. American Mathematical Society, Providence, Rhode Island, 2010.

[5] A. Friedman. Partial Differential Equations of Parabolic Type. Prentice-


Hall, Inc, Englewood Cliffs, N.J., 1964.
[6] S.C. Gupta. The Classical Stefan Problem - Basic Concepts, Modelling and
Analysis. Elsevier Science B.V, Amsterdam, 2003.

[7] G. Lamé and B.P. Clapeyron. Mémoire sur la solidification par refroidissement d'un globe liquide. Ann. Chim. Phys. 47, pp. 250–256, 1831.
[8] S. Larsson and V. Thomée. Partial Differential Equations with Numerical
Methods. Springer, Berlin, 2005.
[9] S.L. Mitchell and M. Vynnycky. Finite-difference methods with increased accuracy and correct initialization for one-dimensional Stefan problems. Applied Mathematics and Computation 215, pp. 1609–1621, 2009.
[10] F. Morgan. Real Analysis and Applications. American Mathematical Soci-
ety, Providence, Rhode Island, 2005.

[11] M.H. Protter and H.F. Weinberger. Maximum Principles in Differential


Equations. Prentice-Hall, Inc, London, 1967.
[12] L.I. Rubenstein. The Stefan Problem. American Mathematical Society,
Providence, Rhode Island, 1971.
[13] D.V. Schroeder. An Introduction to Thermal Physics. Addison Wesley
Longman, Boston, Massachusetts, 2000.
[14] J. Stefan. Über die Theorie der Eisbildung, insbesondere über die Eisbil-
dung im Polarmeere. Ann. Physik Chemie 42, pp. 269–286, 1891.
[15] C. Vuik. Some historical notes about the Stefan problem. Nieuw Archief
voor Wiskunde, 4e serie 11 (2): 157–167, 1993.
