
ISSN 0070-7414

Sgríbhinní Institiúid Árd-Léinn Bhaile Átha Cliath


Sraith. A. Uimh 30

Communications of the Dublin Institute for Advanced Studies


Series A (Theoretical Physics), No. 30

LECTURES ON
MATHEMATICAL
STATISTICAL MECHANICS
By

S. Adams

DUBLIN

Institiúid Árd-Léinn Bhaile Átha Cliath

Dublin Institute for Advanced Studies

2006
Contents

1 Introduction

2 Ergodic theory
2.1 Microscopic dynamics and time averages
2.2 Boltzmann's heuristics and ergodic hypothesis
2.3 Formal Response: Birkhoff and von Neumann ergodic theories
2.4 Microcanonical measure

3 Entropy
3.1 Probabilistic view on Boltzmann's entropy
3.2 Shannon's entropy

4 The Gibbs ensembles
4.1 The canonical Gibbs ensemble
4.2 The Gibbs paradox
4.3 The grandcanonical ensemble
4.4 The "orthodicity problem"

5 The Thermodynamic limit
5.1 Definition
5.2 Thermodynamic function: Free energy
5.3 Equivalence of ensembles

6 Gibbs measures
6.1 Definition
6.2 The one-dimensional Ising model
6.3 Symmetry and symmetry breaking
6.4 The Ising ferromagnet in two dimensions
6.5 Extreme Gibbs measures
6.6 Uniqueness
6.7 Ergodicity

7 A variational characterisation of Gibbs measures

8 Large deviations theory
8.1 Motivation
8.2 Definition
8.3 Some results for Gibbs measures

9 Models
9.1 Lattice Gases
9.2 Magnetic Models
9.3 Curie-Weiss model
9.4 Continuous Ising model
Preface
In these notes we give an introduction to mathematical statistical mechanics, based on the six lectures given at the Max Planck Institute for Mathematics in the Sciences in February/March 2006. The material covers more than what was said in the lectures; in particular, examples and some proofs are worked out, and the Curie-Weiss model is discussed in Section 9.3. The course partially grew out of lectures given for final-year students at University College Dublin in spring 2004. Parts of the notes are inspired by notes of Joe Pulé at University College Dublin.
The aim is to motivate the theory of Gibbs measures starting from basic
principles in classical mechanics. The first part covers Sections 1 to 5 and
gives a route from physics to the mathematical concepts of Gibbs ensembles
and the thermodynamic limit. The Sections 6 to 8 develop a mathematical
theory for Gibbs measures. In Subsection 6.4 we give a proof of the ex-
istence of phase transitions for the two-dimensional Ising model via Peierls
arguments. Translation invariant Gibbs measures are characterised by a vari-
ational principle, which we outline in Section 7. Section 8 gives a quick intro-
duction to the theory of large deviations, and Section 9 covers some models
of statistical mechanics. The part about Gibbs measures is an excerpt of
parts of the book by Georgii ([Geo88]). In these notes we do not discuss
Boltzmann’s equation, nor fluctuations theory nor quantum mechanics.
Some comments on the literature. More detailed hints are found throughout the notes. The books [Tho88] and [Tho79] are suitable for people who want to learn more about the physics behind the theory. A standard reference in physics is still the book [Hua87]. The route from microphysics to macrophysics is well described in [Bal91] and [Bal92]. The old book [Kur60] is a nice starting point, going from classical mechanics towards an axiomatic setup for statistical mechanics. The following books are at a higher level, with special emphasis on the mathematics. The first one is [Khi49], where the setup for the microcanonical measures is given in detail (although not in the manner used nowadays). The standard reference for mathematical statistical mechanics is the book [Rue69] by Ruelle. Further developments are in [Rue78] and [Isr79]. The book [Min00] contains notes for a lecture and presents in detail the two-dimensional Ising model and the Pirogov-Sinai theory; the latter we do not study here. A nice overview of deep questions in statistical mechanics is given in [Gal99], whereas [Ell85] and [Geo79], [Geo88] put their emphasis on probability theory and large deviations theory. The book [EL02] gives a very nice introduction to the philosophical background as well as the basic skeleton of statistical mechanics.

I hope these lectures will motivate further reading and perhaps even fur-
ther research in this interesting field of mathematical physics and stochastics.
Many thanks to Thomas Blesgen for reading the manuscript. In particular I
thank Tony Dorlas, who gave valuable comments and improvements.

Leipzig, Easter 2006 Stefan Adams

1 Introduction
The aim of equilibrium Statistical Mechanics is to derive all the equilibrium
properties of a macroscopic system from the dynamics of its constituent par-
ticles. Thus its aim is not only to derive the general laws of thermodynamics
but also the thermodynamic functions of a given system. Mathematical Sta-
tistical Mechanics has originated from the desire to obtain a mathematical
understanding of a class of physical systems of the following nature:
• The system is an assembly of identical subsystems.
• The number of subsystems is large (N ∼ 10^23; Avogadro's number is 6.022 × 10^23, and 1 cm^3 of hydrogen contains about 2.7 × 10^19 molecules/atoms).
• The interactions between the subsystems are such as to produce a thermodynamic behaviour of the system.

Thermodynamic behaviour is phenomenological and refers to a macroscopic description of the system. Here "macroscopic description" is defined operationally: the subsystems are considered as small and are not individually observed.

Thermodynamic behaviour:
(1) Equilibrium states are defined operationally. A state of an isolated system
tends to an equilibrium state as time tends to +∞ (approach to equilibrium).
(2) An equilibrium state of a system consists of one or more macroscopically
homogeneous regions called phases.
(3) Equilibrium states can be parametrised by a finite number of thermody-
namic parameters (e.g. temperature, volume, density, etc) which determine
the thermodynamic functions (e.g. free energy, pressure, entropy, magneti-
sation, etc).
It is believed that the thermodynamic functions depend piecewise analytically (or smoothly) on the parameters and that singularities correspond to changes in the phase structure of the system (phase transitions). Classical thermodynamics consists of laws governing the dependence of the thermodynamic functions on the experimentally accessible parameters. These laws are derived from experiments with macroscopic systems and thus are not derived from a microscopic description. Basic principles are the zeroth, first and second law as well as the equation of state for the ideal gas.
The art of the mathematical physicist consists in finding a mathematical justification for the statements (1)-(3) from a microscopic description. Complete microscopic information is not accessible and is not of interest. Hence, despite the determinism of the dynamical laws for the subsystems, randomness comes into play due to the lack of knowledge (macroscopic description). The large number of subsystems is replaced in a mathematical idealisation by infinitely many subsystems, such that the extensive quantities are scaled so as to stay finite in that limit. Stochastic limit procedures such as the law of large numbers, central limit theorems and large deviations principles will provide appropriate tools. In these notes we will give a glimpse of the basic concepts. In the second chapter we are concerned mainly with the Mechanics in the name Statistical Mechanics. Here we motivate the basic concepts of ensembles via the Hamiltonian equations of motion, as done by Boltzmann and Gibbs.

2 Ergodic theory
2.1 Microscopic dynamics and time averages
We consider in the following N identical classical particles moving in R^d, d ≥ 1, or in a finite box Λ ⊂ R^d. We idealise these particles as point masses having the mass m. All spatial positions and momenta of the single particles are elements in the phase space

Γ = (R^d × R^d)^N   or   Γ_Λ = (Λ × R^d)^N.   (2.1)
Specify, at a given instant of time, the values of the positions and momenta of the N particles. Hence, one has to specify 2dN coordinate values that determine a single point in the phase space Γ, respectively Γ_Λ. Each single point in the phase space corresponds to a microscopic state of the given system of N particles. Now the question arises whether the 2dN-dimensional continuum of microscopic states is reasonable. Going back to Boltzmann [Bol84], it seems that at that time the 2dN-dimensional continuum was not really deeply accepted ([Bol74], p. 169):

Therefore if we wish to get a picture of the continuum in words,


we first have to imagine a large, but finite number of particles
with certain properties and investigate the behaviour of the en-
sembles of such particles. Certain properties of the ensemble may
approach a definite limit as we allow the number of particles ever
more to increase and their size ever more to decrease. Of these
properties one can then assert that they apply to a continuum,
and in my opinion this is the only non-contradictory definition of
a continuum with certain properties...

and likewise the phase space itself is really thought of as divided into a finite number of very small cells of essentially equal dimensions, each of which determines the position and momentum of each particle with a maximum precision. Here the maximum precision that the most perfect measurement apparatus can possibly provide is meant. Thus, for any position and momentum coordinates,

δp_i^{(j)} δq_i^{(j)} ≥ h,   for i = 1, ..., N, j = 1, ..., d,

with h = 6.62 × 10^{-34} Js being Planck's constant. Microscopic states of the system of N particles should thus be represented by phase space cells consisting of points in R^{2dN} (positions and momenta together) with given centres and cell volumes h^{dN}. In principle all that follows should be formulated within this cell picture, in particular when one is interested in an approach to the quantum case. However, in these lectures we will stick to the mathematical idealisation of the 2dN-dimensional continuum of microscopic states, because all important properties remain nearly unchanged upon going over to the phase cell picture.

Let two functions W: R^d → R and V: R_+ → R be given. The energy of the system of N particles is a function of the positions and momenta of the single particles, and it is called the Hamiltonian or Hamilton function. It is of the form

H(q, p) = Σ_{i=1}^{N} ( p_i²/(2m) + W(q_i) ) + Σ_{1≤i<j≤N} V(|q_i − q_j|),   (2.2)

where q = (q_1, ..., q_N), p = (p_1, ..., p_N). Here, the function W is called the external potential; it models the effect of external forces (walls, pressure, ...) or external fields (gravitation, magnetic field, ...) at the spatial positions. The function V is called the pair potential, depending only on the spatial distance of each pair of particles. More general many-particle interaction potentials can also be considered (see [Rue69] for details). In the following we assume that the Hamiltonian H: Γ → R is twice continuously differentiable, and we abbreviate n = dN. The phase space dynamics is governed by Hamilton's equations of motion

q̇_i = ∂H/∂p_i   and   ṗ_i = −∂H/∂q_i,   i = 1, ..., N,   (2.3)
where the dot denotes as usual differentiation with respect to the time vari-
able. If J denotes the 2n × 2n matrix

J = ( 0, 1l_n ; −1l_n, 0 ),

with 1l_n the identity in R^n, then the Hamiltonian vector field is given as

v: Γ → Γ,  x ↦ J∇H(x),

with x = (q, p) ∈ R^{2n} and

∇H(x) = ( ∂H/∂q_1, ..., ∂H/∂q_N, ∂H/∂p_1, ..., ∂H/∂p_N ).
With the vector field v we associate the differential equation

(d/dt) x(t) = v(x(t)),   (2.4)

where x(t) denotes a single microstate of the system at time t ∈ R. For each point x ∈ Γ there is one and only one function x: R → Γ such that x(0) = x and dx(t)/dt = v(x(t)) for all t ∈ R. For any t ∈ R we define a phase space map

φ_t: Γ → Γ,  x ↦ φ_t(x) = x(t).
From the uniqueness property we get that Φ = {φt : t ∈ R} is a one-parameter
group which is called a Hamiltonian flow. Hamiltonian flows have the
following property.
Lemma 2.1 Let Φ be a Hamiltonian flow with Hamiltonian H. Then any function F = f ∘ H of H is invariant under Φ:

F ∘ φ_t = F   for all t ∈ R.

Proof. The proof follows from the chain rule and ⟨x, Jx⟩ = 0. Recall that if G: R^n → R^m, then G'(x) is the m × n matrix ( ∂G_i/∂x_j (x) )_{i=1,...,m; j=1,...,n} for x ∈ R^n. Then

(d/dt) F(φ_t(x)) = F'(φ_t(x)) (dφ_t/dt)(x)
                 = f'(H(φ_t(x))) H'(φ_t(x)) (dφ_t/dt)(x)
                 = f'(H(φ_t(x))) (∇H(φ_t(x)))^T (dφ_t/dt)(x)
                 = f'(H(φ_t(x))) ⟨∇H(φ_t(x)), (dφ_t/dt)(x)⟩
                 = f'(H(φ_t(x))) ⟨∇H(φ_t(x)), J∇H(φ_t(x))⟩ = 0.  □

The following theorem is the well known Liouville’s Theorem.

Theorem 2.2 (Liouville's Theorem) The Jacobian |det φ_t'(x)| of a Hamiltonian flow is constant and equal to 1.

Proof. Let M(t) and A(t) be linear mappings such that

dM(t)/dt = A(t) M(t)

for all t ≥ 0; then

det M(t) = det M(0) exp( ∫_0^t trace A(s) ds ).

Now

dφ_t(x)/dt = (v ∘ φ_t)(x).

Thus

dφ_t'(x)/dt = v'(φ_t(x)) φ_t'(x),

so that det φ_t'(x) = exp( ∫_0^t trace v'(φ_s(x)) ds ), because φ_0'(x) is the identity map on Γ. Now

v(x) = J∇H(x)   and   v'(x) = J H''(x),

where H''(x) = ( ∂²H/∂x_i∂x_j ) is the Hessian of H at x. Since H is twice continuously differentiable, H''(x) is symmetric. Thus

trace(JH''(x)) = trace((JH''(x))^T) = trace((H''(x))^T J^T) = trace(H''(x)(−J)) = −trace(JH''(x)).

Therefore trace(JH''(x)) = 0 and det(φ_t'(x)) = 1.  □
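The two statements above lend themselves to a quick numerical check. The following sketch (not part of the original notes; it assumes Python with numpy and a symplectic leapfrog scheme, which is one possible choice of integrator) integrates Hamilton's equations (2.3) for a one-dimensional harmonic oscillator: the energy stays close to its initial value along the discrete flow, and the Jacobian determinant of the one-step map is 1, mirroring Liouville's theorem.

# Numerical illustration of Lemma 2.1 and Theorem 2.2 for the 1d harmonic
# oscillator H(q, p) = p^2/(2m) + k q^2 / 2 (a sketch, not from the notes).
import numpy as np

m, k, dt = 1.0, 1.0, 0.01

def H(q, p):
    return p**2 / (2 * m) + 0.5 * k * q**2

def leapfrog(q, p):
    p = p - 0.5 * dt * k * q        # half kick:  dp/dt = -dH/dq = -k q
    q = q + dt * p / m              # drift:      dq/dt =  dH/dp = p/m
    p = p - 0.5 * dt * k * q        # half kick
    return q, p

q, p = 1.0, 0.0
E0 = H(q, p)
for _ in range(10_000):
    q, p = leapfrog(q, p)
print("energy drift:", abs(H(q, p) - E0))      # remains small

# Jacobian of the one-step map by finite differences: equals 1 (Liouville).
eps = 1e-6
J = np.zeros((2, 2))
for j, dx in enumerate([(eps, 0.0), (0.0, eps)]):
    qp, pp = leapfrog(q + dx[0], p + dx[1])
    qm, pm = leapfrog(q - dx[0], p - dx[1])
    J[:, j] = [(qp - qm) / (2 * eps), (pp - pm) / (2 * eps)]
print("det of one-step map:", np.linalg.det(J))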

From Lemma 2.1 and Theorem 2.2 it follows that a probability measure on the phase space Γ, i.e., an element of the set P(Γ, B_Γ) of probability measures on Γ with Borel σ-algebra B_Γ, whose Radon-Nikodym density with respect to the Lebesgue measure is a function of the Hamiltonian H alone, is stationary with respect to the Hamiltonian flow Φ = {φ_t : t ∈ R}.

Corollary 2.3 Let µ ∈ P(Γ, B_Γ) with density ρ = F ∘ H for some function F: R → R be given, i.e., µ(A) = ∫_A ρ(x) dx for any A ∈ B_Γ. Then

µ ∘ φ_t^{−1} = µ   for any t ∈ R.

Proof. We have

µ(A) = ∫_Γ 1l_A(x) ρ(x) dx = ∫_Γ 1l_A(φ_t(x)) ρ(φ_t(x)) |det φ_t'(x)| dx
     = ∫_Γ 1l_{φ_t^{−1}A}(x) ρ(x) dx = µ(φ_t^{−1} A).  □

For such a stationary probability measure one gets a unitary group of time
evolution operators in an appropriate Hilbert space as follows.

Theorem 2.4 (Koopman's Lemma) Let Γ_1 be a subset of the phase space Γ invariant under the flow Φ, i.e., φ_t Γ_1 ⊂ Γ_1 for all t ∈ R. Let µ ∈ P(Γ_1, B_{Γ_1}) be a probability measure on Γ_1 stationary under the flow Φ, that is µ ∘ φ_t^{−1} = µ for all t ∈ R. Define U_t f = f ∘ φ_t for any t ∈ R and any function f ∈ L²(Γ_1, µ); then {U_t : t ∈ R} is a unitary group of operators in the Hilbert space L²(Γ_1, µ).

Proof. Since U_t U_{−t} = U_0 = I, U_t is invertible, and thus we only have to prove that U_t preserves inner products:

⟨U_t f, U_t g⟩ = ∫ (f̄ ∘ φ_t)(x) (g ∘ φ_t)(x) µ(dx) = ∫ (f̄ g)(φ_t(x)) µ(dx)
             = ∫ (f̄ g)(x) (µ ∘ φ_t^{−1})(dx) = ∫ (f̄ g)(x) µ(dx) = ⟨f, g⟩.  □

Remark 2.5 (Boundary behaviour) If the particles move inside a box Λ ⊂ R^d of finite volume according to the equations (2.3), respectively (2.4), then these equations of motion do not hold when one of the particles reaches the boundary of Λ. Therefore it is necessary to add to these equations some rule for the reflection of the particles at the inner boundary of the domain Λ. For example, we can consider the elastic reflection condition: the angle of incidence is equal to the angle of reflection. Formally, such a rule can be specified by a boundary potential V_bc.

We discuss briefly Boltzmann's proposal for calculating measured values of observables. Observables are bounded continuous functions on the phase space. Let Γ_1 be a subset of phase space invariant under the flow Φ, i.e. φ_t Γ_1 ⊂ Γ_1 for all t ∈ R. Suppose that f: Γ → R is an observable and suppose that the system is in Γ_1, so that it never leaves Γ_1. Boltzmann proposed that when we make a measurement it is not sharp in time but takes place over a period of time which is long compared to, say, times between collisions. Therefore we can represent the observed value by

f̄(x) = lim_{T→∞} (1/2T) ∫_{−T}^{T} dt (f ∘ φ_t)(x).

Suppose that µ is a probability measure on Γ_1 invariant with respect to φ_t; then

∫_{Γ_1} f̄(x) µ(dx) = lim_{T→∞} (1/2T) ∫_{−T}^{T} dt ∫_{Γ_1} (f ∘ φ_t)(x) µ(dx)
                   = lim_{T→∞} (1/2T) ∫_{−T}^{T} dt ∫_{Γ_1} f(x) (µ ∘ φ_t^{−1})(dx)
                   = lim_{T→∞} (1/2T) ∫_{−T}^{T} dt ∫_{Γ_1} f(x) µ(dx)
                   = ∫_{Γ_1} f(x) µ(dx).

Assume now that the observed value is independent of where the system is at t = 0 in Γ_1, i.e. if f̄(x) = f̄ for a constant f̄, then

∫_{Γ_1} f̄(x) µ(dx) = ∫_{Γ_1} f̄ µ(dx) = f̄.

Therefore

f̄ = ∫_{Γ_1} f(x) µ(dx).

We have made two assumptions in this argument:

(1) lim_{T→∞} (1/2T) ∫_{−T}^{T} dt (f ∘ φ_t)(x) exists.

(2) f̄(x) is constant on Γ_1.

Statement (1) has been proved by Birkhoff (Birkhoff’s pointwise ergodic the-
orem (see section 2.3 below)): f¯(x) exists almost everywhere. We shall prove
a weaker version of this, Von Neumann’s ergodic theorem.
Statement (2) is more difficult and we shall discuss it later.

The continuity of the Hamiltonian flow entails that for each f ∈ C_b(Γ) and each x ∈ Γ the function

f_x: R → R,  t ↦ f_x(t) = f(x(t))   (2.5)

is a continuous function of the time t. For any time-dependent function ϕ we define the limit

⟨ϕ⟩ := lim_{T→∞} (1/2T) ∫_{−T}^{T} dt ϕ(t)   (2.6)

as the time average.
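To make the time average (2.6) concrete, the following sketch (an illustration assuming Python with numpy, not part of the notes) evaluates f̄(x) for the observable f(q, p) = q_1² along a trajectory of two uncoupled harmonic oscillators with incommensurate frequencies; the running averages settle down as T grows, as required by assumption (1).

# Time average (2.6) of an observable along a Hamiltonian trajectory (sketch).
# For H = (p1^2 + w1^2 q1^2)/2 + (p2^2 + w2^2 q2^2)/2 the flow is explicit,
# so f(phi_t(x)) can be evaluated on a fine time grid and averaged.
import numpy as np

w = np.array([1.0, np.sqrt(2.0)])           # incommensurate frequencies
q0 = np.array([1.0, 0.5])
p0 = np.array([0.0, 0.3])

def observable_along_orbit(t):
    """f(phi_t(x)) for f(q, p) = q_1^2, evaluated at an array of times t."""
    q1 = q0[0] * np.cos(w[0] * t) + (p0[0] / w[0]) * np.sin(w[0] * t)
    return q1 ** 2

for T in [10.0, 100.0, 1_000.0, 10_000.0]:
    t = np.linspace(-T, T, 400_001)
    print(T, observable_along_orbit(t).mean())
# the running averages approach (q0[0]^2 + (p0[0]/w[0])^2)/2 = 0.5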

2.2 Boltzmann’s heuristics and ergodic hypothesis


Boltzmann’s argument (1896-1898) for the introduction of ensembles in sta-
tistical mechanics can be broken down into steps which were unfortunately
entangled.
1. Step: Find a set of time dependent functions ϕ which admit an invari-
ant mean hϕi like (2.6).
2. Step: Find a reasonable set of observables, which ensure that the time
average of each observable over an orbit in the phase space is indepen-
dent of the orbit.
Let C_b(R) be the set of all bounded continuous functions on the real line R, equip it with the supremum norm ‖ϕ‖_∞ = sup_{t∈R} |ϕ(t)|, and define

M = { ϕ ∈ C_b(R) : ⟨ϕ⟩ = lim_{T→∞} (1/2T) ∫_{−T}^{T} dt ϕ(t) exists }.   (2.7)

Lemma 2.6 There exist positive linear functionals λ: C_b(R) → R, normalised to 1 and invariant under time translations, i.e., such that

(i) λ(ϕ) ≥ 0 for all ϕ ∈ C_b(R) with ϕ ≥ 0,
(ii) λ is linear,
(iii) λ(1) = 1,
(iv) λ(ϕ_s) = λ(ϕ) for all s ∈ R, where ϕ_s(t) = ϕ(t − s), t, s ∈ R,

and such that λ(ϕ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} dt ϕ(t) for all ϕ ∈ C_b(R) for which this limit exists.

Our time evolution

(t, x) ∈ R × Γ ↦ φ_t(x) = x(t)

is continuous, and thus we can substitute f_x from (2.5) for ϕ in the above result. This allows us to define averages of observables as follows.

Lemma 2.7 For every f ∈ C_b(Γ) and every x ∈ Γ, there exists a time-invariant mean λ_x given by

λ_x: C_b(Γ) → R,  f ↦ λ_x(f) = λ(f_x),

with λ_x depending only on the orbit {x(t) : t ∈ R, x(0) = x}.

For any E ∈ R_+ let

Σ_E = {(q, p) ∈ Γ : H(q, p) = E}

denote the energy surface for the energy value E for a given Hamiltonian H for the system of N particles.

The strict ergodicity hypothesis

The energy surface contains exactly one orbit, i.e. for every x ∈ Γ
and E ≥ 0
{x(t) : t ∈ R, x(0) = x} = ΣE .

There is a more realistic mathematical version of this conjecture.

The ergodicity hypothesis

Each orbit in the phase space is dense on its energy surface, i.e.
{x(t) : t ∈ R, x(0) = x} is a dense subset of ΣE .

2.3 Formal Response: Birkhoff and von Neumann ergodic theories
We present in this section briefly the important results in the field of ergodic
theory initiated by the ergodic hypothesis. For that we introduce the notion
of a classical dynamical system.

Notation 2.8 (Classical dynamical system) A classical dynamical system is a quadruple (Γ, F, µ; Φ) consisting of a probability space (Γ, F, µ), where F is a σ-algebra on Γ, a one-parameter (additive) group T (R or Z), and a group Φ of actions φ: T × Γ → Γ, (t, x) ↦ φ_t(x), of the group T on the phase space Γ, such that the following holds.

(a) f_T: T × Γ → R, (t, x) ↦ f(φ_t(x)), is measurable for any measurable f: Γ → R,

(b) φ_t ∘ φ_s = φ_{t+s} for all s, t ∈ T,

(c) µ(φ_t(A)) = µ(A) for all t ∈ T and A ∈ F.

Theorem 2.9 (Birkhoff) Let (Γ, F, µ; Φ) be a classical dynamical system. For every f ∈ L¹(Γ, F, µ), let

λ_x^T(f) = (1/2T) ∫_{−T}^{T} dt f(φ_t(x)).

Then there exists an event A_f ∈ F with µ(A_f) = 1 such that

(i) λ_x(f) = lim_{T→∞} λ_x^T(f) exists for all x ∈ A_f,

(ii) λ_{φ_t(x)}(f) = λ_x(f) for all (t, x) ∈ R × A_f,

(iii) ∫_Γ µ(dx) λ_x(f) = ∫_Γ µ(dx) f(x).

Proof. [Bir31] or in the book [AA68]. 

Note that in Birkhoff's theorem one has convergence almost surely. There exists a weaker version, the following ergodic theorem of von Neumann. We restrict ourselves in the following to classical dynamical systems with T = R, the real time. Let H = L²(Γ_1, F, µ) and define U_t f = f ∘ φ_t for any f ∈ H. Then by Koopman's lemma U_t is unitary for any t ∈ R.

Theorem 2.10 (Von Neumann's Mean Ergodic Theorem) Let

M = {f ∈ H : U_t f = f for all t ∈ R};

then for any g ∈ H,

g_T := (1/T) ∫_0^T dt U_t g

converges to Pg as T → ∞, where P is the orthogonal projection onto M.

For the proof of this theorem we need the following discrete version.

Theorem 2.11 Let H be a Hilbert space and let U: H → H be a unitary operator. Let N = {f ∈ H : Uf = f} = ker(U − I); then

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} U^n g = Qg,

where Q is the orthogonal projection onto N.

Proof of Theorem 2.10. Let U = U_1 and g = ∫_0^1 dt U_t f; then

U^n g = ∫_0^1 dt U_{n+t} f = ∫_n^{n+1} dt U_t f,

and thus

Σ_{n=0}^{N−1} U^n g = ∫_0^N dt U_t f.

Therefore (1/N) ∫_0^N dt U_t f converges as N → ∞. For T ∈ R_+, writing T = N + r with 0 ≤ r < 1 and N ∈ N, we deduce that (1/T) ∫_0^T dt U_t f converges as T → ∞. Define the operator P̃ by

P̃ f = lim_{T→∞} (1/T) ∫_0^T dt U_t f.

Note that P̃ f ∈ M and

P̃* f = lim_{T→∞} (1/T) ∫_0^T dt U_{−t} f ∈ M.

If f ∈ M then clearly P̃ f = f, while if f ∈ M^⊥ then for all g ∈ H, ⟨P̃ f, g⟩ = ⟨f, P̃* g⟩ = 0, since P̃* g ∈ M, and therefore P̃ f = 0. Thus P̃ = P.  □

Proof of the discrete form, Theorem 2.11. We first check that

[ker(I − U)]^⊥ = cl(Range(I − U)),

where cl denotes the closure. If f ∈ ker(I − U) and g ∈ Range(I − U), then for some h,

⟨f, g⟩ = ⟨f, (I − U)h⟩ = ⟨(I − U*)f, h⟩ = −⟨(I − U)f, Uh⟩ = 0.

Thus Range(I − U) ⊂ [ker(I − U)]^⊥. Since [ker(I − U)]^⊥ is closed,

cl(Range(I − U)) ⊂ [ker(I − U)]^⊥.

If f ∈ [Range(I − U)]^⊥, then for all g,

0 = ⟨f, (I − U)U* g⟩ = ⟨(I − U*)f, U* g⟩ = −⟨(I − U)f, g⟩.

Therefore (I − U)f = 0, that is f ∈ ker(I − U). Thus

[Range(I − U)]^⊥ ⊂ ker(I − U).

Then

[ker(I − U)]^⊥ ⊂ [Range(I − U)]^{⊥⊥} = cl(Range(I − U)).

If g ∈ Range(I − U), then g = (I − U)h for some h. Therefore

(1/N) Σ_{n=0}^{N−1} U^n g = (1/N) { h − Uh + Uh − U²h + U²h − U³h + ... + U^{N−1}h − U^N h } = (1/N) { h − U^N h }.

Thus

‖ (1/N) Σ_{n=0}^{N−1} U^n g ‖ ≤ 2‖h‖/N → 0 as N → ∞.

Approximating elements of cl(Range(I − U)) by elements of Range(I − U), we have that (1/N) Σ_{n=0}^{N−1} U^n g → 0 = Qg for all

g ∈ [ker(I − U)]^⊥ = cl(Range(I − U)).

If g ∈ ker(I − U), then

(1/N) Σ_{n=0}^{N−1} U^n g = g = Qg.  □
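The discrete statement can be checked directly on a small example. The sketch below (assuming Python with numpy; the matrix and vector are made up for illustration, not part of the notes) takes an orthogonal matrix with one eigenvalue 1 and one nontrivial rotation block and verifies that the Cesàro averages (1/N) Σ U^n g converge to the orthogonal projection of g onto ker(U − I).

# Numerical check of the discrete mean ergodic theorem (Theorem 2.11).
import numpy as np

theta = 0.7                                     # rotation angle of the nontrivial block
U = np.array([[1.0, 0.0,            0.0           ],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])   # unitary (orthogonal)

g = np.array([2.0, 1.0, -1.0])
Qg = np.array([g[0], 0.0, 0.0])                 # projection onto ker(U - I) = span(e_1)

avg = np.zeros(3)
Ung = g.copy()
N = 100_000
for n in range(N):
    avg += Ung                                  # accumulate U^n g, n = 0, ..., N-1
    Ung = U @ Ung
avg /= N
print(np.linalg.norm(avg - Qg))                 # tends to 0 like 1/N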

Definition 2.12 (Ergodicity) Let Φ = (φt )t∈R be a flow on Γ1 and µ a


probability measure on Γ1 which is stationary with respect to Φ. Φ is said to
be ergodic if for every measurable set F ⊂ Γ1 such that φt (F ) = F for all
t ∈ R, we have µ(F ) = 0 or µ(F ) = 1.

Theorem 2.13 (Ergodic flows) Φ = (φt )t∈R is ergodic if and only if the
only functions in L2 (Γ1 , µ) which satisfy f ◦φt = f are the constant functions.

Proof. Below, a.s. (almost surely) means that the statement is true except on a set of measure zero. Suppose that the only invariant functions are the constant functions. If φ_t(F) = F for all t, then 1l_F is an invariant function and so 1l_F is constant a.s., which means that 1l_F(x) = 0 a.s. or 1l_F(x) = 1 a.s. Therefore µ(F) = 0 or µ(F) = 1.
Conversely, suppose φ_t is ergodic and f ∘ φ_t = f. Let F = {x : f(x) < a}. Then φ_t F = F, since

φ_t(F) = {φ_t(x) : f(x) < a} = {φ_t(x) : f(φ_t(x)) < a} = F.

Therefore µ(F) = 0 or µ(F) = 1. Thus f(x) < a a.s. or f(x) ≥ a a.s. for every a ∈ R. Let

a_0 = sup{a : f(x) ≥ a a.s.}.

Then, if a > a_0, µ({x : f(x) ≥ a}) = 0, and if a < a_0, µ({x : f(x) < a}) = 0. Let (a_n) and (b_n) be sequences converging to a_0 such that a_n > a_0 > b_n. Then

{x : f(x) ≠ a_0} = ∪_n ( {x : f(x) ≥ a_n} ∪ {x : f(x) < b_n} ).

Thus

µ({x : f(x) ≠ a_0}) ≤ Σ_n ( µ({x : f(x) ≥ a_n}) + µ({x : f(x) < b_n}) ) = 0,

and so f(x) = a_0 a.s.  □

If we can prove that a system is ergodic, then there is no problem in applying Boltzmann's prescription for the time average. For an ergodic system, by the above theorem, M is the one-dimensional space of constant functions, so that

Pg = ⟨1, g⟩ 1 = ∫_{Γ_1} g(x) µ(dx).

Therefore

lim_{T→∞} (1/2T) ∫_{−T}^{T} dt U_t g = ∫_{Γ_1} g(x) µ(dx).
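Ergodicity, and the resulting equality of time and space averages, can be seen numerically in a standard textbook example. The sketch below (assuming Python with numpy; not part of the notes, and a discrete-time system rather than one of the Hamiltonian flows above) uses the rotation x ↦ x + α mod 1 with irrational α, which is ergodic for the Lebesgue measure on [0, 1).

# Ergodicity in action for the irrational rotation of the circle (sketch).
import numpy as np

alpha = np.sqrt(2.0) - 1.0                  # irrational rotation number

def g(x):
    return np.cos(2 * np.pi * x) ** 2       # space average over [0,1) is 1/2

x0 = 0.1                                    # any starting point
N = 1_000_000
orbit = (x0 + alpha * np.arange(N)) % 1.0   # x, phi(x), phi^2(x), ...
print(g(orbit).mean())                      # ~ 0.5 = int_0^1 g dmu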

Remark 2.14 However, proving ergodicity has turned out to be the most difficult part of the programme. There is only one example for which ergodicity has been claimed to be proved, and that is a system of hard rods (Sinai). This concerns finite systems. In the thermodynamic limit (see Chapter 5) ergodicity should hold, but we do not discuss this problem.

2.4 Microcanonical measure


Suppose that we consider a system with Hamiltonian H and suppose also
that we fix the energy of the system to be exactly E. We would like to
devise a probability measure on the points of Γ with energy E such that the
measure is stationary with respect to the Hamiltonian flow.
Note that the energy surface ΣE is closed since ΣE = H −1 ({E}) and H is
assumed to be continuous. Clearly φt (ΣE ) = ΣE since H ◦φt = H . Let A(Γ)
denote the algebra of continuous functions Γ → R with compact support.
The following Riesz-Markov theorem identifies positive linear functionals on
A(Γ) with positive measures on Γ.

Theorem 2.15 (Riesz-Markov) If l: A(Γ) → R is linear and l(f) ≥ 0 for any positive f ∈ A(Γ), then there is a unique Borel measure µ on Γ such that

l(f) = ∫_Γ f(x) µ(dx).

Now define a linear functional l_E on A(Γ) by

l_E(f) = lim_{δ↓0} (1/δ) ∫_{Σ_{[E,E+δ]}} f(x) dx,

where Σ_{[E,E+δ]} = {x : H(x) ∈ [E, E+δ]} is the energy shell of thickness δ.


By the Riesz-Markov theorem there is a unique Borel measure µ0E on Γ such
that Z
lE (f ) = f (x)µ0E (dx)
Γ
with the properties:
(i) µ0E is concentrated on ΣE .
If suppf ∩ ΣE = ∅ then for δ small enough since suppf and Σ[E,E+δ]
are closed Σ[E,E+δ] ∩ suppf = ∅ and
Z Z
f (x)dx = f (x)1lΣ[E,E+δ] (x)dx = 0.
Σ[E,E+δ]

(ii) µ_E^0 is stationary with respect to φ_t: since the Lebesgue measure is stationary,

l_E(f ∘ φ_t) = lim_{δ↓0} (1/δ) ∫_{Σ_{[E,E+δ]}} (f ∘ φ_t)(x) dx = lim_{δ↓0} (1/δ) ∫_{φ_t(Σ_{[E,E+δ]})} f(x) dx
            = lim_{δ↓0} (1/δ) ∫_{Σ_{[E,E+δ]}} f(x) dx = l_E(f).

Definition 2.16 (Microcanonical measure) If ω(E) := µ_E^0(Γ) < ∞, we can normalise µ_E^0 to obtain

µ_E := µ_E^0 / ω(E),   (2.8)

which is a probability measure on (Γ, B_Γ), concentrated on the energy shell Σ_E. The probability measure µ_E is called the microcanonical measure or microcanonical ensemble. The normalisation ω(E) is also called the microcanonical partition function.
The expression S = k log ω(E) is called the Boltzmann entropy or microcanonical entropy, where k = 1.38 × 10^{−23} J/K is Boltzmann's constant.

We now give an explicit expression for the microcanonical measure. First we briefly recall some facts on curvilinear coordinates.
Let ζ: R^ν → A ⊂ R^ν be a bijection. Then we can use coordinates t_1, t_2, ..., t_ν, where the point x corresponds to the point t = ζ(x) in the new coordinates. The coordinates are orthogonal if the level surfaces t_i = constant, i = 1, ..., ν, are orthogonal to each other, that is, for all x ∈ R^ν and i ≠ j,

⟨∇ζ_i(x), ∇ζ_j(x)⟩ = 0.

Changing the variables of integration we then get

∫_{R^ν} f(x) dx = ∫_A f(ζ^{−1}(t)) |det (ζ^{−1})'(t)| dt = ∫_A f(ζ^{−1}(t)) dt / |det ζ'(ζ^{−1}(t))|
               = ∫_A f(ζ^{−1}(t)) dt / ∏_{i=1}^{ν} ‖(∇ζ_i)(ζ^{−1}(t))‖.

Note that if A is an n × n matrix with rows a_1, ..., a_n, where ⟨a_i, a_j⟩ = 0 for i ≠ j, then AA^T is a diagonal matrix with diagonal entries ‖a_1‖², ..., ‖a_n‖². Therefore det(A) = ∏_{i=1}^{n} ‖a_i‖.
Let Σ_{t_1} be the level surface ζ_1(x) = t_1 (constant). We define the element of surface area on Σ_{t_1} to be

dσ_{t_1} = dt_2 ... dt_ν / ∏_{i=2}^{ν} ‖(∇ζ_i)(ζ^{−1}(t))‖.

Then

dx = ( dt_1 / ‖∇ζ_1‖ ) dσ_{t_1}.
We apply this to the microcanonical measure. Choose ζ: R^{2n} → A ⊂ R^{2n} such that ζ_1 = H, so that Σ_{t_1} is an energy surface. Then

∫_{Σ_{[E,E+δ]}} f(x) dx = ∫_E^{E+δ} dt_1 ∫_{Σ_{t_1}} ( f(ζ^{−1}(t)) / ‖∇H(ζ^{−1}(t))‖ ) dσ_{t_1}.

Therefore

lim_{δ→0} (1/δ) ∫_{Σ_{[E,E+δ]}} f(x) dx = ∫_{Σ_E} ( f(ζ^{−1}(E, t_2, ..., t_{2n})) / ‖∇H(ζ^{−1}(E, t_2, ..., t_{2n}))‖ ) dσ_E.

Thus

µ_E^0(dx) = dσ_E / ‖∇H‖.

In particular

ω(E) = ∫_{Σ_E} dσ_E / ‖∇H‖.
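As a concrete check of this formula, one can evaluate ω(E) for a single degree of freedom. The sketch below (assuming Python with numpy; the values of m, k, E are only illustrative and the example is not part of the notes) parametrises the elliptic energy surface of the one-dimensional harmonic oscillator H = p²/(2m) + kq²/2 and integrates dσ_E/‖∇H‖ numerically; the result 2π√(m/k), the period of the oscillator, is independent of E.

# Sketch: omega(E) = int_{Sigma_E} dsigma_E / ||grad H|| for the 1d oscillator.
import numpy as np

m, k, E = 2.0, 3.0, 1.5
n = 200_000
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)

q = np.sqrt(2 * E / k) * np.cos(theta)            # point on Sigma_E
p = np.sqrt(2 * m * E) * np.sin(theta)
dq = -np.sqrt(2 * E / k) * np.sin(theta)          # d(q,p)/dtheta
dp = np.sqrt(2 * m * E) * np.cos(theta)

line_element = np.sqrt(dq**2 + dp**2)             # dsigma_E = |d(q,p)/dtheta| dtheta
grad_H = np.sqrt((k * q)**2 + (p / m)**2)         # ||grad H|| on the surface

omega_E = np.sum(line_element / grad_H) * (2 * np.pi / n)
print(omega_E, 2 * np.pi * np.sqrt(m / k))        # agree; independent of E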

Note also that

∫_Γ g(H(x)) f(x) dx = ∫ dt_1 g(t_1) ∫_{Σ_{t_1}} ( f(ζ^{−1}(t)) / ‖∇H(ζ^{−1}(t))‖ ) dσ_{t_1}.

Notation 2.17 (Microcanonical Gibbs ensemble) Let Λ ⊂ R^d and N ∈ N, and let H_Λ^{(N)} denote the Hamiltonian for N particles in Λ with elastic boundary conditions. Then we denote the microcanonical measure on (Γ_Λ, B_Λ) by µ_{E,Λ}^0 and the partition function by

ω_Λ(E, N) = ∫_{Σ_E} dσ_E / ‖∇H_Λ^{(N)}‖.

The microcanonical entropy is denoted by S_Λ(E, N) = k log ω_Λ(E, N).

Remark 2.18 Measures which are constructed like the microcanonical measure on level hypersurfaces are called Gelfand-Leray measures. In general one might imagine that there are several integrals of motion; for example, the angular momentum is conserved. Then one has to consider intersections of several level surfaces. We will not discuss this in these lectures.

3 Entropy
3.1 Probabilistic view on Boltzmann’s entropy
We discuss briefly the famous Boltzmann formula S = k log W for the entropy and give here an elementary probabilistic interpretation. For that, let Ω be a finite set (the state space) and let there be given a probability measure µ ∈ P(Ω) on Ω, where P(Ω) denotes the set of probability measures on Ω with the σ-algebra being the set of all subsets of Ω. In the picture of Maxwell and Boltzmann, the set Ω is the set of all possible energy levels for a system of particles, and the probability measure µ corresponds to a specific histogram of energies describing some macrostate of the system. Assume that µ(x) is a multiple of 1/n for every x ∈ Ω, for some n ∈ N, i.e. µ is a histogram for n trials or a macrostate for a system of n particles. A microscopic state for the system of n particles is any configuration ω ∈ Ω^n.

Boltzmann’s idea: The entropy of a macrostate µ ∈ P(Ω) corresponds to


the degree of uncertainty about the actual microstate ω ∈ Ωn when only µ
is known, and can thus be measured by log N_n(µ), the logarithm of the number of microstates leading to µ.

The associated macrostate for a microstate ω ∈ Ω^n is

L_n(ω) = (1/n) Σ_{i=1}^{n} δ_{ω_i},

and L_n(ω) is called the empirical distribution or histogram of ω ∈ Ω^n. The number of microstates leading to a given µ ∈ P(Ω) with nµ(x) ∈ N_0 for all x ∈ Ω is

N_n(µ) = |{ω ∈ Ω^n : L_n(ω) = µ}| = n! / ∏_{x∈Ω} (nµ(x))!.

We may approximate µ ∈ P(Ω) by a sequence (µ_n)_{n∈N} of probability measures µ_n ∈ P(Ω) with nµ_n(x) ∈ N_0 for all x ∈ Ω. Then we define the uncertainty H(µ) of µ, via Stirling's formula, as the n → ∞ limit of the mean uncertainty of µ_n per particle.

Proposition 3.1 Let µ ∈ P(Ω) and µ_n ∈ P(Ω) with nµ_n(x) ∈ N_0 for all x ∈ Ω, n ∈ N, such that µ_n → µ as n → ∞. Then the limit lim_{n→∞} (1/n) log N_n(µ_n) exists and equals

H(µ) = − Σ_{x∈Ω} µ(x) log µ(x).   (3.9)

Proof. A proof with exact error bounds can be found in [CK81]. 


The entropy H(µ) counts the number of possibilities to obtain the macrostate
or histogram µ, and thus it describes the hidden multiplicity of the ”true”
microstates consistent with the observed µ. It is therefore a measure of the
complexity inherent in µ.
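Proposition 3.1 is easy to test for a small alphabet. The sketch below (assuming Python; the macrostate µ is made up for illustration and the example is not part of the notes) computes N_n(µ_n) exactly for histograms µ_n with nµ_n(x) ∈ N_0 and compares (1/n) log N_n(µ_n) with H(µ).

# Numerical check of Proposition 3.1: (1/n) log N_n(mu_n) -> H(mu).
from math import factorial, log

mu = {"a": 0.5, "b": 0.3, "c": 0.2}                   # macrostate on Omega = {a, b, c}
H = -sum(p * log(p) for p in mu.values())             # entropy, approx. 1.0297

for n in [10, 100, 1000, 10000]:
    counts = {x: round(n * p) for x, p in mu.items()} # n * mu_n(x), integers
    N_n = factorial(n)
    for c in counts.values():
        N_n //= factorial(c)                          # n! / prod_x (n mu_n(x))!
    print(n, log(N_n) / n, H)                         # first column approaches H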

3.2 Shannon’s entropy


We give a brief view of the basic facts on Shannon's entropy, which was established by Shannon ([Sha48] and [SW49]). We base the specific form of the Shannon entropy functional on probability measures on just a couple of clear intuitive arguments. For that we start with a sequence of four axioms on a functional S that formalises the intuitive idea that entropy should measure the lack of information (or uncertainty) pertaining to a probability measure. For didactic reasons we limit ourselves to probability measures on a finite set Ω = {ω_1, ..., ω_n} of elementary events. Let P ∈ P(Ω) be the probability measure with P({ω_i}) = p_i ∈ [0, 1], i = 1, ..., n, and Σ_{i=1}^{n} p_i = 1. Now we formulate four axioms for a functional S acting on the set P(Ω) of probability measures.

Axiom 1: To express the property that S is a function of P ∈ P(Ω) alone


and not of the order of the single entries, one imposes:

(a) For every permutation π ∈ Sn , where Sn is the group of permuta-
tions of n elements, and any P ∈ P(Ω) let πP ∈ P(Ω) be defined as
πP ({ωi }) = pπ(i) for any i = 1, . . . , n. Then
S(P ) = S(πP ).

(b) S(P ) is continuous in each of the entries pi = P ({ωi }), i = 1, . . . , n.


The next axiom expresses the intuitive fact that the outcome is most random
for the uniform distribution.
Axiom 2: Let P^{(uniform)}({ω_i}) = 1/n for i = 1, ..., n. Then

S(P) ≤ S(P^{(uniform)}) for any P ∈ P(Ω).
The next axiom states that the entropy remains constant, whenever we ex-
tend our space of outcomes with vanishing probability.

Axiom 3: Let P′ ∈ P(Ω′), where Ω′ = Ω ∪ {ω_{n+1}}, and assume that P′({ω_{n+1}}) = 0. Then

S(P′) = S(P)

for P ∈ P(Ω) with P({ω_i}) = P′({ω_i}) for all i = 1, ..., n.

Finally we consider compositions.

Axiom 4: Let P ∈ P(Ω) and Q ∈ P(Ω′) for some set Ω′ = {ω′_1, ..., ω′_m} with m ∈ N. Define the probability measure P ∨ Q ∈ P(Ω × Ω′) as

P ∨ Q({ω_i, ω′_l}) = Q({ω′_l}|{ω_i}) P({ω_i})

for i = 1, ..., n and l = 1, ..., m. Here Q({ω′_l}|{ω_i}) is the conditional probability of the event {ω′_l} ⊂ Ω′ given that the event {ω_i} ⊂ Ω occurred. Then

S(P ∨ Q) = S(P) + S(Q|P),

where S(Q|P) = Σ_{i=1}^{n} p_i S_i(Q) is the expectation of

S_i(Q) = − Σ_{l=1}^{m} Q({ω′_l}|{ω_i}) log Q({ω′_l}|{ω_i})

with respect to the probability measure P.


Si (Q) is the conditional entropy of Q given that event {ωi } ∈ Ω occurred.
Note that when P and Q are independent one has S(P ∨ Q) = S(P ) + S(Q).

Equipped with these elementary assumptions, we cite the following theorem, which gives birth to the Shannon entropy.

Theorem 3.2 (Shannon entropy) Let Ω = {ω_1, ..., ω_n} be a finite set. Any functional S: P(Ω) → R satisfying Axioms (1) to (4) must necessarily be of the form

S(P) = −k Σ_{i=1}^{n} p_i log p_i   for P ∈ P(Ω) with P({ω_i}) = p_i, i = 1, ..., n,

where k ∈ R_+ is a positive constant.

Proof. The proof can be found in the original work by Shannon and
Weaver [SW49] or in the book by Khinchin [Khi57]. 

Notation 3.3 (Entropy) The functional

H(µ) = − Σ_{ω∈Ω} µ(ω) log µ(ω)   for µ ∈ P(Ω)

is called the Shannon entropy of the probability measure µ.
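A short sketch (assuming Python with numpy; the measures P and Q are made up for illustration and the example is not part of the notes) implements the Shannon entropy and verifies the composition rule of Axiom 4, S(P ∨ Q) = S(P) + S(Q|P), for a marginal P and a conditional kernel Q(·|·).

# Shannon entropy and the composition rule of Axiom 4 (sketch).
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

P = np.array([0.2, 0.5, 0.3])                    # measure on Omega, n = 3
Q = np.array([[0.7, 0.3],                        # Q({omega'_l} | {omega_i}),
              [0.1, 0.9],                        # one row per omega_i
              [0.5, 0.5]])

joint = P[:, None] * Q                           # (P v Q)({omega_i, omega'_l})
S_joint = entropy(joint.ravel())
S_cond = np.sum(P * np.array([entropy(row) for row in Q]))   # S(Q|P)
print(S_joint, entropy(P) + S_cond)              # equal: S(P v Q) = S(P) + S(Q|P)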

The connection with the previous Boltzmann entropy for the microcanonical ensemble is apparent from Axiom 2 above. Moreover, there are also connections to Boltzmann's H-function, which we do not discuss here. The interested reader is referred to any of the following monographs: [Bal91], [Bal92], [Gal99] and [EL02].

4 The Gibbs ensembles
In 1902 Gibbs proposed three Gibbs ensembles, the microcanonical, the
canonical and the grandcanonical ensemble. The microcanonical ensemble
was introduced in Section 2.4 as a probability measure on the energy sur-
face, a hyperplane in the phase space. The microcanonical ensemble is most
natural from the physical point of view. However, in practice mainly the
canonical and the grandcanonical Gibbs ensembles are studied. The main
reason is that these ensembles are defined as probability measures in the
phase space with a density, the so-called Boltzmann factor e−βH , where
β > 0 is the inverse temperature and H the Hamiltonian of the system.
The mathematical justification to replace the microcanonical ensemble by
the canonical or grandcanonical Gibbs ensemble goes under the name equiv-
alence of ensembles, which we will discuss in Subsection 5.3. In this section
we introduce first the canonical Gibbs ensemble. Then we study the so-called
Gibbs paradox concerning the correct counting for a system of indistinguish-
able identical particles. This is followed by the definition of the grandcanonical Gibbs ensemble. In the last subsection we relate all the introduced Gibbs ensembles to classical thermodynamics. This leads to the orthodicity problem, namely the question whether the laws of thermodynamics can be derived from the ensemble averages in the thermodynamic limit.

4.1 The canonical Gibbs ensemble


We define the canonical Gibbs ensemble for a finite volume box Λ ⊂ Rd
and a fixed number N ∈ N of particles with Hamiltonian HΛ(N ) having ap-
propriate boundary conditions (like elastic ones as for the microcanonical
ensemble or periodic ones). In the following we denote the Borel-σ-algebra
on the phase space Γ_Λ by B_Λ. The universal Boltzmann constant is k = k_B = 1.3806505 × 10^{−23} joule/kelvin. In the following T denotes temperature measured in kelvin.
Definition 4.1 Call the parameter β = 1/(kT) > 0 the inverse temperature. The canonical Gibbs ensemble for parameter β is the probability measure γ^β_{Λ,N} ∈ P(Γ_Λ, B_Λ) having the density

ρ^β_{Λ,N}(x) = e^{−βH_Λ^{(N)}(x)} / Z_Λ(β, N),   x ∈ Γ_Λ,   (4.10)

with respect to the Lebesgue measure. Here

Z_Λ(β, N) = ∫_{Γ_Λ} dx e^{−βH_Λ^{(N)}(x)}   (4.11)

is the normalisation and is called the partition function ("Zustandssumme").

Gibbs introduced this canonical measure as a matter of simplicity: he wanted the measure with density ρ to describe an equilibrium, i.e., to be invariant under the time evolution, so the most immediate candidates were functions of the energy. Moreover, he proposed that "the most simple case conceivable" is to take log ρ linear in the energy. The following theorem was one of his justifications of the utility of the definition of the canonical ensemble.

Theorem 4.2 Let Λ_1, Λ_2 ⊂ R^d, let Γ_{Λ_0} = Γ_{Λ_1} × Γ_{Λ_2} be the aggregate phase space, and let γ_0 ∈ P(Γ_{Λ_0}, B_0) with Lebesgue density ρ_0 be given. Define the reduced probability measures (or marginals) γ_i ∈ P(Γ_i, B_i), i = 1, 2, as

γ_1(A) = ∫_{A×Γ_2} ρ_0(x_1, x_2) dx_1 dx_2   for A ∈ B_1,
γ_2(B) = ∫_{Γ_1×B} ρ_0(x_1, x_2) dx_1 dx_2   for B ∈ B_2,

with the Lebesgue densities

ρ_1(x_1) = ∫_{Γ_2} ρ_0(x_1, x_2) dx_2   and   ρ_2(x_2) = ∫_{Γ_1} ρ_0(x_1, x_2) dx_1.

Then the entropies

S_i = −k ∫_{Γ_i} ρ_i(x) log ρ_i(x) dx,   i = 0, 1, 2,

satisfy the inequality

S_0 ≤ S_1 + S_2,

with equality S_0 = S_1 + S_2 if and only if ρ_0 = ρ_1 ρ_2.

Proof. The proof is a straightforward calculation using Jensen's inequality for the convex function f(x) = x log x + 1 − x.

Gibbs himself recognised the condition for equality as a condition for independence. He claimed that, with the notation of Theorem 4.2, in the special case where ρ_0 is the canonical ensemble density and the Hamiltonian is of the form H_0 = H_1 + H_2, with H_1 (respectively H_2) independent of Γ_2 (respectively Γ_1), the reduced densities (marginal densities) ρ_1 and ρ_2 are independent, i.e., ρ_0 = ρ_1 ρ_2, and are themselves canonical ensemble densities.
In [Gib02] he writes:

-a property which enormously simplifies the discussion, and is the
foundation of extremely important relations to thermodynamics.

Indeed, it follows from this that the temperatures are all equal, i.e., T1 =
T2 = T0 .

Remark 4.3 (Lagrange multipliers) We note that the inverse temperature β in the canonical Gibbs ensemble can be seen as the Lagrange multiplier for the extremum problem for the entropy under the constraint that the mean energy is fixed; see further [Jay89], [Bal91], [Bal92], [EL02].

Theorem 4.4 (Maximum Principle for the entropy) Let β > 0, Λ ⊂ R^d and N ∈ N be given. The canonical Gibbs ensemble γ^β_{Λ,N}, where β > 0 is determined by ∫_{Γ_Λ} H_Λ^{(N)}(x) ρ^β_{Λ,N}(x) dx = U, maximises the entropy

S(γ) = −k ∫_{Γ_Λ} ρ(x) log ρ(x) dx

over all γ ∈ P(Γ_Λ, B_Λ) having a Lebesgue density ρ subject to the constraint

U = ∫_{Γ_Λ} ρ(x) H_Λ^{(N)}(x) dx.   (4.12)

Moreover, the values of the temperature T and of the partition function Z_Λ(β, N) are uniquely determined from the condition

U = −(∂/∂β) log Z_Λ(β, N)   with β = 1/(kT).

Proof. We give only a rough sketch of the proof. We use that

a log a − b log b ≤ (a − b)(1 + log a),   a, b ∈ (0, ∞).

Let γ ∈ P(Γ_Λ, B_Λ) with Lebesgue density ρ. Put a = ρ^β_{Λ,N}(x) and b = ρ(x) for any x ∈ Γ_Λ, and recall that ρ^β_{Λ,N} is the density of the canonical Gibbs ensemble. Then

ρ^β_{Λ,N}(x) log ρ^β_{Λ,N}(x) − ρ(x) log ρ(x) ≤ (ρ^β_{Λ,N}(x) − ρ(x)) (1 − log Z_Λ(β, N) − βH_Λ^{(N)}(x)).

Integrating with respect to the Lebesgue measure we get

S(γ^β_{Λ,N}) − S(γ) ≥ −k ( ∫_{Γ_Λ} (1 − log Z_Λ(β, N) − βH_Λ^{(N)}(x)) ρ^β_{Λ,N}(x) dx
                           − ∫_{Γ_Λ} (1 − log Z_Λ(β, N) − βH_Λ^{(N)}(x)) ρ(x) dx )
                   = −k { (1 − log Z_Λ(β, N) − βU) − (1 − log Z_Λ(β, N) − βU) } = 0.

Therefore

S(γ^β_{Λ,N}) ≥ S(γ).

Note that the entropy for the canonical ensemble is given by

S(γ^β_{Λ,N}) = k ∫_{Γ_Λ} (log Z_Λ(β, N) + βH_Λ^{(N)}(x)) ρ^β_{Λ,N}(x) dx
            = k log Z_Λ(β, N) + kβ ∫_{Γ_Λ} H_Λ^{(N)}(x) ρ^β_{Λ,N}(x) dx.

To prove the second assertion, note that

∂_β log Z_Λ(β, N) = −∫_{Γ_Λ} ρ^β_{Λ,N}(x) H_Λ^{(N)}(x) dx,

∂_β² log Z_Λ(β, N) = ∫_{Γ_Λ} ρ^β_{Λ,N}(x) ( H_Λ^{(N)}(x) − ∫_{Γ_Λ} ρ^β_{Λ,N}(y) H_Λ^{(N)}(y) dy )² dx ≥ 0.  □
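Theorem 4.4 has an elementary discrete analogue that is easy to test numerically. The sketch below (assuming Python with numpy; the energy levels are made up for illustration and this is not part of the notes) draws random probability vectors, matches a Gibbs distribution to their mean energy by bisection, and confirms that the Gibbs distribution always has the larger entropy.

# Discrete analogue of the maximum-entropy property of the canonical ensemble.
import numpy as np

rng = np.random.default_rng(0)
E = np.array([0.0, 1.0, 2.0, 5.0])                 # toy energy levels (illustrative)

def gibbs(beta):
    w = np.exp(-beta * E)
    return w / w.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def beta_for_mean_energy(U, lo=-50.0, hi=50.0):
    # mean energy is decreasing in beta; bisect to match the constraint (4.12)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gibbs(mid) @ E > U:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for _ in range(5):
    q = rng.dirichlet(np.ones(len(E)))             # an arbitrary competitor density
    p = gibbs(beta_for_mean_energy(q @ E))         # Gibbs with the same mean energy
    print(entropy(q), "<=", entropy(p))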

Thermodynamic functions

For the canonical ensemble the relevant thermodynamic variables are the temperature T (or β = (kT)^{−1}) and the volume V of the region Λ ⊂ R^d. We have already defined the entropy S of the canonical ensemble by

S_Λ(β, N) = k log Z_Λ(β, N) + (1/T) E_{γ^β_{Λ,N}}(H_Λ^{(N)}),

where U = E_{γ^β_{Λ,N}}(H_Λ^{(N)}) = ∫_{Γ_Λ} H_Λ^{(N)}(x) ρ^β_{Λ,N}(x) dx is the expectation of H_Λ^{(N)},

sometimes also denoted by ⟨H_Λ^{(N)}⟩. We define the Helmholtz free energy by A = U − TS, and we shall call the Helmholtz free energy simply the free energy from now on. We have

A = U − T S_Λ(β, N) = E_{γ^β_{Λ,N}}(H_Λ^{(N)}) − T ( k log Z_Λ(β, N) + (1/T) E_{γ^β_{Λ,N}}(H_Λ^{(N)}) )
  = −(1/β) log Z_Λ(β, N).

By analogy with thermodynamics we define the absolute pressure P of the system by

P = −(∂A/∂V)_T.

The other thermodynamic functions can be defined as usual:
the Gibbs potential, G = U + PV − TS = A + PV;
the heat capacity at constant volume, C_V = (∂U/∂T)_V.
Note that

S = −(∂A/∂T)_V

is also satisfied. The thermodynamic functions can all be calculated from A. Therefore all calculations in the canonical ensemble begin with the calculation of the partition function Z_Λ(β, N).
To make the free energy density finite in the thermodynamic limit we redefine the canonical partition function by introducing the correct Boltzmann counting,

Z_Λ(β, N) = (1/(n/d)!) ∫_{Γ_Λ} e^{−βH_Λ^{(N)}(x)} dx = (1/N!) ∫_{Γ_Λ} e^{−βH_Λ^{(N)}(x)} dx,   (4.13)

see the following Subsection 4.2 for a justification of this correct Boltzmann counting.
Example 4.5 (The ideal gas in the canonical ensemble) Consider a non-interacting gas of N identical particles of mass m in d dimensions, contained in a box Λ ⊂ R^d of volume V. The Hamiltonian for this system is

H_Λ(x) = (1/2m) Σ_{i=1}^{N} p_i²,   x = (q, p) ∈ Γ_Λ.

We have for the partition function Z_Λ(β, N)

Z_Λ(β, N) = (1/(N! h^{Nd})) ∫_{Γ_Λ} e^{−βH_Λ(x)} dx = (1/(N! h^{Nd})) V^N ( ∫_{R^d} e^{−βp²/(2m)} dp )^N
          = (1/(N! h^{Nd})) V^N ( ∫_R e^{−βp²/(2m)} dp )^{Nd} = (1/N!) V^N ( 2πm/(h²β) )^{Nd/2}
          = (1/N!) ( V/λ^d )^N,   (4.14)

where

λ = ( h²β/(2πm) )^{1/2}

is called the thermal wavelength, because it is of the order of the de Broglie wavelength of a particle of mass m with energy 1/β. The free energy A_Λ(β, N)
is given by

A_Λ(β, N) = −(1/β) log Z_Λ(β, N) = (1/β) ( log N! + Nd log λ − N log V ).

Thus the pressure is given by

P_Λ(β, N) = −( ∂A_Λ(β, N)/∂V )_T = N/(βV) = NkT/V.
Let a_N(β, v) be the free energy per particle considered as a function of the specific volume v, that is,

a_N(β, v) = (1/N) A_{Λ_N}(β, N),

where Λ_N is a sequence of boxes with volume vN, and let p_N(β, v) = P_{Λ_N}(β, N) be the corresponding pressure. Then

p_N(β, v) = −( ∂a_N(β, v)/∂v )_T.

For the ideal gas we then get

a_N(β, v) = (1/β) ( (1/N) log N! + d log λ − log v − log N ),

leading to

a(β, v) := lim_{N→∞} a_N(β, v) = (1/β) ( d log λ − log v − 1 ),

since

lim_{N→∞} ( (1/N) log N! − log N ) = −1.

If p(β, v) := lim_{N→∞} p_N(β, v), one gets

p(β, v) = −( ∂a(β, v)/∂v )_T,

and thus p(β, v) = 1/(βv). We can also define the free energy density as a function of the particle density ρ, i.e.,

f_l(β, ρ) = (1/V_l) A_{Λ_l}(β, ρV_l),

where Λ_l is a sequence of boxes with volume V_l such that lim_{l→∞} V_l = ∞, and

f(β, ρ) = lim_{l→∞} f_l(β, ρ).

The pressure p(β, ρ) then satisfies

p(β, ρ) = ρ ( ∂f(β, ρ)/∂ρ )_T − f(β, ρ).

Clearly f(β, ρ) = ρ a(β, 1/ρ). For the ideal gas we get

f(β, ρ) = (ρ/β) ( d log λ + log ρ − 1 )

and therefore

p(β, ρ) = ρ/β.
Finally we want to check the relative dispersion of the energy in the canonical ensemble. Let ⟨H_Λ^{(N)}⟩ = E_{γ^β_{Λ,N}}(H_Λ^{(N)}). Then

⟨(H_Λ^{(N)} − ⟨H_Λ^{(N)}⟩)²⟩ / ⟨H_Λ^{(N)}⟩² = ∂_β² log Z_Λ(β, N) / ( ∂_β log Z_Λ(β, N) )².

This gives for the ideal gas

√( ⟨(H_Λ^{(N)} − ⟨H_Λ^{(N)}⟩)²⟩ ) / ⟨H_Λ^{(N)}⟩ = ( dN/2 )^{−1/2} = O(N^{−1/2}).
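The O(N^{−1/2}) relative dispersion can be seen directly by sampling. Under the canonical density (4.10) for the ideal-gas Hamiltonian, the momenta are independent Gaussians with variance m/β; the sketch below (assuming Python with numpy; the parameter values are only illustrative and the example is not part of the notes) estimates the relative fluctuation of H and compares it with (dN/2)^{−1/2}.

# Relative energy dispersion of the ideal gas in the canonical ensemble.
import numpy as np

rng = np.random.default_rng(1)
m, beta, d, samples = 1.0, 2.0, 3, 5_000

for N in [10, 100, 1000]:
    # under the canonical density (4.10) the momenta are i.i.d. N(0, m/beta)
    H = np.array([np.sum(rng.normal(0.0, np.sqrt(m / beta), size=(N, d)) ** 2) / (2 * m)
                  for _ in range(samples)])
    print(N, H.std() / H.mean(), (d * N / 2) ** -0.5)   # the two columns agree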

4.2 The Gibbs paradox


The Gibbs paradox illustrates an essential correction of the counting within the microcanonical and the canonical ensemble. Gibbs in 1902 was not aware of the fact that the partition function needed a redefinition, for instance the redefinition (4.13) in the case of the canonical ensemble. The ideal gas suffices
to illustrate the main issue of that paradox. Recall the entropy of the ideal gas in the canonical ensemble,

S_Λ(β, N) = kN log( V T^{d/2} ) + (d/2) kN ( 1 + log(2πm) ),   β^{−1} = kT,   (4.15)

where V = |Λ| is the volume of the box Λ ⊂ R^d. Now, make the following "Gedanken"-experiment. Consider two vessels of volume V_i containing N_i particles, i = 1, 2, separated by a thin wall. Suppose further that both vessels are in equilibrium, having the same temperature and pressure. Now
imagine that the wall between the two vessels is gently removed. The ag-
gregate vessel is now filled with a gas that is still in equilibrium at the same
temperature and pressure. Denote by S1 and S2 the entropy on each side of
the wall. Since the corresponding canonical Gibbs ensembles are indepen-
dent of one another, the entropy S12 of the aggregate vessel - before the wall
is removed - is exactly S_1 + S_2. However an easy calculation gives us

S_{12} − (S_1 + S_2) = k ( (N_1 + N_2) log(V_1 + V_2) − N_1 log V_1 − N_2 log V_2 )
                     = −k ( N_1 log( V_1/(V_1 + V_2) ) + N_2 log( V_2/(V_1 + V_2) ) ) > 0.   (4.16)
This shows that the informational (Shannon) entropy has increased, while
we expected the thermodynamic entropy to remain constant, since the wall
between the two vessels is immaterial from a thermodynamical point of view.
This is the Gibbs paradox.
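The entropy difference (4.16) is easy to evaluate. A small sketch (assuming Python; the particle numbers and volumes are made up, chosen only so that both vessels have the same density, hence the same pressure at equal temperature):

# Entropy difference (4.16) before correct Boltzmann counting, in units of k.
from math import log

N1, V1 = 1.0e23, 1.0e-3      # particles and volume (m^3) in vessel 1 (illustrative)
N2, V2 = 2.0e23, 2.0e-3      # same density N/V, hence same pressure and temperature

dS_over_k = (N1 + N2) * log(V1 + V2) - N1 * log(V1) - N2 * log(V2)
print(dS_over_k)             # > 0, although thermodynamically nothing has happened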
We have indeed lost information in the course of removing the wall. Imagine
the gas before removing the wall consists of yellow molecules in one vessel
and of blue molecules in the other. After removal of the wall we get a
uniform greenish mixture throughout the aggregate vessel. Before, we knew with probability 1 that a blue molecule was in the vessel where we had put it; after removal of the wall we only know that it is in that part of the aggregate vessel with probability N_1/(N_1 + N_2).
The Gibbs paradox is resolved in classical statistical mechanics with an ad hoc ansatz. Namely, instead of the canonical partition function Z_Λ(β, N) one takes (1/N!) Z_Λ(β, N), and instead of the microcanonical partition function ω_Λ(E, N) one takes (1/N!) ω_Λ(E, N). This is called the correct Boltzmann counting. The appearance of the factorial can be justified in quantum mechanics; it has to do with the indistinguishability of identical particles. A state describing a system of identical particles should be invariant under any permutation of the labels identifying the single-particle variables. However, this very interesting issue goes beyond the scope of these lectures, and we will therefore take the correct counting for granted from now on. In Subsection 5.1 we give another justification by computing the partition function and the entropy in the microcanonical ensemble.

4.3 The grandcanonical ensemble


We give a brief introduction to the grandcanonical Gibbs ensemble. One
can argue that the canonical ensemble is more physical since in experiments
we never consider an isolated system and we never measure the total energy

but we deal with systems at a given temperature. Similarly, we would like to specify not the number of particles but only the average number of particles. In the
grandcanonical ensemble the system can have any number of particles with
the average number determined by external sources. The grandcanonical
Gibbs ensemble is obtained if the canonical ensemble is put in a ”particle-
bath”, meaning that the particle number is no longer fixed, only the mean of
the particle number is determined by a parameter. This was similarly done
in the canonical ensemble for the energy, where one considers a ”heat-bath”.
The phase space for exactly N particles in a box Λ ⊂ R^d can be written as

Γ_{Λ,N} = { ω ⊂ Λ × R^d : ω = {(q, p_q) : q ∈ ω̂}, Card(ω̂) = N },   (4.17)

where ω̂, the set of positions occupied by the particles, is a locally finite subset of Λ, and p_q is the momentum of the particle at position q. If the number of particles is not fixed, then the phase space is

Γ_Λ = { ω ⊂ Λ × R^d : ω = {(q, p_q) : q ∈ ω̂}, Card(ω̂) finite }.   (4.18)

A counting variable on Γ_Λ is a random variable N_∆ on Γ_Λ, for a Borel set ∆ ⊂ Λ, defined by N_∆(ω) = Card(ω̂ ∩ ∆) for any ω ∈ Γ_Λ.
Definition 4.6 (Grandcanonical ensemble) Let Λ ⊂ R^d, β > 0 and µ ∈ R. Define the phase space Γ_Λ = ∪_{N=0}^{∞} Γ_{Λ,N}, where Γ_{Λ,N} = (Λ × R^d)^N is the phase space in Λ for N particles, and equip it with the σ-algebra B_Λ^∞ generated by the counting variables. The probability measure γ_Λ^{β,µ} ∈ P(Γ_Λ, B_Λ^∞) such that the restrictions of γ_Λ^{β,µ} onto Γ_{Λ,N} have the densities

ρ_{β,µ}^{(N)}(x) = Z_Λ(β, µ)^{−1} e^{−β(H_Λ^{(N)}(x) − µN)},   N ∈ N,

where H_Λ^{(N)} is the Hamiltonian for N particles in Λ, and the partition function

Z_Λ(β, µ) = Σ_{N=0}^{∞} ∫_{Γ_{Λ,N}} e^{−β(H_Λ^{(N)}(x) − µN)} dx   (4.19)

is called the grandcanonical ensemble in Λ for the inverse temperature β and the chemical potential µ.
Instead of the chemical potential µ, sometimes the fugacity or activity e^{βµ} is used for the grandcanonical ensemble. Observables are now sequences f = (f_0, f_1, ...), where f_0 ∈ R and the f_N: Γ_{Λ,N} → R, N ∈ N, are functions on the N-particle phase spaces. Hence, the expectation in the grandcanonical ensemble is written as

E_{γ_Λ^{β,µ}}(f) = (1/Z_Λ(β, µ)) Σ_{N=0}^{∞} e^{βµN} Z_Λ(β, N) ∫_{Γ_{Λ,N}} f_N(x) ρ^β_{Λ,N}(x) dx.   (4.20)

If N denotes the particle number observable, we get that

E_{γ_Λ^{β,µ}}(N) = (1/β) (∂/∂µ) log Z_Λ(β, µ).
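For the ideal gas the sum (4.19), with the correct Boltzmann counting, can be carried out explicitly: the canonical partition functions of Example 4.5 give Z_Λ(β, µ) = Σ_N e^{βµN} (V/λ^d)^N / N! = exp( e^{βµ} V/λ^d ). The sketch below (assuming Python with numpy and units with h = m = 1; not part of the notes) checks the identity E(N) = β^{−1} ∂_µ log Z_Λ(β, µ) against this closed form by numerical differentiation.

# Grandcanonical ideal gas: E(N) = (1/beta) d/dmu log Z_Lambda(beta, mu).
import numpy as np

h, m, d = 1.0, 1.0, 3                              # illustrative units
beta, V, mu = 1.0, 50.0, -2.0

lam = np.sqrt(h**2 * beta / (2 * np.pi * m))       # thermal wavelength

def log_Z(mu):
    # sum over N of e^{beta mu N} (V/lam^d)^N / N! = exp(e^{beta mu} V / lam^d)
    return np.exp(beta * mu) * V / lam**d

eps = 1e-6
EN_numeric = (log_Z(mu + eps) - log_Z(mu - eps)) / (2 * eps) / beta
EN_exact = np.exp(beta * mu) * V / lam**d          # mean particle number
print(EN_numeric, EN_exact)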
For the grandcanonical measure we have a Principle of Maximum Entropy very similar to those for the other two ensembles. We maximise the entropy subject to the constraint that the mean energy E_{γ_Λ^{β,µ}}(H_Λ) and the mean particle number E_{γ_Λ^{β,µ}}(N) are fixed, where H_Λ = (H_Λ^{(0)}, H_Λ^{(1)}, ...) is the sequence of Hamiltonians for each possible number of particles.

Theorem 4.7 (Principle of Maximum Entropy) Let P be a probability measure on Γ_Λ such that its restriction to Γ_{Λ,N}, denoted by P_N, is absolutely continuous with respect to the Lebesgue measure, that is,

P_N(A) = ∫_A ρ_N(x) dx   for any A ∈ B_Λ^{(N)}.

Define the entropy of the probability measure P to be

S(P) = −k ρ_0 log ρ_0 − k Σ_{N=1}^{∞} ∫_{Γ_{Λ,N}} ρ_N(x) log( N! ρ_N(x) ) dx.

Then the grandcanonical ensemble/measure γ_Λ^{β,µ}, where β and µ are determined by E_{γ_Λ^{β,µ}}(H_Λ) = E and E_{γ_Λ^{β,µ}}(N) = N_0, N_0 ∈ N, maximises the entropy among the absolutely continuous probability measures on Γ_Λ with mean energy E and mean particle number N_0.

Proof. As in the two previous cases we use a log a − b log b ≤ (a − b)(1 + log a), and so

a log(ta) − b log(tb) ≤ (a − b)(1 + log a + log t).

Let

ρ_{β,µ}^{(N)}(x) = e^{βµN} e^{−βH_Λ^{(N)}(x)} / ( N! Z_Λ(β, µ) )

and put a = ρ_{β,µ}^{(N)}(x), b = ρ_N(x) and t = N!. Then, writing Z for Z_Λ(β, µ),

ρ_{β,µ}^{(N)}(x) log( N! ρ_{β,µ}^{(N)}(x) ) − ρ_N(x) log( N! ρ_N(x) )
   ≤ ( ρ_{β,µ}^{(N)}(x) − ρ_N(x) ) ( 1 − log Z − βH_Λ^{(N)}(x) + βµN ).

Integrating with respect to the Lebesgue measure on Γ_{Λ,N} and summing over N we get

S(γ_Λ^{β,µ}) − S(P) ≥ −k { Σ_{N=1}^{∞} ∫_{Γ_{Λ,N}} (1 − log Z − βH_Λ^{(N)}(x) + βµN) ρ_{β,µ}^{(N)}(x) dx + (1 − log Z) ρ_{β,µ}^{(0)}
   − Σ_{N=1}^{∞} ∫_{Γ_{Λ,N}} (1 − log Z − βH_Λ^{(N)}(x) + βµN) ρ_N(x) dx − (1 − log Z) ρ_0 }
   = −k { (1 − log Z − βE + βµN_0) − (1 − log Z − βE + βµN_0) } = 0.

Therefore

S(γ_Λ^{β,µ}) ≥ S(P).

Note that the entropy for the grandcanonical ensemble is given by

S(γ_Λ^{β,µ}) = k log Z_Λ(β, µ) + kβ E_{γ_Λ^{β,µ}}(H_Λ) − kβµ E_{γ_Λ^{β,µ}}(N).  □

Thermodynamic Functions:
We shall write Z for Z_Λ(β, µ) and suppress for a while some obvious sub-indices and arguments. We have already defined the entropy S by

S = k log Z + (1/T) ( E_{γ_Λ^{β,µ}}(H_Λ) − µ E_{γ_Λ^{β,µ}}(N) ),

and as before we define the internal energy of the system by U = E_{γ_Λ^{β,µ}}(H_Λ). We then define the Helmholtz free energy as before by A = U − TS, and we shall call the Helmholtz free energy simply the free energy from now on. We have

A = U − TS = E_{γ_Λ^{β,µ}}(H_Λ) − T ( k log Z + (1/T)( E_{γ_Λ^{β,µ}}(H_Λ) − µ E_{γ_Λ^{β,µ}}(N) ) )
  = −(1/β) ( log Z − µβ E_{γ_Λ^{β,µ}}(N) ).

In analogy with thermodynamics we should define the absolute pressure P of the system by

P = −(∂A/∂V)_T

with the constraint

E_{γ_Λ^{β,µ}}(N) = constant.
This constraint means that µ is a function of V and β. Therefore

P = (1/β) (∂µ/∂V) (∂/∂µ) ( log Z − µβ E_{γ_Λ^{β,µ}}(N) ) + (1/β) (∂/∂V) log Z = (1/β) (∂/∂V) log Z.

It is argued that (1/V) log Z should be independent of V for large V, and therefore we can write

P = (1/β) (∂/∂V) log Z = (1/β) (∂/∂V) ( V · (1/V) log Z ) = (1/(βV)) log Z + (V/β) (∂/∂V) ( (1/V) log Z )
  ≈ (1/(βV)) log Z.

Therefore we define the pressure by the equation

P = (1/(βV)) log Z.
This definition can be justified a posteriori when we consider the equivalence of ensembles, see Subsection 5.3. The other thermodynamic functions can be defined as usual:
the Gibbs potential, G = U + PV − TS = A + PV;
the heat capacity at constant volume, C_V = (∂U/∂T)_V.
Note that

S = −(∂A/∂T)_V

is also satisfied. All the thermodynamic functions can be calculated from Z = Z_Λ(β, µ). Therefore all calculations in the grandcanonical ensemble begin with the calculation of the partition function Z = Z_Λ(β, µ).

4.4 The "orthodicity problem"


We refer to one of the main aims of statistical mechanics, namely to derive the known laws of classical thermodynamics from the ensemble theory. The following question is called the Orthodicity Problem.
Which set E of statistical ensembles or probability measures has the property that, as an element µ ∈ E changes infinitesimally within the set E, the corresponding infinitesimal variations dU and dV of U and V are related to the pressure P and to the average kinetic energy per particle,

T̄_kin = E_µ(T_kin)/N,   T_kin = (1/2m) Σ_{i=1}^{N} p_i²,

in such a way that the differential

(dU + P dV) / T̄_kin

is an exact differential, at least in the thermodynamic limit? This will then
provide the second law of thermodynamics. Let us provide a heuristic check for the canonical ensemble. Here,

E_{γ^β_{Λ,N}}(T_kin) = (1/Z_Λ(β, N)) ∫_{Γ_{Λ,N}} T_kin(x) e^{−βH_Λ^{(N)}(x)} dx,

and U = −∂_β log Z_Λ(β, N). The pressure in the canonical ensemble can be


calculated as

P(γ^β_{Λ,N}) = (1/Z_Λ(β, N)) Σ_Q N ∫_{p>0} e^{−βH_Λ^{(N)}(x)} (p²/2m) (a/A) ( dq_2 ⋯ dq_N dp_1 ⋯ dp_N ) / N!,

where the sum goes over all small cubes Q adjacent to the boundary of the box Λ (of volume V) by a side with area a, where A = Σ_Q a is the total area of the container surface and q_1 is the centre of Q. Let n(Q, v) dv, where v = p/(2m) is the velocity, be the density of particles with normal velocity −v that are about to collide with the external walls of Q. Particles will cede a momentum 2mv = p in the normal direction to the wall at the moment of their collision (−mv → mv due to elastic boundary conditions). Then

Σ_Q ∫_{v>0} dv n(Q, v) (2mv) (va/A)
is the momentum transferred per unit time and surface area to the wall.
A Gaussian calculation then gives, after a couple of steps, that with F_Λ(β, N) = −β^{−1} log Z_Λ(β, N) and S_Λ(E; N) = (U − F_Λ(β, N)) β we have T = (kβ)^{−1} = 2 E_{γ^β_{Λ,N}}(T_kin)/(dkN), and that

T dS_Λ = d(F_Λ + T S_Λ) + p dV = dU + p dV,

with p = β^{−1} (∂/∂V) log Z_Λ(β, N). Details can be found in [Gal99], where references for rigorous proofs of orthodicity in the canonical ensemble are also provided. The orthodicity problem is more difficult in the microcanonical ensemble. The heuristic approach goes similarly. However, for a rigorous proof of the orthodicity one needs here a proof that the expectation of the kinetic energy in the microcanonical ensemble satisfies

E(T_kin^α) = E(T_kin)^α (1 + θ_N),   α > 0,

32
with θN → 0 as N → ∞ (thermodynamic limit). The last requirement
would be easy for independent velocity, but this is not the case here due
to the microcanonical energy constraint, and therefore this refers not to an
application of the usual law of large numbers. A rigorous proof concerning
any fluctuations and moments of the kinetic energy in the microcanonical
ensemble is in preparation [AL06].

5 The Thermodynamic limit


In this section we introduce the concept of taking the thermodynamic limit,
give a simple example in the microcanonical ensemble and prove in Subsec-
tion 5.2 the existence of the thermodynamic limit for the specific free energy
in the canonical Gibbs ensemble for a given class of interactions. In the last
subsection we briefly discuss the equivalence of ensembles and the thermo-
dynamic limit at the level of states/measures.

5.1 Definition
Let us call state of a physical system an expectation value functional on the
observable quantities for this system. The averages, i.e., the expectation val-
ues with respect to the Gibbs ensembles, are such states. We shall say that
the systems for which the expectation with respect to the Gibbs ensembles
is taken are finite systems (e.g. finitely many particles in a region with
finite volume), but we may also consider the corresponding infinite sys-
tems which contain an infinity of subsystems and extend throughout Rd or
Zd for lattice systems. Thus the discussion in the introduction leads us to as-
sume that the ensemble expectation for finite systems approach in some sense
states/measures of the corresponding infinite system. Besides the existence
of such limit states/measures one is also interested in proving that these are
independent on the choice of the ensemble leading to the question of equiv-
alence of ensembles. One of the main problems of equilibrium statistical
mechanics is to study the infinite systems equilibrium states/measures and
their relation to the interactions which give rise to them. In Section 6 we in-
troduce the mathematical concept of Gibbs measures which appear as natural
candidates for equilibrium states/measures. We turn down one level in our
study and consider the problem of determination of the thermodynamic func-
tions from statistical mechanics in the thermodynamic limit. We introduced
earlier for each Gibbs ensemble a partition function, which is the total mass
of the measure defining the Gibbs ensemble. The logarithm of the partition
function divided by the volume of the region Λ containing the system has a

33
limit when this systems becomes large ( Λ ↑ Rd ), and this limit is identified
with a thermodynamic function. Any singularities for these thermodynamic
functions in the thermodynamic limit may correspond to phase transitions
(see [Min00],[Gal99] and [EL02] for details on theses singularities).
Taking the thermodynamic limit thus involves letting Λ tend to infin-
ity, i.e., approaching Rd or Zd respectively. We have to specify how Λ tends
to infinity. Roughly speaking we consider the following notion.

Notation 5.1 (Thermodynamic limit) A sequence (Λn )n∈N of boxes Λn ⊂


Rd is a cofinal sequence approaching Rd if the following holds,

(i) Λn ↑ Rd as n → ∞,

(ii) If Λhn = {x ∈ Rd : dist(x, ∂Λ) ≤ h} denotes the set of points with


distance less or equal h to the boundary of Λ, the limit

|Λhn |
lim =0
n→∞ |Λ|

exists.

The thermodynamic limit consists thus in letting n → ∞ for a cofinal se-


quence (Λn )n∈N of boxes with the following additional requirements for the
microcanonical and the canonical ensemble:
Microcanonical ensemble: There are energy densities εn ∈ (0, ∞), given
En
as εn = |Λn|
with En → ∞ as n → ∞, and particle densities ρn ∈ (0, ∞),
given as ρn = N n
Λn
with Nn → ∞ as n → ∞, such that εn → ε and
ρn → ρ ∈ (0, ∞) as n → ∞.
Canonical ensemble: There are particle densities ρn ∈ (0, ∞), given as
ρn = N
Λn
n
with Nn → ∞ as n → ∞, such that ρn → ρ ∈ (0, ∞) as n → ∞.

In some models one needs more assumptions on the cofinal sequence of


boxes, for details see [Rue69] and [Isr79].

We check the thermodynamic limit of the following simple model in the


microcanonical ensemble, which will give also another justification of the
correct Boltzmann counting.

Ideal gas in the microcanonical ensemble

Consider a non-interacting gas of N identical particles of mass m in d di-


mensions, contained in the box Λ of volume V = |Λ|. The gradient of the

34
Hamiltonian for this system is
n
1 X 2 1
∇HΛ (x) = ∇ pi = (0, . . . , 0, p1 , . . . , pn ) , x ∈ ΓΛ ,
2m i=1 m

where as usual n = N d. We have


n
1 X 2
2 2
|∇H(x)| = 2 pi = H(x) , x ∈ ΓΛ ,
m i=1 m

where | · | denotes the norm in R2n . Therefore on the energy surface ΣE ,


1
|∇H| = ( 2E m
) 2 . Let Sν (r) be the hyper sphere of radius r in ν dimensions,
that is Sν (r) = {x| x ∈ Rν , |x| = r}. Let Sν (r) be the surface area of Sν (r).
Then
Sν (r) = cν rν−1 for some constant cν .
1
For the non-interacting gas, we have ΣE = ΛN × Sn ((2mE) 2 ) and
 m  21 Z  m  12 1
ω(E) = dσ = V N Sn ((2mE) 2 )
2E ΣE 2E
 m  21 1 1
= V N cN d (2mE) 2 (N d−1) = mV N cN d (2mE) 2 N d−1 .
2E
The entropy S is given by
1
exp(S/k) = ω(E) = mV N cN d (2mE) 2 N d−1

and therefore
1 exp(S/k)V −N
(2mE) 2 N d−1 = .
mcdN
Thus, the internal energy follows as
  2N

1 exp V − (N d−2)
2S
k(N d−2)
U (S, V ) = E = 2 ,
2m (mcN d ) (N d−2)
and the temperature as the partial derivative of the internal energy with
respect to the entropy is
 ∂U  2 2U 2U
T = = U= 2 ≈
∂S V k(N d − 2) kN d(1 − N d ) kN d
for large N . This gives for large N the following relations
d
U ≈ N kT,
2
35
 ∂U 
d
CV =≈ N k.
∂T V 2
 ∂U  2 U 2U N kT
P =− = 2 ≈ ≈ .
∂V S d(1 − N d ) V dV V
The previous relation is the empirical ideal gas law in which k is Boltzmann’s
constant. We can therefore identify k in the definition of the entropy with
Boltzmann’s constant.
We need to calculate cν . We have via a standard trick
ν
Z ∞ ν Z Z ∞
−x2 −|x|2 2
π =
2 e dx = e dx = Sν (r)e−r dr

Z−∞∞ Z ∞ 0

ν−1 −r2 cν ν
−1 −t cν  ν 
= cν r e dr = t e dt = Γ
2 .
0 2 0 2 2
ν
2π 2
This gives cν = Γ( ν2 )
, where Γ is the Gamma-Function, defined as
Z ∞
Γ(x) = tx−1 e−t dt.
0

Note that if n ∈ N then Γ(n) = (n − 1)!. The behaviour of Γ(x) for large
positive x is given by Stirling’s formula
√ 1
Γ(x) ≈ 2πxx− 2 e−x .

This gives limx→∞ ( x1 log Γ(x) − log x) = −1. We now have for the entropy of
the non-interacting gas in a box Λ of volume V
Nd
 2π 2 1

SΛ (E, N ) = k log mV N N d (2mE) 2 N d−1 .
Γ( 2 )

Let v be the specific volume and ε the energy per particle, that is
V E
v= and ε= .
N N
Let sN (ε, v) be the entropy per particle considered as a function of ε and v,
1
sN (ε, v) = SΛ (εN, N ),
N N

36
where ΛN is a sequence of boxes with volume vN . Then
Nd
1 
N N 2π
2 1
N d−1

sN (ε, v) = k log mv N (2mεN ) 2
N Γ( N2d )
  N d 
d+2 d 1
≈ k log v + log N + log(4πmε) − log Γ
2 2 N 2
 
d d d d
≈ k log v + log N + log(4πmε) + − log( )
2 2 2 2
≈ k log N.
We expect sN (ε, v) to be finite for large N . Gibbs (see Section 4.2) postu-
lated that we have made an error in calculating ω(E), the number of states
of the gas with energy E. We must divide ω(E) by N !. It is not possi-
ble to understand this classically since in classical mechanics particles are
distinguishable. The reason is inherently quantum mechanical. In quan-
tum mechanics particles are indistinguishable. We also divide ω(E) by hdN
where h is Planck’s constant. This makes classical and quantum statistical
mechanics compatible for high temperatures.
We therefore redefine the microcanonical entropy of the system to be
 ω (E, N )   ω (E, N ) 
Λ Λ
SΛ (E, N ) = k log = k log , (5.21)
(n/d)! N!
where we put Planck’s constant h = 1. Then

d d d d
sN (ε, v) ≈ k log v + log N + log(4πmε) + − log( ) − d log h
2 2 2 2

1
− log(N !)
N
 
d+2 d d  h  d d
→k + log v + log ε − log √ − log( )
2 2 2 4πm 2 2
as N → ∞.

5.2 Thermodynamic function: Free energy


We shall prove that the canonical free energy density exists in the thermody-
namic limit for very general interactions. We consider a general interacting
gas of N identical particles of mass m in d dimensions, contained in the box
Λ of volume of V with elastic boundary conditions. The Hamiltonian for this
system is
N
(N ) 1 X 2
HΛ = p + U (r1 , . . . , rN ).
2m i=1 i

37
We have for the partition function ZΛ (β, N )
Z Z N Z
1 (N )
−βHΛ (x) 1 − βp
2
ZΛ (β, N ) = e dx = e 2m dp e−βU (q) dq
N ! hN d Γ N! R d Λ N
  12 N d Z Z
1 2πm −βU (q) 1 1
= e dq = e−βU (q) dq.
N ! h2 β Λ N N ! λdN
ΛN

We shall assume that the interaction potential is given by a pair interaction


potential φ : R → R, that is
X
U (q1 , . . . , qN ) = φ(|qi − qj |) , (q1 , . . . , qN ) ∈ RdN ,
1≤i<j≤N

where |x| denotes the norm for a the vector x ∈ Rd .


Definition 5.2 Let the pair potential function φ : R → R be given.
(i) The pair potential φ is tempered if there exists R > 0 such that
φ(|q|) ≤ 0 if |q| > R, q ∈ Rd .

(ii) The pair potential φ is stable if there exists B ≥ 0 such that for all
q1 , r2 , . . . , qN in Rd
X
φ(|qi − qj |) ≥ −N B.
1≤i<j≤N

(iii) If the pair potential φ is not stable then it is said to be catastrophic.

Recall that the free energy in a box Λ ⊂ Rd for an inverse temperature β


and particle number N is given by
1
AΛ (β, N ) = − log ZΛ (β, N ).
β
Theorem 5.3 (Fisher-Ruelle) Let φ be a stable and tempered pair poten-
tial. Let R be as above in Definition 5.2 and for ρ ∈ (0, ∞) let L0 be such that
ρ(L0 + R)d ∈ N. Let Ln = 2n (L0 + R) − R and let Λn be the cube centred at
the origin with side Ln and volume Vn = |Λn | = Ldn . Let Nn = ρ(L0 + R)d 2dn .
If
1
fn (β, ρ) = AΛ (β, Nn ),
Vn n
then limn→∞ fn (β, ρ) exists.

38
Figure 1: typical form of φ

39
R

(1) (2)
n-1 n-1

n R

(3) (4)
n-1 n-1

Figure 2: Λn contains 4 cubes of side length Ln−1

Proof. Note that limn→∞ Nn /Vn = ρ. Note also that Nn = 2d Nn−1 and
Ln = 2Ln−1 + R. Because of the last equation Λn contains 2d cubes of side
(i)
Ln−1 with a corridor of width R between them. Denote these by Λn−1 , i =
1, . . . , 2d and let
X
UN (q1 , . . . , qN ) = φ(|qi − qj |).
1≤i<j≤N

Let Zn = ZΛn (β, Nn ) and gn = N1n log Zn . It is sufficient to prove that gn


converges.
Z
1
Zn = dq1 . . . dqNn e−βUNn (q1 ,...,qNn ) .
Nn !λdNn ΛNn n

Let
(i) 0
Ωn = {(r1 , . . . rNn ) ∈ ΛN
n | each Λn−1 contains Nn−1 rk s}.
n

Note that for (q1 , . . . qNn ) ∈ Ωn there are no rk ’s in the corridor between the
(i)
Λn−1 .

Let
(1) (2)
Ω̃n = {(q1 , . . . qNn ) ∈ ΛN
n : q1 , . . . , qNn−1 ∈ Λn−1 , qNn−1 +1 , . . . , q2Nn−1 ∈ Λn−1 ,
n

(2d )
. . . , q(2d −1)Nn−1 +1 , . . . q2d Nn−1 ∈ Λn−1 }.

40
Since Ωn ⊂ ΛN
n
n

Z
1
Zn ≥ dq1 . . . dqNn e−βUNn (q1 ,...,qNn )
Nn !λdNn Ωn
Z
Nn ! 1 1
= dq1 . . . dqNn e−βUNn (q1 ,...qNn ) .
(Nn−1 !)2d Nn ! λdNn Ω̃n
Since φ is tempered, we get for q1 , . . . , qNn ∈ Ω̃n
UNn (q1 , . . . , qNn ) ≤UNn−1 (q1 , . . . , qNn−1 ) . . .
+ UNn−1 (q(2d −1)Nn−1 +1 , . . . , q2d Nn−1 ).
Thus
Z !2d
1 1
Zn ≥ dq1 . . . dqNn−1 e−βUNn−1 (q1 ,...,qNn−1 )
(Nn−1 !))2d λdNn Nn−1
Λn−1
d
= (Zn−1 )2 .
Therefore
1 1 1 d
gn = log Zn = d log Zn ≥ d log(Zn−1 )2 = gn−1 .
Nn 2 Nn−1 2 Nn−1
The sequence (gn )n∈N is increasing. To prove that gn converges it is sufficient
to show that gn is bounded from above. Since the pair potential φ is stable
we have
UNn (q1 , . . . , qNn ) ≥ −BNn
and therefore
Z
1
Zn = dq1 . . . dqNn e−βUNn (q1 ,...,qNn )
Nn !λdNn ΛN
n
n

Vn Nn βBNn
Z
1
≤ dq1 . . . dqNn eβBNn ≤ e .
Nn !λdNn ΛN
n
n Nn !λdNn
Thus
 
1 Nn
gn ≤ log Vn − log Nn ! − d log λ + βB = − log + log Nn
Nn Vn
1
− log Nn ! − d log λ + βB
Nn
≤ − log ρ + 2 − d log λ + βB
for large n since
 
Nn 1
lim = ρ and lim log Nn − log Nn ! = 1.
n→∞ Vn n→∞ Nn


41
5.3 Equivalence of ensembles
The equivalence of the Gibbs ensemble is the key problem of equilibrium
statistical mechanics. It goes back to Gibbs 1902, who conjectured that both
the canonical and the grandcanonical Gibbs ensemble are equivalent with
the microcanonical ensemble. The main difficulty to answer the question of
equivalence lies in the precise definition of the notion equivalence. Nowadays
the term can have three different meanings, each on a different level of infor-
mation. We briefly introduce these concepts, but refer for details to one of
the following research articles ([Ada01], [Geo95], [Geo93]).

Equivalence at the level of thermodynamic functions


Under general assumptions on the interaction potential, e.g. stability and
temperedness as in Subsection 5.2, one is able to prove the following thermo-
dynamic limits of the thermodynamic functions given by the three Gibbs en-
sembles. Let (Λn )n∈N be any cofinal sequence of boxes with corresponding se-
quences of energy densities (εn )n∈N and particle densities (ρn )n∈N . Then there
is a closed packing density ρ(cp) ∈ (0, ∞) and an energy density ε(ρ) ∈ (0, ∞)
such that the following limits exist under some additional requirements (de-
tails are in [Rue69]) depending on the specific model chosen.

Grandcanonical Gibbs ensemble Let β > 0 be the inverse temperature


and µ ∈ R the chemical potential. Then the function p(β, µ), defined by
1
βp(β, µ) = lim log ZΛn (β, µ),
n→∞ |Λn |

is called the pressure.


Canonical Gibbs ensemble Let β > 0 be the inverse temperature and
Nn
(ρn )n∈N with ρn → ρ ∈ (0, ρ(cp) ) a sequence of particle densities ρn = |Λ n|
.
Then the function f (β, ρ), defined by
1
−βf (β, ρ) = lim log ZΛn (β, ρn |Λn |),
n→∞ |Λn |

is called the free energy.


Microcanonical Gibbs ensemble Let (ρn )n∈N with ρn → ρ ∈ (0, ρ(cp) ) be
Nn
a sequence of particle densities ρn = |Λ n|
, and let (εn )n∈N with εn → ε ∈
En
(ε(ρ), ∞) be a sequence of energy densities εn = |Λ n|
. Then the function

1
s(ε, ρ) = lim log ωΛn (εn |Λn |), ρn |Λn |) is called entropy.
n→∞ |Λn |

42
Now equivalence at the level of thermodynamic functions is given if all three
thermodynamic functions in the thermodynamic limit are related to each
other by a Legendre-Fenchel transform for specific regions of parameter re-
gions of ε, ρ, β and µ. Roughly speaking this kind of equivalence is mainly
given in absence of phase transitions, i.e., only for those parameters where
there are no singularities. For details see the monograph [Rue69], where the
following transforms are established.

βp(β, µ) = sup {s(ε, ρ) − βε − ρµ}


ρ≤ρ(cp) ,ε>ε(ρ)

f (β, ρ) = inf {ε − β −1 s(ε, ρ)}


ε>ε(ρ)

p(β, µ) = sup {ρµ − f (β, ρ)}.


ρ≤ρ(cp)

Equivalence at the level of canonical and microcanonical Gibbs


measures
In Section 6 we introduce the concept of Gibbs measures. A similar concept
can be formulated for canonical Gibbs measures (see [Geo79] for details)
as well as for microcanonical Gibbs measures (see [Tho74] and [AGL78] for
details). The idea behind these concepts is roughly speaking to condition
outside any finite region on particle density and energy density events.
Equivalence at the level of canonical and microcanonical Gibbs measures is
then given if the microcanonical and canonical Gibbs measures are certain
convex combinations of Gibbs measures (see details in [Tho74], [AGL78] and
[Geo79]).

Equivalence at the level of states/measures


At the level of states/measures one is interested in any weak (in the sense
of probability measures, i.e., weak-∗-topology) limit points (accumulation
points) of the Gibbs ensembles. To define a consistent limiting procedure we
need an appropriate phase or configuration space for the infinite systems in
the thermodynamic limit. We consider here only continuous systems whose
phase space (configuration space) for any finite region Λ and finitely many
particles is ΓΛ . In Section 6 we introduce the corresponding configuration
space for lattice systems. Define

Γ = {ω ⊂ (Rd × Rd ) : ω = {(q, pq ) : q ∈ ω̂}},

where ω̂, the set of occupied positions, is a locally finite subset of Rd , and pq
is the momentum of the particle at position q. Let B denote the σ-algebra of

43
this set generated by counting variables (see [Geo95] for details). Then each
Gibbs ensemble can be extended trivially to a probability on (Γ, B) just by
putting the whole mass on a subset. Therefore it makes sense to consider
all weak limit points in the thermodynamic limit. If the limit points are not
unique, i.e., there are several accumulation points, one considers the whole
set of accumulation points closed appropriately as the set of equilibrium
states/measure or Gibbs measures.
Equivalence at the level of states/measures is given if all accumulation points
of the different Gibbs ensembles belong to the same set of equilibrium points
or the same set of Gibbs measure ([Geo93],[Geo95],[Ada01]).
In the next section we develop the mathematical theory for Gibbs mea-
sures without any limiting procedure.

6 Gibbs measures
In this section we introduce the mathematical concept of Gibbs measures,
which are natural candidates to be the equilibrium measures for infinite sys-
tems, i.e., for systems after taking the thermodynamic limit. We will restrict
our study from now on to lattice systems, i.e., the phase space is given as the
set of functions (configurations) on some countable discrete set with values
in a finite set, called the state space.

6.1 Definition
Let Zd the square lattice for dimensions d ≥ 1 and let E be any finite set.
d
Define Ω := E Z = {ω = (ωi )i∈Zd : ωi ∈ E} the set of configurations with
values in the state space E. Let E be the power set of E, and define the
d
σ−algebra F = E Z such that (Ω, F) is a measurable space. Denote the set
of all probability measures on (Ω, F) by P(Ω, F).

Definition 6.1 (Random field) Let µ ∈ P(Ω, F). Any family (σi )i∈Zd of
random variables which is defined on the probability space (Ω, F, µ) and which
takes values in (E, E) is called a random field.

If one considers the canonical setup, where σi : Ω → E are the projections


for any i ∈ Zd , a random field is synonymous with a probability measure
µ ∈ P(Ω, F). Let S = {Λ ⊂ Zd : |Λ| < ∞} be the set of finite volume
subsets of the square lattice Zd . Cylinder events are defined as {σΛ ∈ A}
for any A ∈ E Λ and any projection σΛ : Ω → E Λ for Λ ∈ S. Then F is the
smallest σ- algebra containing all cylinder events. If Λ ∈ S the σ− algebra
FΛ on Ω contains all cylinder events {σΛ00 ∈ A} for all A ∈ E and Λ0 ⊂ Λ.

44
If we return to our physical intuition we are interested in random fields for
which the so-called spin variables σi exhibit a particular type of dependence.
We employ a similar dependence structure as for Markov chains, where the
dependence is expressed as a condition on past events. This approach was
introduced by Dobrushin ([Dob68a],[Dob68b],[Dob68c]) and Lanford and Ru-
elle ([LR69]). Here, we condition on the complement of any finite set Λ ⊂ Zd .
To prescribe these conditional distributions of all finite collections of variables
we define the σ-algebras

TΛ = FZd \Λ for any Λ ∈ S. (6.22)

The intersection of all these σ-algebras is denoted by T = ∩Λ∈S TΛ and called


the tail-σ-algebra or tail-field.
The dependence structure will be described by some functions linking the
random variables and expressing the energy for a given dependence structure.

Definition 6.2 An interaction potential is a family Φ = (φA )A∈S of func-


tions φA : Ω → R such that the following holds.

(i) φA is FA -measurable for all A ∈ S,

(ii) For any Λ ∈ S and any configuration ω ∈ Ω the expression


X
HΛ (ω) = φA (ω) (6.23)
A∈S,A∩Λ6=∅

exists. The term exp(−βHΛ (ω)) is called the Boltzmann factor for
some parameter β > 0, where β is the inverse temperature.

Example 6.3 (Pair potential) Let φA = 0 whenever |A| > 2 and let
J : Zd × Zd → R, ϕ : E × E → R and ψ : E → R symmetric and measurable.
Then a general pair interaction potential is given by

 J(i, j)ϕ(ωi , ωj ) if A = {i, j}, i 6= j,
φA (ω) = J(i, i)ψ(ωi ) if A = {i}, for ω ∈ Ω.
0 if |A| > 2

We combine configurations outside and inside of any finite set of random


variable as follows. Let ω ∈ Ω and ξ ∈ ΩΛ , Λ ∈ S, be given. Then ξωZd \Λ ∈ Ω
with σΛ (ξωZd \Λ ) = ξ and σZd \Λ (ξωZd \Λ ) = ωZd \Λ . With this notation we can
define a nearest-neighbour Hamiltonian with given boundary condition.

45
Example 6.4 (Hamiltonian with boundary) Let Λ ∈ S and η ∈ Ω and
the functions J, ϕ and ψ as in Example 6.3 be given. Then
1 X X X
HΛη (ω) = J(i, j)ϕ(ωi , ωj ) + ϕ(ωi , ηj ) + J(i, i)ψ(ωi )
2 i∈Λ,j∈Λc , i∈Λ
i,j∈Λ,hi−ji=1
hi−ji=1

denotes a Hamiltonian in Λ with nearest-neighbour interaction and with con-


figurational boundary condition η ∈ Ω, where hx, yi = maxi∈{1,...,d} |xi − yi |
for x, y ∈ Zd . Instead of a given configurational boundary condition one can
model the free boundary condition and the periodic boundary condition as
well.

In the following we fix a probability measure λ ∈ P(E, E) on the state space


and call it the reference or a priori measure. Later we may also con-
sider the Lebesgue measure as reference measure. Choosing a probability
measure as a reference measure for finite sets Λ gives just a constant from
normalisation.

Definition 6.5 (Gibbs measure)

(i) Let η ∈ Ω, Λ ∈ S, β > 0 the inverse temperature and Φ be an interac-


tion potential. Define for any event A ∈ F
Z  
Φ −1
γΛ (A|η) = ZΛ (η) λΛ (dω)1lA (ωηZd \Λ ) exp − βHΛ (ωηZd \Λ )
ΩΛ
(6.24)
with normalisation or partition function
Z  
ZΛ (η) = λΛ (dω) exp − βHΛ (ωηZd \Λ ) .
ΩΛ

Then γΛΦ (·|η) is called the Gibbs distribution in Λ with boundary


condition ηZd \Λ , with interaction potential Φ, inverse temperature β
and reference measure λ.

(ii) A probability measure µ ∈ P(Ω, F) is a Gibbs measure for the inter-


action potential Φ and inverse temperature β if

µ(A|FΛ ) = γΛΦ (A|·) µ a.s. for all A ∈ F, Λ ∈ S, (6.25)

where γΛΦ is the Gibbs distribution for the parameter β (6.24). The set
of Gibbs measures for inverse temperature β with interaction potential
Φ is denoted by G(Φ, β).

46
(iii) An interaction potential Φ is said to exhibit a first-order phase transi-
tion if |G(Φ, β)| > 1 for some β > 0.

If the interaction potential Φ is known we may skip the explicit appearance


of the interaction potential and write instead G(β) for the set of Gibbs mea-
sure with inverse temperature β. However, the parameter β can ever be
incorporated in the interaction potential Φ.

Remark 6.6

(i) γΛΦ (A|·) is TΛ -measurable for any event A ∈ F.

(ii) The equation (6.25) is called the DLR-equation or DLR-condition


in honour of R. Dobrushin, O. Lanford and D. Ruelle.

6.2 The one-dimensional Ising model


In this subsection we study the one-dimensional Ising model. If the state
space E is finite, then one can show a one-to-one correspondence between
the set of all positive transition matrices and a suitable class of nearest-
neighbour interaction potentials such that the set G(Φ) of Gibbs measures
is the singleton with the Markov chain distribution. This is essentially for a
geometric reason in dimension one, the condition on the boundary is a two-
sided Markov chain, for details see [Geo88]. A simple one-dimensional model
which shows this equivalence was suggested 1920 by W. Lenz ([Len20]), and
its investigation by E. Ising ([Isi24]) was a first and important step towards
a mathematical theory of phase transitions. Ising discovered that this model
fails to exhibit a phase transition and he conjectured that this will hold also
in the multidimensional case. Nowadays we know that this is not true. In
Subsection 6.4 we discuss the multidimensional case.

Let E = {−1, +1} be the state space and consider the lattice Z. At
each site the spin can be downwards, i.e., −1, or be upwards, i.e., +1. The
nearest-neighbour interaction is modelled by a constant J, called the coupling
constant, through the expression Jωi ωj for any i, j ∈ Z with |i − j| = 1:

J > 0: Any two adjacent spins have minimal energy if and only if they
are aligned in that they have the same sign. This interaction is therefore
ferromagnetic.
J < 0: Any two adjacent spins prefer to point in opposite directions. Thus
this is a model of an antiferromagnet.

47
h ∈ R: A constant h describes the action of an external field (directed
upwards when h > 0).
Hence the nearest-neighbour interaction potential ΦJ,h = (φJ,h
A )A∈S reads


 −Jωi ωi+1 , if A = {i, i + 1},
J,h
φA (ω) = −hω , if A = {i}, for ω ∈ Ω. (6.26)
0 , else

We employ periodic boundary conditions, i.e., for Λ ⊂ Z finite and with


Λ = {1, . . . , |Λ|} we set ω|Λ|+1 = ω1 for any ω ∈ Ω. The Hamiltonian in Λ
with periodic boundary conditions reads
|Λ| |Λ|
(per)
X X
HΛ (ω) = −J ωi ωi+1 − h ωi . (6.27)
i=1 i=1

The partition function depends on the inverse temperature β > 0, the cou-
pling constant J and the external field h ∈ R, and is given by
X X  
(per)
ZΛ (β, J, h) = ··· exp − βHΛ (ω) . (6.28)
ω1 =±1 ω|Λ| =±1

We compute this by the one-dimensional version of transfer matrix formalism


introduced by [KW41] for the two-dimensional Ising model. More details
about this formalism and further investigations on lattice systems can be
found in [BL99a] and [BL99b]. Crucial is the identity
X X
ZΛ (β, J, h) = ··· Vω1 ,ω2 Vω2 ,ω3 · · · Vω|Λ|−1,|Λ| Vω|Λ| ,ω1
ω1 =±1 ω|Λ| =±1

with 1 1 
Vωi ωi+1 = exp βhωi + βJωi ωi+1 + βhωi+1
2 2
for any ω ∈ Ω and i = 1, . . . , |Λ|. Hence, ZΛ (β, J, h) = Trace V|Λ| with the
symmetric matrix
e−βJ
 β(J+h) 
e
V=
e−βJ eβ(J−h)
that has the eigenvalues
  12
βJ 2βJ 2 −2βJ
λ± = e cosh(βh) ± e sinh (βh) + e . (6.29)

This gives finally


|Λ| |Λ|
ZΛ (β, J, h) = λ+ + λ− . (6.30)

48
This is a smooth expression in the external field parameter h and the inverse
temperature β; it rules out the appearance of a discontinuous isothermal
magnetisation: so far, no phase transition. The thermodynamic limit of the
free energy per volume is
1 1
f (β, J, h) = lim log ZΛ (β, J, h) = − log λ+ , (6.31)
Λ↑Z β|Λ| β
 
because |Λ|−1 log ZΛ (β, J, h) = − log λ+ − |Λ|
1
1+( λλ−+ )|Λ| . The magnetisation
in the canonical ensemble is given as the partial derivative of the specific free
energy per volume,
sinh(βh)
m(β, J, h) = −∂h f (β, J, h) = q .
sinh2 (βh) + e−4βJ

This is symmetric, m(β, J, 0) = 0 and limh→±∞ m(β, J, h) = ±1 and for all


h 6= 0 we have |m(β, J, h)| > |m(β, 0, h)| saying that the absolute value of the
magnetisation is increased by the non-vanishing coupling constant J. The
set G(Φ, β) of Gibbs measures contains only one element, called µβJ,h , see
[Geo88] for the explicit construction of this measure as the corresponding
Markov chain distribution, here we outline only the main steps.
1.) The nearest-neighbour interaction ΦJ,h in (6.26) defines in the usual way
the Gibbs distributions γΛJ,h (·|ω) for any finite Λ ⊂ Z and any boundary
condition ω ∈ Ω. Define the function g : E 3 → (0, ∞) by
J,h
γ{i} (σi = y|ω) = g(ωi−1 , y, ωi+1 ), y ∈ E, i ∈ Z, ω ∈ Ω. (6.32)

We compute

g(x, y, z) = eβy(h+Jx+Jz) /2 cosh(β(h + Jx + Jz)) for x, y, z ∈ E. (6.33)

Fix any a ∈ E. Then the matrix


 
g(a, x, y)
Q= (6.34)
g(a, a, y) x,y∈E

is positive. By the well-known theorem of Perron and Frobenius we have


a unique positive eigenvalue q such that there is a strictly positive right
eigenvector r corresponding to q.
2.) The matrix PJ,h , defined as
 
Q(x, y)r(y)
PJ,h = (6.35)
qr(x) x,y∈E

49
is uniquely determined by the matrix Q and therefore by g in (6.33). Clearly,
PJ,h is stochastic. We then let µβJ,h ∈ P(Ω) denote (the distribution of) the
unique stationary Markov chain with transition matrix PJ,h . It is uniquely
defined by
n
Y
µβJ,h (σi = x0 , σi+1 = x1 , . . . , σi+n = xn ) = αPJ,h (x0 ) PJ,h (xi−1 , xi ), (6.36)
i=1

where i ∈ Z, n ∈ N, x0 , . . . , xn ∈ E, and αPJ,h satisfies αPJ,h PJ,h = αPJ,h .

The expectation at each site is given by


 − 21
EµJ,h (σi ) = e−4βJ + sinh2 (βh) sinh(βh) , i ∈ Z.

In the low temperature limit one is interested in the behaviour of the set
G(Φ, β) of Gibbs measures as β → ∞. A configuration ω ∈ Ω is called a
ground state of the interaction potential Φ if for each site i ∈ Z the pair
(ωi , ωi+1 ) is a minimal point of the function
1
ψ : {−1, +1}2 → R; (x, y) 7→ ψ(x, y) = −Jxy + h(x + y).
2
Note that the interaction potential Ψ = (ψA )A∈S with

ψ(σi , σi+1 ) , if A = {i, i + 1}
ψA =
0 , otherwise
is equivalent to the given nearest neighbour interaction potential Φ. We
denote the constant configuration with only upward-spins by ω+ (respectively
the constant configuration with only downward-spins by ω− ). The Dirac
measure on these constant configurations is denoted by δ+ respectively δ− .
Then, for h > 0, we get that
µβJ,h → δ+ weakly in sense of probability measures as β → ∞,
and hence ω+ is the unique ground state of the nearest-neighbour interaction
potential Φ. Similarly, for h < 0, we get that
µβJ,h → δ− weakly in sense of probability measures as β → ∞,
and hence ω− is the unique ground state of the nearest-neighbour interaction
potential Φ. In the case h = 0 the nearest neighbour interaction potential Φ
has precisely two ground states, namely ω+ and ω− , and hence we get
1 1
µβJ,0 → δ+ + δ− weakly in sense of probability measures as β → ∞.
2 2

50
6.3 Symmetry and symmetry breaking
Before we study the two-dimensional Ising model, we briefly discuss the role
of symmetries for Gibbs measures and their connections with phase transi-
tions. As is seen by the spontaneous magnetisation below the Curie tem-
perature, the spin system takes one of several possible equilibrium states
each of which is characterised by a well-defined direction of magnetisation.
In particular, these equilibrium states fail to be preserved by the spin re-
versal (spin-flip) transformation. Thus breaking of symmetries has some
connection with the occurrence of phase transitions.
Let T denote the set of transformations

τ : Ω → Ω, ω 7→ (τi ωτ∗−1 i )i∈Zd ,

where τ∗ : Zd → Zd is any bijection of the lattice Zd , and the τi : E → E, i ∈


Zd , are invertible measurable transformations of E with measurable inverses.
Each τ ∈ T is a composition of a spatial transformation τ∗ and the spin
transformations τi , i ∈ Zd , which act separately at distinct sites of the square
lattice Zd .

Example 6.7 (Spatial shifts) Denote by Θ = (θi )i∈Zd the group of all
spatial transformations or spatial shifts or shift transformations
θj : Ω → Ω, (ωi )i∈Zd 7→ (ωi−j )i∈Zd .

Example 6.8 (Spin-flip transformation) Let the state space E be a sym-


metric Borel set of R and define the spin-flip transformation

τ : Ω → Ω, (ω)i∈Zd 7→ (−ωi )i∈Zd .

Notation 6.9 The set of all translation invariant probability measures on Ω


is denoted by PΘ (Ω, F) = {µ ∈ P(Ω, F) : µ = µ ◦ θi−1 for any i ∈ Zd }.
The set of all translation invariant Gibbs measures for the interaction poten-
tial Φ and inverse temperature β is denoted by GΘ (Φ, β) = {µ ∈ G(Φ, β) : µ =
µ ◦ θi−1 for any i ∈ Zd }.

Definition 6.10 (Symmetry breaking) A symmetry τ ∈ T is said to be


broken if there exists some µ ∈ G(Φ, β) such that τ (µ) 6= µ for some β.

A direct consequence of symmetry breaking is that |G(Φ, β)| > 1, i.e.,


when there is a symmetry breaking the interaction potential exhibit a phase
transition. There are models where all possible symmetries are broken as
well as models where only a subset of symmetries is broken. A first exam-
ple is the one-dimensional inhomogeneous Ising model, which is probably the

51
simplest model showing symmetry breaking. The one-dimensional inhomoge-
neous Ising model on the lattice N has the inhomogeneous nearest-neighbour
interaction potential Φ = (φA )A∈S defined
P for−2J
a sequence (Jn )n∈N of real
numbers Jn > 0 for all n ∈ N such that n∈N e n < ∞, as follows

−Jn σn σn+1 , if A = {n, n + 1},
φA = .
0 otherwise ,
This model is spatial inhomogeneous, the potential Φ is invariant under the
spin-flip transformation τ , but some Gibbs measures are not invariant un-
der this spin-flip transformation (for details see [Geo88]) for β > 0. The
simplest spatial shift invariant model which exhibits a phase transition is
the two-dimensional Ising model, which we will study in the next subsec-
tion 6.4. This model breaks the spin-flip symmetry while the shift-invariance
is preserved. Another example of symmetry breaking is the discrete two-
dimensional Gaussian model by Shlosman ([Shl83]). Here the spatial shift
invariance is broken. More information can be found in [Geo88] or [GHM00].

6.4 The Ising ferromagnet in two dimensions


Let E = {−1, 1} be the state space and define the nearest-neighbour inter-
action potential Φ = (φA )A∈S as

−σi σj , if A = {i, j}, |i − j| = 1
φA = .
0 , otherwise
The interaction potential Φ is invariant under the spin flip transformation τ
and the shift-transformations θi , i ∈ Zd . Let δ+ , δ− be the Dirac measures for
the constant configurations ω+ ∈ Ω and ω− ∈ Ω. The interaction potential
takes its minimum at ω+ and ω− , hence ω+ and ω− are ground states for
the system. The ground state generacy implies a phase transition if ω+ , ω−
are stable in the sense that the set of Gibbs measure G(Φ, β) is attracted by
each of the measures δ+ and δ− for β → ∞. Let d denote the Lévy metric
compatible with weak convergence in the sense of probability measures.
Theorem 6.11 (Phase transition) Under the above assumptions it holds
lim d(GΘ (Φ, β), δ+ ) = lim d(GΘ (Φ, β), δ− ) = 0.
β→∞ β→∞

For sufficiently large β there exist two shift-invariant Gibbs measure µβ+ , µβ− ∈
GΘ (Φ, β) with τ (µβ+ ) = µβ− and

µβ− (ω0 ) = Eµβ (ω0 ) < 0 < µβ+ (ω0 ) = Eµβ (ω0 ).
− +

52
Remark 6.12
(i) µβ+ (ω0 ) is the mean magnetisation. Thus: The two-dimensional Ising
ferromagnet admits an equilibrium state/measure of positive magnetisa-
tion although there is no action of an external field. This phenomenon
is called spontaneous magnetisation.
(ii) GΘ (Φ, β) > 1 ⇔ µβ+ (ω0 ) > 0 goes back to [LL72]. Moreover, the Grif-

fith’s inequality implies that the magnetisation µβ+ (ω0 ) is a non-negative


non-decreasing function of β. Moreover
there is a critical inverse tem-
perature βc such that GΘ (Φ, β) = 1 when β < βc and GΘ (Φ, β) > 1
when β > βc . The value of βc is
1 1 √
βc = sinh−1 1 = log 1 + 2)
2 2
and the magnetisation for β ≥ βc is
 81
µβ+ (ω0 ) = 1 − (sinh 2β)−4 .

(iii) For the same model in three dimensions one has again µβ+ , µβ− ∈ GΘ (Φ, β),
but there also exist non-shift-invariant Gibbs measures ([Dob73]).

Proof of Theorem 6.11. Let Λ ⊂ Z2 be a centred cube. Denote by


BΛ = {{i, j} ⊂ Z2 : |i − j| = 1, {i, j} ∩ Λ 6= ∅}
the set of all nearest-neighbour bonds which emanate from sites in Λ. Each
bond b = {i, j} ∈ BΛ should be visualised as a line segment between i
and j. This line segment crosses a unique ”dual” line segment between two
nearest-neighbour sites u, v in the dual cube Λ∗ (shift by 12 in the canonical
directions). The associate set b∗ = {u, v} is called the dual bond of b, and we
write
B∗Λ = {b∗ : b ∈ BΛ } = {{u, v} ⊂ Λ∗ : |u − v| = 1}
for the set of all dual bonds. Note
1
b∗ = {u ∈ Λ+ : |u − (i + j)/2| = }.
2
A set c ⊂ B∗Λ is called a circuit of length l if c = {{u(k−1) , u(k) } : 1 ≤ k ≤ l} for
some (u(0) , . . . , u(l) ) with u(l) = u(0) , |{u(1) , . . . , u(l) }| = l and {u(k−1) , u(k) } ∈
B∗Λ , 1 ≤ k ≤ l. A circuit c surrounds a site a ∈ Λ if for all paths (i(0) , . . . , i(n) )
in Z2 with i(0) = a and i(n) ∈ / Λ and {i(m−1) , i(m) } ∈ BΛ for all 1 ≤ m ≤ n there
exits a m ∈ N with {i (m−1)
, i(m) }∗ ∈ c. We denote the set of circuits which
surround a by Ca . We need a first lemma.

53
Lemma 6.13 For all a ∈ Λ and l ≥ 1 we have

{c ∈ Ca : |c| = l} ≤ l3l−1 .

Proof. Each c ∈ Ca of length l contains at least one of the l dual bonds

{a + (k − 1, 0), a + (k, 0)}∗ k = 1, . . . , l,

which cross the horizontal half-axis from a to the right for example. The
remaining l − 1 dual bonds are successively added, at each step there are at
most 3 possible choices. 

The ingenious idea of Peierls ([Pei36]) was to look at circuits which occur in
a configuration. For each ω ∈ Ω we let

B∗Λ (ω) = {b∗ : b = {i, j} ∈ BΛ , ωi 6= ωj }

denote the set of all dual bonds in B∗ which cross a bond between spins of
opposite sign. A circuit c with c ⊂ B∗Λ (ω) is called a contour for ω. We let
ω outside of Λ be constant. As in Figure 3 we put outside + -spins. If a site
a ∈ Λ is occupied by a minus spin then a is surrounded by a contour for ω.
The idea for the proof of Theorem 6.11 is as follows. Fix ω+ boundary
condition outside of Λ. Then the minus spins in Λ form (with high proba-
bility) small islands in an ocean of plus spins. Then in the limit Λ ↑ Z2 one
obtains a µβ+ ∈ G(βΦ) which is close to the delta measure δ+ for β sufficiently
β β
large. As
δ+ and δ− are distinct, so are µ+ and µ− when β is large. Hence
G(βΦ) > 1 when β is large. We turn to the details. The next lemma just
ensures the existence of a contour for positive boundary conditions and one
minus spin in Λ. We just cite it without any proof.

Lemma 6.14 Let ω ∈ Ω with ωi = +1 for all i ∈ Λc and ωa = −1 for some


a ∈ Λ. Then there exists a contour for ω which surrounds a.

Now we are at the heart of the Peierls argument, which is formulated in the
next lemma.

Lemma 6.15 Suppose c ⊂ B∗Λ is a circuit. Then

γΛβΦ (c ⊂ B∗Λ (·)|ω) ≤ e−2β|c|

for all β > 0 and for all ω ∈ Ω.

54
+
+ + + + +
+ +
+ + + + + +
+ +

+ + - - - + +
-
+
+ +

+
+

+
- - +
+ + -
+ + - + +
- - -
- - - - - -
+ + +
+ + +
- - -
- - + - - - +
+ + +
+ + + +
-
+ - + - +
+ +
+ + + + + + +
-
+ - - + - - - + + + - + - +
-
+ - - + -
- - + + - + + + +
+

+ + - - - + + - - + + -
-

+ +
+ +

- - + + - + - + - -
- - -
+ - - - + + - - - - - - - +
+

+ - + + + - - - + - - - +
+ + +
+ + +

+ - - - - - - +
- -
+

+ +
+

+ - - + - - + + +
- -
+

+ + +
+ + + + + + + + + + + +
Figure 3: a contour for + boundary condition

55
Proof. Note that for all ξ ∈ Ω we have
X X
−HΛ (ξ) = ξi ξj = |BΛ | − (1 − ξi ξj )
{i,j}∈BΛ {i,j}∈BΛ

= |BΛ | − 2|{{i, j} ∈ BΛ : ξ 6= ξj }| = |BΛ | − 2|B∗Λ |.

Now we define two disjoint sets of configurations which we need later for an
estimation.
A1 = {ξ ∈ Ω : ξZd \Λ = ωZd \Λ , c ⊂ B∗Λ (ξ)}
A2 = {ξ ∈ Ω : ξZd \Λ = ωZd \Λ , c ∩ B∗Λ (ξ) = ∅}.
There is a mapping τc : Ω → Ω with

−ξ , if i is surrounded by c,
(τc ξ)i = ,
ξ , otherwise

which flips all spins in the interior of the circuit c. Moreover, for all {i, j} ∈
BΛ we have
ξi ξj , if {i, j}∗ ∈

/c
(τc ξ)i (τc ξ)j = ∗ ,
−ξi ξj , if {i, j} ∈ c
resulting in B∗Λ (τc ξ)4B∗Λ (ξ) = c (this was the motivation behind the defi-
nition of mappings), where 4 denotes the symmetric difference of sets. In
particular we get that τc is a bijection from A2 to A1 , and we have

HΛ (ξ) − HΛ (τc ξ) = 2|B∗Λ (ξ)| − 2|B∗Λ (τc ξ)| = −2|c|.

Now we can estimate with the help of the set of events A1 , A2


P P
ξ∈A exp(−βH Λ (ξ)) ξ∈A exp(−βHΛ (τc ξ))
γΛ (c ⊂ B∗Λ (·)|ω) ≤ P 1
= P 2
ξ∈A2 exp(−βHΛ (ξ)) ξ∈A2 exp(−βHΛ (ξ))
= exp(−2β|c|).

Now we finish our proof of Theorem 6.11. For β > 0 define


X
r(β) = 1 ∧ l(3e−2β )l ,
l≥1

where ∧ denotes the minimum, and note that r(β) → 0 as β → ∞. The


preceding lemmas yield
X X X
γΛ (ωa |ω + ) ≤ γΛ (c ⊂ B∗Λ (·)|ω + ) ≤ e−2β|c| ≤ l3l−2βl ,
c∈Ca c∈Ca l≥1

56
and thus γΛ (ωa |ω + ) ≤ r(β) for all a ∈ Z2 , β > 0 and Λ ⊂ Z2 . Choose
ΛN = [−N, N ]2 ∩ Z2 and define the approximating measures
1 X
νN+ = γΛN +i (·|ω + ).
|ΛN | i∈Λ
N

As P(Ω, F) is compact, the sequence (νN+ )N ∈N has a cluster point µβ+ and one
can even show that µβ+ ∈ GΘ (Φ, β) ([Geo88]). Our estimation above gives
then µβ+ (ωa = −1) ≤ r(β), and in particular one can show that

lim µβ+ = δ+ and lim d(GΘ (βΦ), δ+ ) = 0.


β→∞ β→∞

Note µβ− = τ (µβ+ ). Hence,

µβ+ (ω0 = −1|ω + ) = µβ− (ω0 = +1|ω − ).

If β is so large that µβ+ (ω0 = −1) ≤ r(β) < 31 , then


1
µβ+ (ω0 = −1) = µβ− (ω0 = +1) < .
3
But {ω0 = −1} ∪ {ω0 = +1} = Ω, and hence
2
µβ+ (ω0 = +1) = 1 − µβ+ (ω0 = −1) ≥ .
3


6.5 Extreme Gibbs measures


The set G(Φ, β) of Gibbs measures for some interaction potential Φ and
inverse temperature β > 0 is a convex set, i.e., if µ, ν ∈ G(Φ, β) and 0 < s < 1
then sµ + (1 − s)ν ∈ G(Φ, β). An extreme Gibbs measure (or in physics: a
pure state) is an extreme element µ of the convex set G(Φ, β). The set
of all extreme Gibbs measures is denoted by ex G(Φ, β). Below we give a
characterisation of extreme Gibbs measures. But first we briefly discuss
microscopic and macroscopic quantities. A real function f : Ω → R is
said to describe a macroscopic observable if f is measurable with respect
to the tail-σ-algebra T . The T -measurability of a function f means that the
value of f is not affected by the behaviour of any finite set of spins. For
example, the event
n 1 X o
lim ωi exists and belongs to B B ∈ BR ,
n→∞ |Λn |
i∈Λn

57
is a tail event in T for any cofinal sequence (Λn )n∈N with Λn ↑ Zd as n → ∞.
A function f describes a microscopic observable if it depends only on finitely
many spins. A function f : Ω → R is called cylinder function or local
function if it is FΛ -measurable for some Λ ∈ S. The function f is called
quasi-local if it can be approximated in supremum norm by local functions.
The following theorem gives a characterisation of extreme Gibbs measures.
It was invented by Lanford and Ruelle [LR69] and but was introduced earlier
in a weaker form by Dobrushin [Dob68a],[Dob68b].

Theorem 6.16 (Extreme Gibbs measures) A Gibbs measure µ ∈ G(Φ)


is extreme if and only if µ is trivial on the tail-σ-algebra T , i.e. if µ(A) = 0
or µ(A) = 1 for any A ∈ T .

Microscopic quantities are subject to rapid fluctuations in contrast to


macroscopic quantities. A probability measure µ ∈ P(Ω, F) describing the
equilibrium state of a given system is consistent with the observed empiri-
cal distributions of microscopic variables when it is a Gibbs measure. The
second requirement even gives that macroscopic quantities are constant with
probability one, and with Theorem 6.16 it follows that only extreme Gibbs
measures are an appropriate description of equilibrium states. For this rea-
son, an extreme Gibbs measure is often called a phase. However this term
should not be confused with the physical concept of a pure phase. Note
that the stable coexistence of distinct pure phases in separated regions of
space will also be represented by an extreme Gibbs measure (see Figure 4,
which was taken from [Aiz80]). This can be seen quite nicely in the three-
dimensional Ising model ([Dob73]), where a Gibbs measure is constructed via
Gibbs distributions whose boundary is on one half-space given by upward-
spins and on the other half-space given by downward-spins. It is a tempting
misunderstanding to believe that the coexistence of two pure phases was de-
scribed by a mixture like 21 (µ1 + µ2 ), µ1 , µ2 ∈ G(Φ, β). Such a mixture rather
corresponds to an uncertainty about the true phase of the system. See the
Figure 4 for an illustration of this fact.

6.6 Uniqueness
In this subsection we give a short intermezzo about the question of unique-
ness of Gibbs measures, i.e., the situation when there is a most one Gibbs
measure possible for the given interaction potential. One might guess that
this question has something to due with the dependence structure introduced
from the interaction potential. One is therefore led to check the dependence
structure of the conditional Gibbs distributions at one given lattice site. For

58
Figure 4: An extreme µ(coexistence) and mixture 21 (µ(water) + µ(ice) )

that, fix any i ∈ Zd and consider the ωj -dependence of the Gibbs distribu-
Φ
tion γ{i} (·|ω) for each j ∈ Zd and ω ∈ Ω for a given interaction potential Φ.
Introduce the matrix elements
Φ Φ
Ci,j (Φ) = sup ||γ{i} (·|ξ) − γ{i} (·|η)||,
ξ,η∈Ω,
ξ d =η d
Z \{j} Z \{j}

where ||·|| denotes the uniform distance of probability measures on E, which is


Φ
one half of the total variation distance. Note that γ{i} (·|ξ) ∈ P(E, E) for any
ξ ∈ Ω. The matrix (Ci,j )i,j∈Zd is called Dobrushin’s interdependence matrix.
A first guess describing the dependence structure would be to consider the
sum X
Ci,j (Φ),
j∈Zd

however this tells us nothing about the behaviour of the configuration ω ∈ Ω


at infinity.

Definition 6.17 An interaction potential Φ is said to satisfy Dobrushin’s


uniqueness condition if
X
C(Φ) = sup Ci,j (Φ) < 1. (6.37)
i∈Zd
j∈Zd

To provide a sufficient condition for Dobrushin’s condition to hold, define


the oscillation of any function f : Ω → R as

δ(f ) = sup |f (ξ) − f (η)|.


ξ,η∈Ω

Theorem 6.18 Let Φ be an interaction potential and d ≥ 1.


(i) If Dobrushin’s uniqueness condition (6.37) is satisfied, then |G(Φ)| ≤ 1.

59
(ii) If X
sup (|A| − 1)δ(φA ) < 2,
i∈Zd A3i

then Dobrushin’s uniqueness condition (6.37) is satisfied.

Proof. See [Geo88] and references therein. 

Example 6.19 (Lattice gas) Let E = {0, 1} and let the reference measure
be the counting measure. Let K : S → R be any function on the set of all
finite subsets of Zd and define for ω ∈ Ω the interaction potential Φ by
 Q
K(A) , if ωA = i∈A ωi = 1
φA (ω) =
0 , otherwise

for any A ∈ S. Note that δ(φA ) = |K(A)|. Thus uniqueness is given when-
ever X
sup (|A| − 1)|K(A)| < 4.
i∈Zd A3i

Example 6.20 (One-dimensional systems) Let Φ be a shift-invariant in-


teraction potential and d = 1. Then there is at most one Gibbs measure
whenever X
diam (A)δ(φA ) < ∞.
A∈S,
min A=0

6.7 Ergodicity
We look at the convex set PΘ (Ω, F) of all shift-invariant random fields on
Zd . PΘ (Ω, F) is always non-empty. We also consider the σ-algebra

I = {A ∈ F : θi A = A for all i ∈ Zd } (6.38)

of all shift-invariant events. A F-measurable function f : Ω → R is I-


measurable if and only if f is invariant, in that f ◦ θi = f for all i ∈ Zd . A
standard result in ergodic theory is the following theorem.

Theorem 6.21 (i) A probability measure µ ∈ PΘ (Ω, F) is extreme in


PΘ (Ω, F) if and only if µ is trivial on the invariant σ-algebra.

(ii) Each µ ∈ PΘ (Ω, F) is uniquely determined (within PΘ (Ω, F)) by its


restriction to I.

60
(iii) Distinct probability measures µ, ν ∈ ex PΘ (Ω, F) are mutually singular
on I in that there exists an A ∈ I such that µ(A) = 1 and ν(A) = 0.

Proof. Standard textbooks of ergodic theory or [Geo88]. 

Definition 6.22 (Ergodic measure) A probability measure µ ∈ PΘ (Ω, F)


is said to be ergodic (with respect to the shift-transformation group Θ) if
µ is trivial on the σ-algebra I of all shift-invariant events. In mathematical
physics such a µ is often called a pure state.

Proposition 6.23 (Characterisation of ergodic measures)


Let µ be a probability measure µ ∈ PΘ (Ω, F) and (ΛN )N ∈N any sequence of
cubes with ΛN ↑ Zd as N → ∞. Then the following statements are equivalent.

(i) µ is ergodic.

(ii) For all events A ∈ F,


1 X
lim sup µ(A ∩ θi B) − µ(A)µ(B) = 0.

N →∞ B∈F |ΛN |
i∈Λ N

(iii) For arbitrary cylinder events A and B,


1 X
lim µ(A ∩ θi B) = µ(A)µ(B).
N →∞ |ΛN |
i∈Λ N

One can show that each extreme measure is a limit of finite volume Gibbs
distributions with suitable boundary conditions. Now, what about ergodic
Gibbs measures? The ergodic Theorem 6.24 below gives an answer: If µ ∈
ex PΘ (Ω, F) and (ΛN )N ∈N a sequence of cubes with ΛN ↑ Zd as N → ∞ one
gets
1 X
µ(f ) = lim f (θi ω)
N →∞ |ΛN |
i∈Λ N

for µ-almost all ω ∈ Ω and bounded measurable function f : Ω → R. Thus


1 X
µ = lim δθ i ω for µ − a.a. ω ∈ Ω
N →∞ |ΛN |
i∈Λ N

in any topology which is generated by countably many evaluation mappings


ν 7→ ν(f ). For E finite, the weak topology (of probability measures) has this
property.

61
For any given measurable function f : Ω → R define
1 X
RN f = f ◦ θi N ∈ N. (6.39)
|ΛN | i∈Λ
N

The multidimensional ergodic theorem says something about the limiting be-
haviour of RN f as N → ∞. Let (ΛN )N ∈N be a cofinal sequence of boxes with
ΛN ↑ Zd as N → ∞.
Theorem 6.24 (Multidimensional Ergodic Theorem) Let a probabil-
ity measure µ ∈ PΘ (Ω, F) be given. For any measurable f : Ω → R with
µ(|f |) < ∞,
lim RN f = µ(f |I) µ − a.s.
N →∞

7 A variational characterisation of Gibbs mea-


sures
In this section we give a variational characterisation for translation invariant
Gibbs measures. This characterisation will prove useful in the study of Gibbs
measures and it has a close connection to the physical intuition, namely
that an equilibrium state minimises the free energy. This will be proved
rigorously in this section. Let us start with some heuristics and assume
for this purpose only that the set Ω of configurations is finite. Denote by
ν(ω) = Z −1 exp(−H(ω)) a Gibbs measure with suitable normalisation Z and
Hamiltonian H. The mean energy for any probability measure µ ∈ P(Ω) is
X
Eµ (H) = µ(H) = µ(ω)H(ω),
ω∈Ω

and its entropy is given by


X
H(µ) = − µ(ω) log µ(ω).
ω∈Ω

Now µ(H)−H(µ) = F (µ) is called the free energy of µ, and for any µ ∈ P(Ω)
we have
F (µ) ≥ − log Z and F (µ) = − log Z if and only if µ = ν.
To see this, apply Jensen’s inequality for the convex function ϕ(x) = x log x
and conclude by simple calculation
X  µ(ω)  X  µ(ω) 
µ(H) − H(µ) + log Z = µ(ω) log = ν(ω)ϕ
ω∈Ω
ν(ω) ω∈Ω
ν(ω)
X µ(ω) 
≥ϕ ν(ω) = ϕ(1) = 0,
ω∈Ω
ν(ω)

62
and as ϕ is strictly convex there is equality if and only if µ(ω)
ν(ω)
is a constant.
As Ω is finite one gets that µ = ν. If Ω is not finite one has to employ quite
some mathematical theory which we present briefly in the rest of this section.

Definition 7.1 (Relative entropy) Let A ⊂ F be a sub-σ-algebra of F


and µ, ν ∈ P(Ω, F) be two probability measures. Then
 R
ν(fA log fA ) = Ω fA (ω) log fA (ω)ν(dω) , if µ  ν on A
HA (µ|ν) = ,
∞ , otherwise

where fA is the Radon-Nikodym density of µ|A relative to ν|A (µ|A and ν|A
are the restrictions of the measures to the sub-σ-algebra A), is called the
relative entropy or Kullback-Leibler information or information di-
vergence of µ relative to ν on A.
If A = FΛ for some Λ ∈ S one writes
 R
ν(fΛ log fΛ ) = ΩΛ fΛ (ω) log fΛ (ω)ν(dω) , if µ  ν on Λ
HΛ (µ|ν) = ,
∞ , otherwise
 
where fΛ = dµ Λ
dνΛ
is the Radon-Nikodym density and µΛ and νΛ are the
marginals of µ and ν on Λ for the projection map σΛ : Ω → ΩΛ .

We collect the most important properties of the relative entropy in the fol-
lowing proposition.

Proposition 7.2 Let A ⊂ F a sub-σ-algebra of F and µ, ν ∈ P(Ω, F) any


two probability measures. Then

(a) HA (µ|ν) ≥ 0,

(b) HA (µ|ν) = 0 if and only if µ = ν on A,

(c) HA is an increasing function of A,

(d) H(·|·) is convex.

We now connect the relative entropy to our previous definition of the entropy
functional in Section 3. For this let any finite signed a priori measure λ on
(E, E) be given. Note that the a priori or reference measure need not be
normalised to one (probability measure), and the following notion depends
on the choice of this reference measure. Recall that λΛ denotes the product
measure on ΩΛ = (E Λ , E Λ ).

63
Notation 7.3 Let µ ∈ P(Ω, F). The function

SΛ = −HΛ (µ|σΛ−1 (λΛ ))

is called the entropy of µ in Λ relative to the reference/a priori measure λ.

If the reference measure λ is the counting measure we get back Shannon’s


formula X
HΛ (µ) = − µ(σΛ = ξ) log µ(σΛ = ξ) ≥ 0
ξ∈ΩΛ

for the entropy. We wish to show that the thermodynamic limit of the entropy
exists, i.e.wish to show that
1
h(µ) = lim HΛ (µ)
n→∞ |Λn | n
exists for any cofinal sequence (Λn )n∈N of finite volume boxes in S. Essential
device for the proof of the existence of this limit is the following sub-additivity
property.

Proposition 7.4 (Strong Sub-additivity) Let Λ, ∆ ∈ S and µ ∈ P(Ω, F)


be given. Then

HΛ (µ) + H∆ (µ) ≥ HΛ∩∆ (µ) + HΛ∪∆ (µ). (7.40)

Proof. A proof is given in [Rue69],[Isr79] and in [Geo88]. 

Equipped with this inequality we go further and assume Λ ∩ ∆ = ∅ (note


H∅ (µ) = 0) and observe that for a translation invariant probability measure
µ ∈ PΘ (Ω, F) we get that

HΛ+i (µ) = HΛ (µ) for any Λ ∈ S and any i ∈ Zd .

Denote by Sr.B. the set of all rectangular boxes in Zd .


Lemma 7.5 Suppose that the function a : Sr.B. → [−∞, ∞) satisfies
(i) a(Λ + i) = a(Λ) for all Λ ∈ Sr.B. , i ∈ Zd ,
(ii) a(Λ) + a(∆) ≥ a(Λ ∪ ∆) for Λ, ∆ ∈ Sr.B. , Λ ∩ ∆ = ∅,
(Λn )n∈N a cofinal sequence of cubes with Λn ↑ Zd as n → ∞. Then the limit
1 1
lim a(Λn ) = inf a(∆) (7.41)
n→∞ |Λn | ∆∈Sr.B. |∆|

exists in [−∞, ∞).

64
Proof. Choose
1
c > α := inf a(∆)
∆∈Sr.B. |∆|
1
and let ∆ ∈ Sr.B. be such that |∆| a(∆) < c. Denote by Nn the number of
disjoint translates of ∆ contained in Λn . Then Λn is split into Nn translates
of ∆ and a remainder in the boundary layer. Choose Nn as large as possible.
Then limn→∞ N|Λn |∆|
n|
= 1. The sub-additivity gives

a(Λn ) ≤ Nn a(∆) + (|Λn | − Nn |∆|)a({0}).

Hence,
1
α ≤ lim sup a(Λn ) = lim sup Nn−1 |∆|−1 a(Λn )
n→∞ |Λn | n→∞
−1
< |∆| a(∆) < c.
Letting c tend to α gives the proof of the lemma. 

Now, both Proposition 7.4 and Lemma 7.5 provide the main steps of the
proof of the following theorem.

Theorem 7.6 (Specific entropy) Fix a finite signed reference measure λ


on the measurable state space (E, E). Let µ ∈ PΘ (Ω, F) be a translation
invariant probability measure and (Λn )n∈N a cofinal sequence of boxes with
Λn ↑ Zd as n → ∞. Then,
(a)
1
h(µ) = lim HΛn (µ)
n→∞ |Λn |

exists in [−∞, λ(E)].

(b) h : PΘ (Ω, F) → R, µ 7→ h(µ), is affine and upper semi-continuous. The


level sets {h ≥ c}, c ∈ R, are compact with respect to the weak topology
of probability measures.

Notation 7.7 h(µ) is called the specific entropy per site of µ ∈ PΘ (Ω, F)
relative to the reference measure λ.

Proof of Theorem 7.6. The existence of the specific entropy was proved
first by Shannon ([Sha48]) for the case d = 1, |E| < ∞ and λ the counting
measure. Extensions are due to McMillan ([McM53]) and Breiman ([Bre57]).
The multidimensional version of Shannon’s result is due to Robinson and
Ruelle ([RR67]). The first two assertions of (b) can already be found in
[RR67], an explicit proof of this can be found in [Isr79]. 

65
Now the following question arises. What happens if we take instead of the
reference measure any Gibbs measure and evaluate the relative entropy? We
analyse this question in the following. To define the specific energy of a
translation invariant probability measure it proves useful to introduce the
following function. Let Φ = (φA )A∈S be a translation invariant interaction
potential. Define the function fΦ : Ω → R as
X
fΦ = |A|−1 φA . (7.42)
A30

In the following theorem we prove the existence of the specific energy. To


derive an expression which is independent of any chosen boundary condition,
we formulate the theorem for an arbitrary sequence of boundary conditions,
which applies also to the case of periodic and free boundary conditions.

Theorem 7.8 (Specific energy) Let µ ∈ PΘ (Ω, F), Φ be a translation


invariant interaction potential, (Λn )n∈N be a cofinal sequence of boxes with
Λn ↑ Zd as n → ∞ and (ωn )n∈N be a sequence of configurations ωn ∈ Ω.
Then the specific energy
1
Eµ (fΦ ) = µ(fΦ ) = lim µ(HΛωnn ) (7.43)
n→∞ |Λn |

exists.

Notation 7.9 (Specific free energy) Eµ (fΦ ) or µ(fΦ ) is called the spe-
cific (internal) energy per site of µ relative to Φ. The quantity f (µ) =
µ(fΦ ) − h(µ) is called the specific free energy of µ for Φ.

Proof of Theorem 7.8. For the proof see any of the books [Geo88],[Rue69]
or [Isr79]. The proof goes back to Dobrushin [Dob68b] and Ruelle [Rue69].


We continue our investigations with the previously occurred question of the


relative entropy with respect to a given Gibbs measure.

Theorem 7.10 (Pressure) Let µ ∈ PΘ (Ω, F) and γ ∈ GΘ (Φ, β), β > 0, Φ


be a translation invariant interaction potential, (Λn )n∈N be a cofinal sequence
of boxes with Λn ↑ Zd as n → ∞ and (ωn )n∈N be a sequence of configurations
ωn ∈ Ω. Then,
1
(a) P (Φ) = limn→∞ |Λn |
log ZΛn (ωn ) exists.

66
1
(b) The limit limn→∞ H (µ|γ)
|Λn | Λn
exists and equals

h(µ|Φ) = P (Φ) + µ(fΦ ) − h(µ) = P (Φ) + f (µ). (7.44)

Notation 7.11 P = P (Φ) is called the pressure and specific Gibbs free
energy.

Proof of Theorem 7.10. We just give the main idea of the proof. Details
can be found in [Isr79],[Geo88] and go back to [GM67]. Let Λ ∈ S and
ω ∈ Ω fixed. Recall that the marginal of µ on Λ is a probability measure on
(ΩΛ , FΛ ) as well as the conditional Gibbs distribution γΛ (·|ω) for any given
configuration ω ∈ Ω. Then compute
Z Z
dµΛ dµΛ
HΛ (µ|γΛ (·|ω)) = µΛ (dξ) log (ξ) = µΛ (dξ) log Λ (ξ)
ΩΛ dγΛ ΩΛ dλ
Z Λ

− µΛ (dξ) log (ξ)
ΩΛ dγΛ
Z
= −HΛ (µ) + µΛ (dξ)HΛ (ξωZd \Λ ) + log ZΛ (ω).
ΩΛ

We can draw an easy corollary, which is the first part of the variational
principle for Gibbs measures.

Corollary 7.12 (First part variational principle) For a translation in-


variant interaction potential Φ and µ ∈ PΘ (Ω, F) we have h(µ|φ) ≥ 0. If
moreover µ ∈ GΘ (Φ, β) then h(µ|Φ) = 0.

Proof. The assertions are due to Dobrushin ([Dob68a]) and Lanford and
Ruelle ([LR69]). 

The next theorem gives the reversed direction and a summary of the whole
variational principle.

Theorem 7.13 (Variational principle) Let Φ be a translation invariant interaction potential, (Λn)n∈N a cofinal sequence of boxes with Λn ↑ Zd as n → ∞ and µ ∈ PΘ(Ω, F). Then,

(a) Let µ ∈ PΘ(Ω, F) be such that lim inf_{n→∞} (1/|Λn|) H_{Λn}(µ|ν) = 0 for some ν ∈ GΘ(Φ, β). Then µ ∈ GΘ(Φ, β).

(b) h(µ|Φ) ≥ 0, and h(µ|Φ) = 0 if and only if µ ∈ GΘ(Φ, β).

(c) h(·|Φ) : PΘ(Ω, F) → [0, ∞] is an affine lower semicontinuous functional which attains its minimum 0 on the set GΘ(Φ, β). Equivalently, GΘ(Φ, β) is the set on which the specific free energy functional

    f : PΘ(Ω, F) → (−∞, ∞]

attains its minimum −P(Φ).

Proof. This variational principle is due to Lanford and Ruelle ([LR69]).


A transparent proof which reveals the significance of the relative entropy is
due to Föllmer ([Föl73]). 

8 Large deviations theory


In this section we give a short overview of large deviations theory. We motivate it by the simple coin tossing model. We finish with some recent large deviations results for Gibbs measures, which we can only discuss briefly.

8.1 Motivation
Consider the coin tossing experiment. The microstates are elements of the configuration space Ω = {0, 1}^N equipped with the product measure Pν, where ν ∈ P({0, 1}) is given as ν = ρ0δ0 + ρ1δ1 with ρ0 + ρ1 = 1. If ρ0 = ρ1 = 1/2 we have a ”fair” coin. Recall the projections σj : Ω → {0, 1}, j ∈ N, and consider the mean

    SN(ω) = (1/N) Σ_{j=1}^{N} σj(ω)   for ω ∈ Ω.

If mν denotes the mean (mν = 1/2 for a fair coin), the weak law of large numbers (WLLN) tells us that for ε > 0

    Pν(SN ∈ (mν − ε, mν + ε)) → 1 as N → ∞,

and for ε > 0 small enough and z ≠ mν

    Pν(SN ∈ (z − ε, z + ε)) → 0 as N → ∞.

In particular one can even prove exponential decay of the latter probability,
which we sketch briefly. The problem of decay of probabilities of rare events

is the main task of large deviations theory. For simplicity we assume now that mν = 1/2. Then

    F(z, ε) = − lim_{N→∞} (1/N) log Pν(SN ∈ (z − ε, z + ε)) = inf_{x∈(z−ε,z+ε)} I(x),

where the function I is defined as

    I(x) = x log 2x + (1 − x) log 2(1 − x)   for x ∈ [0, 1],   and   I(x) = ∞   for x ∉ [0, 1],

where as usual 0 log 0 = 0. Since F(z, ε) → I(z) as ε → 0, we may (heuristically) write

    Pν(SN ∈ (z − ε, z + ε)) ≈ exp(−N I(z))

for N large and ε small. The term I(z) measures the randomness of z, and z = 1/2 = mν is the macrostate which is compatible with the most microstates (I(1/2) = 0). The mean SN gives only very limited information. If we want
to know more about the whole random process we might go over to the
empirical measure
    LN(ω) = (1/N) Σ_{j=1}^{N} δ_{σj(ω)} ∈ P({0, 1})   for any ω ∈ Ω;

or even to the empirical field


    RN(ω) = (1/N) Σ_{k=0}^{N−1} δ_{T^k ω^{(N)}} ∈ P(Ω),

where T^0 = id and (Tω)_j = ω_{j+1} is the shift, and ω^{(N)} is the periodic continuation of the restriction of ω to ΛN.

The latter example can be connected to our experience with Gibbs measures
and distributions as follows. Let ΛN = [−N, N ]d ∩ Zd , N ∈ N, and define the
periodic empirical field as

    R_N^{(per)}(ω) = (1/|ΛN|) Σ_{k∈ΛN} δ_{θk ω^{(N)}} ∈ PΘ(Ω, F)   for all ω ∈ Ω,

where ω^{(N)} ∈ Ω is the periodic continuation of the restriction of ω onto ΛN to the whole lattice Zd. Here, the periodic continuation ensures that the periodic empirical field is translation invariant. The LLN is not available in general; it is then replaced by an ergodic theorem. For example, if µ ∈ PΘ(Ω, F) is an ergodic measure, then R_N^{(per)} ⇒ µ µ-a.s. as N → ∞.
Going back to the coin tossing example, the distributions of SN, LN and RN under the product measure Pν are the following probability measures:

    Pν ◦ SN^{−1} ∈ P([0, 1]),
    Pν ◦ LN^{−1} ∈ P(P({0, 1})),
    Pν ◦ RN^{−1} ∈ P(P(Ω)).

For all of these probabilities one can establish exponential decay of the rare events, governed by a function I as the rate in N. This will be generalised in the next subsection, where such functions I are called rate functions.
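Before formalising this, here is a small numerical sketch (added here, not part of the original notes) of the heuristic Pν(SN ∈ (z − ε, z + ε)) ≈ exp(−N I(z)) for a fair coin; it evaluates the exact binomial probabilities in log-space and compares −(1/N) log Pν(·) with the infimum of I over the interval.

```python
# A quick check of P_nu(S_N in (z-eps, z+eps)) ~ exp(-N I(z)) for the fair coin,
# using exact binomial probabilities evaluated in log-space to avoid underflow.
import math

def I(x):                                   # rate function of the fair coin
    if not 0.0 <= x <= 1.0:
        return math.inf
    return sum(t * math.log(2.0 * t) for t in (x, 1.0 - x) if t > 0.0)

def F(N, z, eps):
    # F_N = -(1/N) log P(S_N in (z-eps, z+eps)) for a fair coin
    logs = [math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
            - N * math.log(2.0)
            for k in range(N + 1) if z - eps < k / N < z + eps]
    m = max(logs)
    return -(m + math.log(sum(math.exp(l - m) for l in logs))) / N

z, eps = 0.7, 0.02
for N in (100, 1000, 10000):
    print(N, round(F(N, z, eps), 4))
# I is increasing on [1/2, 1], so the infimum of I over (z-eps, z+eps) is I(z-eps):
print("limit predicted by the rate function:", round(I(z - eps), 4))
```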

8.2 Definition
In the following we consider the general setup, i.e. we let X denote a Polish
space and equip it with the corresponding Borel-σ-algebra BX .

Definition 8.1 (Rate function) A function I : X → [0, ∞] is called a


rate function if

(1) I ≢ ∞, i.e. I is not identically equal to +∞,

(2) I is lower semicontinuous,

(3) I has compact level sets.

Definition 8.2 (Large deviations principle) A sequence (PN)N∈N of probability measures PN ∈ P(X, BX) on X is said to satisfy the large deviations principle with rate (speed) N and rate function I if the following upper and lower bounds hold:

    lim sup_{N→∞} (1/N) log PN(C) ≤ − inf_{x∈C} I(x)   for C ⊂ X closed,
                                                                         (8.45)
    lim inf_{N→∞} (1/N) log PN(O) ≥ − inf_{x∈O} I(x)   for O ⊂ X open.

Let us consider the following situation. Let (Xi)i∈N be i.i.d. real-valued random variables, i.e., there is a probability space (Ω, F, P) such that each random variable has the distribution µ = P ◦ X1^{−1} ∈ P(R, BR). Denote the distribution of the mean SN by µN = P ◦ SN^{−1} ∈ P(R, BR). For this situation there is the following theorem about a large deviations principle

for the sequence (µN)N∈N. Before we formulate that theorem we need some further definitions. For µ ∈ P(R, BR) let

    Λµ(λ) = log ∫_R exp(λx) µ(dx),   λ ∈ R,

be the logarithmic moment generating function. It is known that Λµ is lower semicontinuous and Λµ(λ) ∈ (−∞, ∞] for all λ ∈ R. The Legendre-Fenchel transform Λ*µ of Λµ is given by

    Λ*µ(x) = sup_{λ∈R} {λx − Λµ(λ)},   x ∈ R.

Theorem 8.3 (Cramér’s Theorem) Let (Xi)i∈N be i.i.d. real-valued random variables with distribution µ ∈ P(R, BR) and let µN denote the distribution of the mean SN. Assume further that Λµ(λ) < ∞ for all λ ∈ R. Then the sequence (µN)N∈N satisfies a large deviations principle with rate function given by the Legendre-Fenchel transform Λ*µ of the logarithmic moment generating function Λµ, i.e., for any measurable Γ ∈ BR,

    lim sup_{N→∞} (1/N) log µN(Γ) ≤ − inf_{x∈cl Γ} Λ*µ(x),
                                                                         (8.46)
    lim inf_{N→∞} (1/N) log µN(Γ) ≥ − inf_{x∈int Γ} Λ*µ(x),

where cl Γ denotes the closure and int Γ the interior of Γ.

Proof. See [DZ98] or [Dor99]. 
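As a small illustration of Cramér’s theorem (not part of the original notes), take the fair coin Xi ∈ {0, 1}: then Λµ(λ) = log((1 + e^λ)/2), and a crude numerical Legendre-Fenchel transform reproduces the rate function I from Subsection 8.1.

```python
# Cramer's theorem for the fair coin (X_i in {0,1}, each with probability 1/2):
# the Legendre-Fenchel transform of Lambda(l) = log((1 + e^l)/2) is computed by
# a simple grid search and compared with the closed-form rate function I(x).
import math

def Lambda(l):
    return math.log((1.0 + math.exp(l)) / 2.0)

def Lambda_star(x):
    grid = [i / 100.0 for i in range(-3000, 3001)]   # lambda in [-30, 30]
    return max(l * x - Lambda(l) for l in grid)

def I(x):
    return sum(t * math.log(2.0 * t) for t in (x, 1.0 - x) if t > 0.0)

for x in (0.5, 0.6, 0.8, 0.95):
    print(x, round(Lambda_star(x), 4), round(I(x), 4))   # the two columns agree
```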

An important tool in proving large deviations principles is the following alternative version of the well-known Varadhan Lemma ([DZ98]).
Theorem 8.4 (Tilted LDP) Let the sequence (PN)N∈N of probability measures PN ∈ P(X, BX) satisfy a large deviations principle with rate (speed) N and rate function I. Let F : X → R be a continuous function that is bounded from above. Define

    JN(S) = ∫_S e^{N F(x)} PN(dx),   S ∈ BX.

Then the sequence (PN^F)N∈N of probability measures PN^F ∈ P(X, BX) defined by

    PN^F(S) = JN(S) / JN(X),   S ∈ BX,

satisfies a large deviations principle on X with rate N and rate function

    I^F(x) = sup_{y∈X} {F(y) − I(y)} − (F(x) − I(x)).

Proof. See [dH00] or [DZ98] for the original version of Varadhan’s Lemma
and [Ell85] or [Dor99] for a version as in the theorem.
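To see what the tilted rate function looks like in the coin-tossing example, here is a small sketch (added for illustration; it uses the fair-coin rate function I from Subsection 8.1 and the tilt F(x) = θx). By a standard exponential-tilting computation, not carried out in these notes, the tilted sequence is the distribution of SN under a biased coin with success probability p = e^θ/(1 + e^θ), and the numerics confirm that I^F coincides with the corresponding relative-entropy rate function.

```python
# A sketch of the tilted LDP in the coin-tossing setting: tilting the fair-coin
# rate function I with F(x) = theta*x gives, by Theorem 8.4,
#     I^F(x) = sup_y {theta*y - I(y)} - (theta*x - I(x)),
# which is compared numerically with the rate function of a biased coin with
# success probability p = e^theta / (1 + e^theta).
import math

def I(x):                                    # fair-coin rate function
    return sum(t * math.log(2.0 * t) for t in (x, 1.0 - x) if t > 0.0)

def I_tilted(x, theta):
    ys = [i / 1000.0 for i in range(1, 1000)]
    sup = max(theta * y - I(y) for y in ys)
    return sup - (theta * x - I(x))

def I_biased(x, p):                          # relative entropy w.r.t. Bernoulli(p)
    return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

theta = 1.2
p = math.exp(theta) / (1.0 + math.exp(theta))
for x in (0.3, 0.5, 0.7, 0.9):
    print(x, round(I_tilted(x, theta), 4), round(I_biased(x, p), 4))
```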


8.3 Some results for Gibbs measures


We present some results on large deviations principles for Gibbs measures. We assume the set-up of Section 6 and Section 7. Let Φ be an interaction potential and note that the expectation of the interaction potential with respect to the periodic empirical field is given by

    ⟨R_N^{(per)}(ω), Φ⟩ = |ΛN|^{−1} H_{ΛN}^{(per)}(ω),   ω ∈ Ω,         (8.47)

where H_{ΛN}^{(per)} is the Hamiltonian in ΛN with interaction potential Φ and periodic boundary conditions. Recall that γ_{ΛN}^{Φ,ω} denotes the Gibbs distribution in ΛN with configurational boundary condition ω ∈ Ω and γ_{ΛN}^{Φ,per} the Gibbs distribution in ΛN with periodic boundary condition. Further, if µ ∈ GΘ(Φ, β) is a Gibbs measure, h(·|µ) = h(·|Φ) denotes the specific relative entropy with respect to the Gibbs measure µ for the given interaction potential Φ. Denote by e(Ω) the evaluation σ-algebra for the probability measures on Ω. Note that the mean energy ⟨·, Φ⟩ can be identified as a linear form on a vector space of finite range interaction potentials. In particular we define

    τ_{ΛN}^{Ψ}(ω) = ⟨R_N^{(per)}(ω), Ψ⟩,   ω ∈ Ω,

for any interaction potential Ψ with finite range. In the limit N → ∞ one gets a linear functional τ on the vector space V of all interaction potentials with finite range (see [Isr79] and [Geo88] for details on this vector space).
Theorem 8.5 (LDP for Gibbs measures) Let ΛN = [−N, N]^d ∩ Z^d, β > 0 and Φ be an interaction potential with finite range. Then the following assertions hold.

(a) Let µ ∈ GΘ(Φ, β) be given. Then the sequence (µ ◦ (R_N^{(per)})^{−1})_{N∈N} of probability measures µ ◦ (R_N^{(per)})^{−1} ∈ P(P(Ω, F), e(Ω)) satisfies a large deviations principle with rate (speed) |ΛN| and rate function h(·|µ).

(b) Let γ_{ΛN}^{Φ,ω} be the Gibbs distribution in ΛN with boundary condition ω ∈ Ω. Then for any closed set F ⊂ P(Ω, F) and any open set G ⊂ P(Ω, F),

    lim sup_{N→∞} (1/|ΛN|) log sup_{ω∈Ω} γ_{ΛN}^{Φ,ω}(R_N^{(per)} ∈ F) ≤ − inf_{ν∈F} {h(ν) + ⟨ν, Φ⟩ + P(Φ)},
                                                                         (8.48)
    lim inf_{N→∞} (1/|ΛN|) log sup_{ω∈Ω} γ_{ΛN}^{Φ,ω}(R_N^{(per)} ∈ G) ≥ − inf_{ν∈G} {h(ν) + ⟨ν, Φ⟩ + P(Φ)}.

(c) Let K ⊂ V* be a measurable subset. Then

    lim sup_{N→∞} (1/|ΛN|) log sup_{ω∈Ω} γ_{ΛN}^{Φ,ω}(τΛN ∈ K) ≤ − inf_{τ∈cl K} J_V^Φ(τ),
                                                                         (8.49)
    lim inf_{N→∞} (1/|ΛN|) log sup_{ω∈Ω} γ_{ΛN}^{Φ,ω}(τΛN ∈ K) ≥ − inf_{τ∈int K} J_V^Φ(τ),

with J_V^Φ(τ) = P(Φ) + inf_{Ψ∈V} {τ(Ψ) + P(Ψ + Φ)}.

Proof. Part (a) can be found in [Geo88] and in [FO88] or alternatively in [Oll88], all for the case that the state space E is finite. If E is an arbitrary
measurable space, see [Geo93]. Part (b) is in [Geo93] and [Oll88], and part
(c) in [Geo93]. Note that the restrictions on the interaction potential can
even be relaxed, see [Geo93]. The corresponding theorems for continuous
systems can be found in [Geo95]. 

We close with a remark on part (c) of Theorem 8.5.

Theorem 8.6 (Equivalence of ensembles) Let ΛN = [−N, N]^d ∩ Z^d and Φ be an interaction potential with finite range. Let K ⊂ R be a measurable set of energy densities. Then there is an interaction potential Ψ ∈ V such that Ψ + Φ ∈ V and

    acc_{N→∞} γ_{ΛN}^{Φ,per}(· | τ_{ΛN}^Φ ∈ K) ⊂ GΘ(Φ + Ψ),             (8.50)

where acc_{N→∞} denotes the set of accumulation points of the sequence of conditioned Gibbs distributions.

Proof. See [Geo93] and [Geo95] and [LPS95]. Observe that the periodic
boundary conditions are crucial for this result (they ensure the translation
invariance). Translation invariance provides, as we know from Section 7, a
variational characterisation for Gibbs measures. There exists no proof for
configurational boundary conditions for dimension d ≥ 2. For the case d = 1
see [Ada01]. 

9 Models
We present here some important models in statistical mechanics. For more models for lattice systems see [BL99a] and [BL99b]. The last example in this section is the continuous Ising model, which is an effective model for interfaces and plays an important role in many investigations.

9.1 Lattice Gases
We consider here a system of particles occupying a set Λ ⊂ Zd with |Λ| = V .
Here |Λ| denotes the number of sites in Λ. At each point of Λ there is at
most one particle. For i ∈ Λ we set ωi = 1 if there is a particle at the site i
and set ωi = 0 otherwise. Any ω ∈ Ω := {0, 1}^Λ is called a configuration. For a configuration ω we have the Hamiltonian HΛ(ω). The canonical partition function is

    ZΛ(β, N) = Σ_{ω∈Ω: Σ_{i∈Λ} ωi = N} e^{−βHΛ(ω)}.

Note that there is no need here for N ! since the particles are indistinguishable.
The thermal wavelength λ is put equal to 1. The grandcanonical partition
function is then
    ZΛ(β, µ) = Σ_{N=0}^{V} e^{βNµ} Σ_{ω∈Ω: Σ_{i∈Λ} ωi = N} e^{−βHΛ(ω)}
             = Σ_{N=0}^{V} Σ_{ω∈Ω: Σ_{i∈Λ} ωi = N} e^{−β(HΛ(ω) − µ Σ_{i∈Λ} ωi)}
             = Σ_{ω∈Ω} e^{−β(HΛ(ω) − µ Σ_{i∈Λ} ωi)}.

The thermodynamic functions are defined in the usual way. The probability for a configuration ω ∈ {0, 1}^Λ is

    e^{−β(HΛ(ω) − µ Σ_{i∈Λ} ωi)} / ZΛ(β, µ).
The Hamiltonian is of the form

    HΛ(ω) = Σ_{i,j∈Λ, i≠j} ωi ωj φ(qi − qj),

where qi is the position vector of the site i ∈ Λ. However this is too difficult
to solve in general. We consider two simplifications of the potential energy:

Mean-field Models: φ is taken to be a constant. Therefore

    HΛ(ω) = −λ Σ_{i,j∈Λ, i≠j} ωi ωj.

Take λ > 0, otherwise the interaction potential is not tempered. When ωi = 1 for all i ∈ Λ, HΛ(ω) = −λ V(V − 1)/2 and therefore HΛ is not stable. For HΛ to be stable we must take λ = γ/V with γ > 0. Thus

    HΛ(ω) = −(γ/V) Σ_{i,j∈Λ, i≠j} ωi ωj.

Note that for Mean-field models the lattice structure is not important since
the interaction does not depend on the location of the lattice sites and there-
fore we can take Λ = {1, 2, . . . , V }.

Nearest-neighbour Models: In these models we take

    φ(qi − qj) = −J if |qi − qj| = 1,   and   φ(qi − qj) = 0 if |qi − qj| > 1,

that is, the interaction is only between nearest neighbours and is then equal to −J, J ∈ R. If we denote a pair of neighbouring sites i and j by ⟨i, j⟩, we have

    HΛ(ω) = −J Σ_{⟨i,j⟩} ωi ωj.

Note that J can be negative or positive.
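As a concrete illustration (a sketch, not from the original text), the following brute-force computation for a short one-dimensional chain with free boundary evaluates the grand canonical partition function both ways displayed above, via the canonical partition functions ZΛ(β, N) and directly as a sum over configurations, and also prints the resulting mean density.

```python
# A brute-force sketch for a small 1d nearest-neighbour lattice gas on
# Lambda = {1,...,V} with free boundary: compute the grand canonical partition
# function (i) as a sum over particle numbers N of canonical partition
# functions and (ii) directly as a sum over configurations, and print the
# mean density <sum_i omega_i>/V.
import itertools, math

V, J, beta, mu = 8, 1.0, 1.0, -0.5

def H(omega):                 # H_Lambda(omega) = -J * sum_<i,j> omega_i omega_j
    return -J * sum(omega[i] * omega[i + 1] for i in range(V - 1))

configs = list(itertools.product((0, 1), repeat=V))

# route 1: sum over N of e^{beta*N*mu} * Z_Lambda(beta, N)
Z_canonical = {N: sum(math.exp(-beta * H(w)) for w in configs if sum(w) == N)
               for N in range(V + 1)}
Z_grand_1 = sum(math.exp(beta * N * mu) * Z_canonical[N] for N in range(V + 1))

# route 2: a single sum over all configurations
Z_grand_2 = sum(math.exp(-beta * (H(w) - mu * sum(w))) for w in configs)

density = sum(sum(w) * math.exp(-beta * (H(w) - mu * sum(w)))
              for w in configs) / (Z_grand_2 * V)
print(Z_grand_1, Z_grand_2, round(density, 4))
```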

9.2 Magnetic Models


In magnetic models at each site of Λ there is a dipole or spin. This spin could
be pointing upwards or downwards, that is, along the direction of the external
magnetic field or in the opposite direction. For i ∈ Λ we set σi = 1 if the spin
at the site i is pointing upwards and σi = −1 if it is pointing downwards.
The term σ ∈ {−1, 1}^Λ is called a configuration. For a configuration σ we have an energy E(σ) and an interaction with an external magnetic field of strength h, namely −h Σ_{i∈Λ} σi. The partition function is then

    ZΛ(β, h) = Σ_{σ∈{−1,1}^Λ} e^{−β(E(σ) − h Σ_{i∈Λ} σi)}.

The free energy per lattice site is

    fΛ(β, h) = −(1/(βV)) log ZΛ(β, h).

The probability for a configuration σ ∈ {−1, 1}^Λ is

    e^{−β(E(σ) − h Σ_{i∈Λ} σi)} / ZΛ(β, h).

The total magnetic moment is the random variable

    MΛ(σ) = Σ_{i∈Λ} σi

and therefore

    E(MΛ) = Σ_{σ∈{−1,1}^Λ} (Σ_{i∈Λ} σi) e^{−β(E(σ) − h Σ_{i∈Λ} σi)} / ZΛ(β, h) = (1/β) (∂/∂h) log ZΛ(β, h).
Then if mΛ(β, h) denotes the mean magnetisation per lattice site, we have

    mΛ(β, h) = E(MΛ)/V = −(∂/∂h) fΛ(β, h).
Note that

    (∂²/∂h²) fΛ(β, h) = −(β/V) E[(MΛ − E(MΛ))²] ≤ 0.
Therefore h ↦ fΛ(β, h) is concave. If E(σ) = E(−σ), then

    fΛ(β, −h) = fΛ(β, h).

If Λl is a sequence of regions tending to infinity and if

    lim_{l→∞} fΛl(β, h) = f(β, h),

then h ↦ f(β, h) is also concave, and if it is differentiable,

    m(β, h) := lim_{l→∞} mΛl(β, h) = −(∂/∂h) f(β, h).
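The identities above are easy to check by exact enumeration on a small system. The following sketch (not part of the original text; it assumes a one-dimensional nearest-neighbour chain with free boundary and E(σ) = −J Σ σiσi+1) verifies mΛ = −∂fΛ/∂h by a central finite difference and exhibits the non-positive second derivative behind the concavity of h ↦ fΛ(β, h).

```python
# Exact enumeration of a small 1d Ising chain: check m_Lambda = -d f_Lambda/dh
# and the concavity of h -> f_Lambda(beta, h).
import itertools, math

V, J, beta = 10, 1.0, 1.0

def E(sigma):
    return -J * sum(sigma[i] * sigma[i + 1] for i in range(V - 1))

def f(h):                                     # free energy per lattice site
    Z = sum(math.exp(-beta * (E(s) - h * sum(s)))
            for s in itertools.product((-1, 1), repeat=V))
    return -math.log(Z) / (beta * V)

def m(h):                                     # mean magnetisation per site
    Z = mag = 0.0
    for s in itertools.product((-1, 1), repeat=V):
        w = math.exp(-beta * (E(s) - h * sum(s)))
        Z += w
        mag += sum(s) * w
    return mag / (Z * V)

h, dh = 0.3, 1e-4
print("m_Lambda            :", round(m(h), 6))
print("-df_Lambda/dh (num.):", round(-(f(h + dh) - f(h - dh)) / (2 * dh), 6))
print("second difference   :", (f(h + dh) - 2 * f(h) + f(h - dh)) / dh**2, "<= 0")
```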
Relation between Lattice Gas and Magnetic Models

We can relate the Lattice Gas to a Magnetic Model and vice versa by the
transformation
ωi = (σi + 1)/2
or
σi = 2ωi − 1.
This gives

    HΛ(ω) − µ Σ_{i∈Λ} ωi = E(σ) − (1/2)(a + µ) Σ_{i∈Λ} σi − (1/2)(b + µ)V,

where a and b are constants. Therefore

    πΛ(β, µ) = (1/2)(b + µ) − fΛ(β, (1/2)(a + µ))

and

    ρΛ(β, µ) = (1/2)(1 + mΛ(β, (1/2)(a + µ))).

9.3 Curie-Weiss model
We study here the Curie-Weiss Model, which is a mean-field model given by
the interaction energy
    E(σ) = −(α/V) Σ_{1≤i<j≤V} σi σj = −(α/(2V)) (Σ_{i=1}^{V} σi)² + α/2,   σ ∈ {−1, 1}^Λ,

where α > 0 and Λ is any finite set with |Λ| = V. We sketch here only some explicit calculations; more on the model can be found in the books [Ell85], [Dor99], [Rei98], and [TKS92]. The partition function is given by

    ZΛ(β, h) = Σ_{σ∈{−1,1}^V} e^{−β(E(σ) − h Σ_{i=1}^{V} σi)}.

For ν = βα this becomes

    ZΛ(β, h) = e^{−ν/2} Σ_{σ∈{−1,1}^V} exp[ (ν/(2V)) (Σ_{i=1}^{V} σi)² + βh Σ_{i=1}^{V} σi ].

Note that ZΛ(β, −h) = ZΛ(β, h). In the identity

    ∫_{−∞}^{∞} e^{−y²/2} dy = √(2π),

put y = x − a. This gives

    ∫_{−∞}^{∞} e^{−x²/2 + ax} dx = √(2π) e^{a²/2}

or

    e^{a²/2} = (1/√(2π)) ∫_{−∞}^{∞} e^{−x²/2 + ax} dx.
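This Gaussian identity, the key step of the calculation (a Hubbard-Stratonovich type trick), is easy to confirm numerically; the following short sketch (added for illustration) compares both sides using a simple midpoint rule.

```python
# Numerical check of e^{a^2/2} = (2*pi)^{-1/2} * integral e^{-x^2/2 + a x} dx.
import math

def rhs(a, L=20.0, n=200000):
    step = 2 * L / n                       # midpoint rule on [-L, L]
    total = sum(math.exp(-0.5 * x * x + a * x) * step
                for x in (-L + (k + 0.5) * step for k in range(n)))
    return total / math.sqrt(2 * math.pi)

for a in (0.0, 1.0, 2.5):
    print(a, round(math.exp(0.5 * a * a), 6), round(rhs(a), 6))
```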
Using this identity with a = √(ν/V) Σ_{i=1}^{V} σi we get

    ZΛ(β, h) = e^{−ν/2} Σ_{σ∈{−1,1}^V} (1/√(2π)) ∫_{−∞}^{∞} exp[ −x²/2 + (√(ν/V) x + βh) Σ_{i=1}^{V} σi ] dx
             = e^{−ν/2} (1/√(2π)) ∫_{−∞}^{∞} e^{−x²/2} Σ_{σ∈{−1,1}^V} exp[ (√(ν/V) x + βh) Σ_{i=1}^{V} σi ] dx.
Now

    Σ_{σ∈{−1,1}^V} exp(κ Σ_{i=1}^{V} σi) = (2 cosh κ)^V.

Therefore

    ZΛ(β, h) = e^{−ν/2} (1/√(2π)) ∫_{−∞}^{∞} e^{−x²/2} [2 cosh(√(ν/V) x + βh)]^V dx.

Putting η = x/√(νV), we get

    ZΛ(β, h) = e^{−ν/2} (νV/(2π))^{1/2} ∫_{−∞}^{∞} exp(−νVη²/2) [2 cosh(νη + βh)]^V dη
             = e^{−ν/2} 2^V (νV/(2π))^{1/2} ∫_{−∞}^{∞} e^{V G(h,η)} dη,

where

    G(h, η) = −νη²/2 + log cosh(νη + βh).
The free energy per lattice site is

    fΛ(β, h) = −(1/(βV)) log ZΛ(β, h)
             = ν/(2βV) − (1/β) log 2 − (1/(2βV)) log(νV/(2π)) − (1/(βV)) log ∫_{−∞}^{∞} e^{V G(h,η)} dη.

Therefore by Laplace’s Theorem (see for example [Ell85] or [Dor99]), the free energy per lattice site in the thermodynamic limit is

    f(β, h) = −(1/β) log 2 − lim_{V→∞} (1/(βV)) log ∫_{−∞}^{∞} e^{V G(h,η)} dη
            = −(1/β) log 2 − (1/β) sup_{η∈R} G(h, η).

Suppose that the supremum of G(h, η) is attained at η(h). Then

    f(β, h) = −(1/β) log 2 − (1/β) G(h, η(h))

and

    (∂G/∂η)(h, η(h)) = −νη(h) + ν tanh(νη(h) + βh) = 0,

Figure 5: graphical solution of η = tanh(νη + βh); h > 0, ν > 1 (curves y = η, y = tanh(νη + βh), y = tanh(νη)).

or

    η(h) = tanh(νη(h) + βh).

The mean magnetisation per site in the thermodynamic limit is

    m(β, h) = −(∂/∂h) f(β, h) = (1/β) (∂/∂h) G(h, η(h))
            = tanh(νη(h) + βh) + (∂G/∂η)(h, η(h)) (∂η/∂h)(h)
            = η(h),

since (∂G/∂η)(h, η(h)) = 0.

Since

    f(β, −h) = f(β, h) and m(β, −h) = −m(β, h),

it is sufficient to consider the case h ≥ 0 (see Figures 7 and 8). The expression m0 = lim_{h↓0} m(h) is called the spontaneous magnetisation; this is the mean magnetisation as the magnetic field is decreased to zero,

    m0 = lim_{h↓0} m(h) = lim_{h↓0} η(h).

We have from above

    m0 = 0 if ν ≤ 1,   and   m0 > 0 if ν > 1.
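This dichotomy is easy to see numerically. The following sketch (not in the original text) iterates the consistency equation η = tanh(νη + βh) for a tiny positive field and shows that the limit is essentially zero for ν < 1 and strictly positive for ν > 1.

```python
# Fixed-point iteration for the Curie-Weiss consistency equation
# eta = tanh(nu*eta + beta*h), illustrating the spontaneous magnetisation m0.
import math

def eta(nu, beta_h, start=1.0, iters=2000):
    x = start                     # start from a positive value to select the
    for _ in range(iters):        # upper branch as the field decreases to 0
        x = math.tanh(nu * x + beta_h)
    return x

for nu in (0.5, 0.9, 1.2, 1.5, 2.0):
    m0 = eta(nu, beta_h=1e-8)     # small positive field, mimicking h -> 0+
    print(f"nu = {nu:>3}:  m0 ~ {m0:.6f}")
```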

Figure 6: graphical solution of η = tanh(νη + βh); h > 0, ν ≤ 1 (curves y = η, y = tanh(νη + βh), y = tanh(νη)).

Figure 7: −f(β, h) as a function of h, ν > 1 (slope m0; the level (1/β) ln 2 is marked).

Figure 8: −f(β, h) as a function of h, ν ≤ 1 (the level (1/β) ln 2 is marked).

Let Tc = α/k; Tc is called the Curie Point. T ≥ Tc corresponds to ν ≤ 1 (see Figure 8) and T < Tc to ν > 1 (see Figure 7). Thus

    m0 > 0 when T < Tc,   and   m0 = 0 when T ≥ Tc.

We have a phase transition at the Curie Point corresponding to the onset of spontaneous magnetisation.

We can consider this model from the point of view of a lattice gas. Consider a lattice gas with potential energy

    HΛ(ω) = −(γ/V) Σ_{1≤i<j≤V} ωi ωj = −(γ/(2V)) (Σ_{i=1}^{V} ωi)² + (γ/(2V)) Σ_{i=1}^{V} ωi.

Let ωi = (σi + 1)/2. Then

    Σ_{i=1}^{V} ωi = (1/2) (Σ_{i=1}^{V} σi + V).

Therefore

    HΛ(ω) = −(γ/(8V)) (Σ_{i=1}^{V} σi)² − (γ/4) Σ_{i=1}^{V} σi − γV/8 + (γ/(4V)) Σ_{i=1}^{V} σi + γ/4.

We can neglect the last two terms, because γ is small and the expectation of a single spin is zero, and we take

    HΛ(ω) = −(γ/(8V)) (Σ_{i=1}^{V} σi)² − (γ/4) Σ_{i=1}^{V} σi − γV/8.

Then

    HΛ(ω) − µ Σ_{i=1}^{V} ωi = −(γ/(8V)) (Σ_{i=1}^{V} σi)² − (γ/4 + µ/2) Σ_{i=1}^{V} σi − (γ/8 + µ/2)V
                             = E(σ) − (γ/4 + µ/2) Σ_{i=1}^{V} σi − (γ/8 + µ/2)V,

with α = γ/4, and

    π(β, µ) = (γ/8 + µ/2) − f(β, γ/4 + µ/2)

and

    ρ(β, µ) = (1/2)(1 + m(β, γ/4 + µ/2)).
Let µ0 = −γ/2; then

    π(β, µ) = (γ/8 + µ/2) − f(β, (1/2)(µ − µ0))

and

    ρ(β, µ) = (1/2)(1 + m(β, (1/2)(µ − µ0))).
If β > 4/γ, then π(β, µ) has a discontinuity in its derivative at µ0 and ρ(β, µ) has a discontinuity at µ0 (see Figure 9).

Figure 9: π(β, µ) and ρ(β, µ) as functions of µ, with µ0 marked; β > 4/γ.

9.4 Continuous Ising model
In the continuous Ising model the state space E = {−1, +1} is replaced by the real numbers R. Let Ω = R^{Z^d} denote the space of configurations. Due
to the non-compactness of the state space severe mathematical difficulties arise. We note that the continuous Ising model can be seen as an effective model describing the height of an interface: here the functions φ ∈ Ω give the height of an interface relative to some reference height, and any collection (σx)x∈Zd or probability measure P ∈ P(Ω, F) is called a random field of heights. Details about this model can be found in [Gia00] and [Fun05]. One first considers the so-called massive model, where there is a mass m > 0 implying a self-interaction. Let Λ ∈ S, ψ ∈ Ω and m > 0. We write synonymously φx = φ(x) for φ ∈ Ω. Nearest neighbour heights interact through an elastic interaction potential V : R → R, which we assume to be strictly convex with quadratic growth, and which depends only on the difference in the heights of the nearest neighbours. In the simplest case V(r) = r²/2 one gets the Hamiltonian

    H_Λ^ψ(φ) = (m²/2) Σ_{x∈Λ} φx² + (1/(4d)) Σ_{x,y∈Λ, |x−y|=1} (φx − φy)²,

with φx = ψx for x ∈ Λc. The interface here is said to be anchored at ψ outside of Λ. A random interface anchored at ψ outside of Λ is given by the Gibbs distribution

    γ_Λ^ψ(dφ) = (1/ZΛ(ψ)) e^{−βH_Λ^ψ(φ)} λ_Λ^ψ(dφ),

where

    λ_Λ^ψ(dφ) = Π_{x∈Λ} dφx Π_{x∉Λ} δψx(dφx)

is the product of the Lebesgue measure at each single site in Λ and the Dirac measure at ψx for x ∈ Λc. The term λ_Λ^ψ is called reference measure in Λ with boundary ψ.
boundary ψ. The thermodynamic limit exists for the model with m > 0
in any dimension. However, for the most interesting case m = 0 this exists
only for d ≥ 3. These models are called massless models or harmonic
crystals. The interesting feature of these models is that there are infinitely
many Gibbs measures due to the continuous symmetry. Hence we are in
a regime of phase transitions (see [BD93] for some rigorous results for this
regime). The massless models have been studied intensively during the last
fifteen years (see [Gia00] for an overview). The main technique applied is
the random walk representation. This can be achieved when one employs
summation by parts to obtain a discrete elliptic problem.
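To make the massive model concrete, here is a minimal sampling sketch (not part of the original notes). It assumes d = 1, β = 1 and zero boundary condition ψ ≡ 0, and takes the Hamiltonian exactly as displayed above, where the pair sum runs over ordered nearest-neighbour pairs inside Λ; under these assumptions the Gibbs distribution is a centred Gaussian whose precision matrix can be written down explicitly.

```python
# Massive model in d = 1 with V(r) = r^2/2, beta = 1, psi = 0 on the box
# Lambda = {1,...,n}: the Hamiltonian above equals (1/2) phi^T A phi with
# A = m^2 * Id + (1/d) * (graph Laplacian of the path), so the Gibbs measure
# is the centred Gaussian with covariance A^{-1}.
import numpy as np

n, m, d = 30, 0.5, 1
L = np.zeros((n, n))                      # graph Laplacian of the path graph
for i in range(n - 1):
    L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0
A = m**2 * np.eye(n) + L / d              # precision matrix of the Gibbs measure

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(n), np.linalg.inv(A), size=20000)
mid = n // 2
print("empirical Var(phi_mid) :", round(float(samples[:, mid].var()), 3))
print("theoretical (A^{-1})   :", round(float(np.linalg.inv(A)[mid, mid]), 3))
```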

Figure 10: height-functions φ : Zd → R

This also gives a hint of why we need d ≥ 3: the random walk representation requires the transience of the random walk. Luckily, if one goes over to the random field of gradients, i.e. the field derived from the random field of heights with the discrete gradient mapping, one has the existence of infinite-volume Gibbs measures in any dimension ([Gia00],[Fun05]). However, one loses the product structure of the reference measure and one has to deal with the curl-free condition. The fundamental result concerning these gradient Gibbs measures is given in [FS97]. For a recent review see [Fun05].

References
[AA68] V.I. Arnold and A. Avez. Ergodic problems of classical mechanics.
Benjamin, New York, 1968.

[Ada01] S. Adams. Complete Equivalence of the Gibbs Ensembles for one-


dimensional Markov Systems. Journal Stat. Phys., 105(5/6), 2001.

[AGL78] M. Aizenman, S. Goldstein, and J.L. Lebowitz. Conditional Equi-


librium and the Equivalence of Microcanonical and Grandcanon-
ical Ensembles in the Thermodynamic limit. Commun. Math.
Phys., 62:279–302, 1978.

[Aiz80] M. Aizenman. Instability of phase coexistence and translation in-


variance in two dimensions. Number 116 in Lecture Notes in
Physics. Springer, 1980.

[AL06] S. Adams and J.L. Lebowitz. About Fluctuations of the Kinetic


Energy in the Microcanonical Ensemble. in preparation, 2006.

[Bal91] R. Balian. From Microphysics to Macrophysics - Methods and


Applications of Statistical Physics. Springer-Verlag, Berlin, 1991.

[Bal92] R. Balian. From Microphysics to Macrophysics - Methods and


Applications of Statistical Physics. Springer-Verlag, Berlin, 1992.

[BD93] E. Bolthausen and J.D. Deuschel. Critical Large Deviations For


Gaussian Fields In The Phase Transition Regime, I. Ann. Probab.,
21(4):1876–1920, 1993.

[Bir31] G.D. Birkhoff. Proof of the ergodic theorem. Proc. Nat. Acad. Sci.
USA, 17:656–660, 1931.

[BL99a] G.M. Bell and D.A. Lavis. Statistical Mechanics of Lattice Sys-
tems, volume I. Springer-Verlag, 2nd edition, 1999.

[BL99b] G.M. Bell and D.A. Lavis. Statistical Mechanics of Lattice Sys-
tems, volume II. Springer-Verlag, 1999.

[Bol84] L. Boltzmann. Über die Eigenschaften monozyklischer und an-


derer damit verwandter Systeme, volume III of Wissenschaftliche
Abhandlungen. Chelsea, New York, 1884. reprint 1968.

[Bol74] L. Boltzmann. Theoretical physics and philosophy writings. Reidel,


Dordrecht, 1974.

[Bre57] L. Breiman. The individual ergodic theorem of information theory.
Ann. Math. Stat., 28:809–811, 1957.

[CK81] I. Csiszár and J. Körner. Information Theory, Coding Theorems


for Discrete Memoryless Systems. Akadémiai Kiadó, Budapest,
1981.

[dH00] F. den Hollander. Large Deviations. American Mathematical So-


ciety, 2000.

[Dob68a] R.L. Dobrushin. The description of a random field by means of


conditional probabilities and conditions of its regularity. Theor.
Prob. Appl., 13:197–224, 1968.

[Dob68b] R.L. Dobrushin. Gibbsian random fields for lattice systems with
pairwise interactions. Funct. Anal. Appl., 2:292–301, 1968.

[Dob68c] R.L. Dobrushin. The problem of uniqueness of a Gibbs random


field and the problem of phase transition. Funct. Anal. Appl.,
2:302–312, 1968.

[Dob73] R.L. Dobrushin. Investigation of Gibbsian states for three dimen-


sional lattice systems. Theor. Prob. Appl., 18:253–271, 1973.

[Dor99] T.C. Dorlas. Statistical Mechanics, Fundamentals and Model So-


lutions. IOP, 1999.

[DZ98] A. Dembo and O. Zeitouni. Large Deviations Techniques and Ap-


plications. Springer Verlag, 1998.

[EL02] G. Emch and C. Liu. The Logic of Thermostatistical Physics.


Springer, Budapest, 2002.

[Ell85] R. S. Ellis. Entropy, Large Deviations and Statistical Mechanics.


Springer-Verlag, 1985.

[FO88] H. Föllmer and S. Orey. Large Deviations For The Empirical Field
Of A Gibbs Measure. Ann. Probab., 16(3):961–977, 1988.

[Föl73] H. Föllmer. On entropy and information gain in random fields.


Probab. Theory Relat. Fields, 53:147–156, 1973.

[FS97] T. Funaki and H. Spohn. Motion by Mean Curvature from the


Ginzburg-Landau ∇φ Interface Model. Commun. Math. Phys.,
185:1–36, 1997.

[Fun05] T. Funaki. Stochastic Interface Models, volume 1869 of Lecture
Notes in Mathematics, pages 1–178. Springer, 2005.

[Gal99] G. Gallavotti. Statistical Mechanics: A short Treatise. Springer-


Verlag, 1999.

[Geo79] H. O. Georgii. Canonical Gibbs Measures. Lecture Notes in Math-


ematics. Springer, 1979.

[Geo88] H. O. Georgii. Gibbs Measures and Phase Transitions. De Gruyter,


1988.

[Geo93] H.O. Georgii. Large deviations and maximum entropy principle


for interacting random fields on Zd . Ann. Probab., 21:1845–1875,
1993.

[Geo95] H.O. Georgii. The Equivalence of Ensembles for Classical Systems


of Particles. Journal Stat. Phys., 80(5/6):1341–1378, 1995.

[GHM00] H.O. Georgii, O. Häggström, and C. Maes. The random geometry


of equilibrium phases, volume 18 of Phase transitions and Critical
phenomena, pages 1–142. Academic Press, London, 2000.

[Gia00] G. Giacomin. Anharmonic Lattices, Random Walks and Random
Interfaces, volume I of Recent research developments in statistical
physics, pages 97–118. Transworld research network, 2000.

[Gib02] J.W. Gibbs. Elementary principles of statistical mechanics, devel-


oped with special reference to the rational foundations of thermo-
dynamics. Scribner, New York, 1902.

[GM67] G. Gallavotti and S. Miracle-Sole. Statistical mechanics of lattice


systems. Commun. Math. Phys., 5:317–324, 1967.

[Hua87] K. Huang. Statistical Mechanics. Wiley, 1987.

[Isi24] E. Ising. Beitrag zur Theorie des Ferro- und Paramagnetismus. Dis-
sertation, Mathematisch-Naturwissenschaftliche Fakultät der Uni-
versität Hamburg, 1924.

[Isr79] R. B. Israel. Convexity in the Theory of Lattice Gases. Princeton


University Press, 1979.

[Jay89] E.T. Jaynes. Papers on probability, statistics and statistical
physics. Kluwer, Dordrect, 2nd edition, 1989.

[Khi49] A.I. Khinchin. Mathematical Foundations of Statistical Mechanics.


Dover Publications, 1949.

[Khi57] A.I. Khinchin. Mathematical Foundations of Information Theory.


Dover Publications, 1957.

[Kur60] R. Kurth. Axiomatics of Classical Statistical Mechanics. Pergamon


Press, 1960.

[KW41] H.A. Kramers and G.H. Wannier. Statistics of the two-dimensional


ferromagnet I-II. Phys. Rev., 60:252–276, 1941.

[Len20] W. Lenz. Beitrag zum Verständnis der magnetischen Erscheinungen
in festen Körpern. Physik. Zeitschrift, 21:613–615, 1920.

[LL72] J.L. Lebowitz and A. Martin-Löf. On the uniqueness of the equilibrium
state for Ising spin systems. Commun. Math. Phys., 25:276–282,
1972.

[LPS95] J.T. Lewis, C.E. Pfister, and W.G. Sullivan. Entropy, concentra-
tion of probability and conditional limit theorems. Markov Process.
Related Fields, 1(3):319–386, 1995.

[LR69] O.E. Lanford and D. Ruelle. Observables at infinity and states


with short range correlations in statistical mechanics. Commun.
Math. Phys., 13:194–215, 1969.

[McM53] B. McMillan. The basic theorem of information theory. Ann. Math.


Stat., 24:196–214, 1953.

[Min00] R. A. Minlos. Introduction to Mathematical Statistical Physics.


AMS, 2000.

[Oll88] S. Olla. Large Deviations for Gibbs Random Fields. Probab. Th.
Rel. Fields, 77:343–357, 1988.

[Pei36] R. Peierls. On Ising’s model of ferromagnetism. Proc. Cambridge
Phil. Soc., 32:477–481, 1936.

[Rei98] L.E. Reichl. A Modern Course in Statistical Physics. Wiley, New


York, 2nd edition, 1998.

[RR67] D.W. Robinson and D. Ruelle. Mean entropy of states in classical
statistical mechanics. Commun. Math. Phys., 5:288–300, 1967.

[Rue69] D. Ruelle. Statistical Mechanics: Rigorous Results. Addison-


Wesley, 1969.

[Rue78] D. Ruelle. Thermodynamic formalism: The Mathematical Struc-
tures of Classical Equilibrium Statistical Mechanics. Addison-Wesley, 1978.

[Sha48] C.E. Shannon. A mathematical theory of communication. Bell


System Techn. J., 27:379–423, 1948.

[Shl83] S.B. Shlosman. Non-translation-invariant states in two dimensions.


Commun. Math. Phys., 87:497–504, 1983.

[SW49] C.E. Shannon and W. Weaver. The mathematical theory of commu-
nication. University of Illinois Press, Urbana, IL, 1949.

[Tho74] R.L. Thompson. Equilibrium States on Thin Energy Shells. Mem-


oirs of the American Mathematical Society. AMS, 1974.

[Tho79] C. J. Thompson. Mathematical Statistical Mechanics. Princeton


University Press, 1979.

[Tho88] C. J. Thompson. Classical Equilibrium Statistical Mechanics.


Clarendon, 1988.

[TKS92] M. Toda, R. Kubo, and N. Saitô. Statistical Physics I - Equilibrium


Statistical Mechanics. Number 30 in Solid-State Sciences. Springer,
New York, 1992.

