Theory F 2012
Jörg Schmalian
Institute for Theory of Condensed Matter (TKM)
Karlsruhe Institute of Technology
Summer Semester, 2012
Contents
1 Introduction 1
2 Thermodynamics 3
2.1 Equilibrium and the laws of thermodynamics . . . . . . . . . . . 4
2.2 Thermodynamic potentials . . . . . . . . . . . . . . . . . . . . . 9
2.2.1 Example of a Legendre transformation . . . . . . . . . . . 12
2.3 Gibbs-Duhem relation . . . . . . . . . . . . . . . . . . . . . . . . 13
3 Summary of probability theory 15
4 Equilibrium statistical mechanics 19
4.1 The maximum entropy principle . . . . . . . . . . . . . . . . . . 19
4.2 The canonical ensemble . . . . . . . . . . . . . . . . . . . . . . . 21
4.2.1 Spin-1/2 particles within an external field (paramagnetism) 23
4.2.2 Quantum harmonic oscillator . . . . . . . . . . . . . . . . 26
4.3 The microcanonical ensemble . . . . . . . . . . . . . . . . . . . . 28
4.3.1 Quantum harmonic oscillator . . . . . . . . . . . . . . . . 28
5 Ideal gases 31
5.1 Classical ideal gases . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.1.1 The nonrelativistic classical ideal gas . . . . . . . . . . . . 31
5.1.2 Binary classical ideal gas . . . . . . . . . . . . . . . . . . 34
5.1.3 The ultra-relativistic classical ideal gas . . . . . . . . . . . 35
5.1.4 Equipartition theorem . . . . . . . . . . . . . . . . . . . . 37
5.2 Ideal quantum gases . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.2.1 Occupation number representation . . . . . . . . . . . . . 38
5.2.2 Grand canonical ensemble . . . . . . . . . . . . . . . . . . 40
5.2.3 Partition function of ideal quantum gases . . . . . . . . . 41
5.2.4 Classical limit . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.2.5 Analysis of the ideal Fermi gas . . . . . . . . . . . . . . . 44
5.2.6 The ideal Bose gas . . . . . . . . . . . . . . . . . . . . . . 47
5.2.7 Photons in equilibrium . . . . . . . . . . . . . . . . . . . . 50
5.2.8 MIT-bag model for hadrons and the quark-gluon plasma . 53
5.2.9 Ultrarelativistic Fermi gas . . . . . . . . . . . . . . . . . . 55
6 Interacting systems and phase transitions 59
6.1 The classical real gas . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.2 Classification of Phase Transitions . . . . . . . . . . . . . . . . . 61
6.3 Gibbs phase rule and first order transitions . . . . . . . . . . . . 62
6.4 The Ising model . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.4.1 Exact solution of the one dimensional model . . . . . . . 64
6.4.2 Mean field approximation . . . . . . . . . . . . . . . . . . 65
6.5 Landau theory of phase transitions . . . . . . . . . . . . . . . . . 67
6.6 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.7 Renormalization group . . . . . . . . . . . . . . . . . . . . . . . . 80
6.7.1 Perturbation theory . . . . . . . . . . . . . . . . . . . . . 80
6.7.2 Fast and slow variables . . . . . . . . . . . . . . . . . . . 82
6.7.3 Scaling behavior of the correlation function . . . . . . . . 84
6.7.4 ε-expansion of the φ⁴-theory . . . . . . . . . . . . . . . . 85
6.7.5 Irrelevant interactions . . . . . . . . . . . . . . . . . . . . 89
7 Density matrix and fluctuation dissipation theorem 91
7.1 Density matrix of subsystems . . . . . . . . . . . . . . . . . . . . 93
7.2 Linear response and fluctuation dissipation theorem . . . . . . . 95
8 Brownian motion and stochastic dynamics 97
8.1 Langevin equation . . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.2 Random electrical circuits . . . . . . . . . . . . . . . . . . . . . . 99
9 Boltzmann transport equation 103
9.1 Transport coefficients . . . . . . . . . . . . . . . . . . . . . . . . 103
9.2 Boltzmann equation for weakly interacting fermions . . . . . . . 104
9.2.1 Collision integral for scattering on impurities . . . . . . . 106
9.2.2 Relaxation time approximation . . . . . . . . . . . . . . . 107
9.2.3 Conductivity . . . . . . . . . . . . . . . . . . . . . . . . . 107
9.2.4 Determining the transition rates . . . . . . . . . . . . . . 108
9.2.5 Transport relaxation time . . . . . . . . . . . . . . . . . . 110
9.2.6 H-theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
9.2.7 Local equilibrium, Chapman-Enskog Expansion . . . . . . 112
Preface
These lecture notes summarize the main content of the course Statistical Me-
chanics (Theory F), taught at the Karlsruhe Institute of Technology during
the summer semester 2012. They are based on the graduate course Statistical
Mechanics taught at Iowa State University between 2003 and 2005.
Chapter 1
Introduction
Many particle systems are characterized by a huge number of degrees of freedom.
However, in essentially all cases a complete knowledge of all quantum states is
neither possible nor useful or necessary. For example, it is hard to determine
the initial coordinates and velocities of 10^23 Ar atoms in a high-temperature
gas state, needed to integrate Newton's equations. In addition, it is known from
the investigation of classical chaos that in classical systems with many degrees
of freedom the slightest change (i.e. lack of knowledge) in the initial conditions
usually causes dramatic changes in the long time behavior as far as the positions
and momenta of the individual particles are concerned. On the other hand, the
macroscopic properties of a bucket of water are fairly generic and don't seem to
depend on how the individual particles have been placed into the bucket. This
interesting observation clearly suggests that there are principles at work which
ensure that only a few variables are needed to characterize the macroscopic
properties of this bucket of water, and it is worthwhile trying to identify these
principles as opposed to the effort to identify all particle momenta and positions.
The tools and insights of statistical mechanics enable us to determine the
macroscopic properties of many particle systems with known microscopic Hamil-
tonian, albeit in many cases only approximately. This bridge between the mi-
croscopic and macroscopic world is based on the concept of a lack of knowledge
in the precise characterization of the system and therefore has a probabilistic
aspect. This is indeed a lack of knowledge which, different from the proba-
bilistic aspects of quantum mechanics, could be fixed if one were only able to
fully characterize the many particle state. For finite but large systems this is an
extraordinarily tough problem. It becomes truly impossible to solve in the limit
of infinitely many particles. It is this limit of large systems where statistical
mechanics is extremely powerful. One way to see that the "lack of knowledge"
problem is indeed more fundamental than solely laziness of an experimentalist is
that essentially every physical system is embedded in an environment and only
complete knowledge of system and environment allows for a complete charac-
terization. Even the observable part of our universe seems to behave this way,
denying us full knowledge of any given system as a matter of principle.
Chapter 2
Thermodynamics
Even though this course is about statistical mechanics, it is useful to summarize
some of the key aspects of thermodynamics. Clearly these comments cannot
replace a course on thermodynamics itself. Thermodynamics and statistical
mechanics have a relationship which is quite special. It is well known that
classical mechanics covers a set of problems which are a subset of the ones
covered by quantum mechanics. Even more clearly, nonrelativistic mechanics is
a "part of" relativistic mechanics. Such a statement cannot be made if one
tries to relate thermodynamics and statistical mechanics. Thermodynamics
makes very general statements about equilibrium states. The observation that
a system in thermodynamic equilibrium does not depend on its preparation
in the past, for example, is beautifully formalized in terms of exact and
inexact differentials. However, it also covers the energy balance and efficiency of
processes which can be reversible or irreversible. Using the concept of extremely
slow, so-called quasi-static processes it can then make far-reaching statements
which only rely on the knowledge of equations of state, like for example

pV = N k_B T    (2.1)

in case of a dilute gas at high temperatures. Equilibrium statistical mechanics
on the other hand provides us with the tools to derive such equations of state
theoretically, even though it has not much to say about the actual processes, like
for example in a Diesel engine. The latter may however be covered as part of
the rapidly developing field of non-equilibrium statistical mechanics. The main
conclusion from these considerations is that it is useful to summarize some, but
fortunately not necessary to summarize all aspects of thermodynamics for this
course.
2.1 Equilibrium and the laws of thermodynamics
Thermodynamics is based on four laws which are in short summarized as:
0. Thermodynamic equilibrium exists and is characterized by a temperature.
1. Energy is conserved.
2. Not all heat can be converted into work.
3. One cannot reach absolute zero temperature.
Zeroth law: A closed system reaches after long time the state of thermo-
dynamic equilibrium. Here closed stands for the absence of directed energy,
particle etc. flux into or out of the system, even though a statistical fluctuation
of the energy, particle number etc. may occur. The equilibrium state is then
characterized by a set of variables like:

volume, V
electric polarization, P
magnetization, M
particle numbers, N_i, of particles of type i
etc.

This implies that it is irrelevant what the previous volume, magnetization
etc. of the system were. The equilibrium has no memory! If a function of
variables does not depend on the way these variables have been changed, it can
conveniently be written as a total differential, like dV or dN_i etc.

If two systems are brought into contact such that energy can flow from one
system to the other, experiment tells us that after sufficiently long time they
will be in equilibrium with each other. Then they are said to have the same
temperature. If for example system A is in equilibrium with system B and with
system C, it holds that B and C are also in equilibrium with each other. Thus,
the temperature is the class index of the equivalence class of the thermodynamic
equilibrium. There is obviously large arbitrariness in how to choose the temper-
ature scale. If T is a given temperature scale then any monotonic function
t(T) would equally well serve to describe thermodynamic systems. The tem-
perature is typically measured via a thermometer, a device which uses changes
of the system upon changes of the equilibrium state. This could for example be
the volume of a liquid or the magnetization of a ferromagnet etc.

Classically there is another, kinetic interpretation of the temperature as the
averaged kinetic energy of the particles,

k_B T = \frac{2}{3} \langle \epsilon_{kin} \rangle .    (2.2)
We will derive this later. Here we should only keep in mind that this relation
is not valid within quantum mechanics, i.e. it fails at low temperatures. The
equivalence-class interpretation given above is a much more general concept.

First law: The first law is essentially just energy conservation. The total
energy is called the internal energy U. Below we will see that U is nothing else
but the expectation value of the Hamilton operator. Changes, dU, of U occur
only by causing the system to do work, δW, or by changing the heat content,
δQ. To do work or change heat is a process and not an equilibrium state, and
the amount of work depends of course on the process. Nevertheless, the sum of
these two contributions is a total differential¹

dU = \delta Q + \delta W    (2.3)

which is obvious once one accepts the notion of energy conservation, but which
was truly innovative in the days when R. J. Mayer (1842) and Joule (1843-49)
realized that heat is just another energy form.

The specific form of δW can be determined from mechanical considerations.
For example we consider the work done by moving a cylinder in a container.
Mechanically it holds that

\delta W = -\mathbf{F} \cdot d\mathbf{s}    (2.4)
¹ A total differential of a function z = f(x_i) with i = 1, ..., n corresponds to
dz = \sum_i \frac{\partial f}{\partial x_i} dx_i. It implies that
z(x_i^{(1)}) - z(x_i^{(2)}) = \int_C \sum_i \frac{\partial f}{\partial x_i} dx_i, with contour C connecting
x_i^{(2)} with x_i^{(1)}, is independent of the contour C. In general, a differential
\sum_i F_i dx_i is total if \frac{\partial F_i}{\partial x_j} = \frac{\partial F_j}{\partial x_i}, which for
F_i = \frac{\partial f}{\partial x_i} corresponds to the interchangeability of the order in which the
derivatives are taken.
where F is the force exerted by the system and ds is a small distance change
(here of the wall). The minus sign in δW implies that we count energy which
is added to a system as positive, and energy which is subtracted from a system
as negative. Considering a force perpendicular to the wall (of area A) it holds
that the pressure is just

p = \frac{|\mathbf{F}|}{A} .    (2.5)

If we analyze the situation where one pushes the wall in a way to reduce the
volume, then F and ds point in opposite directions, and thus

\delta W = -p A \, ds = -p \, dV .    (2.6)

Of course in this case δW > 0 since dV = A ds < 0. Alternatively, if the wall is
pushed out, then F and ds point in the same direction and

\delta W = -p A \, ds = -p \, dV .

Now dV = A ds > 0 and δW < 0. Note that we may only consider an infini-
tesimal amount of work, since the pressure changes during the compression. To
calculate the total compressional work one needs an equation of state p(V).

It is a general property of the energy added to or subtracted from a system
that it is the product of an intensive state quantity (pressure) and the change
of an extensive state quantity (volume).

More generally it holds that

\delta W = -p \, dV + \mathbf{E} \cdot d\mathbf{P} + \mathbf{H} \cdot d\mathbf{M} + \sum_i \mu_i \, dN_i    (2.7)

where E, H and μ_i are the electric and magnetic field and the chemical poten-
tial of particles of type i. P is the electric polarization and M the magnetization.
To determine the electromagnetic work δW_em = E·dP + H·dM is in fact rather sub-
tle. As it is not really relevant for this course we only sketch the derivation and
refer to the corresponding literature: J. A. Stratton, Electromagnetic Theory,
Chap. 1, McGraw-Hill, New York, (1941) or V. Heine, Proc. Cambridge Phil.
Soc., Vol. 52, p. 546, (1956); see also Landau and Lifshitz, Electrodynamics of
Continua.

Finally we comment on the term with chemical potential μ_i. Essentially by
definition, μ_i is the energy needed to add one particle in equilibrium
to the rest of the system, yielding the work μ_i dN_i.
Second law: This is a statement about the stability of the equilibrium
state. After a closed system went from a state that was out of equilibrium
(right after a rapid pressure change, for example) into a state of equilibrium,
it would not violate energy conservation to evolve back into the initial out-of-
equilibrium state. In fact such a time evolution seems plausible, given that
the microscopic laws of physics are invariant under time reversal. The content of
the second law however is that the tendency to evolve towards equilibrium can
only be reversed by changing work into heat (i.e. the system is not closed
anymore). We will discuss in some detail how this statement can be related to
the properties of the microscopic equations of motion.

Historically the second law was discovered by Carnot. Let us consider the
Carnot process of an ideal gas:
1. Isothermal (T = const.) expansion from volume V_1 to V_2:

\frac{V_2}{V_1} = \frac{p_1}{p_2}    (2.8)

Since U of an ideal gas is solely kinetic energy, i.e. proportional to T, it holds that dU = 0 and
thus

\Delta Q = -\Delta W = -\int_{V_1}^{V_2} \delta W = \int_{V_1}^{V_2} p \, dV
= N k_B T_h \int_{V_1}^{V_2} \frac{dV}{V} = N k_B T_h \log\left(\frac{V_2}{V_1}\right)    (2.9)

2. Adiabatic (δQ = 0) expansion from V_2 to V_3 with

\Delta Q = 0 .    (2.10)

The system will lower its temperature according to

\frac{V_3}{V_2} = \left(\frac{T_h}{T_l}\right)^{3/2} .    (2.11)

This can be obtained by using

dU = C \, dT = -\frac{N k_B T}{V} dV    (2.12)
and C = \frac{3}{2} N k_B, and integrating this equation.

3. Isothermal compression from V_3 to V_4 at T_l, where, similar to the first step:

\Delta Q_{3\to 4} = N k_B T_l \log\left(\frac{V_4}{V_3}\right)    (2.13)

4. Adiabatic compression to the initial temperature and volume, i.e.

\Delta Q = 0    (2.14)

\frac{V_1}{V_4} = \left(\frac{T_l}{T_h}\right)^{3/2} .    (2.15)

As expected it follows that \Delta U_{tot} = 0, which can be obtained by using \Delta W =
C (T_l - T_h) for the first adiabatic step and \Delta W = C (T_h - T_l) for the second.
On the other hand \Delta Q_{tot} > 0, which implies that the system does work since
\Delta W_{tot} = -\Delta Q_{tot}. As often remarked, for the efficiency (ratio of the work done
over the heat absorbed) follows \eta = |\Delta W_{tot}| / \Delta Q_{1\to 2} < 1.
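The bookkeeping of heat and work in the four Carnot steps is easy to check numerically. The following short Python sketch (a minimal illustration; the values of N k_B, T_h, T_l and the volumes are arbitrary choices of mine, not from the notes) verifies that the Clausius sum ΔQ_{1→2}/T_h + ΔQ_{3→4}/T_l vanishes and that the efficiency equals 1 - T_l/T_h.

import numpy as np

NkB = 1.0          # N*k_B in arbitrary units (illustrative assumption)
Th, Tl = 400.0, 300.0
V1, V2 = 1.0, 3.0                    # isothermal expansion V1 -> V2 at Th
V3 = V2 * (Th / Tl) ** 1.5           # adiabatic expansion, T V^(2/3) = const
V4 = V1 * (Th / Tl) ** 1.5           # closes the cycle adiabatically

Q12 = NkB * Th * np.log(V2 / V1)     # heat absorbed at Th
Q34 = NkB * Tl * np.log(V4 / V3)     # heat released at Tl (negative)

print(Q12 / Th + Q34 / Tl)           # -> 0: the Clausius sum vanishes
print(1 - abs(Q34) / Q12)            # efficiency of the cycle
print(1 - Tl / Th)                   # Carnot efficiency, identical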
Most relevant for our considerations is however the observation:

\frac{\Delta Q_{1\to 2}}{T_h} + \frac{\Delta Q_{3\to 4}}{T_l}
= N k_B \left( \log\frac{V_2}{V_1} + \log\frac{V_4}{V_3} \right) = 0 .    (2.16)

Thus, it holds that

\oint \frac{\delta Q}{T} = 0 .    (2.17)

This implies that (at least for the ideal gas) the entropy

dS = \frac{\delta Q}{T}    (2.18)

is a total differential and thus a quantity which characterizes the state of a
system. This is indeed the case in a much more general context.

It then follows that in equilibrium, for a closed system (where of course
δQ = 0) the entropy fulfills

dS = 0 .    (2.19)

Experiment says that this extremum is a maximum. The equilibrium is appar-
ently the least structured state possible at a given total energy. In this sense it
is very tempting to interpret the maximum of the entropy in equilibrium in a
way that S is a measure for the lack of structure, or disorder.
It is already now useful to comment on the microscopic, statistical interpre-
tation of this behavior and on the origin of irreversibility. In classical mechanics
a state of motion of N particles is uniquely determined by the 3N coordinates
and 3N momenta (q_i, p_i) of the N particles at a certain time. The set (q_i, p_i)
is also called the microstate of the system, which of course varies with time.
Each microstate (q_i, p_i) corresponds to one point in a 6N-dimensional space,
the phase space. The set (q_i, p_i), i.e. the microstate, can therefore be identi-
fied with a point in phase space. Let us now consider the diffusion of a gas
in an initial state (q_i(t_0), p_i(t_0)) from a smaller into a larger volume. If one
were really able to reverse all momenta in the final state (q_i(t_f), p_i(t_f)) and to
prepare a state (q_i(t_f), -p_i(t_f)), the process would in fact be reversed. From
a statistical point of view, however, this is an event with an incredibly small
probability. For there is only one point (microstate) in phase space which leads
to an exact reversal of the process, namely (q_i(t_f), -p_i(t_f)). The great major-
ity of microstates belonging to a certain macrostate, however, lead under time
reversal to states which cannot be distinguished macroscopically from the final
state (i.e., the equilibrium or Maxwell-Boltzmann distribution). The funda-
mental assumption of statistical mechanics now is that all microstates which
have the same total energy can be found with equal probability. This, however,
means that the microstate (q_i(t_f), -p_i(t_f)) is only one among very many other
microstates which all appear with the same probability.

As we will see, the number Ω of microstates that is compatible with a
given macroscopic observable is a quantity closely related to the entropy of this
macrostate. The larger Ω, the more probable is the corresponding macrostate,
and the macrostate with the largest number Ω_max of possible microscopic re-
alizations corresponds to thermodynamic equilibrium. The irreversibility that
comes with the second law essentially states that the motion towards a state
with large Ω is more likely than towards a state with smaller Ω.
Third law: This law was postulated by Nernst in 1906 and is closely related
to quantum effects at low T. If one cools a system down it will eventually drop
into the lowest quantum state. Then, there is no lack of structure and one
expects S → 0. This however implies that one cannot change the heat content
anymore if one approaches T → 0, i.e. it will be increasingly harder to cool
down a system the lower the temperature gets.
2.2 Thermodynamic potentials
The internal energy of a system is written (following the first and second law)
as

dU = T \, dS - p \, dV + \mu \, dN    (2.20)

where we consider for the moment only one type of particle. Thus it is obviously
a function

U = U(S, V, N)    (2.21)

with internal variables entropy, volume and particle number. In particular dU =
0 for fixed S, V, and N. In case one considers a physical situation where indeed
these internal variables are fixed, the internal energy is minimal in equilibrium.
Here, the statement of an extremum (minimum) follows from the conditions

\frac{\partial U}{\partial x_i} = 0 \quad \text{with} \quad x_i = S, V, \text{ or } N,    (2.22)
together with

dU = \sum_i \frac{\partial U}{\partial x_i} dx_i ,    (2.23)

which yields dU = 0 at the minimum. Of course this is really only a minimum if
the leading minors of the Hessian matrix \partial^2 U / (\partial x_i \partial x_j) are all positive.

g = \frac{p^2}{2} - \frac{p^2}{4} = \frac{p^2}{4}    (2.51)

and it follows that

dg = \frac{p}{2} \, dp = x \, dp    (2.52)

as desired.
2.3 Gibbs Duhem relation
Finally, a very useful concept of thermodynamics is based on the fact that
thermodynamic quantities of big systems can be considered as either extensive
(proportional to the size of the system) or intensive (do not change as function
of the size of the system).

Extensive quantities:
volume
particle number
magnetization
entropy

Intensive quantities:
pressure
chemical potential
magnetic field
temperature

Interestingly, the internal variables of the internal energy are all extensive:

U = U(S, V, N_i) .    (2.53)

Now if one increases a given thermodynamic system by a certain scale,

V \to \lambda V, \quad N_i \to \lambda N_i, \quad S \to \lambda S,    (2.54)

we expect that the internal energy changes by just that factor, U → λU, i.e.

U(\lambda S, \lambda V, \lambda N_i) = \lambda \, U(S, V, N_i)    (2.55)
whereas the temperature or any other intensive variable will not change:

T(\lambda S, \lambda V, \lambda N_i) = T(S, V, N_i) .    (2.56)

Using the above equation for U gives, for λ = 1 + ε and small ε:

U((1+\epsilon)S, (1+\epsilon)V, (1+\epsilon)N_i)
= U(S, V, N_i) + \frac{\partial U}{\partial S}\epsilon S + \frac{\partial U}{\partial V}\epsilon V + \sum_i \frac{\partial U}{\partial N_i}\epsilon N_i
= U(S, V, N_i) + \epsilon \, U(S, V, N_i) .    (2.57)

Using the fact that

T = \frac{\partial U}{\partial S}, \quad p = -\frac{\partial U}{\partial V}, \quad \mu_i = \frac{\partial U}{\partial N_i},    (2.58)

it follows that

U((1+\epsilon)S, ...) = U(S, V, N_i) + \epsilon \, U(S, V, N_i)
= U(S, V, N_i) + \epsilon \left( TS - pV + \sum_i \mu_i N_i \right),    (2.59)

which then gives

U = TS - pV + \sum_i \mu_i N_i .    (2.60)
Since

dU = T \, dS - p \, dV + \mu \, dN    (2.61)

it follows immediately that

0 = S \, dT - V \, dp + \sum_i N_i \, d\mu_i .    (2.62)

This relationship is useful if one wants to study for example the change of the
temperature as function of pressure changes etc. Another consequence of the
Gibbs-Duhem relation holds for the other potentials, like

F = -pV + \sum_i \mu_i N_i    (2.63)

or

\Omega = -pV,    (2.64)

which can be very useful.
Chapter 3
Summary of probability
theory
We give a very brief summary of the key aspects of probability theory that are
needed in statistical mechanics. Consider a physical observable x that takes
with probability p(x_i) the value x_i. In total there are N such possible values,
i.e. i = 1, ..., N. The observable will with certainty take one out of the N
values, i.e. the probability that x is either x_1 or x_2 or ... or x_N is
unity:

\sum_{i=1}^{N} p(x_i) = 1 .    (3.1)

The probability is normalized.

The mean value of x is given as

\langle x \rangle = \sum_{i=1}^{N} p(x_i) \, x_i .    (3.2)

Similarly it holds for an arbitrary function f(x) that

\langle f(x) \rangle = \sum_{i=1}^{N} p(x_i) \, f(x_i) ,    (3.3)

e.g. f(x) = x^n yields the n-th moment of the distribution function,

\langle x^n \rangle = \sum_{i=1}^{N} p(x_i) \, x_i^n .    (3.4)

The variance of the distribution is the mean square deviation from the averaged
value:

\langle (x - \langle x \rangle)^2 \rangle = \langle x^2 \rangle - \langle x \rangle^2 .    (3.5)
If we introduce f_t(x) = exp(t x) we obtain the characteristic function:

c(t) = \langle f_t(x) \rangle = \sum_{i=1}^{N} p(x_i) \exp(t x_i)
= \sum_{n=0}^{\infty} \frac{t^n}{n!} \sum_{i=1}^{N} p(x_i) x_i^n
= \sum_{n=0}^{\infty} \frac{\langle x^n \rangle}{n!} t^n .    (3.6)

Thus, the Taylor expansion coefficients of the characteristic function are iden-
tical to the moments of the distribution p(x_i).
Consider next two observables x and y with probability P(x_i ∧ y_j) that x
takes the value x_i and y becomes y_j. Let p(x_i) be the distribution function of x
and q(y_j) the distribution function of y. They follow from the joint distribution
via

p(x_i) = \sum_j P(x_i \wedge y_j), \qquad q(y_j) = \sum_i P(x_i \wedge y_j) .    (3.8)

Thus, it follows that

\langle x + y \rangle = \sum_{i,j} (x_i + y_j) P(x_i \wedge y_j) = \langle x \rangle + \langle y \rangle .    (3.9)

Consider now

\langle x y \rangle = \sum_{i,j} x_i y_j \, P(x_i \wedge y_j) .    (3.10)

Suppose the two observables are independent; then it follows that

\langle x y \rangle = \sum_{i,j} x_i y_j \, p(x_i) q(y_j) = \langle x \rangle \langle y \rangle .    (3.11)

This suggests to analyze the covariance

C(x, y) = \langle (x - \langle x \rangle)(y - \langle y \rangle) \rangle
= \langle xy \rangle - 2 \langle x \rangle \langle y \rangle + \langle x \rangle \langle y \rangle
= \langle xy \rangle - \langle x \rangle \langle y \rangle .    (3.12)

The covariance is therefore only finite when the two observables are not inde-
pendent, i.e. when they are correlated. Frequently, x and y do not need to be
distinct observables, but could be the same observable at different time or space
arguments. Suppose x = S(r, t) is a spin density; then

\chi(\mathbf{r}, \mathbf{r}'; t, t') = \langle (S(\mathbf{r}, t) - \langle S(\mathbf{r}, t) \rangle)(S(\mathbf{r}', t') - \langle S(\mathbf{r}', t') \rangle) \rangle

is the spin-correlation function. In systems with translation invariance it holds
that \chi(\mathbf{r}, \mathbf{r}'; t, t') = \chi(\mathbf{r} - \mathbf{r}'; t - t'). If now, for example,

\chi(r; t) = e^{-r/\xi} e^{-t/\tau},

then ξ and τ are called correlation length and correlation time. They obviously
determine over how far or how long the spins of a magnetic system are correlated.
Chapter 4
Equilibrium statistical
mechanics
4.1 The maximum entropy principle
The main content of the second law of thermodynamics was that the entropy
of a closed system in equilibrium is maximal. Our central task in statistical
mechanics is to relate this statement to the results of a microscopic calculation,
based on the Hamiltonian, H, and the eigenvalues

H \psi_i = E_i \psi_i    (4.1)

of the system.

Within the ensemble theory one considers a large number of essentially iden-
tical systems and studies the statistics of such systems. The smallest contact
with some environment or the smallest variation in the initial conditions or
quantum preparation will cause fluctuations in the way the system behaves.
Thus, it is not guaranteed in which of the states ψ_i the system might be, i.e.
what energy E_i it will have (remember, the system is in thermal contact, i.e. we
allow the energy to fluctuate). This is characterized by the probability p_i of the
system to have energy E_i. For those who don't like the notion of ensembles, one
can imagine that each system is subdivided into many macroscopic subsystems
and that the fluctuations are rather spatial.

If one wants to relate the entropy with the probability one can make the
following observation: Consider two identical large systems which are brought
in contact. Let p_1 and p_2 be the probabilities of these systems to be in state 1
and 2 respectively. The entropy of each of the systems is S_1 and S_2. After these
systems are combined it follows for the entropy, as an extensive quantity, that

S_{tot} = S_1 + S_2    (4.2)

and for the probability of the combined system

p_{tot} = p_1 p_2 .    (4.3)
The last result simply expresses the fact that these systems are independent,
whereas the first one is valid for extensive quantities in thermodynamics. Here
we assume that short range forces dominate and interactions between the two
systems occur only on the boundaries, which are negligible for sufficiently large
systems. If now the entropy S_i is a function of p_i it follows that

S_i \sim \log p_i .    (4.4)

It is convenient to use as prefactor the so-called Boltzmann constant,

S_i = -k_B \log p_i,    (4.5)

where

k_B = 1.380658 \times 10^{-23} \, \mathrm{J\,K^{-1}} = 8.617385 \times 10^{-5} \, \mathrm{eV\,K^{-1}} .    (4.6)

The averaged entropy of each subsystem, and thus of each system itself, is then
given as

S = -k_B \sum_{i=1}^{\Lambda} p_i \log p_i .    (4.7)

Here, we have established a connection between the probability to be in a given
state and the entropy S. This connection was one of the truly outstanding
achievements of Ludwig Boltzmann.

Eq. 4.7 helps us immediately to relate S with the degree of disorder in the
system. If we know exactly in which state a system is we have:

p_i = 1 \text{ for } i = i_0 \text{ and } p_i = 0 \text{ for } i \neq i_0 \;\Longrightarrow\; S = 0 .    (4.8)

In the opposite limit, where all states are equally probable, we have instead:

p_i = \frac{1}{\Lambda} \;\Longrightarrow\; S = k_B \log \Lambda .    (4.9)

Thus, if we know the state of the system with certainty, the entropy vanishes,
whereas in case of the completely equal distribution a large (in fact maximal)
entropy follows. Here Λ is the number of distinct states.

The fact that S = k_B log Λ is indeed the largest allowed value of S follows
from maximizing S with respect to the p_i. Here we must however keep in mind that
the p_i are not independent variables since

\sum_{i=1}^{\Lambda} p_i = 1 .    (4.10)
This is done by using the method of Lagrange multipliers, summarized in a
separate handout. One has to find the extremum of

L = S - \lambda \left( \sum_{i=1}^{\Lambda} p_i - 1 \right)
= -k_B \sum_{i=1}^{\Lambda} p_i \log p_i - \lambda \left( \sum_{i=1}^{\Lambda} p_i - 1 \right) .    (4.11)

We set the derivative of L with respect to p_i equal to zero:

\frac{\partial L}{\partial p_i} = -k_B ( \log p_i + 1 ) - \lambda = 0,    (4.12)

which gives

p_i = \exp\left( -\frac{\lambda}{k_B} - 1 \right) = P,    (4.13)

independent of i! We can now determine the Lagrange multiplier from the
constraint

1 = \sum_{i=1}^{\Lambda} p_i = \sum_{i=1}^{\Lambda} P = \Lambda P,    (4.14)

which gives

p_i = \frac{1}{\Lambda}, \quad \forall i .    (4.15)

Thus, one way to determine the entropy is by analyzing the number of states
Λ(E, V, N) the system can take at a given energy, volume and particle number.
This is expected to depend exponentially on the number of particles,

\Lambda \sim \exp(s N),    (4.16)

which makes the entropy an extensive quantity. We will perform this calculation
when we investigate the so-called microcanonical ensemble, but will follow a
different argumentation now.
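A small numerical check (a sketch of my own; the four-state system and the use of scipy are illustrative choices, not part of the notes) confirms that maximizing S = -k_B Σ_i p_i log p_i under the normalization constraint indeed returns the uniform distribution p_i = 1/Λ:

import numpy as np
from scipy.optimize import minimize

Lambda = 4                         # number of states (illustrative choice)

def neg_entropy(p):
    """Negative entropy -S/k_B; minimizing it maximizes S."""
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraint = {'type': 'eq', 'fun': lambda p: np.sum(p) - 1.0}
p0 = np.random.dirichlet(np.ones(Lambda))          # random normalized start
res = minimize(neg_entropy, p0, constraints=[constraint],
               bounds=[(0.0, 1.0)] * Lambda)

print(res.x)                       # -> approximately [0.25, 0.25, 0.25, 0.25]
print(-res.fun, np.log(Lambda))    # maximal entropy equals log(Lambda)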
4.2 The canonical ensemble
Eq. 4.9 is, besides the normalization, an unconstrained extremum of S. In many
cases however it might be appropriate to impose further conditions on the sys-
tem. For example, if we allow the energy of a system to fluctuate, we may still
impose that it has a given averaged energy:

\langle E \rangle = \sum_{i=1}^{\Lambda} p_i E_i .    (4.17)

If this is the case we have to find the extremum of

L = S - \lambda \left( \sum_{i=1}^{\Lambda} p_i - 1 \right) - k_B \beta \left( \sum_{i=1}^{\Lambda} p_i E_i - \langle E \rangle \right)
= -k_B \sum_{i=1}^{\Lambda} p_i \log p_i - \lambda \left( \sum_{i=1}^{\Lambda} p_i - 1 \right) - k_B \beta \left( \sum_{i=1}^{\Lambda} p_i E_i - \langle E \rangle \right) .    (4.18)

We set the derivative of L w.r.t. p_i equal to zero:

\frac{\partial L}{\partial p_i} = -k_B ( \log p_i + 1 ) - \lambda - k_B \beta E_i = 0,    (4.19)
which gives

p_i = \exp\left( -\frac{\lambda}{k_B} - 1 - \beta E_i \right) = \frac{1}{Z} \exp(-\beta E_i),    (4.20)

where the constant Z (or equivalently the Lagrange multiplier λ) is determined
by

Z = \sum_i \exp(-\beta E_i),    (4.21)

which guarantees normalization of the probabilities.

The Lagrange multiplier β is now determined via

\langle E \rangle = \frac{1}{Z} \sum_i E_i \exp(-\beta E_i) = -\frac{\partial}{\partial \beta} \log Z .    (4.22)

This is in general some implicit equation for β given ⟨E⟩. However, there is a
very intriguing interpretation of this Lagrange multiplier that allows us to avoid
solving for β(⟨E⟩) and gives β its own physical meaning.

For the entropy follows (log p_i = -β E_i - log Z):

S = -k_B \sum_{i=1}^{\Lambda} p_i \log p_i = k_B \sum_{i=1}^{\Lambda} p_i ( \beta E_i + \log Z )
= k_B \beta \langle E \rangle + k_B \log Z = -k_B \beta \frac{\partial}{\partial \beta} \log Z + k_B \log Z .    (4.23)

If one substitutes

\beta = \frac{1}{k_B T}    (4.24)

it holds that

S = k_B T \frac{\partial \log Z}{\partial T} + k_B \log Z = \frac{\partial ( k_B T \log Z )}{\partial T} .    (4.25)

Thus, there is a function

F = -k_B T \log Z    (4.26)

which gives

S = -\frac{\partial F}{\partial T}    (4.27)

and

\langle E \rangle = \frac{\partial}{\partial \beta} (\beta F) = F + \beta \frac{\partial F}{\partial \beta} = F + TS .    (4.28)

Comparison with our results in thermodynamics leads, after the identification of
⟨E⟩ with the internal energy U, to:

T : temperature
F : free energy.    (4.29)

Thus, it might not even be useful to ever express the thermodynamic variables
in terms of β and ⟨E⟩ = U, but rather keep T.

The most outstanding results of these considerations are:
The statistical probabilities for being in a state with energy E_i are

p_i \sim \exp\left( -\frac{E_i}{k_B T} \right) .    (4.30)

All thermodynamic properties can be obtained from the so-called partition
function

Z = \sum_i \exp(-\beta E_i) .    (4.31)

Within quantum mechanics it is useful to introduce the so-called density
operator

\rho = \frac{1}{Z} \exp(-\beta H),    (4.32)

where

Z = \mathrm{tr} \exp(-\beta H)    (4.33)

ensures that tr ρ = 1. The equivalence between these two representations
can be shown by evaluating the trace with respect to the eigenstates of
the Hamiltonian:

Z = \sum_n \langle n | \exp(-\beta H) | n \rangle = \sum_n \langle n | n \rangle \exp(-\beta E_n)
= \sum_n \exp(-\beta E_n),    (4.34)

as expected.

The evaluation of this partition sum is therefore the major task of equilib-
rium statistical mechanics.
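To make the recipe concrete, here is a minimal Python sketch (the three-level spectrum is an arbitrary illustration of mine, not a system from the notes) that computes Z, F, U and S for a finite set of energy levels and checks the relation U = F + TS of Eq. 4.28:

import numpy as np

kB = 1.0
E = np.array([0.0, 1.0, 3.0])         # toy energy levels E_i (illustrative)

def thermodynamics(T):
    beta = 1.0 / (kB * T)
    w = np.exp(-beta * E)
    Z = w.sum()                        # partition function
    p = w / Z                          # canonical probabilities p_i
    U = np.sum(p * E)                  # internal energy <E>
    F = -kB * T * np.log(Z)            # free energy
    S = -kB * np.sum(p * np.log(p))    # Gibbs entropy
    return Z, F, U, S

T = 2.0
Z, F, U, S = thermodynamics(T)
print(U, F + T * S)                    # U = F + T S holds numerically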
4.2.1 Spin-1/2 particles within an external field (paramagnetism)
Consider a system of N spin-1/2 particles in an external field B = (0, 0, B), charac-
terized by the Hamiltonian

H = -g \mu_B \sum_{i=1}^{N} s_{z,i} B,    (4.35)

where \mu_B = \frac{e\hbar}{2mc} = 9.274 \times 10^{-24} \, \mathrm{J\,T^{-1}} = 0.67141 \, k_B \, \mathrm{K/T} is the Bohr magneton.
Here the operator of the projection of the spin onto the z-axis, s_{z,i}, of the
particle at site i has the two eigenvalues

s_{z,i} = \pm\frac{1}{2} .    (4.36)
Using for simplicity g = 2, with σ_i = 2 s_{z,i} = ±1, the eigenenergies are
characterized by the set of variables {σ_i}:

E_{\{\sigma_i\}} = -\mu_B \sum_{i=1}^{N} \sigma_i B .    (4.37)

Different states of the system can be for example

\sigma_1 = +1, \sigma_2 = +1, ..., \sigma_N = +1,    (4.38)

which is obviously the ground state if B > 0, or

\sigma_1 = -1, \sigma_2 = +1, ..., \sigma_N = +1,    (4.39)

etc. The partition function is now given as

Z = \sum_{\{\sigma_i\}} \exp\left( \beta \mu_B \sum_{i=1}^{N} \sigma_i B \right)
= \sum_{\sigma_1 = \pm 1} \sum_{\sigma_2 = \pm 1} ... \sum_{\sigma_N = \pm 1} \prod_{i=1}^{N} \exp( \beta \mu_B \sigma_i B )
= \sum_{\sigma_1 = \pm 1} e^{\beta \mu_B \sigma_1 B} \sum_{\sigma_2 = \pm 1} e^{\beta \mu_B \sigma_2 B} ... \sum_{\sigma_N = \pm 1} e^{\beta \mu_B \sigma_N B}
= \left( \sum_{\sigma = \pm 1} e^{\beta \mu_B \sigma B} \right)^N = (Z_1)^N,

where Z_1 is the partition function of only one particle. Obviously, statistical
mechanics of only one single particle does not make any sense. Nevertheless,
the concept of single-particle partition functions is useful in all cases where the
Hamiltonian can be written as a sum of commuting, non-interacting terms and
the particles are distinguishable, i.e. for

H = \sum_{i=1}^{N} h(X_i)    (4.40)

with wave function

|\Psi\rangle = \prod_i |n_i\rangle    (4.41)

with

h(X_i) |n_i\rangle = \epsilon_{n_i} |n_i\rangle .    (4.42)

It holds that Z_N = (Z_1)^N, where Z_1 = tr exp(-βh).
For our above example we can now easily evaluate

Z_1 = e^{\beta \mu_B B} + e^{-\beta \mu_B B} = 2 \cosh(\beta \mu_B B),    (4.43)

which gives

F = -N k_B T \log\left( 2 \cosh\left( \frac{\mu_B B}{k_B T} \right) \right) .    (4.44)

For the internal energy follows

U = \langle E \rangle = -\frac{\partial}{\partial \beta} \log Z = -N \mu_B B \tanh\left( \frac{\mu_B B}{k_B T} \right),    (4.45)

which immediately gives for the expectation value of the spin operator

\langle s^z_i \rangle = \frac{1}{2} \tanh\left( \frac{\mu_B B}{k_B T} \right) .    (4.46)

The entropy is given as

S = -\frac{\partial F}{\partial T} = N k_B \log\left( 2 \cosh\left( \frac{\mu_B B}{k_B T} \right) \right) - N \frac{\mu_B B}{T} \tanh\left( \frac{\mu_B B}{k_B T} \right),    (4.47)

which turns out to be identical to S = (U - F)/T.

If B = 0 it follows that U = 0 and S = N k_B log 2, i.e. the number of configura-
tions is degenerate (has equal probability) and there are 2^N such configurations.
For finite B it holds that

S(k_B T \gg \mu_B B) \to N k_B \left( \log 2 - \frac{1}{2} \left( \frac{\mu_B B}{k_B T} \right)^2 + ... \right),    (4.48)

whereas

S(k_B T \ll \mu_B B) \to N k_B \left( 1 + \frac{2 \mu_B B}{k_B T} \right) e^{-2 \mu_B B / (k_B T)}
\simeq \frac{2 N \mu_B B}{T} e^{-2 \mu_B B / (k_B T)} \to 0,    (4.49)

in agreement with the third law of thermodynamics.
The magnetization of the system is

M = g \mu_B \sum_{i=1}^{N} s^z_i    (4.50)

with expectation value

\langle M \rangle = N \mu_B \tanh\left( \frac{\mu_B B}{k_B T} \right) = -\frac{\partial F}{\partial B},    (4.51)

i.e.

dF = -\langle M \rangle dB - S \, dT .    (4.52)

This enables us to determine the magnetic susceptibility

\chi = \frac{\partial \langle M \rangle}{\partial B} = \frac{N \mu_B^2}{k_B T} = \frac{C}{T},    (4.53)

which is called the Curie law.

If we are interested in a situation where not the external field but rather
the magnetization is fixed, we use instead of F the potential

H = F + \langle M \rangle B,    (4.54)

where we only need to invert the ⟨M⟩-B dependence, i.e. with

B(\langle M \rangle) = \frac{k_B T}{\mu_B} \tanh^{-1}\left( \frac{\langle M \rangle}{N \mu_B} \right) .    (4.55)
4.2.2 Quantum harmonic oscillator
The analysis of this problem is very similar to the investigation of ideal Bose
gases that will be investigated later during the course. Let us consider a set of N
oscillators. The Hamiltonian of the problem is

H = \sum_{i=1}^{N} \left( \frac{p_i^2}{2m} + \frac{k}{2} x_i^2 \right),

where p_i and x_i are momentum and position operators of N independent quan-
tum harmonic oscillators. The energy of the oscillators is

E = \sum_{i=1}^{N} \hbar \omega_0 \left( n_i + \frac{1}{2} \right)    (4.56)

with frequency \omega_0 = \sqrt{k/m} and zero point energy E_0 = \frac{N}{2} \hbar \omega_0. The integers n_i
determine the oscillator eigenstate of the i-th oscillator. They can take values
from 0 to infinity. The Hamiltonian is of the form H = \sum_{i=1}^{N} h(p_i, x_i), and it
follows for the partition function that

Z_N = (Z_1)^N .

The single oscillator partition function is

Z_1 = \sum_{n=0}^{\infty} e^{-\beta \hbar \omega_0 (n + \frac{1}{2})}
= e^{-\beta \hbar \omega_0 / 2} \sum_{n=0}^{\infty} e^{-\beta \hbar \omega_0 n}
= e^{-\beta \hbar \omega_0 / 2} \frac{1}{1 - e^{-\beta \hbar \omega_0}} .
This yields for the partition function

\log Z_N = N \log Z_1 = -N \beta \hbar \omega_0 / 2 - N \log\left( 1 - e^{-\beta \hbar \omega_0} \right),

which enables us to determine the internal energy

\langle E \rangle = -\frac{\partial}{\partial \beta} \log Z_N = N \hbar \omega_0 \left( \frac{1}{e^{\beta \hbar \omega_0} - 1} + \frac{1}{2} \right) .

The mean value of the oscillator quantum number is then obviously given as

\langle n_i \rangle = \langle n \rangle = \frac{1}{e^{\beta \hbar \omega_0} - 1},

which tells us that for k_B T ≪ ħω_0 the probability of excited oscillator states
is exponentially small. For the entropy it follows from

F = -k_B T \log Z_N = N \hbar \omega_0 / 2 + N k_B T \log\left( 1 - e^{-\hbar \omega_0 / (k_B T)} \right)    (4.57)

that

S = -\frac{\partial F}{\partial T} = -N k_B \log\left( 1 - e^{-\hbar \omega_0 / (k_B T)} \right) + N \frac{\hbar \omega_0}{T} \frac{1}{e^{\hbar \omega_0 / (k_B T)} - 1}
= N k_B \left( (\langle n \rangle + 1) \log( 1 + \langle n \rangle ) - \langle n \rangle \log \langle n \rangle \right) .    (4.58)

As T → 0 (i.e. for k_B T ≪ ħω_0) it holds that

S \to N k_B \frac{\hbar \omega_0}{k_B T} e^{-\hbar \omega_0 / (k_B T)} \to 0,    (4.59)

in agreement with the third law of thermodynamics, while for large k_B T ≫ ħω_0
it follows that

S \to N k_B \log\left( \frac{k_B T}{\hbar \omega_0} \right) .    (4.60)

For the heat capacity it follows accordingly that

C = T \frac{\partial S}{\partial T} = N k_B \left( \frac{\hbar \omega_0}{k_B T} \right)^2 \frac{1}{4 \sinh^2\left( \frac{\hbar \omega_0}{2 k_B T} \right)},    (4.61)

which vanishes at small T as

C \to N k_B \left( \frac{\hbar \omega_0}{k_B T} \right)^2 e^{-\hbar \omega_0 / (k_B T)},    (4.62)

while it reaches a constant value

C \to N k_B    (4.63)

as T becomes large. The last result is a special case of the equipartition theorem
that will be discussed later.
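The crossover of Eq. 4.61 from the activated low-T behavior to the classical plateau C = N k_B is easily visualized; a minimal sketch (with the convenient, arbitrary choice of units ħω_0 = k_B = N = 1):

import numpy as np

def heat_capacity(T, hw0=1.0, N=1.0, kB=1.0):
    """Oscillator heat capacity, Eq. 4.61: C = N kB (hw0/kB T)^2 / (4 sinh^2(hw0/2 kB T))."""
    x = hw0 / (kB * T)
    return N * kB * x**2 / (4.0 * np.sinh(x / 2.0)**2)

for T in [0.1, 0.5, 1.0, 5.0, 50.0]:
    print(T, heat_capacity(T))        # approaches N*kB = 1 at high temperature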
4.3 The microcanonical ensemble
The canonical and grand-canonical ensembles are two alternative approaches to
determine the thermodynamic potentials and equations of state. In both cases we
assumed fluctuations of the energy, and in the grand canonical case even fluctuations
of the particle number. Only the expectation values of the energy or particle
number are kept fixed.

A much more direct approach to determine thermodynamic quantities is
based on the microcanonical ensemble. In accordance with our earlier analysis
of maximizing the entropy we start from

S = k_B \log \Lambda(E),    (4.64)

where we consider an isolated system with conserved energy, i.e. we take into
account that we need to determine the number of states with a given energy.
If S(E) is known and we identify E with the internal energy U, we obtain the
temperature from

\frac{1}{T} = \frac{\partial S(E)}{\partial E}\bigg|_{V, N} .    (4.65)
4.3.1 Quantum harmonic oscillator
Let us again consider a set of N oscillators with energy

E = E_0 + \sum_{i=1}^{N} \hbar \omega_0 n_i    (4.66)

and zero point energy E_0 = \frac{N}{2} \hbar \omega_0. We have to determine the number of
realizations of {n_i} for a given energy. For example, Λ(E_0) = 1: there is one
realization (n_i = 0 for all i) to get E = E_0. There are N realizations to have an
energy E = E_0 + ħω_0, i.e. Λ(E_0 + ħω_0) = N. The general case of an energy
E = E_0 + M ħω_0 can be analyzed as follows. We consider M black balls and
N - 1 white balls. We consider a sequence of the kind

b_1 b_2 ... b_{m_1} \, w \, b_1 b_2 ... b_{m_2} \, w \, .... \, w \, b_1 b_2 ... b_{m_N},    (4.67)

where b_1 b_2 ... b_{m_i} stands for m_i black balls not separated by a white ball. We need
to find the number of ways in which we can arrange the N - 1 white balls keeping
the total number of black balls fixed at M. This is obviously given by

\Lambda(E_0 + M \hbar \omega_0) = \binom{M + N - 1}{N - 1} = \frac{(M + N - 1)!}{(N - 1)! \, M!} .    (4.68)

This leads to the entropy

S = k_B \left( \log(M + N - 1)! - \log(N - 1)! - \log M! \right) .    (4.69)

For large N it holds that log N! = N (log N - 1), so that

S = k_B N \log\left( \frac{N + M}{N} \right) + k_B M \log\left( \frac{N + M}{M} \right) .

Thus it holds that

\beta = \frac{1}{k_B} \frac{\partial S}{\partial E} = \frac{1}{k_B} \frac{1}{\hbar \omega_0} \frac{\partial S}{\partial M}
= \frac{1}{\hbar \omega_0} \log\left( 1 + \frac{N}{M} \right) .    (4.70)

Thus it follows that

M = \frac{N}{e^{\beta \hbar \omega_0} - 1},    (4.71)

which finally gives

E = N \hbar \omega_0 \left( \frac{1}{e^{\beta \hbar \omega_0} - 1} + \frac{1}{2} \right),    (4.72)

which is the result obtained within the canonical approach.
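The counting argument can be verified directly; the following sketch (the values of N and M are arbitrary small numbers chosen for illustration) computes Λ from the binomial coefficient, extracts β from a discrete entropy difference, and compares with the result β = log(1 + N/M)/ħω_0 of Eq. 4.70:

from math import comb, log

hw0 = 1.0
N = 50                              # number of oscillators (illustrative)

def entropy(M):
    """S/k_B = log of the number of ways to distribute M quanta over N oscillators."""
    return log(comb(M + N - 1, N - 1))

M = 200
beta_micro = (entropy(M + 1) - entropy(M - 1)) / (2.0 * hw0)  # dS/dE with dE = hw0 dM
beta_large_N = log(1.0 + N / M) / hw0                         # Eq. 4.70

print(beta_micro, beta_large_N)     # already close for moderate N, M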
It is instructive to analyze the entropy without going to the large N and M
limit. Then

S = k_B \log \frac{\Gamma(N + M)}{\Gamma(N) \, \Gamma(M + 1)}    (4.73)

and

\beta = \frac{1}{\hbar \omega_0} \left( \psi(N + M) - \psi(M + 1) \right),    (4.74)

where \psi(z) = \frac{\Gamma'(z)}{\Gamma(z)} is the digamma function. It holds that

\psi(L) = \log L - \frac{1}{2L} - \frac{1}{12 L^2} + ...    (4.75)

for large L, such that

\beta = \frac{1}{\hbar \omega_0} \left( \log\frac{N + M}{M} + \frac{N}{2 (N + M) M} + ... \right),    (4.76)

and we recover the above result for large N, M. Here we also have an approach
to make explicit the corrections to the leading term. The canonical and micro-
canonical ensembles obviously differ for small systems, since
\beta = \frac{1}{\hbar \omega_0} \log\left( 1 + \frac{N \hbar \omega_0}{E - E_0} \right) is the exact
result of the canonical approach for all system sizes.

Another way to look at this is to write the canonical partition sum as

Z = \sum_i e^{-\beta E_i} = \frac{1}{\Delta E} \int dE \, \Lambda(E) \, e^{-\beta E}
= \frac{1}{\Delta E} \int dE \, e^{-\beta E + S(E)/k_B} .    (4.77)

If E and S are large (proportional to N) the integral over energy can be esti-
mated by the largest value of the integrand, determined by

\beta - \frac{1}{k_B} \frac{\partial S(E)}{\partial E} = 0 .    (4.78)

This is just the above microcanonical behavior. At the so-defined energy it
follows that

F = -k_B T \log Z = E - T S(E) .    (4.79)

Again, the canonical and microcanonical ensembles agree in the limit of large systems,
but not in general.
Chapter 5
Ideal gases
5.1 Classical ideal gases
5.1.1 The nonrelativistic classical ideal gas
Before we study the classical ideal gas within the formalism of canonical en-
semble theory we summarize some of its thermodynamic properties. The two
equations of state (which we assume to be determined by experiment) are

U = \frac{3}{2} N k_B T,
\qquad pV = N k_B T .    (5.1)

We can for example determine the entropy by starting from

dU = T \, dS - p \, dV,    (5.2)

which gives (for fixed particle number)

dS = \frac{3}{2} N k_B \frac{dT}{T} + N k_B \frac{dV}{V} .    (5.3)

Starting at some state T_0, V_0 with entropy S_0 we can integrate this:

S(T, V) = S_0(T_0, V_0) + \frac{3}{2} N k_B \log\frac{T}{T_0} + N k_B \log\frac{V}{V_0}
= N k_B \left( s_0(T_0, V_0) + \log\left( \left( \frac{T}{T_0} \right)^{3/2} \frac{V}{V_0} \right) \right) .    (5.4)

Next we try to actually derive the above equations of state. We start from
the Hamiltonian

H = \sum_i \frac{\mathbf{p}_i^2}{2m},    (5.5)
and use that a classical system is characterized by the set of three-dimensional
momenta and positions {p_i, x_i}. This suggests to write the partition sum as

Z = \sum_{\{\mathbf{p}_i, \mathbf{x}_i\}} \exp\left( -\beta \sum_i \frac{\mathbf{p}_i^2}{2m} \right) .    (5.6)

For practical purposes it is completely sufficient to approximate the sum by an
integral, i.e. to write

\sum_{p, x} f(p, x) = \frac{1}{\Delta p \, \Delta x} \sum_{p, x} \Delta p \, \Delta x \, f(p, x)
\simeq \int \frac{dp \, dx}{\Delta p \, \Delta x} f(p, x) .    (5.7)

Here Δp Δx is the smallest possible unit it makes sense to discretize momentum
and position in. Its value will only give an additive constant to the partition func-
tion, i.e. an additive correction to the free energy ∼ 3 N k_B T log(Δp Δx).
The most sensible choice is certainly to use

\Delta p \, \Delta x = h    (5.8)

with Planck's quantum h = 6.62607 × 10^{-34} J s. Below, when we consider ideal
quantum gases, we will perform the classical limit and demonstrate explicitly
that this choice for Δp Δx is the correct one. It is remarkable that there seems
to be no natural way to avoid quantum mechanics in the description of a classical
statistical mechanics problem. Right away we will encounter another "left-over"
of quantum physics when we analyze the entropy of the ideal classical gas.

With the above choice for Δp Δx follows:

Z = \prod_{i=1}^{N} \int \frac{d^3 p_i \, d^3 x_i}{h^3} \exp\left( -\beta \frac{\mathbf{p}_i^2}{2m} \right)
= V^N \prod_{i=1}^{N} \int \frac{d^3 p_i}{h^3} \exp\left( -\beta \frac{\mathbf{p}_i^2}{2m} \right)
= \left( \frac{V}{\lambda^3} \right)^N,    (5.9)

where we used

\int_{-\infty}^{\infty} dp \, \exp\left( -\alpha p^2 \right) = \sqrt{\frac{\pi}{\alpha}}    (5.10)

and introduced

\lambda = \sqrt{\frac{\beta h^2}{2 \pi m}},    (5.11)

which is the thermal de Broglie wave length. It is the wavelength obtained via

k_B T = C \frac{\hbar^2 k_\lambda^2}{2m} \quad \text{with} \quad k_\lambda = \frac{2\pi}{\lambda},    (5.12)

with C some constant of order unity. It follows that \lambda = \sqrt{\frac{C h^2 \beta}{2m}} = \sqrt{\frac{C h^2}{2 m k_B T}}, i.e. C = 1/\pi.
For the free energy it thus follows that

F(V, T) = -k_B T \log Z = -N k_B T \log\left( \frac{V}{\lambda^3} \right) .    (5.13)

Using

dF = -S \, dT - p \, dV    (5.14)

gives for the pressure:

p = -\frac{\partial F}{\partial V}\bigg|_T = \frac{N k_B T}{V},    (5.15)

which is the well known equation of state of the ideal gas. Next we determine
the entropy

S = -\frac{\partial F}{\partial T}\bigg|_V = N k_B \log\left( \frac{V}{\lambda^3} \right) - 3 N k_B T \frac{\partial \log \lambda}{\partial T}
= N k_B \log\left( \frac{V}{\lambda^3} \right) + \frac{3}{2} N k_B,    (5.16)

which gives

U = F + TS = \frac{3}{2} N k_B T .    (5.17)

Thus, we recover both equations of state, which were the starting point of
our earlier thermodynamic considerations. Nevertheless, there is a discrepancy
in our result obtained within statistical mechanics: the entropy is not
extensive but has a term of the form N log V which overestimates the number
of states.
The issue is that there are physical configurations where (for some values
P_A and P_B)

\mathbf{p}_1 = \mathbf{P}_A \quad \text{and} \quad \mathbf{p}_2 = \mathbf{P}_B    (5.18)

as well as

\mathbf{p}_2 = \mathbf{P}_A \quad \text{and} \quad \mathbf{p}_1 = \mathbf{P}_B,    (5.19)

i.e. we counted them both. Assuming indistinguishable particles, however, all
that matters is whether there is some particle with momentum P_A and some
particle with momentum P_B, but not which particles are in those states. Thus we assumed
particles to be distinguishable. There are however N! ways to relabel the par-
ticles, which are all identical. We simply counted all these configurations. The
proper expression for the partition sum is therefore

Z = \frac{1}{N!} \prod_{i=1}^{N} \int \frac{d^3 p_i \, d^3 x_i}{h^3} \exp\left( -\beta \frac{\mathbf{p}_i^2}{2m} \right)
= \frac{1}{N!} \left( \frac{V}{\lambda^3} \right)^N .    (5.20)
Using

\log N! = N ( \log N - 1 ),    (5.21)

valid for large N, gives

F = -N k_B T \log\left( \frac{V}{\lambda^3} \right) + k_B T N ( \log N - 1 )
= -N k_B T \left( 1 + \log\left( \frac{V}{N \lambda^3} \right) \right) .    (5.22)

This result differs from the earlier one by the volume-independent term
k_B T N (log N - 1), which is why it is obvious that the equation of state for the
pressure is unchanged. However, for the entropy we now obtain

S = N k_B \log\left( \frac{V}{N \lambda^3} \right) + \frac{5}{2} N k_B,    (5.23)

which gives again

U = F + TS = \frac{3}{2} N k_B T .    (5.24)

The reason that the internal energy is correct is due to the fact that it can be
written as an expectation value

U = \frac{ \frac{1}{N!} \prod_{i=1}^{N} \int \frac{d^3 p_i d^3 x_i}{h^3} \sum_j \frac{\mathbf{p}_j^2}{2m} \exp\left( -\beta \sum_l \frac{\mathbf{p}_l^2}{2m} \right) }
{ \frac{1}{N!} \prod_{i=1}^{N} \int \frac{d^3 p_i d^3 x_i}{h^3} \exp\left( -\beta \sum_l \frac{\mathbf{p}_l^2}{2m} \right) },    (5.25)

and the factor 1/N! does not change the value of U. The prefactor 1/N! is called
the Gibbs correction factor.
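Eq. 5.23 together with Eq. 5.11 is the Sackur-Tetrode formula. The sketch below evaluates it (the constants are standard values and the thermodynamic state is an illustrative choice of mine, not data from the notes) and checks that the corrected entropy is extensive, i.e. that doubling N and V doubles S:

import numpy as np

kB = 1.380649e-23       # J/K
h  = 6.62607e-34        # J s
m  = 6.646e-27          # kg, mass of a helium-4 atom (illustrative choice)

def entropy(N, V, T):
    """Sackur-Tetrode entropy, Eq. 5.23: S = N kB [ log(V/(N lam^3)) + 5/2 ]."""
    lam = h / np.sqrt(2 * np.pi * m * kB * T)   # thermal de Broglie wavelength, Eq. 5.11
    return N * kB * (np.log(V / (N * lam**3)) + 2.5)

N, V, T = 6.022e23, 0.0224, 298.0               # roughly one mole of He in 22.4 liters
print(entropy(N, V, T))                          # ~125 J/K, close to the measured molar entropy
print(entropy(2 * N, 2 * V, T) / entropy(N, V, T))  # -> 2.0, entropy is extensive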
The general partition function of a classical real gas or liquid with pair
potential V(x_i - x_j) is

Z = \frac{1}{N!} \prod_{i=1}^{N} \int \frac{d^3 p_i \, d^3 x_i}{h^3}
\exp\left( -\beta \sum_{i=1}^{N} \frac{\mathbf{p}_i^2}{2m} - \beta \sum_{i < j} V(\mathbf{x}_i - \mathbf{x}_j) \right) .    (5.26)
5.1.2 Binary classical ideal gas
We consider a classical, non-relativistic ideal gas consisting of two distinct types
of particles. There are N_A particles with mass M_A and N_B particles with mass
M_B in a volume V. The partition function is now

Z = \frac{V^{N_A + N_B}}{\lambda_A^{3 N_A} \lambda_B^{3 N_B} \, N_A! \, N_B!}    (5.27)
and yields, with log N! = N(log N - 1),

F = -N_A k_B T \left( 1 + \log\left( \frac{V}{N_A \lambda_A^3} \right) \right)
- N_B k_B T \left( 1 + \log\left( \frac{V}{N_B \lambda_B^3} \right) \right) .    (5.28)

This yields

p = \frac{(N_A + N_B) k_B T}{V}    (5.29)

for the pressure and

S = N_A k_B \log\left( \frac{V}{N_A \lambda_A^3} \right) + N_B k_B \log\left( \frac{V}{N_B \lambda_B^3} \right)
+ \frac{5}{2} (N_A + N_B) k_B    (5.30)

for the entropy. We can compare this with the entropy of an ideal gas of N = N_A + N_B
identical particles,

S_0 = (N_A + N_B) k_B \log\left( \frac{V}{(N_A + N_B) \lambda^3} \right) + \frac{5}{2} (N_A + N_B) k_B .    (5.31)

It follows that

S - S_0 = N_A k_B \log\left( \frac{(N_A + N_B) \lambda^3}{N_A \lambda_A^3} \right)
+ N_B k_B \log\left( \frac{(N_A + N_B) \lambda^3}{N_B \lambda_B^3} \right),    (5.32)

which can be simplified to

S - S_0 = -N k_B \left( x \log\left( \frac{x \lambda_A^3}{\lambda^3} \right) + (1 - x) \log\left( \frac{(1 - x) \lambda_B^3}{\lambda^3} \right) \right),    (5.33)

where x = N_A / N. The additional contribution to the entropy is called the mixing
entropy.
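For equal masses (λ_A = λ_B = λ) Eq. 5.33 reduces to the familiar ideal mixing entropy S - S_0 = -N k_B [x log x + (1 - x) log(1 - x)]; a two-line numerical sketch (units k_B = N = 1 chosen for convenience):

import numpy as np

def mixing_entropy(x, N=1.0, kB=1.0):
    """Ideal mixing entropy, Eq. 5.33 with equal thermal wavelengths."""
    return -N * kB * (x * np.log(x) + (1 - x) * np.log(1 - x))

for x in [0.1, 0.25, 0.5]:
    print(x, mixing_entropy(x))   # maximal at x = 1/2, where S - S0 = N kB log 2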
5.1.3 The ultra-relativistic classical ideal gas
Our calculation for the classical, non-relativistic ideal gas can equally be applied
to other classical ideal gases with an energy-momentum relation different from
\epsilon(\mathbf{p}) = \frac{\mathbf{p}^2}{2m}. For example, in case of relativistic particles one might consider

\epsilon(\mathbf{p}) = \sqrt{ m^2 c^4 + c^2 p^2 },    (5.34)

which in case of massless particles (photons) becomes

\epsilon(\mathbf{p}) = c p .    (5.35)

Our calculation for the partition function is in many steps unchanged:

Z = \frac{1}{N!} \prod_{i=1}^{N} \int \frac{d^3 p_i \, d^3 x_i}{h^3} \exp(-\beta c p_i)
= \frac{1}{N!} \frac{V^N}{h^{3N}} \left( \int d^3 p \, \exp(-\beta c p) \right)^N .    (5.36)
The remaining momentum integral can be performed easily:

I = \int d^3 p \, \exp(-\beta c p) = 4\pi \int_0^{\infty} p^2 dp \, \exp(-\beta c p)
= \frac{4\pi}{(\beta c)^3} \int_0^{\infty} dx \, x^2 e^{-x} = \frac{8\pi}{(\beta c)^3} .    (5.37)

This leads to

Z = \frac{1}{N!} \left( 8\pi V \left( \frac{k_B T}{h c} \right)^3 \right)^N,    (5.38)

where obviously the thermal de Broglie wave length of the problem is now given
as

\lambda = \frac{h c}{k_B T} .    (5.39)

For the free energy it follows, with log N! = N(log N - 1) for large N:

F = -N k_B T \left( 1 + \log\left( \frac{8\pi V}{N \lambda^3} \right) \right) .    (5.40)

This allows us to determine the equation of state

p = -\frac{\partial F}{\partial V} = \frac{N k_B T}{V},    (5.41)

which is identical to the result obtained for the non-relativistic system. This is
because the volume dependence of any classical ideal gas, relativistic or not, is
just the V^N factor in the partition function.

5.1.4 Equipartition theorem

Consider a classical Hamiltonian that is quadratic in all momenta and coordinates,

H = \sum_{i=1}^{N} \sum_{\alpha=1}^{l} \left( a_\alpha p_{i,\alpha}^2 + b_\alpha q_{i,\alpha}^2 \right),    (5.51)
where i is the particle index and α the index of additional degrees of freedom,
like components of the momentum or an angular momentum. Here p_{i,α} and q_{i,α}
are the generalized momenta and coordinates of classical mechanics. The
internal energy is then written as

\langle H \rangle = \frac{ \prod_{i=1}^{N} \prod_{\alpha=1}^{l} \int \frac{dp_{i\alpha} dq_{i\alpha}}{h} \, H \exp(-\beta H) }
{ \prod_{i=1}^{N} \prod_{\alpha=1}^{l} \int \frac{dp_{i\alpha} dq_{i\alpha}}{h} \exp(-\beta H) }
= N \sum_{\alpha=1}^{l} \left( \frac{ \int dp_\alpha \, a_\alpha p_\alpha^2 \, e^{-\beta a_\alpha p_\alpha^2} }{ \int dp_\alpha \, e^{-\beta a_\alpha p_\alpha^2} }
+ \frac{ \int dq_\alpha \, b_\alpha q_\alpha^2 \, e^{-\beta b_\alpha q_\alpha^2} }{ \int dq_\alpha \, e^{-\beta b_\alpha q_\alpha^2} } \right)
= N \sum_{\alpha=1}^{l} \left( \frac{k_B T}{2} + \frac{k_B T}{2} \right) = N l \, k_B T .    (5.52)

Thus, every quadratic degree of freedom contributes a factor \frac{k_B T}{2} to the
internal energy. In particular this gives for the non-relativistic ideal gas, with
l = 3, a_\alpha = \frac{1}{2m} and b_\alpha = 0, that \langle H \rangle = \frac{3}{2} N k_B T as expected. Additional
rotational degrees of freedom in more complex molecules will then increase
this number.
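A quick Monte Carlo sketch (sampling Gaussian momenta, which is what the Boltzmann weight e^{-β a p²} amounts to; the parameter values are arbitrary choices of mine) confirms that ⟨a p²⟩ = k_B T/2 independently of the coefficient a:

import numpy as np

rng = np.random.default_rng(0)
kB, T = 1.0, 1.5
beta = 1.0 / (kB * T)

for a in [0.1, 1.0, 7.3]:                  # arbitrary quadratic coefficients
    sigma = np.sqrt(1.0 / (2 * a * beta))  # e^{-beta a p^2} is a Gaussian of this width
    p = rng.normal(0.0, sigma, size=2_000_000)
    print(a, np.mean(a * p**2), kB * T / 2)   # last two columns agree: kB T / 2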
5.2 Ideal quantum gases
5.2.1 Occupation number representation
So far we have considered only classical systems or (in case of the Ising model or
the system of non-interacting spins) models of distinguishable quantum spins. If
we want to consider quantum systems of truly indistinguishable particles, one
has to take into account that the wave functions of fermions or bosons behave
differently and that states have to be symmetrized or antisymmetrized, i.e. in
case of the partition sum

Z = \sum_n \langle \Psi_n | e^{-\beta H} | \Psi_n \rangle    (5.53)

we have to construct many-particle states which have the proper symmetry
under exchange of particles. This is a very cumbersome operation and turns
out to be highly impractical for large systems.

The way out of this situation is the so-called second quantization, which
simply respects the fact that labeling particles was a stupid thing to begin with
and that one should characterize a quantum many-particle system differently. If
the label of a particle has no meaning, a quantum state is completely determined
if one knows which states of the system are occupied by particles and which are
not. The states of an ideal quantum gas are obviously the momenta, since the
momentum operator

\mathbf{p}_l = \frac{\hbar}{i} \nabla_l    (5.54)

commutes with the Hamiltonian of an ideal quantum system,

H = \sum_{l=1}^{N} \frac{\mathbf{p}_l^2}{2m} .    (5.55)

In case of interacting systems the set of allowed momenta does not form the
eigenstates of the system, but at least a complete basis in which the eigenstates can be
expressed. Thus, we characterize a quantum state by the set of numbers

n_1, n_2, ..., n_M,    (5.56)

which determine how many particles occupy a given quantum state with mo-
mentum p_1, p_2, ..., p_M. In a one-dimensional system of size L those momentum
states are

p_l = \frac{\hbar 2\pi l}{L},    (5.57)

which guarantees a periodic wave function. For a three-dimensional system we
have

\mathbf{p}_{l_x, l_y, l_z} = \frac{\hbar 2\pi ( l_x \mathbf{e}_x + l_y \mathbf{e}_y + l_z \mathbf{e}_z )}{L} .    (5.58)

A convenient way to label the occupation numbers is therefore n_{\mathbf{p}}, which deter-
mines the occupation of particles with momentum eigenvalue p. Obviously, the
total number of particles is

N = \sum_{\mathbf{p}} n_{\mathbf{p}},    (5.59)

whereas the energy of the system is

E = \sum_{\mathbf{p}} n_{\mathbf{p}} \epsilon(\mathbf{p}) .    (5.60)

If we now perform the summation over all states we can just write

Z = \sum_{\{n_{\mathbf{p}}\}} \exp\left( -\beta \sum_{\mathbf{p}} n_{\mathbf{p}} \epsilon(\mathbf{p}) \right) \delta_{N, \sum_{\mathbf{p}} n_{\mathbf{p}}},    (5.61)

where the Kronecker symbol \delta_{N, \sum_{\mathbf{p}} n_{\mathbf{p}}} ensures that only configurations with the cor-
rect particle number are taken into account.
5.2.2 Grand canonical ensemble
At this point it turns out to be much easier to not analyze the problem for fixed
particle number, but solely for fixed averaged particle number ⟨N⟩. We already
expect that this will lead us to the grand-canonical potential

\Omega = F - \mu N = U - TS - \mu N    (5.62)

with

d\Omega = -S \, dT - p \, dV - N \, d\mu,    (5.63)

such that

\frac{\partial \Omega}{\partial \mu} = -N .    (5.64)

In order to demonstrate this we generalize our derivation of the canonical en-
semble starting from the principle of maximum entropy. We have to maximize

S = -k_B \sum_{i=1}^{\Lambda} p_i \log p_i,    (5.65)

with Λ the total number of macroscopic states, under the conditions

1 = \sum_{i=1}^{\Lambda} p_i,    (5.66)

\langle E \rangle = \sum_{i=1}^{\Lambda} p_i E_i,    (5.67)

\langle N \rangle = \sum_{i=1}^{\Lambda} p_i N_i,    (5.68)

where N_i is the number of particles in the state with energy E_i. Obviously we
are summing over all many-body states of all possible particle numbers of the
system. We have to find the extremum of

L = S - \lambda \left( \sum_{i=1}^{\Lambda} p_i - 1 \right)
- k_B \beta \left( \sum_{i=1}^{\Lambda} p_i E_i - \langle E \rangle \right)
- k_B \nu \left( \sum_{i=1}^{\Lambda} p_i N_i - \langle N \rangle \right) .    (5.69)

We set the derivative of L w.r.t. p_i equal to zero:

\frac{\partial L}{\partial p_i} = -k_B ( \log p_i + 1 ) - \lambda - k_B \beta E_i - k_B \nu N_i = 0,    (5.70)

which gives

p_i = \exp\left( -\frac{\lambda}{k_B} - 1 - \beta E_i - \nu N_i \right)
= \frac{1}{Z_g} \exp( -\beta E_i - \nu N_i ),    (5.71)
where the constant

Z_g = \sum_i \exp( -\beta E_i - \nu N_i )    (5.72)

guarantees normalization of the probabilities.

The Lagrange multiplier β is now determined via

\langle E \rangle = \frac{1}{Z_g} \sum_i E_i \exp( -\beta E_i - \nu N_i ) = -\frac{\partial}{\partial \beta} \log Z_g,    (5.73)

whereas

\langle N \rangle = \frac{1}{Z_g} \sum_i N_i \exp( -\beta E_i - \nu N_i ) = -\frac{\partial}{\partial \nu} \log Z_g .    (5.74)

For the entropy S = -k_B \sum_i p_i \log p_i it follows (log p_i = -β E_i - ν N_i - log Z_g)

S = -k_B \sum_{i=1}^{\Lambda} p_i \log p_i = k_B \sum_{i=1}^{\Lambda} p_i ( \beta E_i + \nu N_i + \log Z_g )
= k_B \beta \langle E \rangle + k_B \nu \langle N \rangle + k_B \log Z_g,    (5.75)

i.e., using β = 1/(k_B T),

TS = U + \frac{\nu}{\beta} \langle N \rangle + k_B T \log Z_g,

which implies that

\Omega = U - TS - \mu \langle N \rangle = -k_B T \log Z_g .    (5.76)

We assumed again that β = \frac{1}{k_B T}, which can be verified since indeed
S = -\frac{\partial \Omega}{\partial T} is fulfilled. Thus we can identify the chemical potential

\mu = -\frac{\nu}{\beta},    (5.77)

which indeed reproduces that

\langle N \rangle = -\frac{\partial}{\partial \nu} \log Z_g = k_B T \frac{\partial}{\partial \mu} \log Z_g = -\frac{\partial \Omega}{\partial \mu} .    (5.78)

Thus, we can obtain all thermodynamic variables by working in the grand
canonical ensemble instead.
5.2.3 Partition function of ideal quantum gases
Returning to our earlier problem of non-interacting quantum gases we therefore
find

Z_g = \sum_{\{n_{\mathbf{p}}\}} \exp\left( -\beta \sum_{\mathbf{p}} n_{\mathbf{p}} ( \epsilon(\mathbf{p}) - \mu ) \right)    (5.79)

for the grand partition function. This can be rewritten as

Z_g = \sum_{n_{\mathbf{p}_1}} \sum_{n_{\mathbf{p}_2}} ... \prod_{\mathbf{p}} e^{-\beta n_{\mathbf{p}} (\epsilon(\mathbf{p}) - \mu)}
= \prod_{\mathbf{p}} \sum_{n_{\mathbf{p}}} e^{-\beta n_{\mathbf{p}} (\epsilon(\mathbf{p}) - \mu)} .    (5.80)

Fermions: In case of fermions n_{\mathbf{p}} = 0, 1, such that

Z_g^{FD} = \prod_{\mathbf{p}} \left( 1 + e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \right),    (5.81)

which gives (FD stands for Fermi-Dirac)

\Omega_{FD} = -k_B T \sum_{\mathbf{p}} \log\left( 1 + e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \right) .    (5.82)

Bosons: In case of bosons n_{\mathbf{p}} can take any value from zero to infinity and we
obtain

\sum_{n_{\mathbf{p}}} e^{-\beta n_{\mathbf{p}} (\epsilon(\mathbf{p}) - \mu)}
= \sum_{n_{\mathbf{p}}} \left( e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \right)^{n_{\mathbf{p}}}
= \frac{1}{1 - e^{-\beta (\epsilon(\mathbf{p}) - \mu)}},    (5.83)

which gives (BE stands for Bose-Einstein)

Z_g^{BE} = \prod_{\mathbf{p}} \left( 1 - e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \right)^{-1}    (5.84)

as well as

\Omega_{BE} = k_B T \sum_{\mathbf{p}} \log\left( 1 - e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \right) .    (5.85)
5.2.4 Classical limit
Of course, both results should reproduce the classical limit. For large tempera-
tures it follows via Taylor expansion:

\Omega_{class} = -k_B T \sum_{\mathbf{p}} e^{-\beta (\epsilon(\mathbf{p}) - \mu)} .    (5.86)

This can be motivated as follows: in the classical limit we expect the mean
particle distance d_0 (with \frac{\langle N \rangle}{V} \simeq \frac{1}{d_0^3}) to be large compared to the de Broglie wave
length λ, i.e. classically

d_0 \gg \lambda .    (5.87)

The condition which leads to Eq. 5.86 is e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \ll 1. Since ε(p) ≥ 0 this is
certainly fulfilled if e^{\beta \mu} \ll 1. Under this condition it holds for the particle density
that

\frac{\langle N \rangle}{V} = \frac{1}{V} \sum_{\mathbf{p}} e^{-\beta (\epsilon(\mathbf{p}) - \mu)}
= e^{\beta \mu} \int \frac{d^3 p}{h^3} \exp(-\beta \epsilon(\mathbf{p})) = \frac{e^{\beta \mu}}{\lambda^3},    (5.88)

where we used that the momentum integral always yields the inverse de Broglie
length λ^{-3}. Thus, indeed, if e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \ll 1 it follows that we are in the classical
limit.

Analyzing further Eq. 5.86, we can write

\sum_{\mathbf{p}} f(\epsilon(\mathbf{p})) = \frac{1}{\Delta \mathbf{p}} \sum_{\mathbf{p}} \Delta \mathbf{p} \, f(\epsilon(\mathbf{p}))
= \Delta \mathbf{p}^{-1} \int d^3 p \, f(\epsilon(\mathbf{p}))    (5.89)

with

\Delta \mathbf{p} = \left( \frac{h}{L} \right)^3    (5.90)

due to

\mathbf{p}_{l_x, l_y, l_z} = \frac{\hbar 2\pi ( l_x \mathbf{e}_x + l_y \mathbf{e}_y + l_z \mathbf{e}_z )}{L},    (5.91)

such that

\Omega_{class} = -k_B T V \int \frac{d^3 p}{h^3} e^{-\beta (\epsilon(\mathbf{p}) - \mu)} .    (5.92)

In case of the direct calculation we can use that the grand canonical parti-
tion function can be obtained from the canonical partition function as follows.
Assume we know the canonical partition function

Z(N) = \sum_{i \text{ for fixed } N} e^{-\beta E_i(N)};    (5.93)

then the grand canonical sum is just

Z_g(\mu) = \sum_{N=0}^{\infty} \sum_{i \text{ for fixed } N} e^{-\beta (E_i(N) - \mu N)}
= \sum_{N=0}^{\infty} e^{\beta \mu N} Z(N) .    (5.94)

Applying this to the result we obtained for the canonical partition function,

Z(N) = \frac{1}{N!} \left( \frac{V}{\lambda^3} \right)^N
= \frac{1}{N!} \left( V \int \frac{d^3 p}{h^3} e^{-\beta \epsilon(\mathbf{p})} \right)^N,    (5.95)

we get

Z_g(\mu) = \sum_{N=0}^{\infty} \frac{1}{N!} \left( V \int \frac{d^3 p}{h^3} e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \right)^N
= \exp\left( V \int \frac{d^3 p}{h^3} e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \right),    (5.96)

and the expected result

\Omega_{class} = -k_B T V \int \frac{d^3 p}{h^3} e^{-\beta (\epsilon(\mathbf{p}) - \mu)}    (5.97)

follows. From the Gibbs-Duhem relation

U = TS - pV + \mu N    (5.98)

we found earlier

\Omega = -pV    (5.99)

for the grand canonical ensemble. Since

\langle N \rangle = -\frac{\partial \Omega_{class}}{\partial \mu} = -\beta \Omega_{class},    (5.100)

it follows that

pV = \langle N \rangle k_B T,    (5.101)

which is the expected equation of state of the grand-canonical ensemble. Note,
we obtain the result Eq. 5.86 or Eq. 5.92 purely as the high temperature limit
and observe that the indistinguishability, which is natural in the quantum limit,
"survives" the classical limit, since our result agrees with the one obtained from
the canonical formalism with Gibbs correction factor. Also, the factor \frac{1}{h^3} in the
measure follows naturally.
5.2.5 Analysis of the ideal Fermi gas

We start from

\Omega = -k_B T \sum_{\mathbf{p}} \log\left( 1 + e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \right),    (5.102)

which gives

\langle N \rangle = -\frac{\partial \Omega}{\partial \mu}
= \sum_{\mathbf{p}} \frac{e^{-\beta (\epsilon(\mathbf{p}) - \mu)}}{1 + e^{-\beta (\epsilon(\mathbf{p}) - \mu)}}
= \sum_{\mathbf{p}} \frac{1}{e^{\beta (\epsilon(\mathbf{p}) - \mu)} + 1}
= \sum_{\mathbf{p}} \langle n_{\mathbf{p}} \rangle,    (5.103)

i.e. we obtain the averaged occupation number of a given quantum state,

\langle n_{\mathbf{p}} \rangle = \frac{1}{e^{\beta (\epsilon(\mathbf{p}) - \mu)} + 1} .    (5.104)

Often one uses the symbol f(\epsilon(\mathbf{p}) - \mu) = \langle n_{\mathbf{p}} \rangle. The function

f(\omega) = \frac{1}{e^{\beta \omega} + 1}    (5.105)

is called the Fermi distribution function. For T = 0 this simplifies to

\langle n_{\mathbf{p}} \rangle = 1 \text{ for } \epsilon(\mathbf{p}) < \mu
\quad \text{and} \quad \langle n_{\mathbf{p}} \rangle = 0 \text{ for } \epsilon(\mathbf{p}) > \mu .    (5.106)

States below the energy μ are singly occupied (due to the Pauli principle) and states
above μ are empty. μ(T = 0) = E_F is also called the Fermi energy.
In many cases we will have to do sums of the type

I = \sum_{\mathbf{p}} f(\epsilon(\mathbf{p})) = \frac{V}{h^3} \int d^3 p \, f(\epsilon(\mathbf{p})) ;    (5.107)

these three-dimensional integrals can be simplified by introducing the density
of states

\rho(\omega) = V \int \frac{d^3 p}{h^3} \, \delta( \omega - \epsilon(\mathbf{p}) ),    (5.108)

such that

I = \int d\omega \, \rho(\omega) f(\omega) .    (5.109)

We can determine ρ(ω) by simply performing a substitution of variables, ω = ε(p),
if ε(p) = ε(p) only depends on the magnitude |p| = p of the momentum:

I = \frac{V \, 4\pi}{h^3} \int p^2 dp \, f(\epsilon(p))
= \frac{V \, 4\pi}{h^3} \int d\omega \, \frac{dp}{d\omega} p^2(\omega) f(\omega),    (5.110)

such that

\rho(\omega) = V \frac{4\pi m}{h^3} \sqrt{2 m \omega} = V \rho_0 \sqrt{\omega},    (5.111)

with \rho_0 = \frac{2\pi}{h^3} (2m)^{3/2} = \frac{1}{4\pi^2} \left( \frac{2m}{\hbar^2} \right)^{3/2}. Often it is more useful to work with the density of states
per particle,

\rho_0(\omega) = \frac{\rho(\omega)}{N} = \frac{V}{N} \rho_0 \sqrt{\omega} .    (5.112)

We use this approach to determine the chemical potential as function of ⟨N⟩
for T = 0:

\langle N \rangle = \int \rho(\omega) \langle n(\omega) \rangle d\omega = \int_0^{E_F} \rho(\omega) d\omega
= V \rho_0 \int_0^{E_F} \omega^{1/2} d\omega = V \frac{2}{3} \rho_0 E_F^{3/2},    (5.113)

which gives

E_F = \frac{\hbar^2}{2m} \left( 6 \pi^2 \frac{N}{V} \right)^{2/3} .    (5.114)

If \frac{V}{N} = d^3 it holds that E_F \sim \frac{\hbar^2}{2m} d^{-2}. Furthermore it holds that

\rho_0(E_F) = \frac{V}{N} \frac{2m}{4\pi^2 \hbar^2} \left( 6 \pi^2 \frac{N}{V} \right)^{1/3} = \frac{3}{2} \frac{1}{E_F} .    (5.115)

Equally we can analyze the internal energy (taking the derivative at fixed βμ),

U = -\frac{\partial}{\partial \beta} \log Z_g
= -\frac{\partial}{\partial \beta} \sum_{\mathbf{p}} \log\left( 1 + e^{-\beta (\epsilon(\mathbf{p}) - \mu)} \right)
= \sum_{\mathbf{p}} \frac{\epsilon(\mathbf{p})}{e^{\beta (\epsilon(\mathbf{p}) - \mu)} + 1},    (5.116)
46 CHAPTER 5. IDEAL GASES
such that
l =
p
- (p) :
p
=
_
j (.) .:(. j) d. =
8
1
I
(5.117)
At nite temperatures, the evaluation of the integrals is a bit more subtle.
The details, which are only technical, will be discussed in a separate handout.
Here we will concentrate on qualitative results. At nite but small temperatures
the Fermi function only changes in a regime /
I
T around the Fermi energy. In
case of metals for example the Fermi energy with d 1 10
leads to 1
I
1...10oV i.e. 1
J
,/
I
10
d
...10
5
K which is huge compared to room temperature.
Thus, metals are essentially always in the quantum regime whereas low density
systems like doped semiconductors behave more classically.
If we want to estimate the change in internal energy at a small but nite
temperature one can argue that there will only be changes of electrons close to
the Fermi level. Their excitation energy is ~ /
I
T whereas the relative number
of excited states is only j
0
(1
J
) /
I
T. Due to j
0
(1
J
) ~
l
JF
it follows in metals
j
0
(1
J
) /
I
T 1. We therefore estimate
l
8
1
I
j
0
(1
J
) (/
I
T)
2
... (5.118)
at lowest temperature. This leads then to a specic heat at constant volume
c
\
=
C
\
=
1
0l
0T
~ 2/
2
I
j
0
(1
J
) T = T (5.119)
which is linear, with a coecient determined by the density of states at the
Fermi level. The correct result (see handout3 and homework 5) is
=
2
8
/
2
I
j
0
(1
J
) (5.120)
which is almost identical to the one we found here. Note, this result does not
depend on the specic form of the density of states and is much more general
than the free electron case with a square root density of states.
Similarly one can analyze the magnetic susceptibility of a metal. Here the
energy of the up ad down spins is dierent once a magnetic eld is applied, such
that a magnetization
' = j
1
(
|
)
= j
1
_
_
JF
0
j
0
(. j
1
1) j
0
(. j
1
1)
_
d. (5.121)
For small eld we can expand j
0
(. j
1
1) j
0
(.)
J
0
(.)
J.
j
1
1 which gives
' = 2j
2
1
1
_
JF
0
0j
0
(.)
0.
d.
= 2j
2
1
1j
0
(1
J
) (5.122)
5.2. IDEAL QUANTUM GASES 47
This gives for the susceptibility
=
0'
01
= 2j
2
1
j
0
(1
J
) . (5.123)
Thus, one can test the assumption to describe electrons in metals by considering
the ratio of and C
\
which are both proportional to the density of states at
the Fermi level.
5.2.6 The ideal Bose gas
Even without calculation is it obvious that ideal Bose gases behave very dier-
ently at low temperatures. In case of Fermions, the Pauli principle enforced the
occupation of all states up to the Fermi energy. Thus, even at T = 0 are states
with rather high energy involved. The ground state of a Bose gas is clearly
dierent. At T = 0 all bosons occupy the state with lowest energy, which is
in our case p = 0. An interesting question is then whether this macroscopic
occupation of one single state remains at small but nite temperatures. Here,
a macroscopic occupation of a single state implies
lim
)o
:
p
0. (5.124)
We start from the partition function
\
II
= /
I
T
p
log
_
1 c
o(:(p))
_
(5.125)
which gives for the particle number
=
0\
0j
=
p
1
c
o(:(p))
1
. (5.126)
Thus, we obtain the averaged occupation of a given state
:
p
=
1
c
o(:(p))
1
. (5.127)
Remember that Eq.5.126 is an implicit equation to determine j(). We
rewrite this as
=
_
d.j (.)
1
c
o(.)
1
. (5.128)
The integral diverges if j 0 since then for . j
_
d.
j (.)
, (. j)
(5.129)
if j (j) ,= 0. Since j (.) = 0 if . < 0 it follows
j _ 0. (5.130)
48 CHAPTER 5. IDEAL GASES
The case j = 0 need special consideration. At least for j (.) ~ .
l/2
, the above
integral is convergent and we should not exclude j = 0.
Lets proceed by using
j (.) = \
0
_
. (5.131)
with
0
=
dt
|
3
_
2:
3/2
. Then follows
\
=
0
_
o
0
d.
_
.
c
o(.)
1
<
0
_
o
0
d.
_
.
c
o.
1
=
0
(/
1
T)
3/2
_
o
0
dr
r
l/2
c
r
1
(5.132)
It holds
_
o
0
dr
r
l/2
c
r
1
=
_
2
_
8
2
_
2.82 (5.133)
We introduce
/
1
T
0
= a
0
~
2
:
_
\
_
2/3
(5.134)
with
a
0
=
2
_
3
2
_
2/3
8.81. (5.135)
The above inequality is then simply:
T
0
< T. (5.136)
Our approach clearly is inconsistent for temperatures below T
0
(Note, except for
prefactors, /
1
T
0
is a similar energy scale than the Fermi energy in ideal fermi
systems). Another way to write this is that
<
_
T
T
0
_
3/2
. (5.137)
Note, the right hand side of this equation does not depend on . It reects
that we could not obtain all particle states below T
0
.
The origin of this failure is just the macroscopic occupation of the state with
p = 0. It has zero energy but has been ignored in the density of states since
j (. = 0) = 0. By introducing the density of states we assumed that no single
state is relevant (continuum limit). This is obviously incorrect for p = 0. We
can easily repair this if we take the state p = 0 explicitly into account.
=
p,0
1
c
o(:(p))
1
1
c
o
1
(5.138)
5.2. IDEAL QUANTUM GASES 49
for all nite momenta we can again introduce the density of states and it follows
=
_
d.j (.)
1
c
o(.)
1
1
c
o
1
(5.139)
The contribution of the last term
0
=
1
c
o
1
(5.140)
is only relevant if
lim
)o
0. (5.141)
If j < 0,
0
is nite and lim
)o
0
)
= 0. Thus, below the temperature T =
T
0
the chemical potential must vanish in order to avoid the above inconsistency.
For T < T
0
follows therefore
=
_
T
T
0
_
3/2
0
(5.142)
which gives us the temperature dependence of the occupation of the p = 0 state:
If T < T
0
0
=
_
1
_
T
T
0
_
3/2
_
. (5.143)
and
0
= 0 for T T
0
. Then j < 0.
For the internal energy follows
l =
_
d.j (.) .
1
c
o(.)
1
(5.144)
which has no contribution from the "condensate" which has . = 0. The way
the existence of the condensate is visible in the energy is via j(T < T
0
) = 0
such that for T < T
0
l = \
0
_
d.
.
3/2
c
o.
1
= \
0
(/
1
T)
5/2
_
o
0
dr
r
3/2
c
r
1
(5.145)
It holds again
_
o
0
dr
r
32
t
o
l
=
3
d
_
(,2) 1.78. This gives
l = 0.77 /
1
T
_
T
T
0
_
3/2
(5.146)
leading to a specic heat (use l = cT
5/2
)
C =
0l
0T
=
2
cT
3/2
=
2
l
T
~ T
3/2
. (5.147)
50 CHAPTER 5. IDEAL GASES
This gives
o =
_
T
0
c (T
t
)
T
t
dT
t
=
2
c
_
T
0
T
tl/2
dT
t
=
8
c T
3/2
=
8
l
T
(5.148)
which leads to
\ = l To j =
2
8
l (5.149)
The pressure below T
0
is
j =
0\
0\
=
8
0l
0\
= 0.08
:
3/2
h
3
(/
1
T)
5/2
(5.150)
which is independent of \ . This determines the phase boundary
j
c
= j
c
(
c
) (5.151)
with specic volume =
\
)
at the transition:
j
c
= 1.0
~
2
:
5/3
. (5.152)
5.2.7 Photons in equilibrium
A peculiarity of photons is that they do not interact with each other, but only
with charged matter. Because of this, photons need some amount of matter to
equilibrate. This interaction is then via absorption and emission of photons, i.e.
via a mechanism which changes the number of photons in the system.
Another way to look at this is that in relativistic quantum mechanics, parti-
cles can be created by paying an energy which is at least :c
2
. Since photons are
massless (and are identical to their antiparticles) it is possible to create, without
any energy an arbitrary number of photons in the state with - (p) = 0. Thus,
it doesnt make any sense to x the number of photons. Since adding a photon
with energy zero to the equilibrium is possible, the chemical potential takes the
value j = 0. It is often argued that the number of particles is adjusted such
that the free energy is minimal:
JJ
J
= 0, which of course leads with j =
JJ
J
to
the same conclusion that j vanishes.
Photons are not the only systems which behave like this. Phonons, the exci-
tations which characterize lattice vibrations of atoms also adjust their particle
number to minimize the free energy. This is most easily seen by calculating the
canonical partition sum of a system of
ai
atoms vibrating in a harmonic po-
tential. As usual we nd for non-interacting oscillators that 7 (
ai
) = 7 (1)
at
with
7 (1) =
o
n=0
c
o~.0(n
1
2
)
= c
{~.
0
2
1
1 c
o~.0
(5.153)
Thus, the free energy
1 =
ai
~.
0
2
/
I
T
ai
log
_
1 c
o~.0
_
(5.154)
5.2. IDEAL QUANTUM GASES 51
is (ignoring the zero point energy) just the grand canonical potential of bosons
with energy ~.
0
and zero chemical potential. The density of states is
j (-) =
ai
c (- ~.
0
) . (5.155)
This theory can be more developed and one can consider coupled vibrations be-
tween dierent atoms. Since any system of coupled harmonic oscillators can be
mapped onto a system of uncoupled oscillators with modied frequency modes
we again obtain an ideal gas of bosons (phonons). The easiest way to determine
these modied frequency for low energies is to start from the wave equation for
sound
1
c
2
s
0
2
n
0t
2
= \
2
n (5.156)
which gives with the ansatz n(r,t) = n
0
oxp(i (.t q r)) leading to . (q) =
c
s
[q[, with sound velocity c
s
. Thus, we rather have to analyze
1 =
q
~. (q)
2
/
I
T
q
log
_
1 c
o~.(q)
_
(5.157)
which is indeed the canonical (or grand canonical since j = 0) partition sum
of ideal bosons. Thus, if we do a calculation for photons we can very easily
apply the same results to lattice vibrations at low temperatures. The energy
momentum dispersion relation for photons is
- (p) = c [p[ (5.158)
with velocity of light c. This gives
l =
p
- (p)
1
c
o:(p)
1
=
_
d-
j (-) -
c
o:
1
. (5.159)
The density of states follows as:
1 =
p
1 (- (p)) =
\
/
3
_
d
3
j1 (cj)
=
\
/
3
4
_
j
2
dj1 (cj) =
4\
c
3
/
3
_
d--
2
1 (-) . (5.160)
which gives for the density of states
j (-) = q
4\
c
3
/
3
-
2
. (5.161)
Here q = 2 determines the number of photons per energy. This gives the radia-
tion energy as function of frequency - =h.:
l =
q\ ~
c
3
2
2
_
d.
.
3
c
o~.
1
(5.162)
52 CHAPTER 5. IDEAL GASES
The energy per frequency interval is
dl
d.
=
q\ ~
c
3
2
2
.
3
c
oh.
1
. (5.163)
This gives the famous Planck formula which was actually derived using thermo-
dynamic arguments and trying to combine the low frequency behavior
dl
d.
=
q\ /
I
T
c
3
2
2
.
2
(5.164)
which is the Rayleigh-Jeans law and the high frequency behavior
dl
d.
=
q\ ~
c
3
2
2
.
3
c
o~.
(5.165)
which is Wiens formula.
In addition we nd from the internal energy that r = ,~.
l =
q\
~
3
c
3
2
2
(/
I
T)
d
_
dr
r
3
c
r
1
(5.166)
=
q\
2
~
3
c
3
80
(/
I
T)
d
(5.167)
where we used
_
dr
r
3
t
o
l
=
t
4
l5
. l can then be used to determine all other
thermodynamic properties of the system.
Finally we comment on the applicability of this theory to lattice vibrations.
As discussed above, one important quantitative distinction is of course the value
of the velocity. The sound velocity, c
s
, is about 10
6
times the value of the
speed of light c = 2.00 10
S
ms
l
. In addition, the specic symmetry of
a crystal matters and might cause dierent velocities for dierent directions of
the sound propagation. Considering only cubic crystals avoids this complication.
The option of transverse and longitudinal vibrational modes also changes the
degeneracy factor to q = 8 in case of lattice vibrations. More fundamental
than all these distinctions is however that sound propagation implies that the
vibrating atom are embedded in a medium. The interatomic distance, a, will
then lead to an lower limit for the wave length of sound (` 2c) and thus to
an upper limit ~ /,(2a) =h
t
o
. This implies that the density of states will be
cut o at high energies.
l
jLonons
= q
4\
c
3
s
/
3
_
|B0
T
0
d-
-
3
c
o:
1
. (5.168)
The cut o is expressed in terms of the Debye temperature 0
1
. The most
natural way to determine this scale is by requiring that the number of atoms
equals the total integral over the density of states
ai
=
_
|B0
T
0
j (-) d- = q
4\
c
3
/
3
_
|B0
T
0
-
2
d- = q
4\
8c
3
/
3
(/
I
0
1
)
3
(5.169)
5.2. IDEAL QUANTUM GASES 53
This implies that the typical wave length at the cut o, `
1
, determined by
/c`
l
1
= /
I
0
1
is
ai
\
= q
4
8
`
3
1
(5.170)
If one argues that the number of atoms per volume determines the interatomic
spacing as \ =
dt
3
a
3
ai
leads nally to `
1
= 8.7a as expected. Thus, at low
temperatures T 0
1
the existence of the upper cut o is irrelevant and
l
jLonons
=
2
\
c
3
s
~
3
q
80
(/
I
T)
d
(5.171)
leading to a low temperature specic heat C ~ T
3
, whereas for high tempera-
tures T 0
1
l
jLonons
=
2
\
8c
3
s
~
3
q
80
/
I
T (/
I
0
1
)
3
=
ai
/
I
T (5.172)
which is the expected behavior of a classical system. This makes us realize
that photons will never recover this classical limit since they do not have an
equivalent to the upper cut o 0
1
.
5.2.8 MIT-bag model for hadrons and the quark-gluon
plasma
Currently, the most fundamental building blocks of nature are believed to be
families of quarks and leptons. The various interactions between these particles
are mediated by so called intermediate bosons. In case of electromagnetism the
intermediate bosons are photons. The weak interaction is mediated by another
set of bosons, called \ and 7. In distinction to photons these bosons turn out to
be massive and interact among each other (remember photons only interact with
electrically charged matter, not with each other). Finally, the strong interaction,
which is responsible for the formation of protons, neutrons and other hadrons,
is mediated by a set of bosons which are called gluons. Gluons are also self
interacting. The similarity between these forces, all being mediated by bosons,
allowed to unify their description in terms of what is called the standard model.
A particular challenge in the theory of the strong interaction is the forma-
tion of bound states like protons etc. which can not be understood by using
perturbation theory. This is not too surprising. Other bound states like Cooper
pairs in the theory of superconductivity or the formation of the Hydrogen atom,
where proton and electron form a localized bound state, are not accessible using
perturbation theory either. There is however something special in the strong
interaction which goes under the name asymptotic freedom. The interaction
between quarks increases (!) with the distance between them. While at long
distance, perturbation theory fails, it should be possible to make some progress
on short distances. This important insight by Wilczek, Gross and Politzer led
to the 2004 Nobel price in physics. Until today hadronization (i.e. formation
54 CHAPTER 5. IDEAL GASES
of hadrons) is at best partly understood and qualitative insight was obtained
mostly using rather complex (and still approximate) numerical simulations.
In this context one should also keep in mind that the mass of the quarks
(:
u
MoV, :
J
10MoV) is much smaller than the mass of the proton
:
1GoV. Here we use units with c = 1 such that masses are measured in
energy units. Thus, the largest part of the proton mass stems from the kinetic
energy of the quarks in the proton.
A very successful phenomenological theory with considerable predictive power
are the MIT and SLAC bag models. The idea is that the connement of quarks
can be described by assuming that the vacuum is dia-electric with respect to
the color-electric eld. One assumes a spherical hadron with distance 1. The
hadron constantly feels an external pressure from the outside vacuum. This is
described by an energy
l
1
=
4
8
11
3
(5.173)
where the so called bag constant 1 is an unknown constant. Since l
1
11 =
1j with pressure j and bag area it holds that 1 is an external pressure act-
ing on the bag. To determine 1 requires to solve the full underlying quantum
chromodynamics of the problem. Within the bag, particles are weakly interact-
ing and for our purposes we assume that they are non-interacting, i.e. quarks
and gluons are free fermions and bosons respectively. Since these particles are
conned in a nite region their typical energy is
- (p) cj c
/
1
(5.174)
and the total energy is of order
l = c
/
1
4
8
11
3
(5.175)
where : is the number of ... in the bag. Minimizing this w.r.t. 1 yields
1
0
=
_
c/
4
_
l/d
1
l/d
(5.176)
using a the known size of a proton 1
0
1fm = 10
l3
cm gives 1 60MoV,fm
3
.
In units where energy, mass, frequency and momentum are measured in electron
volts and length in inverse electron volts (c =
|
2t
= 1) this yields 1 160MoV.
Note,
Using this simple picture we can now estimate the temperature needed to
melt the bag. If this happens the proton should seize to be a stable hadron and
a new state of matter, called the quark gluon plasma, is expected to form. This
should be the case when the thermal pressure of the gluons and quarks becomes
larger than the bag pressure 1
j
Q
j
c
= 1 (5.177)
5.2. IDEAL QUANTUM GASES 55
Gluons and quarks are for simplicity assumed to be massless. In case of gluons
it follows, just like for photons, that (/
1
= 1)
j
c
= q
c
2
00
T
d
(5.178)
where q
c
= 16 is the degeneracy factor of the gluon. The calculation for quarks
is more subtle since we need to worry about the chemical potential of these
fermions. In addition, we need to take into account that we can always thermally
excite antiparticles. Thus we discuss the ultrarelativistic Fermi gas in the next
paragraph in more detail.
5.2.9 Ultrarelativistic fermi gas
In the ultrarelativistic limit /
1
T can be as large as :c
2
and we need to take into
account that fermions can generate their antiparticles (e.g. positrons in addition
to electrons are excited). electrons and positrons (quarks and antiquarks) are
always created and annihilated in pairs.
The number of observable electrons is
t
=
:,0
1
c
o(:)
1
(5.179)
Since positrons are just 1not observable electrons at negative energy, it follows
=
:<0
_
1
1
c
o(:)
1
_
=
:,0
1
c
o(:)
1
(5.180)
The particle excess is then the one unaected by creation and annihilation of
pairs
=
(5.181)
We conclude that electrons and positrons (quarks and antiquarks) can be consid-
ered as two independent ideal fermi systems with positive energy but chemical
potential of opposite sign
j
t
= j
= j. (5.182)
It follows with - = cj
log 7
= q
p
_
log
_
1 c
o(:(p))
_
log
_
1 c
o(:(p))
__
= q
4\
/
3
c
3
_
.
2
d. log
_
1 c
o(.)
_
log
_
1 c
o(.)
_
(5.183)
Performing a partial integration gives
log 7
= q
4\
/
3
c
3
,
8
_
.
3
d.
_
1
c
o(.)
1
1
c
o(.)
1
_
(5.184)
56 CHAPTER 5. IDEAL GASES
substitute r = , (. j) and j = , (. j)
log 7
= q
4\
/
3
c
3
1
8
_
_
_
o
o
dr
_
r
o
j
_
3
c
or
1
_
o
o
dj
_
o
j
_
3
c
or
1
_
_
= q
4\
/
3
c
3
,
3
8
_
_
o
0
dr
(r ,j)
3
c
or
1
_
o
0
dj
(j ,j)
3
c
or
1
_
0
o
dr
(r ,j)
3
c
or
1
_
o
0
dj
(j ,j)
3
c
or
1
_
(5.185)
The rst two integrals can be directly combined, the last two after substitution
j = r
log 7
= q
4\
/
3
c
3
,
3
8
_
_
o
0
dr
2r
3
6r(,j)
2
c
or
1
_
o
0
d..
3
_
(5.186)
with . = r ,j. Using
_
dr
r
3
c
r
1
=
7
d
120
_
dr
r
c
r
1
=
2
12
(5.187)
follows nally
log 7
=
q\
/
3
c
3
4
8
(/T)
3
_
7
d
120
2
2
_
j
/T
_
2
1
4
_
j
/T
_
d
_
(5.188)
Similarly it follows
=
q4\
/
3
c
3
(/T)
3
_
2
8
j
/T
1
8
_
j
/T
_
d
_
(5.189)
It follows for the pressure immediately
j =
q
/
3
c
3
4
8
(/T)
d
_
7
d
120
2
2
_
j
/T
_
2
1
4
_
j
/T
_
d
_
(5.190)
Using these results we can now proceed and determine, for a given density
of nucleons (or quarks) the chemical potential at a given temperature. For
example, in order to obtain about ve times nuclear density
:
Q
= 2.
1
fm
3
(5.191)
at a temperature T 10MoV one has a value j 2.0T.
5.2. IDEAL QUANTUM GASES 57
Using the above value for the bag constant we are then in a position to
analyze our estimate for the transition temperature of the quark gluon plasma
j
Q
j
c
= 1 (5.192)
which leads to
1 = T
d
c
_
87
2
00
_
j
c
T
_
2
1
2
2
_
j
c
T
c
_
d
_
(5.193)
For example at j
c
= 0 it follows T
c
0.71
l/d
112MoV and for T
c
= 0 it
holds j
c
= 2.11
l/d
886MoV.
To relate density and chemical potential one only has to analyze
j = r
r
3
2
(5.194)
with r = j,T and j =
5d
T
3
with q
Q
= 12.
58 CHAPTER 5. IDEAL GASES
Chapter 6
Interacting systems and
phase transitions
6.1 The classical real gas
In our investigation of the classical ideal gas we ignored completely the fact that
the particles interact with each other. If we continue to use a classical theory,
but allow for particle-particle pair interactions with potential l (r
I
r
), we
obtain for the partition function
7 =
_
d
3
jd
3
r
/
3
!
c
o
P
1
p
2
1
2r
P
1
I(r1r)
with d
3
r =
I
d
3
r and similar for d
3
j. The integration over momentum can
be performed in full analogy to the ideal gas and we obtain:
7 =
Q
(\, T)
!`
J
(6.1)
where we have:
Q
(\, T) =
_
d
3
r oxp
_
_
,
I<
l (r
I
r
)
_
_
=
_
d
3
r
I<
c
oI1
(6.2)
If ,l
I
1 it still holds that c
oI1
1 and an expansion is nontrivial, however
one can consider instead
)
I
= c
oI1
1 (6.3)
59
60 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
which is also well dened in case of large l
I
. Then
I<
(1 )
I
) = 1
I<
)
I
I<|,l<n
)
I|
)
ln
... (6.4)
And it follows
Q
(\, T) = \
\
l
( 1)
2
_
d
3
r
_
c
oI(:)
1
_
(6.5)
where we took into account that there are
(l)
2
pairs i < ,. If we set
a (T) =
_
d
3
r
_
c
oI(:)
1
_
(6.6)
follows
7 =
\
!`
J
_
1
2
2\
a (T)
_
(6.7)
It follows for the equation of state that
j =
01
0\
= /
I
T
0 log 7
0\
=
/
I
T
\
/
I
T
o
2
_
\
_
2
1
o
2
_
\
_
2
/
I
T
\
_
1
a
2
\
_
(6.8)
An often used potential in this context is the Lennard-Jones potential
l (r) = l
0
_
_
r
0
r
_
l2
2
_
r
0
r
_
6
_
(6.9)
which has a minimum at r = r
0
and consists of a short range repulsive and a
longer range attractive part. For simplicity we approximate this by
l (r) =
_
r < r
0
l
0
_
:0
:
_
6
r _ r
0
(6.10)
called Sutherland potential. Then
a (T) = 4
_
:0
0
r
2
dr 4
_
o
:0
r
2
_
c
oI0(
r
0
r
)
6
1
_
dr (6.11)
expanding for small potential gives with 4,
_
o
:0
r
2
l
0
_
:0
:
_
6
dr =
dt
3
r
3
0
,l
0
such
that
a (T) =
4
8
r
3
0
(1 ,l
0
) . (6.12)
This gives with =
\
j =
/
I
T
\
_
1
2
8
r
3
0
(1 ,l
0
)
_
(6.13)
6.2. CLASSIFICATION OF PHASE TRANSITIONS 61
or
j
2
8
2
r
3
0
l
0
=
/
I
T
_
1
2r
3
0
8
_
= /
I
T
_
2r
3
0
8
_
l
(6.14)
which gives
_
j
a
2
_
( /) = /
I
T (6.15)
which is the van der Waals equation of state with
a =
2
8
r
3
0
l
0
/ =
2r
3
0
8
(6.16)
The analysis of this equation of state yields that for temperatures below
T
cv
=
a
27//
I
=
l
0
27/
I
(6.17)
there are three solutions \
I
of Eq.?? for a given pressure. One of these solutions
(the one with intermediate volume) can immediately be discarded since here
J
J\
0, i.e. the system has a negative compressibility (corresponding to a local
maximum of the free energy). The other two solutions can be interpreted as
coexistent high density (liquid) and low density (gas) uid.
6.2 Classication of Phase Transitions
Phase transitions are transitions between qualitatively distinct equilibrium states
of matter such as solid to liquid, ferromagnet to paramagnet, superconductor to
normal conductor etc. The rst classication of phase transitions goes back to
1932 when Paul Ehrenfest argued that a phase transition of :
iL
-order occurs
when there is a discontinuity in the :
iL
-derivative of a thermodynamic potential.
Thus, at a 1
si
-order phase transition the free energy is assumed to be continuous
but has a discontinuous change in slope at the phase transition temperature T
c
such that the entropy (which is a rst derivative) jumps from a larger value in
the high temperature state o (T
c
-) to a smaller value o (T
c
-) in the low
temperature state, where - is innitesimally small. Thus a latent heat
^Q = T
c
^o = T
c
(o (T
c
-) o (T
c
-)) (6.18)
is needed to go from the low to the high temperature state. Upon cooling,
the system jumps into a new state with very dierent internal energy (due to
1 = l To must l be discontinuous if o is discontinuous).
Following Ehrenfests classication, a second order phase transition should
have a discontinuity in the second derivative of the free energy. For example
the entropy should then be continuous but have a change in slope leading to
a jump in the value of the specic heat. This is indeed what one nds in
approximate, so called mean eld theories. However a more careful analysis of
62 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
the role of spatial uctuations (see below) yields that the specic heat rather
behaves according to a powerlaw with C ~ (T T
c
)
o
or (like in the two
dimensional Ising model) diverges logarithmically. In some cases c < 0 occurs
making it hard to identify the eects of these uctuations. In other cases,
like conventional superconductors, the quantitative eect of uctuations is so
small that the experiment is virtually indistinguishable from the mean eld
expectation and Ehrenfests picture is a very sensible one.
More generally one might classify phase transitions in a way that at a :
iL
-
order transition a singularity of some kind occurs in the :
iL
-derivative of the
free energy, whereas all lower derivatives are continuous. The existence of a
latent heat in rst order transitions and its absence in a second order transition
is then valid even if one takes uctuations into account.
Finally one needs to realize that strictly no phase transition exists in a nite
system. For a nite system the partition sum is always nite. A nite sum of
analytic functions ~ c
oJ1
is analytic itself and does not allow for similarities in
its derivatives. Thus, the above classication is valid only in the thermodynamic
limit of innite systems.
6.3 Gibbs phase rule and rst order transitions
We next discuss the issue of how many state variables are necessary to uniquely
determine the state of a system. To this end, we start from an isolated sys-
tem which contains 1 dierent particle species (chemical components) and 1
dierent phases (solid, liquid, gaseous,... ) that coexist. Each phase can be un-
derstood as a partial system of the total system and one can formulate the rst
law for each phase, where we denote quantities of the i
||
phase by superscript
i = 1, ..., 1. We have
dl
(I)
= T
(I)
do
(I)
j
(I)
d\
(I)
l=l
j
(I)
l
d
(
1
)
l
(6.19)
Other terms also may appear, if electric or magnetic eects play a role. In
this formulation of the rst law, l
(I)
of phase i is a function of the extensive
state variables o
(I)
, \
(I)
,
(I)
l
, i.e., it depends on 1 2 variables. Altogether
we therefore have 1(1 2) extensive state variables. If the total system is in
thermodynamic equilibrium, we have T
(I)
= T, j
(I)
= j and j
(I)
l
= j
l
. Each
condition contains 11 equations, so that we obtain a system of (1 1) (1 2)
equations. Since T
(I)
, j
(I)
, and j
(I)
l
are functions of o
(I)
, \
(I)
,
(I)
l
we can
eliminate one variable with each equation. Thus, we only require
(1 2) 1 (1 2) (1 1) = 1 2 (6.20)
extensive variables to determine the equilibrium state of the total system. As
we see, this number is independent of the number of phases. If we now consider
6.4. THE ISING MODEL 63
that exactly 1 extensive variables (e.g., \
(I)
with i = 1, ..., 1) determine the
size of the phases (i.e., the volumes occupied by each), one needs
1 = 1 2 1
intensive variables. This condition is named after J.W. Gibbs and is called
Gibbs phase rule. It is readily understood with the help of concrete examples.
Let us for instance think of a closed pot containing a vapor. With 1 = 1 we
need 8 = 12 extensive variables for a complete description of the system, e.g.,
o, \ , and . One of these (e.g., \ ), however, determines only the size of the
system. The intensive properties are completely described by 1 = 1 2 1 = 2
intensive variables, for instance by the pressure and the temperature. Then also
l,\ , o,\ , ,\ , etc. are xed and by additionally specifying \ one can also
obtain all extensive quantities.
If both vapor and liquid are in the pot and if they are in equilibrium, we can
only specify one intensive variable, 1 122 = 1, e.g., the temperature. The
vapor pressure assumes automatically its equilibrium value. All other intensive
properties of the phases are also determined. If one wants in addition to describe
the extensive properties, one has to specify for instance \
lIj
and \
uo
, i.e., one
extensive variable for each phase, which determines the size of the phase (of
course, one can also take
lIj
and
uo
, etc.). Finally, if there are vapor,
liquid, and ice in equilibrium in the pot, we have 1 = 128 = 0. This means
that all intensive properties are xed: pressure and temperature have denite
values. Only the size of the phases can be varied by specifying \
lIj
, \
sol
, and
\
uo
. This point is also called triple point of the system.
6.4 The Ising model
Interacting spins which are allowed to take only the two values o
I
= 1 are
often modeled in terms of the Ising model
H =
I,
J
I
o
I
o
Il
j1
I
o
I
(6.21)
where J
I
is the interaction between spins at sites i and ,. The microscopic
origin of the J
I
can be the dipol-dipol interaction between spins or exchange
interaction which has its origin in a proper quantum mechanical treatment of the
Coulomb interaction. The latter is dominant in many of the known 8d, 4) and
) magnets. Often the Ising model is used in a context unrelated to magnetism,
like the theory of binary alloys where o
I
= 1 corresponds to the two atoms of
the alloy and J
I
characterizes the energy dierence between two like and unlike
atoms on sites i and ,. This and many other applications of this model make
the Ising model one of the most widely used concepts and models in statistical
mechanics. The model has been solved in one and two spatial dimensions and for
situations where every spin interacts with every other spin with an interaction
J
I
,. No analytic solution exists for three dimensions even though computer
64 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
simulations for a model with nearest neighbor coupling J demonstrate that
the model is ordered, with lim
10
o
I
, = 0, below a temperature T
c
4.12J.
Similarly the solution for the square lattice in d = 2 yields an ordered state below
T
c
= 2J,aic colh
_
2 2.260J, while the ising model in one spatial dimension
has T
c
= 0, i.e. the ground state is ordered while no zero eld magnetization
exists at a nite temperature. The latter is caused by the fact that any domain
wall in a d-dimensional model (with short range interactions) costs an energy
1
J
= J
Jl
J
, where
J
is the number of spins in the domain. While this is a
large excitation energy for d 1 it is only of order J in d = 1 and domains with
opposite magnetization of arbitrary size can easily be excited at nite T. This
leads to a breakdown of long range order in d = 1.
6.4.1 Exact solution of the one dimensional model
A rst nontrivial model of interacting particles is the one dimensional Ising
model in an external eld, with Hamiltonian
H = J
I
o
I
o
Il
j1
I
o
I
= J
I
o
I
o
Il
j1
2
I
(o
I
o
Il
) (6.22)
The partition function is
7 =
]S1]
c
o1
=
S1=l
...
S
1
=l
c
o
P
1
[S1S1+1
,T
2
S1S1+1[
(6.23)
We use the method of transfer matrices and dene the operator T dened via
its matrix elements:
o
I
[T[ o
Il
= c
o
P
1
[S1S1+1
,T
2
(S1S1+1)[
(6.24)
The operator can be represented as 2 2 matrix
T =
_
c
o(
T
1)
c
o
c
o
c
o(
T
1)
_
(6.25)
It holds then that
7 =
S1=l
...
S
1
=l
o
l
[T[ o
2
o
2
[T[ o
3
... o
[T[ o
l
=
S1=l
o
l
o
l
_
= liT
. (6.26)
This can be expressed in terms of the eigenvalues of the matrix T
`
= c
coshr
_
c
2
c
2
sinh
2
r
l/2
(6.27)
6.4. THE ISING MODEL 65
with
r = ,j
I
1
j = ,J. (6.28)
It follows
7 = `
(6.29)
yielding
1 = /
I
T
_
log `
log
_
1
_
`
__
= /
I
T log `
(6.30)
where we used in the last step that `
< `
) (6.34)
are small and can be neglected. Using the identity
o
I
o
= (o
I
o
I
) (o
) o
I
o
o
I
o
o
I
o
(6.35)
we can ignore the rst term and write in the Hamiltonian of the Ising model,
assuming o
I
= o independent of i:
H =
I
(.J o j1) o
I
.
2
o
2
(6.36)
66 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
This model is, except for the constant
:
2
o
2
equal to the energy of non-
interacting spins in an eective magnetic eld
1
t}}
= 1
.J
j
o . (6.37)
Thus, we can use our earlier result for this model to nd the expectation value
o in terms of the eld
o = lanh
_
j1
t}}
/
I
T
_
(6.38)
Setting now the external eld 1 = 0, we obtain
o = lanh
_
.J
/
I
T
o
_
(6.39)
If
T T
c
=
.J
/
I
(6.40)
this nonlinear equation has only the solution o = 0. However, for T below T
c
another solution with nite o emerges continuously. This can be seen more
directly by expanding the above lanh for small argument, and it follows
o = lanh
_
T
c
T
o
_
T
c
T
o
1
8
_
T
c
T
_
2
o
3
(6.41)
which yields
o (T
c
T)
l/2
(6.42)
.i.e. the magnetization vanishes continuously. Right at T
c
a small external eld
1 causes a nite magnetization which is determines by
o =
j1 .J o
/
I
T
c
1
8
_
j1 .J o
/
I
T
c
_
3
(6.43)
which yields
o =
_
8
j1
/
I
T
c
_
l/3
. (6.44)
Finally we can analyze the magnetic susceptibility =
J1
J1
with ' = j o.
We rst determine
S
=
JS)
J1
above T
c
. It follows for small eld
o
j1 /
I
T
c
o
/
I
T
(6.45)
such that
S
=
j /
I
T
c
S
/
I
T
(6.46)
6.5. LANDAU THEORY OF PHASE TRANSITIONS 67
yielding
S
=
j
/
I
(T T
c
)
(6.47)
and we obtain
=
C
T T
c
(6.48)
with Curie constant C = j
2
,/
1
. This is the famous Curie-Weiss law which
demonstrates that the uniform susceptibility diverges at an antiferromagnetic
phase transition.
6.5 Landau theory of phase transitions
Landau proposed that one should introduce an order parameter to describe the
properties close to a phase transition. This order parameter should vanish in
the high temperature phase and be nite in the ordered low temperature phase.
The mathematical structure of the order parameter depends strongly on the
system under consideration. In case of an Ising model the order parameter is a
scalar, in case of the Heisenberg model it is a vector. For example, in case of a
superconductor or the normal uid - superuid transition of
d
Ho it is a complex
scalar, characterizing the wave function of the coherent low temperature state.
Another example are liquid crystals where the order parameter is a second rank
tensor.
In what follows we will rst develop a Landau theory for a scalar, Ising type
order. Landau argued that one can expand the free energy density in a Taylor
series with respect to the order parameter c. This should be true close to a
second order transition where c vanishes continuously:
) (c) = )
0
/c
a
2
c
2
/
8
c
3
c
4
c
d
... (6.49)
The physical order parameter is the determined as the one which minimizes )
0)
0c
=
0
= 0. (6.50)
If c < 0 this minimum will be at which is unphysical. If indeed c < 0 one
needs to take a term ~ c
6
into account and see what happens. In what follows
we will always assume c 0. In the absence of an external eld should hold
that ) (c) = ) (c), implying / = / = 0. Whether or not there is a minimum
for c ,= 0 depends now on the sign of a. If a 0 the only minimum of
) (c) = )
0
a
2
c
c
4
c
d
(6.51)
is at c = 0. However, for a < 0 there are two a new solutions c =
_
o
c
.
Since c is expected to vanish at T = T
c
we conclude that a (T) changes sign at
T
c
suggesting the simple ansatz
a (T) = a
0
(T T
c
) (6.52)
68 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
with a
0
0 being at most weakly temperature dependent. This leads to a
temperature dependence of the order parameter
_
o0(TcT)
c
c
0
=
_ _
o0(TcT)
c
T < T
c
0 T T
c
(6.53)
It will turn out that a powerlaw relation like
c ~ (T
c
T)
o
(6.54)
is valid in a much more general context. The main change is the value of ,.
The prediction of the Landau theory is , =
l
2
.
Next we want to study the eect of an external eld (= magnetic eld in
case c characterizes the magnetization of an Ising ferromagnet). This is done
by keeping the term /c in the expansion for ). The actual external eld will be
proportional to /. Then we nd that ) is minimized by
ac
0
cc
3
0
= / (6.55)
Right at the transition temperature where a = 0 this gives
c
3
0
~ /
l/o
(6.56)
where the Landau theory predicts c = 8. Finally we can analyze the change
of the order parameter with respect to an external eld. We introduce the
susceptibility
=
0c
0
0/
|0
(6.57)
and nd from Eq.6.55
a 8cc
2
0
(/ = 0) = 1 (6.58)
using the above result for c
2
0
(/ = 0) =
o
c
if T < T
c
and c
2
0
(/ = 0) = 0 above T
c
gives
=
_
l
do0
(T
c
T)
~
T < T
c
l
o0
(T T
c
)
~
T T
c
(6.59)
with exponent = 1.
Next we consider the specic heat where we insert our solution for c
0
into
the free energy density.
) =
a (T)
2
c
2
0
c
4
c
d
0
=
_
o
2
0
dc
(T T
c
)
2
T < T
c
0 T T
c
(6.60)
This yields for the specic heat per volume
c = T
0
2
)
0T
=
_
o
2
0
dc
T T < T
c
0 T T
c
. (6.61)
6.5. LANDAU THEORY OF PHASE TRANSITIONS 69
The specic heat is discontinuous. As we will see later, the general form of the
specic heat close to a second order phase transition is
c (T) ~ (T T
c
)
o
co::t (6.62)
where the result of the Landau theory is
c = 0. (6.63)
So far we have considered solely spatially constant solutions of the order
parameter. It is certainly possible to consider the more general case of spatially
varying order parameters, where the free energy
1 =
_
d
J
r) [c(r)[ (6.64)
is given as
) [c(r)[ =
a
2
c(r)
2
c
4
c(r)
d
/(r) c(r)
/
2
(\c(r))
2
(6.65)
where we assumed that it costs energy to induce an inhomogeneity of the order
parameter (/ 0). The minimum of 1 is now determined by the Euler-Lagrange
equation
0)
0c
\
0)
0\c
= 0 (6.66)
which leads to the nonlinear partial dierential equation
ac(r) cc(r)
3
= /(r) /\
2
c(r) (6.67)
Above the transition temperature we neglect again the non-linear term and
have to solve
ac(r) /\
2
c(r) = /(r) (6.68)
It is useful to consider the generalized susceptibility
cc(r) =
_
d
J
r
t
(r r
t
) c/(r
t
) (6.69)
which determines how much a local change in the order parameter is aected
by a local change of an external eld at a distance r r
t
. This is often written
as
(r r
t
) =
cc(r)
c/(r
t
)
. (6.70)
We determine (r r
t
) by Fourier transforming the above dierential equation
with
c(r) =
_
d
J
/c
I|:
c(/) (6.71)
which gives
ac(/) //
2
c(/) = /(/) (6.72)
70 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
In addition it holds for (/):
cc(/) = (/) c/(/) . (6.73)
This leads to
(/) =
/
l
2
/
2
(6.74)
where we introduced the length scale
=
_
/
a
=
_
/
a
0
(T T
c
)
l/2
(6.75)
This result can now be back-transformed yielding
(r r
t
) =
_
r r
t
_
d1
2
oxp
_
[r r
t
[
_
(6.76)
Thus, spins are not correlated anymore beyond the correlation length . In
general the behavior of close to T
c
can be written as
~ (T T
c
)
i
(6.77)
with i =
l
2
.
A similar analysis can be performed in the ordered state. Starting again at
ac(r) cc(r)
3
= /(r) /\
2
c(r) (6.78)
and assuming c(r) = c
0
c (r) where c
0
is the homogeneous, / = 0, solution,
it follows for small c (r):
_
a 8cc
2
0
_
c (r) = /(r) /\
2
c (r) (6.79)
and it holds a 8cc
2
0
= 2a 0. Thus in momentum space
(/) =
dc (/)
d/(/)
=
/
l
2
<
/
2
(6.80)
with
=
_
/
2a
=
_
/
2a
0
(T
c
T)
l/2
(6.81)
We can now estimate the role of uctuations beyond the linearized form used.
This can be done by estimating the size of the uctuations of the order parameter
compared to its mean value c
0
. First we note that
(r r
t
) = (c(r) c
0
) (c(r
t
) c
0
) (6.82)
6.5. LANDAU THEORY OF PHASE TRANSITIONS 71
Thus the uctuations of c(r) in the volume
J
is
cc
2
_
=
1
J
_
:<
d
J
r(r) (6.83)
(r) =
_
d
J
/
(2)
J
/
l
/
2
2
c
I|:
_
:<
d
J
r(r) =
_
d
J
/
(2)
J
/
l
/
2
2
_
:<
d
J
rc
I|:
=
_
d
J
/
(2)
J
/
l
/
2
2
J
o=l
sin/
o
/
o
(6.84)
where we used
1
2
_
drc
I|or
sin/
r
/
r
(6.85)
The last integral can be evaluated by substituting .
o
= /
o
leading to
_
:<
d
J
r(r) =
2
_
d
J
.
(2)
J
/
l
.
2
1
J
o=l
sin.
o
.
o
d
l
2
(6.86)
Thus, it follows
cc
2
_
/
l
2J
(6.87)
This must be compared with the mean value of the order parameter
c
2
0
=
a
c
/
c
2
(6.88)
and it holds
cc
2
_
c
2
0
c
/
2
dJ
(6.89)
Thus, for d 4 uctuations become small as , whereas they cannot be
neglected for d < 4. In d = 4, a more careful analysis shows that
o
2
2
0
log .
Role of an additional term c
6
:
) =
1
2
a,
2
c
4
,
d
n
6
,
6
(6.90)
The minimum is at
01
0,
= a, c,
3
n,
5
= 0 (6.91)
which gives either , = 0 or r c,
2
n,
d
= 0 with the two solutions
,
2
=
c
_
c
2
4an
2n
(6.92)
72 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
If c 0 we can exclude the two solutions ,
: , =
_
o
c
, i.e. the
behavior is not aected by n.
If c < 0 then the solutions ,
I
J
I
o
I
o
(6.96)
in an spatially varying magnetic eld. The partition function is 7 =
]S1]
c
o1|S1|
.
In order to map this problem onto a continuum theory we use the identity
_
I
dr
I
oxp
_
_
1
4
I
r
I
_
\
l
_
I
r
:
I
r
I
_
_
=
_
2
_
_
ool \ c
P
1
1
\1s1s
(6.97)
which can be shown to be correct by rotating the variables r
I
into a represen-
tation where \ is diagonal and using
_
droxp
_
r
2
4
:r
_
= 2
_
4c
\ s
2
(6.98)
6.5. LANDAU THEORY OF PHASE TRANSITIONS 73
This identity can now be used to transform the Ising model (use \
I
= ,J
I
)
according to
7 =
]S1]
_
I
dr
I
oxp
_
l
d
I
r
I
\
l
I
r
r
I
o
I
_
(4)
1
2
_
ool \
(6.99)
=
_
I
dr
I
oxp
_
l
d
I
r
I
\
l
I
r
]S1]
c
r1S1
(4)
1
2
_
ool \
(6.100)
The last term is just the partition function of free spins in an external eld ~ r
I
and it holds
]S1]
c
r1S1
~ oxp
_
I
log (coshr
I
)
_
(6.101)
Transforming c
I
=
l
_
2
\
l
I
r
gives
7 ~
_
1coxp
_
_
,
2
I
c
I
J
I
c
I
log
_
_
cosh
_
_
_
2
,J
I
c
_
_
_
_
_
_
(6.102)
where 1c =
I
dc
I
. Using
log (cosh.)
.
2
2
.
d
12
(6.103)
we obtain
7 ~
_
1coxp(,H
of
[c[) (6.104)
with
H
of
[c[ =
1
2
I
c
I
_
J
I
,
4
l
J
Il
J
l
_
c
1
12
I,,|,l,n
J
I
J
I|
J
Il
J
In
c
c
|
c
l
c
n
(6.105)
It is useful to go into a momentum representation
c
I
= c(R
I
) =
_
d
1
/
(2)
1
c
k
c
IkR
(6.106)
which gives
H
of
[c[ =
1
2
_
d
J
/c
|
_
J
|
,
4
J
2
|
_
c
|
1
4
_
d
J
/
l
d
J
/
2
d
J
/
3
n(/
l
, /
2
, /
3
) c
|1
c
|2
c
|3
c
|1|2|3
(6.107)
74 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
with
n(/
l
, /
2
, /
3
) =
,
d
8
J
|1
J
|2
J
|3
J
|1|2|3
(6.108)
Using J
I
= J for nearest neighbors and zero otherwise gives for a cubic lattice
J
|
= 2J
o=r,,...
cos (/
o
a) 2J
_
d a
2
k
2
_
= J
0
_
1
a
2
d
k
2
_
(6.109)
Here a is the lattice constant and we expanded J
|
for small momenta (wave
length large compared to a)
J
|
,
4
J
2
|
= J
0
J
0
a
2
d
k
2
,
4
J
2
0
2
,
4
J
2
0
a
2
d
k
2
= J
0
,
4
J
2
0
_
2
,
4
J
2
0
J
0
_
a
2
d
k
2
=
J
0
T
_
T
J
0
4/
1
_
J
0
_
,J
0
2
1
_
a
2
d
k
2
(6.110)
At the transition ,J
0
= 4 such that
J
|
,
4
J
2
|
a
0
(T T
c
) /k
2
(6.111)
with T
c
=
0
d|
T
, a
0
4/
1
, / J
0
o
2
J
. Using n
l
3
(,J
0
)
3
gives nally
H
of
[c[ =
1
2
_
d
J
/c
|
_
a
0
(T T
c
) /k
2
_
c
|
n
4
_
d
J
/
l
d
J
/
2
d
J
/ c
|1
c
|2
c
|3
c
|1|2|3
(6.112)
This is precisely the Landau form of an Ising model, which becomes obvious if
one returns to real space
H
of
[c[ =
1
2
_
d
J
r
_
a
0
(T T
c
) c
2
/ (\c)
2
n
2
c
d
_
. (6.113)
From these considerations we also observe that the partition function is given
as
7 =
_
1coxp(,H
of
[c[) (6.114)
and it is, in general, not the minimum of H
of
[c[ w.r.t. c which is physically
realized, instead one has to integrate over all values of c to obtain the free
energy. Within Landau theory we approximate the integral by the dominating
contribution of the integral, i.e. we write
_
1coxp(,H
of
[c[) oxp(,H
of
[c
0
[) (6.115)
where
o1
o
=
0
= 0.
6.5. LANDAU THEORY OF PHASE TRANSITIONS 75
Ginzburg criterion
One can now estimate the range of applicability of the Landau theory. This is
best done by considering the next order corrections and analyze when they are
small. If this is the case, one can be condent that the theory is controlled.
Before we go into this we need to be able to perform some simple calculations
with these multidimensional integrals.
First we consider for simplicity a case where H
of
[c[ has only quadratic
contributions. It holds
7 =
_
1coxp
_
1
2
_
d
J
/c
|
_
a /k
2
_
c
|
_
=
_
|
dc
|
oxp
_
^/
2
c
|
_
a /k
2
_
c
|
_
=
|
_
(2)
J
a /k
2
_
l/2
~ oxp
_
1
2
_
d
J
/ log (/)
_
(6.116)
with
(/) =
1
a /k
2
. (6.117)
It follows for the free energy
1 =
/
1
T
2
_
d
J
/ log (/) (6.118)
One can also add to the Hamiltonian an external eld
H
of
[c[ H
of
[c[
_
d
J
//(/) c(/) (6.119)
Then it is easy to determine the correlation function
(/) =
c
|
c
|
_
c
|
c
|
_
(6.120)
via
c log 7
c/
|
c/
|
|0
=
c
c/
|
1
7
_
1cc
|
c
o1
eff
||
=
1
7
_
1cc
|
c
|
c
o1
eff
||
__
1cc
|
c
o1
eff
||
_
2
7
2
= (/) (6.121)
This can again be done explicitly for the case with n = 0:
7 [/[ =
_
1coxp
_
1
2
_
d
J
/c
|
_
a /k
2
_
c
|
_
d
J
//(/) c
|
_
= 7 [0[ oxp
_
1
2
_
d
J
//
|
(/) /
|
_
(6.122)
76 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
Performing the second derivative of log 7 gives indeed
c
|
c
|
_
=
l
obk
2
. Thus,
we obtain as expected
(/) =
cc
|
c/
|
. (6.123)
Let us analyze the specic heat related to the free energy
1 =
/
1
T
2
_
d
J
/ log (/) (6.124)
It holds for the singular part of the specic heat
c ~
0
2
1
0a
2
~
_
d
J
/(/)
2
~
_
/
Jl
d/
_
2
/
2
_
2
~
dJ
(6.125)
Thus, as follows that there is no singular (divergent) contribution to the
specic heat if d 4 just as we found in the Landau theory. However, for d < 4
the specic heat diverges and we obtain a behavior dierent from what Landau
theory predicted.
Another way to see this is to study the role of inhomogeneous uctuations
as caused by the
H
In|
=
d
2
_
d
J
r (\c)
2
(6.126)
Consider a typical variation on the scale \c ~
_
o
u
l
and integrate those
over a volume of size
J
gives
H
In|
~ /
J2
a
n
~
/
2
n
Jd
(6.127)
Those uctuations should be small compared to temperature in order to keep
mean eld theory valid. If their energy is large compared to /
1
T they will be
rare and mean eld theory is valid. Thus we obtain again that mean eld theory
breaks down for d < 4. This is called the Ginzburg criterion. Explicitly this
criterion is
_
n
/
2
/
1
T
_ 1
4d
. (6.128)
Note, if / is large for some reason, uctuation physics will enter only very
close to the transition. This is indeed the case for many so called conventional
superconductors.
6.6 Scaling laws
A crucial observation of our earlier results of second order phase transitions was
the divergence of the correlation length
(T T
c
) . (6.129)
6.6. SCALING LAWS 77
This divergency implies that at the critical point no characteristic length scale
exists, which is in fact an important reason for the emergence of the various
power laws. Using / as a dimensionless number proportional to an external
eld and
t =
T T
c
T
c
(6.130)
as dimensionless measure of the distance to the critical point the various critical
exponents are:
(t, / = 0) ~ t
i
c(t, / = 0) ~ [t[
o
c(t = 0, /) ~ /
l/o
(t, / = 0) ~ t
~
c (t, / = 0) ~ t
o
(r , t = 0) ~ r
2Jq
. (6.131)
where 1 is the spatial dimensionality. The values of the critical exponents for
a number of systems are given in the following table
exponent mean eld d = 2, Ising d = 8, Ising
c 0 0 0.12
,
l
2
l
S
0.81
1
7
d
1.2
i
l
2
1 0.64
c 8 1 .0
j 0
l
d
0.04
It turn out that a few very general assumptions about the scaling behavior
of the correlation function () and the free energy are sucient to derive
very general relations between these various exponents. Those relations are
called scaling laws. We will argue that the fact that there is no typical length
scale characterizing the behavior close to a second order phase transition leads
to a powerlaw behavior of the singular contributions to the free energy and
correlation function. For example, consider the result obtained within Landau
theory
(, t) =
1
t
2
(6.132)
where we eliminated irrelevant prefactors. Rescaling all length r of the system
according to r r,/, where / is an arbitrary dimensionless number, leads to
/ //. Obviously, the mean eld correlation function obeys
(, t) = /
2
_
/, t/
2
_
. (6.133)
Thus, upon rescaling ( / //), the system is characterized by a correlation
function which is the same up to a prefactor and a readjustment of the distance
78 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
from the critical point. In what follows we will generalize this expression and
assume that even beyond the mean eld theory of Landau a similar relationship
holds
(, t) = /
2q
(/, t/
) . (6.134)
The mean eld theory is obviously recovered if j = 2 and j = 0. Since / is
arbitrary, we can for example chose t/
= 1 implying / = t
and we obtain
directly from our above ansatz
(, t) = t
2_
_
t
, 1
_
. (6.135)
By denition, the correlation length is the length scale which characterizes the
momentum variation of (, t) i.e. (, t) ~ ) (), which leads to ~ t
and
we obtain
i = j
l
. (6.136)
The exponent j of our above ansatz for (, t) is therefore directly related to
the correlation length exponent. This makes it obvious why it was necessary to
generalize the mean eld behavior. j = 2 yields the mean eld value of i. Next
we consider t = 0 and chose / = 1 such that
(, t = 0) =
1
2q
(1, 0) (6.137)
which gives
(r, t = 0) =
_
d
J
(2)
J
(, t = 0) c
I|r
~
_
dc
I|r
Jl
2q
(6.138)
substituting . = /r gives
(r, t = 0) ~ r
2Jq
. (6.139)
Thus, the exponent j of Eq.6.134 is indeed the same exponent as the one given
above. This exponent is often called anomalous dimension and characterizes
the change in the powerlaw decay of correlations at the critical point (and more
generally for length scales smaller than ). Thus we can write
(, t) = /
2q
_
/, t/
1
:
_
. (6.140)
Similar to the correlation function can we also make an assumption for the
free energy
1 (t, /) = /
1
1 (t/
, //
!
) . (6.141)
The prefactor /
J
is a simple consequence of the fact that an extensive quantity
changes upon rescaling of length with a corresponding volume factor. Using
j = i
l
we can again use t/
= 1 and obtain
1 (t, /) = t
1i
1
_
1, /t
i
!
_
. (6.142)
6.6. SCALING LAWS 79
This enables us to analyze the specic heat at / = 0 as
c ~
0
2
1 (t, 0)
0t
2
~ t
Ji2
(6.143)
which leads to
c = 2 di. (6.144)
This is a highly nontrivial relationship between the spatial dimensionality, the
correlation length exponent and the specic heat exponent. It is our rst scaling
law. Interestingly, it is fullled in mean eld (with c = 0 and i =
l
2
) only for
d = 4.
The temperature variation of the order parameter is given as
c(t) ~
01 (t, /)
0/
|0
~ t
i(J
!
)
(6.145)
which gives
, = i (d j
|
) = 2 c ij
|
(6.146)
This relationship makes a relation between j
|
and the critical exponents just
like j was related to the exponent i. Within mean eld
j
|
= 8 (6.147)
Alternatively we can chose //
!
= 1 and obtain
1 (t, /) = /
d
!
1
_
t/
1
:
!
, 0
_
(6.148)
This gives for the order parameter at the critical point
c(t = 0, /) ~
01 (t = 0, /)
0/
~ /
d
!
l
(6.149)
and gives
l
o
=
J
!
1. One can simplify this to
c =
j
|
d j
|
=
2 c ,
,
(6.150)
and yields
, (1 c) = 2 c (6.151)
Note, the mean eld theory obeys c =
!
!
J
only for d = 4. whereas c =
2oo
o
is obeyed by the mean eld exponents for all dimensions. This is valid quite
generally, scaling laws where the dimension, d, occurs explicitly are fullled
within mean eld only for d = 4 whereas scaling laws where the dimensionality
does not occur are valid more generally.
The last result allows us to rewrite our original ansatz for the free energy
1 (t, /) = /
(2o)i
1
1
_
t/
1
:
, //
{o
:
_
. (6.152)
80 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
such that t/
1
:
= 1 leads to
1 (t, /) = t
2o
1
_
1, /t
oo
_
(6.153)
We next analyze how the susceptibility diverges at the critical point. It holds
~
0
2
1 (t, /)
0/
2
|0
~ t
2o2oo
(6.154)
which leads to
= c 2 2,c (6.155)
which is yet another scaling relation.
The last scaling law follows from the fact that the correlation function (, t)
taken at = 0 equals the susceptibility just analyzed. This gives
(t) = /
2q
(t/
) (6.156)
and choosing again t/
= 1 yields
(t) = t
i(2q)
(6.157)
such that
= i (2 j) . (6.158)
To summarize, we have identied all the exponents in the assumed scaling
relations of 1 (t, /) and (, t) with critical exponents (see Eqn.6.140 and6.152).
In addition we have four relationships the six exponents have to fulll at the
same time which are collected here:
c 2, = 2 (6.159)
c = 2 di.
, (1 c) = 2 c
2,c = 2 c
= i (2 j) (6.160)
One can easily check that the exponents of the two and three dimensional Ising
model given above indeed fulll all these scaling laws. If one wants to calculate
these exponents, it turns out that one only needs to determine two of them, all
others follow from scaling laws.
6.7 Renormalization group
6.7.1 Perturbation theory
A rst attempt to make progress in a theory beyond the mean eld limit would
be to consider the order parameter
c(r) = c
0
c (r) (6.161)
6.7. RENORMALIZATION GROUP 81
and assume that c
0
is the result of the Landau theory and then consider c (r)
as a small perturbation. We start from
H [c[ =
1
2
_
d
J
rc(r)
_
r \
2
_
c(r)
n
4
_
d
J
rc(r)
d
. (6.162)
where we have introduced
r =
a
/
,
n =
c
T/
2
, (6.163)
and a new eld variable c
nov
=
_
Tdc, where the sux new is skipped in what
follows. We also call H
of
simply H.
We expand up to second order in c (r):
H [c[ = \
_
r
2
c
2
0
n
4
c
d
0
_
1
2
_
d
J
rc (r)
_
r \
2
8nc
2
0
_
c (r) (6.164)
The uctuation term is therefore of just the type discussed in the context
of the n = 0 Gaussian theory, only with a changes value of r r 8nc
2
0
.
Thus, we can use our earlier result for the free energy of the Gaussian model
1
Causs
=
l
2
_
d
J
/ log (/) and obtain in the present case
1
\
=
r
2
c
2
0
n
4
c
d
0
1
2\
_
d
J
/ log
_
r /
2
8nc
2
0
_
(6.165)
We can now use this expression to expand this free energy again up to fourth
order in c
0
using:
log
_
r /
2
8nc
2
0
_
log
_
r /
2
_
8nc
2
0
r /
2
1
2
0n
2
c
d
0
(r /
2
)
2
(6.166)
it follows
1
\
=
1
0
\
r
t
2
c
2
0
n
t
4
c
d
0
(6.167)
with
r
t
= r 8n
_
d
J
/
1
r /
2
,
n
t
= n 0n
2
_
d
J
/
1
(r /
2
)
2
. (6.168)
These are the uctuation corrected values of the Landau expansion. If these
corrections were nite, Landau theory is consistent, if not, one has to use a
qualitatively new approach to describe the uctuations. At the critical point
r = 0 and we nd that
_
d
J
/
1
/
2
_
A
/
Jl
/
2
d/
_
A
J2
d 2
d _ 2
82 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
where A is some upper limit of the momentum integration which takes into
account that the system under consideration is always embedded in a lattice of
atoms with interatomic spacing a, such that A ~ a
l
. It holds that r
t
is nite
for 1 2 which we assume to be the case in what follows. The nite correction
to r
t
only shifts the transition temperature by a nite amount. Since the value
of T
c
was not so much our concern, this is a result which is acceptable. However,
the correction, n
t
, to n diverges for d _ 4:
_
d
J
/
1
/
d
_
A
/
Jl
/
d
d/
_
A
Jd
d 4
d _ 4
and the nonlinearity (interactions) of the theory become increasingly stronger.
This is valid for arbitrarily small values of n itself, i.e. a perturbation theory in
n will fail for d _ 4. The dimension below which such strong uctuations set in
is called upper critical dimension.
The strategy of the above scaling laws (i.e. the attempt to see what hap-
pens as one rescales the characteristic length scales) will be the heart of the
renormalization group theory which we employ to solve the dilemma below four
dimension.
6.7.2 Fast and slow variables
The divergency which caused the break down of Landau theory was caused by
long wave length, i.e. the / 0 behavior of the integral which renormalized
n n
t
. One suspicion could be that only long wave length are important for an
understanding of this problem. However, this is not consistent with the scaling
concept, where the rescaling parameter was always assumed to be arbitrary. In
fact it uctuations on all length scales are crucial close to a critical point. This
is on the one hand a complication, on the other hand one can take advantage
of this beautiful property. Consider for example the scaling properties of the
correlation function
(, t) = /
2q
_
/, t/
1
:
_
. (6.169)
Repeatedly we chose t/
1
:
= 1 such that / = t
i
as one approaches the
critical point. However, if this scaling property (and the corresponding scaling
relation for the free energy) are correct for generic / (of course only if the system
is close to T
c
) one might analyze a rescaling for / very close to 1 and infer the
exponents form this more "innocent" regime. If we obtain a scaling property of
(, t) it simply doesnt matter how we determined the various exponents like
i, j etc.
This, there are two key ingredients of the renormalization group. The rst is
the assumption that scaling is a sensible approach, the second is a decimation
procedure which makes the scaling transformation r r,/ explicit for / 1.
A convenient way to do this is by considering / = c
l
for small |. Lets consider
a eld variable
c(k) =
_
d
J
roxp(ik x) c(x) (6.170)
6.7. RENORMALIZATION GROUP 83
Since there is an underlying smallest length-scale a (interatomic spacing), no
waves with wave number larger than a given upper cut o A a
l
should
occur. For our current analysis the precise value of A will be irrelevant, what
matters is that such a cut o exists. Thus, be observe that c(/) = 0 if / A.
We need to develop a scheme which allows us to explicitly rescale length or
momentum variables. How to do this goes back to the work of Leo Kadano
and Kenneth G. Wilson in the early 70
iL
of the last century. The idea is to
divide the typical length variations of c(/) into short and long wave length
components
c(/) =
_
c
<
(/) 0 < / _ A,/
c
,
(/) A,/ < / _ A
. (6.171)
If one now eliminates the degrees of freedoms c
,
one obtains a theory for c
<
only
oxp
_
H
_
c
<
_
=
_
1c
,
oxp
_
H
_
c
<
, c
,
_
. (6.172)
The momenta in
H
_
c
<
c
<
_
/
t
/
_
(6.174)
where the prefactor /
=
H
_
/
c
t
. (6.175)
In practice this is then a theory of the type where the initial Hamiltonian
H [c[ =
1
2
_
d
J
/
_
r /
2
_
[c(/)[
n
4
_
d
J
/
l
d
J
/
2
d
J
/
3
c(/
l
) c(/
2
) c(/
3
) c(/
l
/
2
/
3
)(6.176)
leads to a renormalized Hamiltonian
H
t
_
c
t
=
1
2
_
d
J
/
t
_
r (|) /
t2
_
c
t
(/
t
)
n(|)
4
_
d
J
/
t
l
d
J
/
t
2
d
J
/
t
3
c
t
(/
t
l
) c
t
(/
t
2
) c
t
(/
t
3
) c
t
(/
t
l
/
t
2
/
t
3
) . (6.177)
84 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
If this is the case one may as well talk about a mapping
H (r, n) H
t
(r (|) , n(|)) (6.178)
and one only needs to analyze where this mapping takes one.
If one now analyzes the so called ow equation of the parameters r (|), n(|)
etc. there are a number of distinct cases. The most distinct case is the one
where a xed point is approaches where r (| ) = r
+
, n(| ) = n
+
etc.
If this is the case the low energy behavior of the system is identical for all initial
values which reach the xed point. Before we go into this we need to make sure
that the current procedure makes any sense and reproduces the idea of scaling.
6.7.3 Scaling behavior of the correlation function:
We start from H [c[ characterized by a cut o A. The new Hamiltonian with
cut o A,/, which results from the shell integration, is then determined by
c
e
1[
[
=
_
1c
,
c
1[
[
, (6.179)
which is supplemented by the rescaling
c
<
(/) = /
c
t
(//)
which yields the new Hamiltonian H
t
_
c
t
t
(//
l
) c (//
l
//
2
)
= /
2J
t
(//
l
) c (/
l
/
2
) (6.181)
where
t
(//) = (//, r (|) , n(|)) is the correlation function evaluated for H
t
i.e.
with parameters r (|) and n(|) instead of the "bare" ones r and n, respectively.
It follows
(/, r, n) = /
2J
(/, r (|) , n(|)) (6.182)
This is close to an actual derivation of the above scaling assumption and suggests
to identify
2j d = 2 j. (6.183)
What is missing is to demonstrate that r (|) and n(|) give rise to a behavior
tc
l
= t/
n
4
_
d
J
/
l
d
J
/
2
d
J
/
3
c(/
l
) c(/
2
) c(/
3
) c(/
l
/
2
/
3
) (6.184)
Concentrating rst on the quadratic term it follows
H
0
_
c
,
, c
<
=
1
2
_
A/b<|<A
d
J
/
_
r /
2
_
c
,
(/)
1
2
_
|<A/b
d
J
/
_
r /
2
_
c
<
(/)
2
(6.185)
There is no coupling between the c
,
and c
<
and therefore (ignoring constants)
H
0
_
c
<
=
1
2
_
|<A/b
d
J
/
_
r /
2
_
c
<
(/)
(6.186)
Now we can perform the rescaling c
<
(/) = /
c
t
(//) and obtain with /
t
= //
H
t
0
=
/
2J
2
_
|
0
<A
d
J
/
t
_
r /
2
/
2
_
c
t
(/
t
)
=
/
2J2
2
_
|
0
<A
d
J
/
t
_
/
2
r /
2
_
c
t
(/
t
)
(6.187)
This suggests j =
J2
2
and gives r (|) = c
2l
r.
Next we consider the quartic term
H
Ini
=
n
4
_
d
J
/
l
d
J
/
2
d
J
/
3
c(/
l
) c(/
2
) c(/
3
) c(/
l
/
2
/
3
) (6.188)
which does couple c
,
and c
<
. If all three momenta are inside the inner shell,
we can easily perform the rescaling and nd
H
t
Ini
=
n/
d3J
4
_
d
1
/
t
l
d
1
/
t
2
d
1
/
t
3
c(/
t
l
) c(/
t
2
) c(/
t
3
) c(/
t
l
/
t
2
/
t
3
) (6.189)
which gives with the above result for j:
4j 81 = 4 d (6.190)
86 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
yielding
n(|) = nc
:l
. (6.191)
The leading term for small n gives therefore the expected behavior that n(| )
0 if d 4 and that n grows if d < 4. If d grows we cannot trust the leading
behavior anymore and need to go to the next order perturbation theory. Tech-
nically this is done using techniques based on Feynman diagrams. The leading
order terms can however be obtained quite easily in other ways and we dont
need to spend our time on introducing technical tools. It turns out that the
next order corrections are identical to the direct perturbation theory,
r
t
= c
2l
r 8n
_
A/b<|<A
d
J
/
1
r /
2
n
t
= c
:l
n 0n
2
_
A/b<|<A
d
J
/
1
(r /
2
)
2
. (6.192)
with the important dierence that the momentum integration is restricted to
the shell with radius between A,/ and A. This avoids all the complications of
our earlier direct perturbation theory where a divergency in n
t
resulted from
the lower limit of the integration (long wave lengths). Integrals of the type
1 =
_
A/b<|<A
d
J
/) (/) (6.193)
can be easily performed for small |:
1 = 1
J
_
A
At
/
Jl
) (/) d/ 1
J
A
Jl
) (A)
_
A Ac
l
_
1
J
A
J
) (A) | (6.194)
It follows therefore
r
t
= (1 2|) r
81
J
A
J
r A
2
n|
n
t
= (1 -|) n
01
J
A
J
(r A
2
)
2
n
2
| . (6.195)
which is due to the small | limit conveniently written as a dierential equation
dr
d|
= 2r
81
J
A
J
r A
2
n
dn
d|
= -n
01
J
A
J
(r A
2
)
2
n
2
. (6.196)
Before we proceed we introduce more convenient variables
r
r
A
2
n 1
J
A
Jd
n (6.197)
6.7. RENORMALIZATION GROUP 87
which are dimensionless and obtain the dierential equations
dr
d|
= 2r
8n
1 r
dn
d|
= -n
0n
2
(1 r)
2
. (6.198)
The system has indeed a xed point (where
J:
Jl
=
Ju
Jl
= 0) determined by
- =
0n
+
(1 r
+
)
2
2r
+
=
8n
+
1 r
+
(6.199)
This simplies at leading order in - to
n
+
=
-
0
or 0
r
+
=
8
2
n
+
(6.200)
If the system reaches this xed point it will be governed by the behavior it its
immediate vicinity, allowing us to linearize the ow equation in the vicinity of
the xed point, i.e. for small
cr = r r
+
cn = n n
+
(6.201)
Consider rst the xed point with n
+
= r
+
= 0 gives
d
d|
_
cr
cn
_
=
_
2 8
0 -
__
cr
cn
_
(6.202)
with eigenvalues `
l
= 2 and `
2
= -. Both eigenvalues are positive for - 0
(1 < 4) such that there is no scenario under which this xed point is ever
governing the low energy physics of the problem.
Next we consider n
+
=
:
9
and r
+
=
:
6
. It follows
d
d|
_
cr
cn
_
=
_
2
:
3
8
:
2
0 -
__
cr
cn
_
(6.203)
with eigenvalues
j = 2
-
2
j
t
= - (6.204)
the corresponding eigenvectors are
c = (1, 0)
c
t
=
_
8
2
-
8
, 1
_
(6.205)
88 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
Thus, a variation along the cdirection (which is varying r) causes the system
to leave the xed point (positive eigenvalue), whereas it will approach the xed
point if
(r, n) ~ c
t
(6.206)
this gives
r = n
_
8
2
-
8
_
(6.207)
which denes the critical surface in parameter space. If a system is on this
surface it approaches the xed point. If it is slightly away, the quantity
t = r n
_
8
2
-
8
_
(6.208)
is non-zero and behaves as
t (|) = tc
l
= t/
. (6.209)
The ow behavior for large | is only determined by the value of t which is the
only scaling variable, which vanishes at the critical point. Returning now to the
initial scaling behavior of the correlation function we can write explicitly
(/, t) = /
2
(/, t/
) (6.210)
comparing this with (, t) = /
2q
_
/, t/
1
:
_
gives immediately the two critical
exponents
j = O
_
-
2
_
i
1
2
-
8
. (6.211)
Extrapolating this to the - = 1 case gives numerical results for the critical
exponents which are much closer to the exact ones (obtained via numerical
simulations)
exponent - expansion d = 8, Ising
c 0.12 0.12
, 0.812 0.81
1.2 1.2
i 0.62 0.64
c .0
j 0 0.04
A systematic improvement of these results occurs if one includes higher order
terms of the -expansion. Thus, the renormalization group approach is a very
powerful tool to analyze the highly singular perturbation expansion of the c
d
-
theory below its upper critical dimension. How is it possible that one can obtain
so much information by essentially performing a low order expansion in n for
6.7. RENORMALIZATION GROUP 89
a small set of high energy degrees of freedom? The answer is in the power of
the scaling concept. We have assumed that the form (, t) = /
2q
_
/, t/
1
:
_
which we obtained for very small deviations of / from unity is valid for all /. If
for example the value of i and j would change with | there would be no way
that we could determine the critical exponents from such a procedure. If scaling
does not apply, no critical exponent can be deduced from the renormalization
group.
6.7.5 Irrelevant interactions
Finally we should ask why we restricted ourself to the quartic interaction only.
For example, one might have included a term of the type
H
(6)
=
6
_
d
J
rc(r)
6
(6.212)
which gives in momentum space
H
(6)
=
6
_
d
J
/
l
...d
J
/
5
c(/
l
) c(/
2
) c(/
3
) (/
d
) c(/
5
)
c(/
l
/
2
/
3
/
d
/
5
) (6.213)
The leading term of the renormalization group is the one where all three mo-
menta are inside the inner shell, and we can perform the rescaling immediately:
H
t
Ini
=
/
65J
_
d
J
/
t
l
...d
J
/
t
5
c(/
t
l
) c(/
t
2
) c(/
t
3
) c(/
t
d
) c(/
t
5
)
c(/
t
l
/
t
2
/
t
3
/
t
d
/
t
5
) (6.214)
and the dependence is with j =
2J
2
and 6j d = 2 (8 d) = 2 (1 -)
(|) = c
2(l:)l
(6.215)
Thus, in the strict sense of the - expansion such a term will never play a role.
Only if n - initially is it important to keep these eects into account. This
happens in the vicinity of a so called tricritical point.
90 CHAPTER 6. INTERACTING SYSTEMS AND PHASE TRANSITIONS
Chapter 7
Density matrix and
uctuation dissipation
theorem
One can make a number of fairly general statements about quantum statistical
systems using the concept of the density matrix. In equilibrium we found that
the expectation value of a physical observable is given by
O
oq
= li
_
j
oq
O
_
(7.1)
with density operator
j
oq
=
1
7
c
o1
. (7.2)
The generalization to the grand canonical ensemble is straight forward. The den-
sity operator (or often called density matrix) is now given as j
oq
=
l
2
c
o(1)
,
where is the particle number operator.
Considering now a system in a quantum state [c
I
with energy 1
I
, the
expectation value of a physical observable is in that state is
O
I
= c
I
[O[ c
I
. (7.3)
If the system is characterized by a distribution function where a state [c
I
occurs
with probability j
I
it follows that the actual expectation value of O is
O =
I
j
I
O
I
. (7.4)
This can be written in a formal sense as
O = li (jO) (7.5)
91
92CHAPTER 7. DENSITYMATRIXANDFLUCTUATIONDISSIPATIONTHEOREM
with the density operator
j =
I
[c
I
j
I
c
I
[ . (7.6)
Inserting this expression into Eq.7.5 gives the above result of one uses that the
[c
I
are orthonormal.
A state is called pure if j
I
= 0 for all i ,= 0 and j
0
= 1. Then
j
juvo
= [c
0
c
0
[ (7.7)
which implies j
2
juvo
= j
juvo
. States which are not pure are called mixed. Indeed,
it holds in general
li (j) =
I
j
I
= 1 (7.8)
which gives
li
_
j
2
_
=
I
j
2
I
_ 1. (7.9)
The equal sign only holds for pure states making li
_
j
2
_
a general criterion for
a state to be mixed.
Next we determine the equation of motion of the density matrix under the
assumption that the probabilities are xed:
J1
J|
= 0. We use
i~
0
0t
[c
I
= H[c
I
(7.10)
and
i~
0
0t
c
I
[ = c
I
[ H (7.11)
which gives
i~
0
0t
j =
I
H[c
I
j
I
c
I
[ [c
I
j
I
c
I
[ H = Hj jH
= [H, j[ (7.12)
which is called von Neuman equation.
The von Neuman equation should not be confused with the Heisenberg equa-
tion of operators in Heisenberg picture. The latter is given by
i~
d
dt
|
(t) = i~
0
0t
|
(t) [
|
(t) , H[ (7.13)
The rst term is the explicit variation of the operator
|
which might be there
even in Schrdinger picture. The second term results from the unitary trans-
formation
|
(t) = c
I1|/h
|
c
I1|/h
.
Once we know j, we can analyze for example the entropy
o = /
I
lij log j. (7.14)
7.1. DENSITY MATRIX OF SUBSYSTEMS 93
Using the von Neuman equation gives for the time variation of the entropy
0o
0t
= /
I
li
_
0j
0t
(log j 1)
_
= i
/
I
~
li [[H, j[ (log j 1)[ = 0. (7.15)
Obviously this is a consequence of the initial assumption
J1
J|
= 0. We conclude
that the von Neuman equation will not allow us to draw conclusions about the
change of entropy as function of time.
7.1 Density matrix of subsystems
Conceptually very important information can be obtained if one considers the
behavior of the density matrix of a subsystem of a bigger system. The bigger
system is then assumed to be in a pure quantum state. We denote the variables
of our subsystem with r and the variables of the bigger system which dont
belong to our subsystem with 1 . The wave function of the pure quantum
mechanical state of the entire system is
w(1, r, t) (7.16)
and we expand its r-dependence in terms of a complete set of functions acting
on the subsystem. Without loss of generality we can say
w(1, r, t) =
o
1
o
(1, t) ,
o
(r) . (7.17)
Let O(r) be some observable of the subsystem, i.e. the operator O does not act
on the coordinates 1 . If follows
O = w[O[ w =
o,o
0
1
o
(t) [1
o
0 (t) ,
o
[O[,
o
0 (7.18)
This suggests to introduce the density operator
j
o
0
o
(t) = 1
o
(t) [1
o
0 (t) (7.19)
such that
O = lijO, (7.20)
where the trace is only with respect to the quantum numbers c of the subsystem.
Thus, if one assumes that one can characterize the expectation value exclusively
within the quantum numbers and coordinates of a subsystem, one is forced to
introduce mixed quantum states and a density matrix.
Lets analyze the equation of motion of the density matrix.
i~
0
0t
j
o
0
o
(t) = i~
_
d1
_
01
+
o
(1, t)
0t
1
o
0 (1, t) 1
+
o
(1, t)
01
o
0 (1, t)
0t
_
(7.21)
94CHAPTER 7. DENSITYMATRIXANDFLUCTUATIONDISSIPATIONTHEOREM
where
_
d1... is the matrix element with respect to the variables 1 . Since
w(1, r, t) obeys the Schrdinger equation it follows
i~
o
01
o
(1, t)
0t
,
o
(r) = H (1, r)
o
1
o
(1, t) ,
o
(r) (7.22)
Multiplying this by ,
+
o
0 (r) and integrating over r gives
i~
01
o
0 (1, t)
0t
=
o
H
o
0
o
(1 ) 1
o
(1, t) (7.23)
i~
01
+
o
(1, t)
0t
=
o
1
+
o
(1, t) H
oo
(1 ) (7.24)
where
H
oo
(1 ) =
,
o
[H (1, r)[ ,
o
_
. (7.25)
It follows
i~
0
0t
j
o
0
o
(t) =
_
d1
o
_
1
+
o
(1, t) H
o
0
o
(1 ) 1
o
(1, t) 1
+
o
(1, t) H
oo
(1 ) 1
o
0 (1, t)
_
(7.26)
Lets assume that
H (1, r) = H
0
(r) \ (1, r) (7.27)
where H
0
does not depend on the environment coordinates. It follows
i~
0
0t
j
o
0
o
(t) =
o
_
H
0,o
0
o
j
oo
j
o
0
o
H
0,oo
_
i
o,o
0 (t) j
o
0
o
(t) (7.28)
with
o,o
0 = i
_
d1
o
_
1
+
o
(1, t) \
o
0
o
(1 ) 1
o
(1, t) 1
+
o
(1, t) \
oo
(1 ) 1
o
0 (1, t)
_
_
d1 1
+
o
0 (1, t) 1
o
(1, t)
(7.29)
which obeys
o
0
o
=
+
o,o
0 .
i~
0
0t
j (t) = [H, j[ iIj. (7.30)
Thus, only if the subsystem is completely decoupled from the environment do
we recover the von Neuman equation. In case there is a coupling between
subsystem and environment the equation of motion of the subsystem is more
complex, implying
J1
J|
,= 0.
7.2. LINEAR RESPONSE ANDFLUCTUATIONDISSIPATIONTHEOREM95
7.2 Linear response and uctuation dissipation
theorem
Lets consider a system coupled to an external eld
\
|
=
_
d.
2
\ (.) c
I(.Io)|
(7.31)
such that \
|o
0. Next we consider the time evolution of a physical
quantity :
|
= li (j
|
) (7.32)
where
i~
0
0t
j
|
= [H \
|
, j
|
[ (7.33)
We assume the system is in equilibrium at t :
j
|o
= j =
1
7
c
o1
. (7.34)
Lets go to the interaction representation
j
|
= c
I1|/~
j
|
(t) c
I1|/~
(7.35)
gives
i~
0j
|
0t
= [H, j
|
[ c
I1|/h
i~
0j
|
(t)
0t
c
I1|/h
(7.36)
which gives
i~
0j
|
(t)
0t
= [\
|
(t) , j
|
(t)[ (7.37)
which is solved by
j
|
(t) = j i~
_
|
o
dt
t
[\
|
0 (t
t
) , j
|
0 (t
t
)[
j
|
= j i~
_
|
o
dt
t
c
I1(||
0
)/h
[\
|
0 , j
|
0 [ c
I1(||
0
)/h
. (7.38)
Up to leading order this gives
j
|
= j i~
_
|
o
dt
t
c
I1(||
0
)/h
[\
|
0 , j[ c
I1(||
0
)/h
. (7.39)
We can now determine the expectation value of :
|
= i~
_
|
o
dt
t
li ([\
|
0 (t
t
) , j[ (t)) (7.40)
one can cyclically change order under the trace operation
li (\j j\) = li (\ \) j (7.41)
96CHAPTER 7. DENSITYMATRIXANDFLUCTUATIONDISSIPATIONTHEOREM
which gives
|
= i~
_
|
o
dt
t
[(t) , \
|
0 (t
t
)[ (7.42)
It is useful to introduce (the retarded Greens function)
(t) ; \
|
0 (t
t
) = i~0 (t t
t
) [(t) , \
|
0 (t
t
)[ (7.43)
such that
|
=
_
o
o
dt
t
(t) ; \
|
0 (t
t
) . (7.44)
The interesting result is that we can characterize the deviation from equilibrium
(dissipation) in terms of uctuations of the equilibrium (equilibrium correlation
function).
Example, conductivity:
Here we have an interaction between the electrical eld and the electrical
polarization:
\
|
= p E
|
(7.45)
with
E
|
= E
0
oxp(i (. ic) t) (7.46)
If we are interested in the electrical current it follows
,
o
|
=
_
o
o
dt
t
,
o
(t) ; j
o
(t
t
)
__
1
o
c
I(.Io)|
0
(7.47)
which gives in Fourier space
,
o
.
=
,
o
; j
o
__
.
1
0
(7.48)
which gives for the conductivity
o (.) =
,
o
; j
o
__
.
. (7.49)
Obviously, a conductivity is related to dissipation whereas the correlation func-
tion
_
,
o
(t) , j
o
(t
t
)
_
is just an equilibrium uctuation.
Chapter 8
Brownian motion and
stochastic dynamics
We start our considerations by considering the diusion of a particle with density
j (x,t). Diusion should take place if there is a nite gradient of the density
\j. To account for the proper bookkeeping of particles, one starts from the
continuity equation
0j
0t
= \ j (8.1)
with a current given by j 1\j, which is called Ficks law. The prefactor 1 is
the diusion constant and we obtain
J
J|
= 1\
2
j.This is the diusion equation,
which is conveniently solved by going into Fourier representation
j (x, t) =
_
d
J
/
(2)
J
j (k, t) c
Ikx
, (8.2)
yielding an ordinary dierential equationwith respect to time:
0j (k, t)
0t
= 1/
2
j (k, t) , (8.3)
with solution
j (k, t) = j
0
(k) c
1|
2
|
. (8.4)
where j
0
(k) = j (k, t = 0). Assuming that j (x, t = 0) = c (x), i.e. a particle at
the origin, it holds j
0
(k) = 1 and we obtain
j (x, t) =
_
d
J
/
(2)
J
c
1|
2
|
c
Ikx
. (8.5)
The Fourier transformation is readily done and it follows
j (r, t) =
1
(41t)
J/2
c
x
2
/(d1|)
. (8.6)
In particular, it follows that
r
2
(t)
_
= 21t grows only linearly in time, as
opposed to the ballistic motion of a particle where r(t) = t.
97
98 CHAPTER 8. BROWNIAN MOTION AND STOCHASTIC DYNAMICS
8.1 Langevin equation
A more detailed approach to diusion and Brownian motion is given by using
the concept of a stochastic dynamics. We rst consider one particle embedded
in a uid undergoing Brownian motion. Later we will average over all particles
of this type. The uid is modeled to cause friction and to randomly push the
particle. This leads to the equation of motion for the velocity =
Jr
J|
:
d (t)
dt
=
:
(t)
1
:
(t) . (8.7)
Here is a friction coecient proportional to the viscosity of the host uid.
If we consider large Brownian particles, the friction term can be expressed in
terms of the shear viscosity, j, of the uid and the radius, 1, of the particle:
= 6j1. (t) is a random force, simulating the scattering of the particle with
the uid and is characterized by the correlation functions
(t)
= 0
(t) (t
t
)
= qc (t t
t
) . (8.8)
The prefactor q is the strength of this noise. As the noise is uncorrelated in
time, it is also referred to as white noise (all spectral components are equally
present in the Fourier transform of (t) (t
t
)
1
:
_
|
0
d:c
~(|s)/n
(:) (8.9)
and
r(t) = r
0
:
0
_
1 c
~|/n
_
_
|
0
d:
_
1 c
~(|s)/n
_
(:) . (8.10)
We can now directly perform the averages of this result. Due to (t)
= 0
follows
(t)
=
0
c
~|/n
r(t)
r
0
=
:
0
_
1 c
~|/n
_
(8.11)
which implies that a particle comes to rest at a time t
n
~
, which can be
long if the viscosity of the uid is small ( is small). More interesting are the
correlations between the velocity and positions at distant times. Inserting the
results for (t) and r(t) and using (t) (t
t
)
= qc (t t
t
) gives
(t) (t
t
)
=
_
2
0
q
2:
_
c
~(||
0
)/n
q
2:
c
~(||
0
)/n
(8.12)
8.2. RANDOM ELECTRICAL CIRCUITS 99
and similarly
_
(r(t) r
0
)
2
_
=
:
2
2
_
2
0
q
2:
_
_
1 c
~|/n
_
2
2
_
t
:
_
1 c
~|/n
_
_
. (8.13)
If the Brownian particle is in equilibrium with the uid, we can average over all
the directions and magnitudes of the velocity
2
0
2
0
_
T
=
|
T
T
n
. Where ...
T
refers to the thermal average over all particles (so far we only considered one of
them embedded in the uid). Furthermore, in equilibrium the dynamics should
be stationary, i.e.
_
(t) (t
t
)
_
T
= ) (t t
t
) (8.14)
should only depend on the relative time t t
t
and not on some absolute time
point, like t, t
t
or t t
t
. This is fullled if
2
0
_
T
=
q
2:
(8.15)
which enables us to express the noise strength at equilibrium in terms of the
temperature and the friction coecient
q = 2/
1
T (8.16)
which is one of the simplest realization of the uctuation dissipation theorem.
Here the friction is a dissipative eect whereas the noise a uctuation eect.
Both are closely related in equilibrium.
This allows us to analyze the mean square displacement in equilibrium
_
(r(t) r
0
)
2
_
T
=
2/
1
T
_
t
:
_
1 c
~|/n
_
_
. (8.17)
In the limit of long times t t =
n
~
holds
_
(r(t) r
0
)
2
_
T
2/
1
T
t (8.18)
which the result obtained earlier from the diusion equation if we identify
1 =
|
T
T
~
. Thus, even though the mean velocity of the particle vanishes for
t t it does not mean that it comes to rest, the particle still increases its
mean displacement, only much slower than via a ballistic motion. This demon-
strates how important it is to consider, in addition to mean values like (t)
= 0
(t) (t
t
)
= qc (t t
t
) . (8.21)
In addition we use that the energy of the circuit
1 (Q, 1) =
1
2C
Q
2
1
2
1
2
(8.22)
implies via equipartition theorem that
Q
2
0
_
T
= C/
1
T
1
2
0
_
T
=
/
1
T
1
. (8.23)
The above equation is solved for the current as:
1 (t) = 1
0
c
I|
C (t)
1
C1^
Q
0
c
I|
sinh(^t)
1
1
_
|
0
d: (:) c
I(||
0
)
C (t t
t
) (8.24)
with damping rate
I =
1
1
(8.25)
and time constant
^ =
_
1
2
1,C
1
2
, (8.26)
as well as
C (t) = cosh(^t)
I
^
sinh(^t) . (8.27)
Assuming Q
0
1
0
T
= 0 gives (assume t t
t
)
_
1 (t) 1 (t
t
)
_
T
= c
I(||
0
)
C (t) C (t
t
)
1
2
0
_
T
_
1
C1^
_
2
Q
2
0
_
T
c
I(||
0
)
sinh(^t) sinh(^t
t
)
q
1
2
_
|
0
0
d:c
I(||
0
2s)
C (t :) C (t
t
:) (8.28)
8.2. RANDOM ELECTRICAL CIRCUITS 101
The last integral can be performed analytically, yielding
c
I(||
0
)
4^
2
I
_
I
2
^
2
_
cosh(^(t
t
t))
c
I(||
0
)
4^
_
sinh(^(t
t
t))
^
I
cosh(^(t
t
t))
_
c
I(||
0
)
4^
2
(Icosh(^(t
t
t)) ^sinh(^(t
t
t))) (8.29)
It is possible to obtain a fully stationary current-current correlation function if
q = 41/
1
T (8.30)
which is again an example of the uctuation dissipation theorem.
It follows (t
t
t)
_
1 (t) 1 (t
t
)
_
T
=
/
1
T
1
c
I(|
0
|)
_
cosh(^(t
t
t))
I
^
sinh(^(t
t
t))
_
(8.31)
If C 0, it holds ^ I and I
l
is the only time scale of the problem. Current-
current correlations decay exponentially. If 0 < ^ < I, the correlation function
changes sign for t
t
t ^
l
. Finally, if 1
2
< 1,C, current correlations decay
in an oscillatory way (use cosh(ir) = cos r and sinh(ir) = i sin(r)). Then
^ = ic with c =
_
J/c1
2
J
2
and
_
1 (t) 1 (t
t
)
_
T
=
/
1
T
1
c
I(|
0
|)
_
cos (c (t
t
t))
I
c
sin(c (t
t
t))
_
. (8.32)
For the uctuating charge follows
Q(t) = Q
0
_
|
0
1 (:) d:. (8.33)
102 CHAPTER 8. BROWNIAN MOTION AND STOCHASTIC DYNAMICS
Chapter 9
Boltzmann transport
equation
9.1 Transport coecients
In the phenomenological theory of transport coecients one considers a relation-
ship between generalized currents, J
I
, and forces, A
I
which, close to equilibrium
and for small forces is assumed to be linear:
J
I
=
1
I
A
(9.1)
Here, the precise denition of the forces is such that in each case an entropy
production of the kind
do
dt
=
I
A
I
J
I
(9.2)
occurs. If this is the case, the coecients 1
I
are symmetric:
1
I
= 1
I
, (9.3)
a result originally obtained by Onsager. The origin of this symmetry is the time
reversal invariance of the microscopic processes causing each of these transport
coecients. This implies that in the presence of an external magnetic eld holds
1
I
(H) = 1
I
(H) . (9.4)
For example in case of an electric current it holds that from Maxwells equa-
tions follows
do
I
dt
= J
1
T
(9.5)
which gives A
I
=
J
T
for the force (note, not the electrical eld 1 itself). Consid-
ering a heat ux J
Q
gives entropy ux J
Q
,T. Then there should be a continuity
103
104 CHAPTER 9. BOLTZMANN TRANSPORT EQUATION
equation
do
Q
dt
= \(J
Q
,T) = J
Q
\
1
T
(9.6)
which gives A
Q
=
l
T
2
\T for the generalized forces and it holds
J
J
= 1
JJ
A
J
1
JQ
A
Q
=
1
JJ
T
1
1
JQ
T
2
\T (9.7)
J
Q
= 1
QJ
A
J
1
QQ
A
Q
=
1
QJ
T
1
1
QQ
T
2
\T. (9.8)
We can now consider a number of physical scenario. For example the electri-
cal current in the absence of a temperature gradient is the determined by the
conductivity, o, via
J
J
= o1 (9.9)
which gives
o =
1
JJ
T
. (9.10)
On the other hand, the thermal conductivity is dened as the relation between
heat current and temperature gradient in the absence of an electrical current
J
Q
= i\T. (9.11)
This implies 1 =
J
1
J
11
T
\T for the electrical eld yielding
i =
1
T
2
_
1
QQ
1
2
QJ
1
JJ
_
. (9.12)
Finally we can consider the Thermopower, which is the above established rela-
tion between 1 and \T for J
J
= 0
1 = o\T (9.13)
with
o =
1
JQ
1
JJ
T
. (9.14)
One approach to determine these coecients from microscopic principles is
based on the Boltzmann equation.
9.2 Boltzmann equation for weakly interacting
fermions
For quasiclassical description of electrons we introduce the Boltzmann distri-
bution function )
n
(k, r, t). This is the probability to nd an electron in state
9.2. BOLTZMANNEQUATIONFOR WEAKLYINTERACTINGFERMIONS105
:, k at point r at time t. More precisely is ),\ the probability density to nd
an electron in state :, k in point r. This means the probability to nd it in a
volume element d\ is given by )d\,\ .
We consider both k and r dened. This means that we consider wave pack-
ets with both k and r (approximately) dened, however always such that the
uncertainty relation ^/^r ~ 1 holds.
The electron density and the current density are given by
:(r, t) =
1
\
n,k,c
)
n
(k, r, t) (9.15)
j(r, t) =
c
\
n,k,c
v
k
)
n
(k, r, t) (9.16)
The equations of motion of non-interacting electrons in a periodic solid and
weak external elds are
dr
dt
= v
k
=
1
~
_
0-
n
(k)
0k
_
, (9.17)
and
~
dk
dt
= cE
c
c
(v B) . (9.18)
They determine the evolution of the individual k(t) and r(t) of each wave packet.
If the electron motion would be fully determined by the equations of motion,
the distribution function would satisfy
)
n
(k(t), r(t), t) = )
n
(k(0), r(0), 0) (9.19)
Thus, the full time derivative would vanish
d)
dt
=
0)
0t
0k
0t
r
k
)
0r
0t
r
r
) = 0 (9.20)
However, there are processes which change the distribution function. These
are collisions with impurities, phonons, other electrons. The new equation reads
d)
dt
=
0)
0t
0k
0t
r
k
)
0r
0t
r
r
) =
_
0)
0t
_
CoII
, (9.21)
where
_
J}
J|
_
CoII
= 1[)[ is called the collision integral.
Using the equations of motion we obtain the celebrated Boltzmann equation
0)
0t
c
~
_
E
1
c
(v B)
_
r
k
) v
k,n
r
r
) = 1[)[ . (9.22)
The individual contributions to
J}
J|
can be considered as a consequence of
spatial inhomogeneity eects, such as temperature or chemical potential gradi-
ents (carriers of a given state enter from adjacent regions enter into r whilst
others leave):
v
k
r
r
) v
k
0)
0T
\T (9.23)
106 CHAPTER 9. BOLTZMANN TRANSPORT EQUATION
In addition, there are eects due to external elds (changes of the k-vector at
the rate)
c
~
_
E
1
c
(v B)
_
r
k
) (9.24)
Finally there are scattering processes, characterized by 1[)[ which are deter-
mined by the dierence between the rate at which the state k is entered and
the rate at which carriers are lost from it.
9.2.1 Collision integral for scattering on impurities
The collision integral describes processes that bring about change of the state
of the electrons, i.e., transitions. There are several reasons for the transitions:
phonons, electron-electron collisions, impurities. Here we consider only one:
scattering o impurities.
Scattering in general causes transitions in which an electron which was in the
state :
l
, k
l
is transferred to the state :
2
, k
2
. We will suppress the band index
as in most cases we consider scattering within a band. The collision integral has
two contribution: "in" and "out": 1 = 1
In
1
oui
.
The "in" part describes transitions from all the states to the state k:
1
In
[)[ =
k1
\(k
l
, k))(k
l
, r)[1 )(k, r)[ , (9.25)
where \(k
l
, k) is the transition probability per unit of time (rate) from state
k
l
to state k given the state k
l
is initially occupied and the state k is initially
empty. The factors )(k
l
) and 1 )(k) take care for the Pauli principle.
The "out" part describes transitions from the state k to all other states:
1
oui
[)[ =
k1
\(k, k
l
))(k, r)[1 )(k
l
, r)[ , (9.26)
The collision integral should vanish for the equilibrium state in which
)(k) = )
0
(k) =
1
oxp
_
:(k)
|BT
_
1
. (9.27)
This can be rewritten as
oxp
_
-(k) j
/
I
T
_
)
0
= 1 )
0
. (9.28)
The requirement 1
In
[)
0
[ 1
oui
[)
0
[ is satised if
\(k, k
l
) oxp
_
-(k
l
)
/
I
T
_
= \(k
l
, k) oxp
_
-(k)
/
I
T
_
. (9.29)
9.2. BOLTZMANNEQUATIONFOR WEAKLYINTERACTINGFERMIONS107
We only show here that this is sucient but not necessary. The principle that
it is always so is called "detailed balance principle". In particular, for elastic
processes, in which -(k) = -(k
l
), we have
\(k, k
l
) = \(k
l
, k) . (9.30)
In this case (when only elastic processes are present we obtain)
1[)[ =
k1
\(k
l
, k))(k
l
)[1 )(k)[
k1
\(k, k
l
))(k)[1 )(k
l
)[
=
k1
\(k
l
, k) ()(k
l
) )(k)) . (9.31)
9.2.2 Relaxation time approximation
We introduce ) = )
0
c). Since 1[)
0
[ = 0 we obtain
1[)[ =
k1
\(k
l
, k) (c)(k
l
) c)(k)) .
Assume the rates \ are all equal and
k1
c)(k
l
) = 0 (no change in total
density), then 1[)[ ~ c)(k). We introduce the relaxation time t such that
1[)[ =
c)
t
. (9.32)
This form of the collision integral is more general. That is it can hold not only
for the case assumed above. Even if this form does not hold exactly, it serves
as a simple tool to make estimates.
More generally, one can assume t is k-dependent, t
k
. Then
1[)(k)[ =
c)(k)
t
k
. (9.33)
9.2.3 Conductivity
Within the t-approximation we determine the electrical conductivity. Assume
an oscillating electric eld is applied, where E(t) = Ec
I.|
. The Boltzmann
equation reads
0)
0t
c
~
E r
k
) v
k
r
r
) =
) )
0
t
k
. (9.34)
Since the eld is homogeneous we expect homogeneous response c)(t) = c)c
I.|
.
This gives
c
~
E r
k
) =
_
i.
1
t
k
_
c) . (9.35)
108 CHAPTER 9. BOLTZMANN TRANSPORT EQUATION
If we are only interested in the linear response with respect to the electric eld,
we can replace ) by )
0
in the l.h.s. This gives
c
~
0)
0
0-
k
~v
k
E =
_
i.
1
t
k
_
c) . (9.36)
and we obtain
c) =
ct
k
1 i.t
k
0)
0
0-
k
v
k
E (9.37)
For the current density we obtain j(t) = jc
I.|
, where
j =
c
\
k,c
v
k
c)(k)
=
2c
2
\
|
t
k
1 i.t
k
0)
0
0-
k
(v
k
E) v
k
= 2c
2
_
d
3
/
(2)
3
t
k
1 i.t
k
0)
0
0-
k
(v
k
E) v
k
. (9.38)
We dene the conductivity tensor o
oo
via ,
o
=
o
o
o,o
1
o
. Thus
o
o,o
= 2c
2
_
d
3
/
(2)
3
t
k
1 i.t
k
0)
0
0-
k
ko
ko
.
At low enough temperatures, i.e., for /
I
T j,
0)
0
0-
k
- c(-
k
j)
2
6
(/
I
T)
2
c
tt
(-
k
j) , (9.39)
Assuming t is constant and the band energy is isotropic (eective mass is
simple) we obtain
o
o,o
=
c
2
t
1 i.t
_
d-j(-)
d\
4
0)
0
0-
o
o
=
c
2
tj
J
1 i.t
_
d\
4
o
o
=
2c
2
tj
J
(1 i.t)
2
J
8
c
o,o
. (9.40)
For the dc-conductivity, i.e., for . = 0 we obtain
o
o,o
=
c
2
tj
J
2
J
8
c
o,o
(9.41)
where j
J
is the total density of states at the Fermi level.
9.2.4 Determining the transition rates
Impurities are described by an extra potential acting on electrons
l
Inj
(r) =
(r c
) , (9.42)
9.2. BOLTZMANNEQUATIONFOR WEAKLYINTERACTINGFERMIONS109
where c
_
d\ (r c
)n
+
k1
(r)n
k
(r)c
I(kk1)r
(9.44)
We assume all impurities are equivalent. Moreover we assume that they all have
the same position within the primitive cell. That is the only random aspect is
in which cell there is an impurity. Then c
= R
cc. Shifting by R
in each
term of the sum and using the periodicity of the functions n we obtain
l
Inj,k1,k
=
1
\
c
I(kk1)R
_
d\ (r cc)n
+
k1
(r)n
k
(r)c
I(kk1)r
=
1
\
k1,k
c
I(kk1)R
(9.45)
where
k1,k
is the matrix element of a single impurity potential.
This gives
[l
Inj,k1,k
[
2
=
1
\
2
[
k1,k
[
2
,l
c
I(kk1)(RR
)
. (9.46)
This result will be put into the sum over k
l
in the expression for the collision
integral 1. The locations R
1
\
2
[
k1,k
[
2
Inj
, (9.47)
where
Inj
is the total number of impurities.
This gives for the collision integral
1[)[ =
|1
\(k
l
, k) ()(k
l
) )(k))
=
2
~
Inj
\
2
|1
[
k1,k
[
2
c(-(k
l
) -(k)) ()(k
l
) )(k))
=
2
~
:
Inj
_
d
3
/
l
(2)
3
k1,k
[
2
c(-(k
l
) -(k)) ()(k
l
) )(k)) , (9.48)
where :
Inj
=
Inj
,\ is the density of impurities.
110 CHAPTER 9. BOLTZMANN TRANSPORT EQUATION
9.2.5 Transport relaxation time
As we have seen the correction to the distribution function due to application
of the electric eld was of the form c) ~ E v
k
. In a parabolic band (isotropic
spectrum) this would be c) ~ E k. So we make an ansatz
c) = a (/) E e
k
, (9.49)
where e
k
= k,[k[. For isotropic spectrum conservation of energy means [k[ =
[k
l
[, the matrix element
k1,k
depends on the angle between k
l
and k only, the
surface o is a sphere. Then we obtain
1[c)[ =
2
~
:
Inj
j
J
_
d\
l
4
[
k1,k
[
2
(c)(k
l
) c)(k))
=
2
~
:
Inj
j
J
a (/) 1
_
d\
l
4
[(0
k1,k
)[
2
(cos 0
k,E
cos 0
k1E
) . (9.50)
We choose direction k as .. Then the vector k
l
is described in spherical co-
ordinates by 0
k1
= 0
k,k1
and ,
k1
. Analogously the vector E is described by
0
E
= 0
k,E
and ,
E
. Then d\
l
= sin0
k1
d0
k1
d,
k1
.
From simple vector analysis we obtain
cos 0
E,k1
= cos 0
E
cos 0
k1
sin0
E
sin0
k1
cos(,
E
,
k1
) . (9.51)
The integration then gives
1[c)[ =
:
Inj
j
J
2~
a (/) 1
_
sin0
k1
d0
k1
d,
k1
[(0
k1
)[
2
_
cos 0
E
cos 0
E
cos 0
k1
sin0
E
sin0
k1
cos(,
E
,
k1
)
_
=
:
Inj
j
J
~
a (/) 1 cos 0
g
_
sin0
k1
d0
k1
[(0
k1
)[
2
(1 cos 0
k1
) . (9.52)
Noting that a (/) 1 cos 0
g
= a (/) E e
k
= c) we obtain
1[c)[ =
c)
t
iv
, (9.53)
where
1
t
iv
=
:
Inj
i
~
_
sin0d0 [(0)[
2
(1 cos 0) (9.54)
Note that our previous "relaxation time approximation" was based on total
omission of the "in" term. That is in the t-approximation we had
1[)[ =
k1
\(k
l
, k) (c)(k
l
) c)(k)) - c)(k)
k1
\(k
l
, k) .
Thus
1
t
=
k1
\(k
l
, k) =
:
Inj
i
~
_
d0 [(0)[
2
sin0 .
The dierence between t
iv
(transport time) and t (momentum relaxation
time) is the factor (1 cos 0) which emphasizes backscattering. If [(0)[
2
=
co::t. we obtain t
iv
= t.
9.2. BOLTZMANNEQUATIONFOR WEAKLYINTERACTINGFERMIONS111
9.2.6 H-theorem
Further insight into the underlying nonequilibrium dynamics can be obtained
from analyzing the entropy density
H = /
1
_
d
J
rd
J
/
(2)
J
[)
k
(r) ln)
k
(r) (1 )
k
(r)) ln(1 )
k
(r))[
It follows for the time dependence of H that
0H
0t
= /
1
_
d
J
rd
J
/
(2)
J
0)
k
(r)
0t
ln
)
k
(r)
1 )
k
(r)
= /
1
_
d
J
rd
J
/
(2)
J
_
0k
0t
r
k
)
0r
0t
r
r
)
_
ln
)
k
(r)
1 )
k
(r)
/
1
_
d
J
rd
J
/
(2)
J
1
k
[)[ ln
)
k
(r)
1 )
k
(r)
where we used that )
k
(r) is determined from the Boltzmann equation.
Next we use that
r
k
)
k
(r) ln
)
k
(r)
1 )
k
(r)
= r
k
j
k
(r)
with
j
k
(r) = log (1 )
k
(r)) )
k
(r) ln
)
k
(r)
1 )
k
(r)
which allows us to write the term with
Jk
J|
r
k
) as a surface integral. The same
can be done for the term with
Jr
J|
r
r
). Thus, it follows
0H
0t
= /
1
_
d
J
rd
J
/
(2)
J
1
k
[)[ ln
)
k
(r)
1 )
k
(r)
= /
1
_
d
J
rd
J
/
(2)
J
\(k
t
, k))
k
0 (r)[1 )
k
(r)[ \(k, k
t
))
k
(r)[1 )
k
0 (r)[ ln
)
k
(r)
1 )
k
(r)
=
/
1
2
_
d
J
rd
J
/
(2)
J
\(k
t
, k) ()
k
0 (r)[1 )
k
(r)[ )
k
(r)[1 )
k
0 (r)[ ) ln
)
k
(r) (1 )
k
0 (r))
)
k
0 (r) (1 )
k
(r))
=
/
1
2
_
d
J
rd
J
/
(2)
J
\(k
t
, k) ()
k
0 (r)[1 )
k
(r)[ )
k
(r)[1 )
k
0 (r)[ ) ln
)
k
(r) (1 )
k
0 (r))
)
k
0 (r) (1 )
k
(r))
It holds
(1 r) log r _ 0
where the equal sign is at zero. Thus, it follows
0H
0t
< 0.
112 CHAPTER 9. BOLTZMANN TRANSPORT EQUATION
Only for
)
k
(r) (1 )
k
0 (r))
)
k
0 (r) (1 )
k
(r))
= 1
is H a constant. Thus
log
)
k
(r)
(1 )
k
(r))
= co::t.
which is the equilibrium distribution function.
9.2.7 Local equilibrium, Chapman-Enskog Expansion
Instead of global equilibrium with given temperature T and chemical potential
j in the whole sample, consider a distribution function )(r, k) corresponding to
space dependent T(r) and j(r):
)
0
=
1
oxp
_
t
!
(r)
|BT(r)
_
1
. (9.55)
This sate is called local equilibrium because also for this distribution function
the collision integral vanishes: 1[)
0
[ = 0. However this state is not static. Due
to the kinematic terms in the Boltzmann equation (in particular v
k
r
r
)) the
state will change. Thus we consider the state ) = )
0
c) and substitute it into
the Boltzmann equation. This gives (we drop the magnetic eld)
0c)
0t
c
~
E r
k
()
0
c)) v
k
r
r
()
0
c)) = 1[c)[ . (9.56)
We collect all the c) terms in the r.h.s.:
c
~
E r
k
)
0
v
k
r
r
)
0
= 1[c)[
0c)
0t
c
~
E r
k
c) v
k
r
r
c) . (9.57)
We obtain
r
r
)
0
=
0)
0
0-
k
_
r
r
j
(-
k
j)
T
r
r
T
_
(9.58)
and
0)
0
0-
k
v
k
_
(r
r
j cE)
-
k
j
T
r
r
T
_
= 1[c)[
0c)
0t
c
~
Er
k
c)v
k
r
r
c) .
(9.59)
In the stationary state, relaxation time approximation, and neglecting the
last two terms (they are small at small elds) we obtain
0)
0
0-
k
v
k
_
(r
r
j cE)
-
k
j
T
r
r
T
_
=
c)
t
iv
, (9.60)
which yields:
c) = t
iv
0)
0
0-
k
v
k
_
(r
r
j cE)
-
k
j
T
r
r
T
_
. (9.61)
9.2. BOLTZMANNEQUATIONFOR WEAKLYINTERACTINGFERMIONS113
Thus we see that there are two "forces" getting the system out of equilib-
rium: the electrochemical eld: E
oI.cL.
= E (1,c)rj and the gradient of the
temperature rT. More precisely one introduces the electrochemical potential
c
oI.cL.
such that E
oI.cL.
= E (1,c)rj = rc
oI.cL.
= rc (1,c)rj. Thus
c
oI.cL.
= c (1,c)j.
On top of the electric current
j
J
(r, t) =
c
\
k,c
v
k
c)(k, r, t) (9.62)
we dene the heat current
j
Q
(r, t) =
1
\
k,c
(-
k
j)v
k
c)(k, r, t) (9.63)
This expression for the heat current follows from the denition of heat dQ =
dl jd.
This gives
_
j
J
j
Q
_
=
_
1
ll
1
l2
1
2l
1
22
__
E
oI.cL.
rT,T
_
(9.64)
Before we determine these coecients, we give a brief interpretation of the
various coecients. In the absence of rT, holds
j
J
= 1
ll
E
oI.cL.
j
Q
= 1
2l
E
oI.cL.
.
The rst term is the usual conductivity, i.e.
o = 1
ll
, while the second term describes a heat current in case of an applied electric
eld. The heat current that results as consequence of an electric current is called
Peltier eect
j
Q
= ,
J
j
J
.
where
,
J
= 1
2l
,1
ll
.
is the Peltier coecient.
In the absence of E
oI.cL.
holds
j
J
=
1
l2
T
rT
j
Q
=
1
22
T
rT
Thus, with j
Q
= irT follows
i = 1
22
,T
114 CHAPTER 9. BOLTZMANN TRANSPORT EQUATION
for the thermal conductivity, while 1
l2
determines the electric current that is
caused by a temperature gradient. Keep in mind that the relationship between
the two currents is now j
Q
= ,
T
j
J
with .
,
T
= 1
22
,1
l2
= Ti,1
l2
Finally a frequent experiment is to apply a thermal gradient and allow for
no current ow. Then, due to
0 = 1
ll
E
oI.cL.
1
l2
T
rT
follows that a voltage is being induced with associated electric eld
E
oI.cL.
= orT
where
o =
1
l2
T1
ll
is the Seebeck coecient (often refereed to as thermopower).
For the electrical current density we obtain
j
J
=
c
\
k,c
v
k
c)(k)
=
c
\
k,c
t
iv
0)
0
0-
k
_
v
k
_
cE
oI.cL.
-
k
j
T
rT
__
v
k
. (9.65)
Thus for 1
ll
we obtain
1
lloo
=
c
2
\
k,c
t
iv
0)
0
0-
k
k,o
k,o
. (9.66)
For 1
l2
this gives
1
l2oo
=
c
\
k,c
t
iv
0)
0
0-
k
(-
k
j)
k,o
k,o
. (9.67)
For the heat current density we obtain
j
Q
=
1
\
|,c
(-
k
j)v
k
c)(k)
=
1
\
|,c
t
iv
(-
k
j)
0)
0
0-
k
_
v
k
_
cE
oI.cL.
-
k
j
T
rT
__
v
k
.(9.68)
Thus for 1
2l
we obtain
1
2loo
=
c
\
k,c
t
iv
0)
0
0-
k
(-
k
j)
k,o
k,o
. (9.69)
9.2. BOLTZMANNEQUATIONFOR WEAKLYINTERACTINGFERMIONS115
For 1
22
this gives
1
22oo
=
1
\
k,c
t
iv
0)
0
0-
k
(-
k
j)
2
k,o
k,o
. (9.70)
1
ll
is just the conductivity calculated earlier. 1
l2
= 1
2l
. This is one of
the consequences of Onsager relations. 1
l2
,= 0 only if the density of states is
asymmetric around j (no particle-hole symmetry). Finally, for 1
22
we use
0)
0
0c
- c(c j)
2
6
(/
I
T)
2
c
tt
(c j) , (9.71)
This gives
1
22oo
=
1
\
|,c
t
iv
0)
0
0c
(c
|
j)
2
|,o
|,o
= t
iv
_
j (-) d-
d\
4
0)
0
0-
(- j)
2
o
= t
iv
2
8
(/
I
T)
2
j
J
_
d\
4
o
=
2
0
(/
I
T)
2
j
J
2
J
t
iv
c
o,o
. (9.72)
Thus, for thermal conductivity i dened via j
Q
= irT we obtain
i =
1
22
/
I
T
=
2
0
/
2
I
Tj
J
2
J
t
iv
(9.73)
Comparing with the electrical conductivity
o =
1
8
j
J
2
J
t
iv
(9.74)
We obtain the Wiedemann-Franz law:
i
o
=
/
2
I
T
c
2
2
8
. (9.75)