0% found this document useful (0 votes)
2 views40 pages

Lecturenotes2012 - Functional Methods in Quantum Mechanics

The document consists of lecture notes on functional methods in quantum mechanics by F.H.L. Essler, covering mathematical background, functionals, functional differentiation, and multidimensional Gaussian integrals. It introduces concepts such as action functionals and their extremization, as well as the path integral approach to quantum mechanics. The notes emphasize the importance of these methods in theoretical physics and provide detailed mathematical derivations and examples.

Uploaded by

mahrusdastegir
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
2 views40 pages

Lecturenotes2012 - Functional Methods in Quantum Mechanics

The document consists of lecture notes on functional methods in quantum mechanics by F.H.L. Essler, covering mathematical background, functionals, functional differentiation, and multidimensional Gaussian integrals. It introduces concepts such as action functionals and their extremization, as well as the path integral approach to quantum mechanics. The notes emphasize the importance of these methods in theoretical physics and provide detailed mathematical derivations and examples.

Uploaded by

mahrusdastegir
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 40

Lecture Notes for the C6 Theory Option

F.H.L. Essler
The Rudolf Peierls Centre for Theoretical Physics
Oxford University, Oxford OX1 3NP, UK

May 3, 2013

Please report errors and typos to fab@thphys.ox.ac.uk


c 2012 F.H.L. Essler

Part I
Functional Methods in Quantum Mechanics
1 Some Mathematical Background
Functional Methods form a central part of modern theoretical physics. In the following we introduce the
notion of functionals and how to manipulate them.

1.1 Functionals
What is a functional? You all know that a real function can be viewed as a map from e.g. an interval [a, b]
to the real numbers

f : [a, b] → R , x → f (x). (1)

A functional is similar to a function in that it maps all elements in a certain domain to real numbers,
however, the nature of its domain is very different. Instead of acting on all points of an interval or some
other subset of the real numbers, the domain of functionals consists of (suitably chosen) classes of functions.
In other words, given some class {f } of functions, a functional F is a map

F : {f } → R , f → F [f ]. (2)

We now consider two specific examples of functionals.

1. The distance between two points. A very simple functional F consists of the map which assigns to all
paths between two fixed points the length of the path. To write this functional explicitly, let us consider
a simple two-dimensional situation in the (x, y) plane and choose two points (x1 , y1 ) and (x2 , y2 ). We
consider the set of paths that do not turn back, i.e. paths along which x increases monotonically as we
go from (x1 , y1 ) to (x2 , y2 ). These can be described by the set of functions {f } on the interval [x1 , x2 ]
satisfying f (x1 ) = y1 and f (x2 ) = y2 . The length of a path is then given by the well-known expression
Z x2
dx0 1 + f 0 (x0 )2 .
p
F [f (x)] = (3)
x1

1
2. Action Functionals. These are very important in Physics. Let us recall their definition in the context
of classical mechanics. Start with n generalised coordinates q(t) = (q1 (t), . . . , qn (t)) and a Lagrangian
L = L(q, q̇). Then, the action functional S[q] is defined by
Z t2
S[q] = dt L(q(t), q̇(t)) . (4)
t1

It depends on classical paths q(t) between times t1 and t2 satisfying the boundary conditions q(t1 ) = q1
and q(t2 ) = q2 .

1.2 Functional differentiation


In both the examples given above a very natural question to ask is what function extremizes the functional.
In the first example this corresponds to wanting to know the path that minimizes the distance between two
points. In the second example the extremum of the action functional gives the solutions to the classical
equations of motion. This is known as Hamilton’s principle. In order to do so it is very useful to generalize
the notion of a derivative. For our purposes we define the functional derivative by

δF [f (x)] F [f (x) + δ(x − y)] − F [f (x)]


= lim .
δf (y) →0 
(5)

Here, as usual, we should think of the δ-function as being defined as the limit of a test function, e.g.
1 2 2
δ(x) = lim √ e−x /a , (6)
a→0 πa

and take the limit a → 0 only in the end (after commuting the limit with all other operations such as
the lim→0 in (5)). Importantly, the derivative defined in this way is a linear operation which satisfies the
product and chain rules of ordinary differentiation and commutes with ordinary integrals and derivatives.
Let us see how functional differentiation works for our two examples.

1. The distance between two points. In analogy with finding stationary points of functions we want to
extremize (3) by setting its functional derivative equal to zero

δF [f (x)]
0= . (7)
δf (y)

We first do the calculation by using the definition (5).


q q
Z x2 1 + [f 0 (x0 ) + δ 0 (x0 − y)]2 − 1 + [f 0 (x0 )]2
δF [f (x)] 0
= lim dx . (8)
δf (y) →0 x
1


The Taylor expansion of the square root is 1 + 2 = 1 +  + . . ., which gives

f 0 (x0 )δ 0 (x0 − y)
q q
2
0 0 0 0
1 + [f (x ) + δ (x − y)] = 1 + [f 0 (x0 )]2 + q + O(2 ) , (9)
0 0 2
1 + [f (x )]

where δ 0 (x) is the derivative of the delta-function and O(2 ) denote terms proportional to 2 . Substi-
tuting this back into (8) we have 1
1
In the last step we have used Z b
dx0 δ 0 (x0 − y)g(x0 ) = −g 0 (y) , (10)
a
which can be proved by “integration by parts”.

2
x2
δ 0 (x0 − y)f 0 (x0 ) f 0 (y)
Z
δF [f (x)] d
= dx0 q =− q . (11)
δf (y) x1 0 0 2 dy 0 2
1 + [f (x )] 1 + [f (y)]
The solution to (7) is thus
f 0 (y) = const, (12)
which describes a straight line. In practice we don’t really go back to the definition of the functional
derivative any more than we use the definition of an ordinary derivative to work it out, but proceed
as follows. We first interchange the functional derivative and the integration
Z x2
δF [f (x)] δ
q
= dx0 1 + [f 0 (x0 )]2 . (13)
δf (y) x1 δf (y)
Next we use the chain rule
p
δ 1 + f 0 (x0 )2 f 0 (x0 ) δf 0 (x0 )
=p . (14)
δf (y) 1 + f 0 (x0 )2 δf (y)
Finally we interchange the functional and the ordinary derivative
δf 0 (x0 ) d δf (x0 ) d
= 0 = 0 δ(x0 − y) . (15)
δf (y) dx δf (y) dx
The last identity follows from our definition (5). Now we can put everything together and arrive at
the same answer (11).
2. Next we want to try out these ideas on our second example and extremize the classical action (4)
in order to obtain the classical equations of motion. We first interchange functional derivative and
integration and then use the chain rule to obtain

Z t2
δS[q] δ
= dt̃ L(q(t̃), q̇(t̃)) (16)
δqi (t) δqi (t) t1
Z t2  
∂L δqj (t̃) ∂L δ q̇j (t̃)
= dt̃ (q, q̇) + (q, q̇) (17)
t1 ∂qj δqi (t) ∂ q̇j δqi (t)
(18)
δ q̇j (t̃) d δqj (t̃)
We now use that δqi (t) =and integrate by parts with respect to t̃
dt̃ δqi (t)
Z t2  
δS[q] ∂L d ∂L δqj (t̃)
= dt̃ (q, q̇) − (q, q̇) (19)
δqi (t) t1 ∂qj dt̃ ∂ q̇j δqi (t)
Z t2  
∂L d ∂L ∂L d ∂L
= dt̃ (q, q̇) − (q, q̇) δij δ(t̃ − t) = (q, q̇) − (q, q̇) . (20)
t1 ∂qj dt̃ ∂ q̇j ∂qi dt ∂ q̇i
In the second last step we have used
δqj (t̃)
= δij δ(t̃ − t) , (21)
δqi (t)
which follows straightforwardly from our general definition (5). Thus we conclude that the extrema of
the classical action are given by paths that fulfil the equations of motion

∂L d ∂L
(q, q̇) − (q, q̇) = 0.
∂qi dt ∂ q̇i
(22)
Nice.

3
1.3 Multidimensional Gaussian Integrals
As a reminder, we start with a simple one-dimensional Gaussian integral over a single variable y. It is given
by
Z ∞ r
1 2 2π
I(z) ≡ dy exp(− zy ) = , (23)
−∞ 2 z
where z is a complex number with Re(z) > 0. The standard proof of this relation involves writing I(z)2
as
p a two-dimensional integral over y1 and y2 and then introducing two-dimensional polar coordinates r =
y12 + y22 and ϕ. Explicitly,
Z ∞ Z ∞ Z ∞ Z ∞
2 1 2 1 2 1
I(z) = dy1 exp(− zy1 ) dy2 exp(− zy2 ) = dy1 dy2 exp(− z(y12 + y22 )) (24)
−∞ 2 −∞ 2 −∞ −∞ 2
Z 2π Z ∞
1 2π
= dϕ dr r exp(− zr2 ) = . (25)
0 0 2 z
Next we consider n-dimensional Gaussian integrals
Z  
n 1 T
W0 (A) ≡ d y exp − y Ay , (26)
2
over variables y = (y1 , . . . , yn ), where A is a symmetric, positive definite matrix (all its eigenvalues are
positive). This integral can be reduced to a product of one-dimensional Gaussian integrals by diagonalising
the matrix A. Consider an orthogonal rotation O such that A = ODOT with a diagonal matrix D =
diag(a1 , . . . , an ). The eigenvalues ai are strictly positive since we have assumed that A is positive definite.
Introducing new coordinates ỹ = OT y we can write
n
X
yT Ay = ỹT Dỹ = ai ỹi2 , (27)
i=1

where the property OT O = 1 of orthogonal matrices has been used. Note further that the Jacobian of
the coordinate change y → ỹ is one, since |det(O)| = 1. Hence, using Eqs. (23) and (27) we find for the
integral (26)
n Z
Y 1
W0 (A) = dỹi exp(− ai ỹi2 ) = (2π)n/2 (a1 a2 . . . an )−1/2 = (2π)n/2 (detA)−1/2 . (28)
2
i=1

To summarise, we have found for the multidimensional Gaussian integral (26) that

W0 (A) = (2π)n/2 (detA)−1/2 ,


(29)

a result which will be of some importance in the following. We note that if we multiply the matrix A by a
complex number z with Re(z) > 0 and then follow through exactly the same steps, we find
 n/2

W0 (zA) = (detA)−1/2 . (30)
z
One obvious generalisation of the integral (26) involves adding a term linear in y in the exponent, that is
Z  
n 1 T T
W0 (A, J) ≡ d y exp − y Ay + J y . (31)
2
Here J = (J1 , . . . , Jn ) is an n-dimensional vector. Changing variables y → ỹ, where

y = A−1 J + ỹ (32)

4
this integral can be written as
 Z  
1 T −1 n 1 T
W0 (A, J) = exp J A J d ỹ exp − ỹ Aỹ . (33)
2 2

The remaining integral is Gaussian without a linear term, so can be easily carried out using the above
results. Hence, one finds
 
1 T −1
W0 (A, J) = (2π)n/2 (detA)−1/2 exp J A J .
2
(34)

2 Path Integrals in Quantum Mechanics


So far you have encountered two ways of doing QM

1. solving the Schrödinger equation for the wave function (→ PDEs);

2. following Heisenberg, we can work with operators, commutation relations, eigenstates.

Historically it took some time for people to realize that these are in fact equivalent.
There is a third approach to QM, due to Feynman. It is particularly useful for QFTs and many-particle
QM problems, as it makes certain calculations much easier. We will now introduce this approach.

2.1 The Propagator


Our starting point is the time-dependent Schrödinger equation

i~ |ψ(t)i = H|ψ(t)i. (35)
∂t
We recall that the wave function is given by

ψ(~x, t) = h~x|ψ(t)i. (36)

Eqn (35) can be integrated to give


i
|ψ(t)i = e− ~ Ht |ψ(0)i (37)
The time-evolution operator in QM is thus (assuming that H is time-independent)
i
U (t; t0 ) = e− ~ H(t−t0 ) . (38)

A central object in Feynman’s approach is the propagator

h~x0 |U (t; t0 )|~xi ,


(39)

where |~xi are the simultaneous eigenstates of the position operators x̂, ŷ and ẑ. The propagator is the
probability amplitude for finding our QM particle at position ~x0 at time t, if it started at position ~x at time
t0 . To keep notations simple, we now consider a particle moving in one dimension with time-independent
Hamiltonian
p̂2
H = T̂ + V̂ = + V (x̂). (40)
2m
We want to calculate the propagator
hxN |U (t; 0)|x0 i. (41)

5
It is useful to introduce small time steps

tn = n , n = 0, . . . , N, (42)

where  = t/N . Then we have by construction


 i N
U (t; 0) = e− ~ H . (43)

The propagator is
i i
hxN |U (t; 0)|x0 i = hxN |e− ~ H · · · e− ~ H |x0 i
Z Z
i i i
= dxN −1 . . . dx1 hxN |e− ~ H |xN −1 ihxN −1 |e− ~ H |xN −2 i . . . hx1 |e− ~ H |x0 i, (44)

where we have inserted N − 1 resolutions of the identity in terms of position eigenstates


Z
1 = dx |xihx| . (45)

This expression now has a very nice and intuitive interpretation, see Fig. 1:

Figure 1: Propagator as sum over paths.

The propagator, i.e. the probabilty amplitude for finding the particle at position xN and time t given
that it was at position x0 at time 0 is given by the sum over all “paths” going from x0 to xN (as x1 ,. . . ,
xN −1 are integrated over).

6
In the next step we determine the “infinitesimal propagator”
i
hxn+1 |e− ~ H |xn i. (46)
Importantly we have [T̂ , V̂ ] 6= 0 and concomitantly

eα(T̂ +V̂ ) 6= eαT̂ eαV̂ . (47)


However, using that  is infinitesimal, we have
i i
e− ~ (T̂ +V̂ ) = 1 − (T̂ + V̂ ) + O(2 ) ,
~
i i i
e− ~ T̂ e− ~ V̂ = 1 − (T̂ + V̂ ) + O(2 ). (48)
~
So up to terms of order 2 we have
i i i i i
hxn+1 |e− ~ H |xn i ' hxn+1 |e− ~ T̂  e− ~ V̂  |xn i = hxn+1 |e− ~ T̂  |xn ie− ~ V (xn ) , (49)
where we have used that V̂ |xi = V (x)|xi. As T̂ = p̂2 /2m it is useful to insert a complete set of momentum
eigenstates 2 to calculate
Z Z
− ~i T̂  dp ip̂2 
− 2m~ dp − ip2  −i p (xn −xn+1 )
hxn+1 |e |xn i = hxn+1 |e |pihp|xn i = e 2m~ ~
2π 2π~
m im 2
= e 2~ (xn −xn+1 ) . (50)
2πi~
In the second step we have used that p̂|pi = p|pi and that
ipx
hx|pi = e ~ . (51)
The integral over p is performed by chaging variables to p0 = p − m
 (xn − xn+1 ). Substituting (50) and (49)
back into our expression (44) for the propagator gives

N −1
!
h m iN Z i X m xn+1 − xn 2
 
2
hxN |U (t; 0)|x0 i = lim dx1 . . . dxN −1 exp − V (xn ) . (52)
N →∞ 2πi~ ~ 2 
n=0

Note that in this expression there are no operators left.

2.1.1 Propagator as a “Functional Integral”


The way to think about (52) is as a sum over trajectories:
• x0 , . . . , xN constitute a discretization of a path x(t0 ), where we set xn ≡ x(tn ).
• We then have
xn+1 − xn x(tn+1 ) − x(tn )
= ' ẋ(tn ), (53)
 tn+1 − tn
and
N −1  2 t i Z t
xn+1 − xn
Z
X m hm
 − V (xn ) ' dt0 ẋ2 (t0 ) − V (x) ≡ dt0 L[ẋ, x], (54)
2  0 2 0
n=0

where L is the Lagrangian of the system. In classical mechanics the time-integral of the Lagrangian
is known as the action Z t
S= dt0 L. (55)
0
2 dp
R
We use a normalization hp|ki = 2π~δ(p − k), so that 1 = 2π~
|pihp|.

7
• The integral over x1 , . . . xN −1 becomes a functional integral, also known as a path integral, over all
paths x(t0 ) that start at x0 at time t0 = 0 and end at xN at time t0 = t.

• The prefactor in (52) gives rise to an overall (infinite) normalization and we will denote it by N .

These considerations lead us to express the propagator as the following formal expression

i 0
Dx(t0 ) e ~ S[x(t )] .
R
hxN |U (t; 0)|x0 i = N
(56)

What is in fact meant by (56) is the limit of the discretized expression (52). The ultimate utility of (56) is
that it provides a compact notation, that on the one hand will allow us to manipulate functional integrals,
and on the other hand provides a nice, intuitive interpretation. The probability amplitude for propagation
from x0 to xN is obtained by summing over all possible paths connecting x0 and xN , where each path is
i

weighted by a phase factor exp ~ S , where S is the classical action of the path. This provides a new way
of thinking about QM!

2.2 Classical Limit and Stationary Phase Approximation


An important feature of (56) is that it gives us a nice way of thinking about the classical limit “~ → 0” (more
precisely in the limit when the dimensions, masses, times etc are so large that the action is huge compared
to ~). To see what happens in this limit let us first consider the simpler case of an ordinary integral
Z ∞
g(a) = dt h1 (t)eiah2 (t) , (57)
−∞

when we take the real parameter a to infinity. In this case the integrand will oscillate wildly as a function of
t because the phase of exp iah2 (t) will vary rapidly. The dominant contribution will arise from the points
where the phase changes slowly, which are the stationary points

h02 (t) = 0. (58)

The integral can then be approximated by expanding around the stationary points. Assuming that there is
a single stationary point at t0
Z ∞ ah00
2 (t0 ) 2
dt h1 (t0 ) + (t − t0 )h01 (t0 ) + . . . eiah2 (t0 )+i 2 (t−t0 ) ,
 
g(a  1) ≈ (59)
−∞

Changing integration variables to t0 = t − t0 (and giving a a small imaginary part to make the integral
converge at infinity) as obtain a Gaussian integral that we can take using (23)
s
2πi
g(a  1) ≈ h1 (t0 )eiah2 (t0 ) . (60)
ah002 (t0 )

Subleading contributions can be evaluated by taking higher order contributions in the Taylor expansions
into account. If we have several stationary points we sum over their contributions. The method we have
just discussed is known as stationary phase approximation.
The generalization to path integrals is now clear: in the limit ~ → 0 the path integral is dominated by
the vicinity of the stationary points of the action S
δS
= 0. (61)
δx(t0 )

The condition (61) precisely defines the classical trajectories x(t0 )!

8
2.3 The Propagator for Free Particles
We now wish to calculate the functional integral (56) for a free particle, i.e.

V (x) = 0. (62)

Going back to the explicit expression (52) we have


N −1
!
h m iN Z i X m xn+1 − xn 2
 
2
hxN |U (t; 0)|x0 i = lim dx1 . . . dxN −1 exp . (63)
N →∞ 2πi~ ~ 2 
n=0

It is useful to change integration variables to

yj = xj − xN , j = 1, . . . , N − 1, (64)

which leads to an expression


h m iN Z 
1 T

im 2
dy exp − y Ay + J · y e 2~ (x0 −xN ) .
2 T
hxN |U (t; 0)|x0 i = lim (65)
N →∞ 2πi~ 2

Here
im
JT =

(xN − x0 ), 0, . . . , 0 , (66)
~
and A is a (N − 1) × (N − 1) matrix with elements
−im
Ajk = [2δj,k − δj,k+1 − δj,k−1 ] . (67)
~
For a given N (65) is a multidimensional Gaussian integral and can be carried out using (34)
h m iN 
1 T −1

N −1 − 12 im 2
J A J e 2~ (x0 −xN ) .
2
hxN |U (t; 0)|x0 i = lim (2π) 2 [det(A)] exp (68)
N →∞ 2πi~ 2

The matrix A is related to the one dimensional lattice Laplacian, see below. Given the eigenvalues and
eigenvectors worked out below we can calculate the determinant and inverse of A (homework problem).
Substituting the results into (68) gives

im
m (x0 −xN )2
p
hxN |U (t; 0)|x0 i = 2πi~t e
2~t .
(69)

For a free particle we can evaluate the propagator directly in a much simpler way.
Z ∞ Z ∞
dp p̂2 t dp −i p2 t +i p(x0 −xN )
hxN |U (t; 0)|x0 i = hxN |e−i 2m~ |pihp|x0 i = e 2m~ ~
−∞ 2π~ −∞ 2π~
r
m im (x0 −xN )2
= e 2~t . (70)
2πi~t

The matrix A is in fact related to the one dimensional Lattice Laplacian. Consider functions of a variable
z0 ≤ z ≤ zN with “hard-wall boundary conditions”

f (z0 ) = f (zN ) = 0. (71)

The Laplace operator D acts on these functions as

d2 f (z)
Df ≡ . (72)
dz 2

9
Discretizing the variable z by introducing N − 1 points

zn = z0 + na0 , n = 1, . . . , N − 1 (73)

where a0 = (zN − z0 )/N is a “lattice spacing”, maps the function f (z) to a N − 1 dimensional vector

f (z) → f = (f (z1 ), . . . , f (zN −1 )). (74)

Recalling that
d2 f f (z + a0 ) + f (z − a0 ) − 2f (z)
(z) = lim , (75)
dz 2 a0 →0 a20
we conclude that the Lapacian is discretized as follows

Df → a−2
0 ∆f , (76)

where
∆jk = δj,k+1 + δj,k−1 − 2δj,k . (77)
im
Our matrix A is equal to ~ ∆. The eigenvalue equation

∆an = λn an , n = 1, . . . , N − 1 (78)

gives rise to a recurrence relation for the components an,j of an

an,j+1 + an,j−1 − (2 + λn )an,j = 0. (79)

The boundary conditions an,N = an,0 = 0 suggest the ansatz

πnj 
an,j = Cn sin . (80)
N
Substituting this in to (79) gives
πn 
λn = 2 cos −2 , n = 1, . . . , N − 1. (81)
N
The normalized eigenvectors of ∆ are
πn πn
     
sin N  sin N 
2πn 2πn
1  sin N
 r2  sin N

an = qP .. = .. (82)
   
N
  
N −1
sin2 πnj   .   . 
j=1 N π(N −1)n  π(N −1)n 
sin N . sin N .

3 Path Integrals in Quantum Statistical Mechanics


An important quantity in Statistical Mechanics is the partition function
h i
Z(β) = Tr e−βH , (83)

where H is the Hamiltonian of the system, Tr denotes the trace over the Hilbert space of quantum mechanical
states, and
1
β= . (84)
kB T

10
Ensemble averages of the quantum mechanical observable O are given by
1 h i
hOiβ = Tr e−βH O . (85)
Z(β)

Taking the trace over a basis of eigenstates of H with H|ni = En |ni gives
1 X −βEn
hOiβ = e hn|O|ni ,
Z(β) n
X
Z(β) = e−βEn . (86)
n

Assuming that the ground state of H is non-degenerate we have

lim hOiβ = h0|O|0i , (87)


T →0

where |0i is the ground state of the system. Let us consider a QM particle with Hamiltonian

p̂2
H= + V (x̂), (88)
2m
coupled to a heat bath at temperature T . The partition function can be written in a basis of position
eigenstates Z Z Z
−βH
Z(β) = dxhx|e |xi = dx dx0 hx|x0 i hx0 |e−βH |xi. (89)

Here
hx0 |e−βH |xi (90)
is very similar to the propagator
i(t−t0 )
hx0 |e− ~
H
|xi. (91)
Formally (90) can be viewed as the propagator in imaginary time τ = it, where we consider propagation
from τ = 0 to τ = β~. Using this interpretation we can follow through precisely the same steps as before
and obtain

N −1
!
h m iN Z  X m xn+1 − xn 2
 
−βH 2
hxN |e |x0 i = lim dx1 . . . dxN −1 exp − + V (xn ) , (92)
N →∞ 2πi~ ~ 2 
n=0

where now

. = (93)
N
We again can interpret this in terms of a sum over paths x(τ ) with

x(τn ) = xn , τn = n. (94)

Going over to a continuum description we arrive at an imaginary-time functional integral

1
hxN |e−βH |x0 i = N Dx(τ ) e− ~ SE [x(τ )] ,
R
(95)

where SE is called Euclidean action


"  2 #
Z ~β
m dx
SE [x(τ )] = dτ + V (x) . (96)
0 2 dτ

11
Substituting (95) into the expression for the partition function we find that

1
Dx(τ ) e− ~ SE [x(τ )] ,
R
Z(β) = N
(97)

where we integrate over all periodic paths


x(~β) = x(0). (98)
The restriction to periodic paths arises because Z(β) is a trace.

3.1 Harmonic Oscillator at T > 0: a first encounter with Generating Functionals


We now consider the simple harmonic oscillator

p̂2 κ
H= + x̂2 . (99)
2m 2
Let us work out the averages of powers of the position operator

dx hx|e−βH x̂n |xi dx xn hx|e−βH |xi


R R
n
hx̂ iβ = R = R . (100)
dxhx|e−βH |xi dxhx|e−βH |xi

We have Z h 2
i
− ~1 m
( dx +κ x2
R ~β
dτ )
−βH dτ
hx|e |xi = N Dx(τ ) e 0 2 2
, (101)

where the path integral is over all paths with x(0) = x(β~). Integrating by parts we can write the action as
"   #
1 ~β m dx 2 κ 2 1 ~β
Z Z
− dτ + x =− dτ x(τ )D̂x(τ ) , (102)
~ 0 2 dτ 2 2 0

where
m d2 κ
D̂ = − + . (103)
~ dτ 2 ~
We now define the generating functional
Z
1 ~β
W [J] ≡ N Dx(τ ) e− 2 0 dτ [x(τ )D̂x(τ )−2J(τ )x(τ )] .
R
(104)

Here the functions J(τ ) are called sources. The point of the definition (104) is that we can obtain hx̂n iβ by
taking functional derivatives
1 δ δ
hx̂n iβ = ... W [J]. (105)
W [0] δJ(0) δJ(0)
J=0
We now could go ahead and calculate the generating functional by going back to the definition of the the
path integral in terms of a multidimensional Gaussian integral. In practice we manipulate the path integral
itself as follows. We define the Green’s function of the differential operator D̂τ in the usual way

D̂τ G(τ − τ 0 ) = δ(τ − τ 0 ) , G(0) = G(β~). (106)

We then change variables in the path integral in order to “complete the square”
Z
y(τ ) = x(τ ) − dτ 0 G(τ − τ 0 )J(τ 0 ). (107)

12
We see that
Z Z Z
dτ y(τ )D̂τ y(τ ) = dτ x(τ )D̂τ x(τ ) + dτ dτ 0 dτ 00 G(τ − τ 0 )J(τ 0 )D̂τ G(τ − τ 00 )J(τ 00 )
Z h i
− dτ dτ 0 x(τ )D̂τ G(τ − τ 0 )J(τ 0 ) + G(τ − τ 0 )J(τ 0 )D̂τ x(τ )
Z Z Z
0 0 0
= dτ x(τ )D̂τ x(τ ) + dτ dτ G(τ − τ )J(τ )J(τ ) − 2 dτ x(τ )J(τ ) , (108)

where in the Rlast step we have used (106) andR integrated by parts twice to simplify the last term in the
second line ( dτ dτ 0 G(τ − τ 0 )J(τ 0 )D̂τ x(τ ) = dτ dτ 0 x(τ ) D̂τ G(τ − τ 0 )J(τ 0 )). The upshot is that
Z Z Z Z
dτ x(τ )D̂τ x(τ ) − 2 dτ x(τ )J(τ ) = dτ y(τ )D̂τ y(τ ) − dτ dτ 0 J(τ )G(τ − τ 0 )J(τ 0 ). (109)

On the other hand, the Jacobian of the change of variables (107) is 1 as we are shifting all paths by the
same constant (you can show this directly by going back to the definition of the path integral in terms of
multiple Gaussian integrals). Hence we have Dy(τ ) = Dx(τ ) and our generating functional becomes

1
dτ dτ 0 J(τ )G(τ −τ 0 )J(τ 0 )
R
W [J] = W [0] e 2 .
(110)

Now we are ready to calculate (105). The average position is zero


Z
1 δ 1
dτ dτ 0 δ(τ )G(τ − τ 0 )J(τ 0 ) + J(τ )G(τ − τ 0 )δ(τ 0 ) W [J]
 
hx̂iβ = W [J] = = 0. (111)
W [0] δJ(0) 2
J=0 J=0

Here we have used that


δJ(τ )
= δ(τ − τ 0 ). (112)
δJ(τ 0 )
The expression (111) vanishes, because we have a “left over” J and obtain zero when setting all sources to
zero in the end of the calculation. By the same mechanism we have

hx̂2n+1 iβ = 0. (113)

Next we turn to

1 δ δ
hx̂2 iβ = W [J]
W [0] δJ(0) δJ(0)
J=0
Z
1 δ 1
dτ dτ 0 δ(τ )G(τ − τ 0 )J(τ 0 ) + J(τ )G(τ − τ 0 )δ(τ 0 ) W [J] = G(0). (114)
 
=
W [0] δJ(0) 2
J=0

So the means square deviation of the oscillator’s position is equal to the Green’s function evaluated at zero.
To determine G(τ ) we need to solve the differential equation (106). As G(0) = G(β~) we are dealing with
a periodic function and therefore may employ a Fourier expansion

1 X
G(τ ) = √ gn e2πinτ /β~ . (115)
β~ n=−∞

Substituting this into the differential equation gives



"  2 #
1 X m 2πn κ
D̂G(τ ) = √ gn e2πinτ /β~ + = δ(τ ). (116)
β~ n=−∞ ~ β~ ~

13
dτ e−2πikτ /~β on both sides fixes the Fourier coefficients and we obtain
R ~β
Taking the integral 0


1 X 1 2πinτ β~
G(τ ) = 2 e , (117)
βκ n=−∞

2πn
1+ β~ω

p
where ω = κ/m. Setting τ = 0 gives the desired result
 
~ω ~ω 1 1
G(0) = = + . (118)
2κ tanh β~ω/2 κ eβ~ω − 1 2

Using equipartition
hHiβ = hT iβ + hV iβ = 2hV iβ = κhx̂2 iβ , (119)
we find that the average energy of the oscillator at temperature T is
 
1 1
hHiβ = ~ω β~ω + . (120)
e −1 2

Recalling that
1
H = ~ω n̂ + , (121)
2
where n̂ = a† a is the number operator, we recover the Bose-Einstein distribution
1
hn̂iβ = . (122)
eβ~ω −1

3.2 Correlation Functions


It is clear from the above that we can calculate more general quantities from the generating functional W [J],
namely
n n
N
Z
1 Y δ Y 1 ~β
R
W [J] = Dx(τ ) x(τj ) e− 2 0 dτ x(τ )D̂x(τ ) . (123)
W [0] δJ(τj ) W [0]
j=1 J=0 j=1

What is their significance? Graphically, the path integral in (123) is represented in Fig. 2. It consists of

τ
x(hβ )

τn ..
x(τn )
.
τ3 x( τ3 )
τ2
τ1
x( 0 )

Figure 2: Path integral corresponding to (123).

14
several parts. The first part corresponds to propagation from x(0) to x(τ1 ) and the associated propagator is

hx(τ1 )|e−Hτ1 /~ |x(0)i. (124)

The second part corresponds to propagation from x(τ1 ) to x(τ2 ), and we have a multiplicative factor of x(τ1 )
as well. This is equivalent to a factor

hx(τ2 )|e−H(τ2 −τ1 )/~ x̂|x(τ1 )i. (125)

Repeating this analysis for the other pieces of the path we obtain
 
n
Y
 hx(τj+1 )|e−H(τj+1 −τj )/~ x̂|x(τj )i hx(τ1 )|e−Hτ1 /~ |x(0)i , (126)
j=1

where τn+1 = ~β. Finally, in order to represent the full path integral (123) we need R to integrate over
the intermediate positions x(τj ) and impose periodicity of the path. Using that 1 = dx|xihx| and that
W [0] = Z(β) we arrive at
Z
1
dx(0)hx(0)|e−H(β−τn )/~ x̂e−H(τn −τn−1 )/~ x̂ . . . x̂e−H(τ2 −τ1 )/~ x̂e−Hτ1 /~ |x(0)i
Z(β)
1 h i
= Tr e−βH x̄(τ1 )x̄(τ2 ) . . . x̄(τn ) , (127)
Z(β)

where we have defined operators


x̄(τj ) = eHτj /~ x̂e−Hτj /~ . (128)
There is one slight subtlety: in the above we have used implicitly that τ1 < τ2 < . . . < τn . On the other
hand, our starting point (123) is by construction symmetric in the τj . The way to fix this is to introduce a
time-ordering operation Tτ , which automatically arranges operators in the “right” order. For example

Tτ x̄(τ1 )x̄(τ2 ) = θ(τ1 − τ2 )x̄(τ1 )x̄(τ2 ) + θ(τ2 − τ1 )x̄(τ2 )x̄(τ1 ), (129)

where θ(x) is the Heaviside theta function. Then we have

n
1 Y δ 1 h
−βH
i
W [J] = Tr e Tτ x̄(τ1 )x̄(τ2 ) . . . x̄(τn ) .
W [0] δJ(τj ) Z(β)
j=1 J=0
(130)

Finally, if we analytically continue from imaginary time to real time τj → itj , the operators x̄(τ ) turn into
Heisenberg-picture operators
it it
x̂(t) ≡ e ~ H x̂e− ~ H . (131)
The quantities that we get from (130) after analytic continuation are called n-point correlation functions

1 h i
hT x̂(t1 )x̂(t2 ) . . . x̂(tn )iβ ≡ Tr e−βH T x̂(t1 )x̂(t2 ) . . . x̂(tn ) .
Z(β)
(132)

Here T is a time-ordering operator that arranges the x̂(tj )’s in chronologically increasing order from right
to left. Such correlation functions are the central objects in both quantum field theory and many-particle
quantum physics.

15
3.2.1 Wick’s Theorem
Recalling that
1
dτ dτ 0 J(τ )G(τ −τ 0 )J(τ 0 )
R
W [J] = W [0] e 2 , (133)
then taking the functional derivatives, and finally setting all sources to zero we find that
n
1 Y δ X
W [J] = G(τP1 − τP2 ) . . . G(τPn−1 − τPn ) . (134)
W [0] δJ(τj )
j=1 J=0 P (1,...,n)

Here the sum is over all possible pairings of {1, 2, . . . , n} and G(τ ) is the Green’s function (117). In particular
we have
hTτ x̄(τ1 )x̄(τ2 )iβ = G(τ1 − τ2 ). (135)
The fact that for “Gaussian theories” 3 like the harmonic oscillator n-point correlation functions can be
expressed as simple products over 2-point functions is known as Wick’s theorem.

3.3 Perturbation Theory and Feynman Diagrams


Let us now consider the anharmonic oscillator
p̂2 κ λ
H= + x̂2 + x̂4 . (136)
2m 2 4!
As you know from QM2, this Hamiltonian is no longer exactly solvable. What we want to do instead is
p̂2
perturbation theory for small λ > 0. As the Hamiltonian is of the form H = 2m + V (x̂) our previous
construction of the path integral applies. Our generating functional becomes
Z
1 ~β λ 4
R ~β
Wλ [J] = N Dx(τ ) e− 2 0 dτ [x(τ )D̂x(τ )−2J(τ )x(τ )]− 4!~ 0 dτ x (τ ) .
R
(137)

The partition function is


Zλ (β) = Wλ [0] . (138)
The idea is to expand (137) perturbatively in powers of λ
Z Z ~β  
λ 1 ~β
dτ x (τ ) + . . . e− 2 0 dτ [x(τ )D̂x(τ )−2J(τ )x(τ )]
R
0 4 0
Wλ [J] = N Dx(τ ) 1 −
4!~ 0
Z " Z ~β  4 #
λ δ 1 ~β
+ . . . e− 2 0 dτ [x(τ )D̂x(τ )−2J(τ )x(τ )]
R
0
= N Dx(τ ) 1 − dτ 0
4!~ 0 δJ(τ )
Z h i 4
− λ ~β dτ 0 δJ(τ
δ
R
1 ~β
e− 2 0 dτ [x(τ )D̂x(τ )−2J(τ )x(τ )]
R
0)
= N Dx(τ ) e 4!~ 0
h i4
λ δ
dτ 0
R ~β
− 4!~ δJ(τ 0 )
= e 0
W0 [J]. (139)

We already know W0 [J]


1
dτ dτ 0 J(τ )G(τ −τ 0 )J(τ 0 )
R
W0 [J] = W0 [0] e 2 , (140)
which will enable us to work out a perturbative expansion very efficiently.
3
These are theories in which the Lagrangian is quadratic in the generalized co-ordinates.

16
3.3.1 Partition Function of the anharmonic oscillator
By virtue of (138) the perturbation expansion for Zλ (β) is
h i4 Z ~β  4
λ
− 4!~
R ~β
dτ 0 δ
δJ(τ 0 )
λ 0 δ
Zλ (β) = e 0
W0 [J] = Z0 (β) − dτ W0 [J]
4!~ 0 δJ(τ 0 )
J=0 J=0
1 λ 2 ~β 0 00
  Z  4  4
δ δ
− dτ dτ W0 [J] + . . .
2 4!~ 0 δJ(τ 0 ) δJ(τ 00 )
J=0
= Z0 (β) 1 + λγ1 (β) + λ2 γ2 (β) + . . . .
 
(141)

1. First order perturbation theory.


Carrying out the functional derivatives gives
Z ~β
λ λβ
λγ1 (β) = − dτ 0 [G(τ − τ )]2 = − [G(0)]2 . (142)
8~ 0 8

This contribution can be represented graphically by a Feynman diagram. In order to do so we introduce


the following elements:

(a) The two-point function G(τ − τ 0 ) is represented by a line running from τ to τ 0 .

λ
R ~β
(b) The interaction vertex − 4!~ 0 dτ is represented by

Figure 3: Graphical representation of the interaction vertex.

Combining these two elements, we can express the integral λγ1 (β) by the diagram

Figure 4: Feynman diagram for the 1st order perturbative contribution to the partition function.

Here the factor of 3 is a combinatorial factor associated with the diagram.

2. Second order perturbation theory.


To second order we obtain a number of different contributions upon taking the functional derivatives.
The full second order contribution is

17
 2 Z ~β Z ~β
1 λ
2
λ γ2 (β) = 72 dτ dτ 0 G(τ − τ )G2 (τ − τ 0 )G(τ 0 − τ 0 )
2 4!~ 0 0
 2 Z ~β Z ~β
1 λ
+ 24 dτ dτ 0 G4 (τ − τ 0 )
2 4!~ 0 0
 2 Z ~β 2
1 λ 2
+ 9 dτ G (τ − τ ) . (143)
2 4!~ 0

The corresponding Feynman diagrams are shown in Fig.5. They come in two types: the first two are
connected, while the third is disconnected.

Figure 5: Feynman diagram for the 2nd order perturbative contribution to the partition function.

The point about the Feynman diagrams is that rather than carrying out functional derivatives and then
representing various contributions in diagrammatic form, in practice we do the calculation by writing down
the diagrams first and then working out the corresponding integrals! How do we know what diagrams to
draw? As we are dealing with the partition function, we can never produce a diagram with a line sticking
out: all (imaginary) times must be integrated over. Such diagrams are sometimes called vacuum diagrams.
Now, at first order in λ, we only have a single vertex, i.e. a single integral over τ . The combinatorics works
out as follows:

1. We have to count the number of ways of connecting a single vertex to two lines, that reproduce the
diagram we want.

2. Let us introduce a short-hand notation


 
1
J G J 1 1
W [J] = W [0]e 2 1 12 2 = W [0] 1 + J1 G12 J2 + 3 J1 G12 J2 J3 G34 J4 + . . . . (144)
2 2

The last term we have written is the one that gives rise to our diagram, so we have a factor
1
(145)
23
to begin with.

18
3. Now, the combinatorics of acting with the functional derivatives is the same as the one of connecting
a single vertex to two lines. There are 4 ways of connecting the first line to the vertex, and 3 ways of
connecting the second. Finally there are two ways of connecting the end of the first line to the vertex
as well. The end of the second line must then also be connected to the vertex to give our diagram,
but there is no freedom left. Altogether we obtain a factor of 24. Combining this with the factor of
1/8 we started with gives a combinatorial factor of 3. That’s a Bingo!

3.3.2 Green’s Function of the anharmonic oscillator


Homework problem?

Part II
Many-Particle Quantum Mechanics
In the basic QM course you encountered only quantum systems with very small numbers of particles. In
the harmonic oscillator problem we are dealing with a single QM particle, when solving the hydrogen atom
we had one electron and one nucleus. Perhaps the most important field of application of quantum physics
is to systems of many particles. Examples are the electronic degrees of freedom in solids, superconductors,
trapped ultra-cold atomic gases, magnets and so on. The methods you have encountered in the basic QM
course are not suitable for studying such problems. In this part of the course we introduce a framework,
that will allow us to study the QM of many-particle systems. This new way of looking at things will also
reveal very interesting connections to Quantum Field Theory.

4 Second Quantization
The formalism we develop in the following is known as second quantization.

4.1 Systems of Independent Particles


You already know from second year QM how to solve problems involving independent particles
N
X
H= Hj (146)
j=1

where Hj is the Hamiltonian on the j’th particle, e.g.


p̂2j ~2 2
Hj = + V (r̂j ) = − ∇ + V (r̂j ). (147)
2m 2m j
The key to solving such problems is that [Hj , Hl ] = 0. We’ll now briefly review the necessary steps,
switching back and forth quite freely between using states and operators acting on them, and the position
representation of the problem (i.e. looking at wave functions).
• Step 1. Solve the single-particle problem
Hj |φl i = El |φi . (148)
The corresponding wave functions are
φl (rj ) = hrj |φl i. (149)
The eigenstates form an orthonormal set
Z
hφl |φm i = δl,m = dD rj φ∗l (rj )φm (rj ). (150)

19
• Step 2. Form N -particle eigenfunctions as products
   
N
X N
X
 Hj  φl1 (r1 )φl2 (r2 ) . . . φlN (rN ) =  Elj  φl1 (r1 )φl2 (r2 ) . . . φlN (rN ) . (151)
j=1 j=1

This follows from the fact that in the position representation Hj is a differential operator that acts
only on the j’th position rj . The corresponding eigenstates are tensor products

|l1 i ⊗ |l2 i ⊗ · · · ⊗ |lN i. (152)

• Step 3. Impose the appropriate exchange symmetry for indistinguishable particles, e.g.

1
ψ± (r1 , r2 ) = [φl (r1 )φm (r2 ) ± φl (r2 )φm (r1 )] . (153)
2
Generally we require
ψ(. . . , ri , . . . , rj , . . . ) = ±ψ(. . . , rj , . . . , ri , . . . ) , (154)
where the + sign corresponds to bosons and the − sign to fermions. This is achieved by taking
X
ψl1 ...lN (r1 , . . . , rN ) = N (±1)|P | φlP1 (r1 ) . . . φlPN (rN ),
P ∈SN
(155)

where the sum is over all permutations of (1, 2, . . . , N ) and |P | is the number of pair exchanges required
to reduce (P1 , . . . , PN ) to (1, . . . , N ). The normalization constant N is
1
N =√ , (156)
N !n1 !n2 ! . . .

where nj is the number of times j occurs in the set {l1 , . . . , lN }. For fermions the wave functions can
be written as Slater determinants
 
φl1 (r1 ) . . . φl1 (rN )
1
ψl1 ...lN (r1 , . . . , rN ) = √ det  ... ..
. (157)
 
.
N!
φlN (r1 ) . . . φlN (rN )

The states corresponding to (155) are


X
|l1 , . . . , lN i = N (±1)|P | |lP1 i ⊗ · · · ⊗ |lPN i .
P ∈SN
(158)

4.1.1 Occupation Number Representation


By construction the states have the symmetry

|lQ1 . . . lQN i = ±|l1 . . . lN i , (159)

where Q is an arbitrary permutation of (1, . . . , N ). As the overall sign of state is irrelevant, we can therefore
choose them without loss of generality as

| |1 .{z
. . 1} |2 .{z . . 3} 4 . . . i ≡ |n1 n2 n3 . . . i.
. . 2} 3| .{z (160)
n1 n2 n3

20
In (160) we have as many nj ’s as there are single-particle eigenstates, i.e. dim H 4 . For fermions we have
nj = 0, 1 only as a consequence of the Pauli principle. The representation (160) is called occupation number
representation. The nP j ’s tell us how many particles are in the single-particle state |ji. By construction the
states {|n1 n2 n3 . . . i| j nj = N } form an orthonormal basis of our N -particle problem
Y
hm1 m2 m3 . . . |n1 n2 n3 . . . i = δnj ,mj , (161)
j

where we have defined hm1 m2 m3 . . . |=|m1 m2 m3 . . . i† .

4.2 Fock Space


We now want to allow the particle number to vary. The main reason for doing this is that we will encounter
physical problems where particle number is in fact not conserved. Another motivation is that experimental
probes like photoemission change particle number, and we want to be able to describe these. The resulting
space of states is called Fock Space.

1. The state with no particles is called the vacuum state and is denoted by |0i.
P
2. N -particle states are |n1 n2 n3 . . . i with j nj = N .

4.2.1 Creation and Annihilation Operators


Given a basis of our space of states we can define operators by specifying their action on all basis states.

• particle creation operators with quantum number l


(
0 if nl = 1 for fermions
c†l |n1 n2 . . . i = √ Pl−1
nj
nl + 1(±1) j=1 |n1 n2 . . . nl + 1 . . . i else.
(162)

Here the + (−) sign applies to bosons (fermions).

• particle annihilation operators with quantum number l

√ Pl−1
nj
cl |n1 n2 . . . i = nl (±1) j=1 |n1 n2 . . . nl − 1 . . . i .
(163)

We note that (163) follows from (162) by

hm1 m2 . . . |c†l |n1 n2 . . . i∗ = hn1 n2 . . . |cl |m1 m2 . . . i . (164)

The creation and annihilation operators fulfil canonical (anti)commutation relations

[cl , cm ] = 0 = [c†l , c†m ] , [cl , c†m ] = δl,m bosons,


(165)

{cl , cm } = cl cm + cm cl = 0 = {c†l , c†m } , {cl , c†m } = δl,m fermions.


(166)

4
Note that this is different from the particle number N .

21
Let us see how to prove these: let us consider the fermionic case and take l > m. Then
√ Pm−1
c†l cm | . . . nl . . . nm . . . i = c†l nm (−1) j=1 nj | . . . nl . . . nm − 1 . . . i
√ √ Pm−1
= nl + 1 nm (−1) j=l+1 nj | . . . nl + 1 . . . nm − 1 . . . i. (167)

Similarly we have
√ √ Pm−1
cm c†l | . . . nl . . . nm . . . i = nl + 1 nm (−1)1+ j=l+1 nj | . . . nl + 1 . . . nm − 1 . . . i. (168)

This means that for any basis state |n1 n2 . . . i we have

{c†l , cm }|n1 n2 . . . i = 0 , if l > m. (169)

This implies that


{c†l , cm } = 0 , if l > m. (170)
The case l < m works in the same way. This leaves us with the case l = m. Here we have
√ Pl−1
c†l cl | . . . nl . . . nm . . . i = c†l nl (−1) j=1 nj | . . . nl − 1 . . . i = nl | . . . nl . . . i. (171)

( √ Pl−1
† cl nl + 1(−1) j=1 nj | . . . nl + 1 . . . i if nl = 0 ,
cl cl | . . . nl . . . i =
0 if nl = 1 ,
(
| . . . nl . . . i if nl = 0 ,
= (172)
0 if nl = 1 ,

Combining these we find that


{c†l , cl }| . . . nl . . . i = | . . . nl . . . i , (173)
and as the states | . . . nl . . . i form a basis this implies (here 1 really means 1 times the identity operator 1).

{c†l , cl } = 1. (174)

4.2.2 Basis of the Fock Space


We are now in a position to write down our Fock space basis in a very convenient way.

• Fock vacuum (state without any particles)


|0i. (175)

• Single-particle states
1 0 . . . i = c†l |0i .
|0 . . . 0 |{z} (176)
l

• N -particle states
Y 1  † nj
|n1 n2 . . . i = c |0i . (177)
nj ! j
p
j

22
4.2.3 Change of Basis
The Fock space is built from a given basis of single-particle states

single-particle states |li N-particle states |n1 n2 . . . i Fock Space


−→ −→ . (178)
You know from second year QM that it is often convenient to switch from one basis to another, e.g. from
energy to momentum eigenstates. This is achieved by a unitary transformation
{|li} −→ {|αi} , (179)
where X
|αi = hl|αi |li. (180)
| {z }
l Ulα
By construction X X

Ulα Uαm = hl|αihα|mi = hl|mi = δlm . (181)
α α
We now want to “lift” this unitary transformation to the level of the Fock space. We know that
|li = c†l |0i ,
|αi = d†α |0i . (182)
On the other hand we have
Ulα c†l |0i.
X X
|αi = Ulα |li = (183)
l l
This suggests that we take
Ulα c†l ,
X
d†α =
l
(184)
and this indeed reproduces the correct transformation for N -particle states. Taking the hermitian conjugate
we obtain the transformation law for annihilation operators


X
dα = Uαl cl .
l
(185)
We emphasize that these transformation properties are compatible with the (anti)commutation relations (as
they must be). For fermions

{dα , d†β } =
X † X †
Uαl Umβ {cl , c†m } = Uαl Ulβ = (U † U )αβ = δα,β . (186)
| {z }
l,m l
δl,m

4.3 Second Quantized Form of Operators


In the next step we want to know how observables such as H, P , X etc act on the Fock space.

4.3.1 Occupation number operators


These are the simplest hermitian operators we can build from cl and c†m . They are defined as
n̂l ≡ c†l cl . (187)

From the definition of cl and c†l it follows immediately that


n̂l |n1 n2 . . . i = nl |n1 n2 . . . i. (188)

23
4.3.2 Single-particle operators
Single-particle operators are of the form X
Ô = ôj , (189)
j

where the operator ôj acts only on the j’th particle. Examples are kinetic and potential energy operators
X p̂2j X
T̂ = , V̂ = V (x̂j ). (190)
2m
j j

We want to represent Ô on the Fock space built from single-particle eigenstates |αi. We do this in two steps:
• Step 1: We first represent Ô on the Fock space built from the eigenstates of ô

ô|li = λl |li = λl c†l |0i. (191)

Then, when acting on an N -particle state (158), we have


 
XN
Ô|l1 , l2 , . . . , lN i =  λj  |l1 , l2 , . . . , lN i. (192)
j=1

This is readily translated into the occupation number representation


" #
X
Ô|n1 n2 . . . i = nk λk |n1 n2 . . .i. (193)
k

As |n1 n2 . . . i constitute a basis, this together with (188) imply that we can represent Ô in the form

λk c†k ck .
X X
Ô = λk n̂k = (194)
k k

• Step 2: Now that we have a representation of Ô on the Fock space built from the single-particle states
|li, we can use a basis transformation to the basis {|αi} to obtain a representation on a general Fock
space. Using that hk|Ô|k 0 i = δk,k0 λk we can rewrite (194) in the form

hk 0 |Ô|kic†k0 ck .
X
Ô = (195)
k,k0

Then we apply our general rules for a change of single-particle basis of the Fock space

c†k =
X †
Uαk d†α . (196)
α

This gives
† 
XX X
hk 0 |Uαk Ukβ |ki d†α dβ .

Ô = 0 Ô (197)
α,β k0
| {z } |k {z }
hα| |βi

where we have used that



X
|ki = Uαk |αi . (198)
α
This gives us the final result
X
Ô = hα|Ô|βi d†α dβ .
α,β
(199)

24
We now work out a number of explicit examples of Fock space representations for single-particle operators.

1. Momentum Operators P in the infinite volume:


(i) Let us first consider P in the single-particle basis of momentum eigenstates

P̂|ki = k|ki , hp|ki = (2π~)3 δ (3) (p − k). (200)

These are shorthand notations for

P̂a |kx , ky , kz i = ka |kx , ky , kz i , a = x, y, z. (201)

and
hpx , py , pz |kx , ky , kz i = (2π~)3 δ(kx − px )δ(ky − py )δ(kz − pz ) . (202)

Using our general result for representing single-particle operators in a Fock space built from their
eigenstates (194) we have

d3 p
Z
P̂ = pc† (p)c(p) , [c† (k), c(p)} = (2π~)3 δ (3) (p − k). (203)
(2π~)3
Here we have introduced a notation
(
c† (k)c(p) − c(p)c† (k) for bosons
[c† (k), c(p)} = (204)
c† (k)c(p) + c(p)c† (k) for fermions.

(ii) Next we want to represent P̂ in the single-particle basis of position eigenstates

X̂|xi = x|xi , hx|x0 i = δ (3) (x − x0 ). (205)

Our general formula (199) gives


Z
P̂ = d3 xd3 x0 hx0 |P̂|xic† (x0 )c(x) . (206)

We can simplify this by noting that

hx0 |P̂|xi = −i~∇x0 δ (3) (x − x0 ), (207)

which allows us to eliminate three of the integrals


Z h i Z
P̂ = d xd x −i~∇x0 δ (x − x ) c (x )c(x) = d3 xc† (x) (−i~∇x ) c(x).
3 3 0 (3) 0 † 0
(208)

2. Single-particle Hamiltonian:
N
X p̂2j
H= + V (x̂j ). (209)
2m
j=1

(i) Let us first consider H in the single-particle basis of energy eigenstates H|li = El |li, |li = c†l |0i.
Our result (194) tells us that
El c†l cl .
X
H= (210)
l

25
(ii) Next we consider the position representation, i.e. we take position eigenstates |xi = c† (x)|0i as a
basis of single-particle states. Then by (199)
Z
H = d3 xd3 x0 hx0 |H|xi c† (x0 )c(x). (211)

Substituting (209) into (211) and using

hx0 |V (x̂)|xi = V (x)δ (3) (x − x0 ) , hx0 |p̂2 |xi = −~2 ∇2 δ (3) (x − x0 ) , (212)

we arrive at the position representation


 2 2 
~ ∇
Z
3 †
H= d x c (x) − + V (x) c(x).
2m
(213)

(iii) Finally we consider the momentum representation, i.e. we take momentum eigenstates |pi =
c† (p)|0i as a basis of single-particle states. Then by (199)
Z 3 3 0
d pd p
H= hp0 |H|pi c† (p0 )c(p). (214)
(2π~)6

Matrix elements of the kinetic energy operator are simple

hp0 |p̂2 |pi = p2 hp0 |pi = p2 (2π~)3 δ (3) (p − p0 ). (215)

Matrix elements of the potential can be calcuated as follows


Z Z
i i 0 0
0
hp |V̂ |pi = d xd x hp |x ihx |V̂ |xihx|pi = d3 xd3 x0
3 3 0 0 0 0
hx0 |V̂ |xi e ~ p·x− ~ p ·x
| {z }
V (x)δ (3) (x−x0 )
Z
i 0
= d3 x V (x)e ~ (p−p )·x = Ṽ (p − p0 ), (216)

where Ṽ (p) is essentially the three-dimensional Fourier transform of the (ordinary) function V (x).
Hence
Z 3 3 0
d3 p p2 †
Z
d pd p
H= 3
c (p)c(p) + Ṽ (p − p0 )c† (p0 )c(p).
(2π~) 2m (2π~)6
(217)

4.3.3 Two-particle operators


These are operators that act on two particles at a time. A good example is the interaction potential V (r̂1 , r̂2 )
between two particles at positions r1 and r2 . For N particles we want to consider
N
X
V̂ = V (r̂i , r̂j ). (218)
i<j

On the Fock space built from single-particle position eigenstates this is represented as
Z
1
V̂ = d3 rd3 r0 c† (r)c† (r0 )V (r, r0 )c(r0 )c(r).
2
(219)

26
The derivation of (219) for fermions proceeds as follows. We start with our original representation of
N -particle states (158) X
|r1 , . . . , rN i = N (−1)|P | |r1 i ⊗ . . . |rN i . (220)
P ∈SN

Then X 1X
V̂ |r1 , . . . , rN i = V (ri , rj )|r1 , . . . , rN i = V (ri , rj )|r1 , . . . , rN i . (221)
2
i<j i6=j

On the other hand we know that


N
Y
|r1 , . . . , rN i = c† (rj )|0i. (222)
j=1

Now consider
N
Y N
Y
c(r)|r1 , . . . , rN i = c(r) c† (rj )|0i = {c(r), c† (rj )}|0i , (223)
j=1 j=1

where is the last step we have used that c(r)|0i = 0. The anticommutator is
N
Y N
Y N
Y
† † † † †
{c(r), c (rj )} = {c(r), c (r1 )} c (rj ) − c (r1 ){c(r), c (r2 )} c† (rj )
j=1 j=2 j=3
N
Y −1
+... + c† (rj ){c(r), c† (rN )}. (224)
j=1

Using that {c(r), c† (rj )} = δ (3) (r − rj ) we then find

N N N missing
X Y X
n−1 (3) † n−1 (3) z}|{
c(r)|r1 , . . . , rN i = (−1) δ (r − rn ) c (rj )|0i = (−1) δ (r − rn )|r1 . . . rn . . . rN i. (225)
n=1 j6=n n=1

Hence
N N missing
X X
c† (r0 )c(r0 ) c(r)|r1 , . . . , rN i = (−1)n−1 δ (3) (r − rn ) δ (3) (r0 − rm ) |r1 . . . rn . . . rN i,
z}|{
(226)
| {z }
n=1 m6=n
number op.

and finally
N
X N
X
† † 0 0
c (r)c (r )c(r )c(r)|r1 , . . . , rN i = δ (3)
(r − rn ) δ (3) (r0 − rm ) |r1 . . . rn . . . rN i. (227)
n=1 m6=n

This implies that


Z
1 1 X
d3 rd3 r0 V (r, r0 ) c† (r)c† (r0 )c(r0 )c(r)|r1 , . . . , rN i = V (rn , rm )|r1 , . . . , rN i. (228)
2 2
n6=m

As {|r1 , . . . , rN i} form a basis, this establishes (219).

Using our formula for basis transformations (184)

hl|ri c†l ,
X
c† (r) = (229)
l

27
we can transform (219) into a general basis. We have
Z
1 X
V̂ = d3 rd3 r0 V (r, r0 )hl|rihl0 |r0 ihr0 |mihr|m0 ic†l c†l0 cm cm0 . (230)
2 0 0
ll mm

We can rewrite this by using that V̂ |ri ⊗ |r0 i = V (r, r0 )|ri ⊗ |r0 i

V (r, r0 )hl|rihl0 |r0 ihr0 |mihr|m0 i = V (r, r0 ) hl| ⊗ hl0 | |ri ⊗ |r0 i| hr0 | ⊗ hr| |mi ⊗ |m0 i|
    

= hl| ⊗ hl0 | V̂ |ri ⊗ |r0 i| hr0 | ⊗ hr| |mi ⊗ |m0 i|


     
(231)

Now we use that Z


d3 rd3 r0 |ri ⊗ |r0 i hr0 | ⊗ hr| = 1
  
(232)

to obtain
1 X 
hl| ⊗ hl0 |V̂ | mi ⊗ |m0 i c†l c†l0 cm cm0 .
  
V̂ = (233)
2 0 0
ll mm

Finally we can express everything in terms of states with the correct exchange symmetry
1 
|ll0 i = √ |li ⊗ |l0 i ± |l0 i ⊗ |li (l 6= l0 ).

(234)
2
in the form
hll0 |V̂ |mm0 ic†l c†l0 cm cm0 .
X
V̂ =
ll0 mm0
(235)
The representation (235) generalizes to arbitrary two-particle operators O.

4.4 Lattice Models


These constitute a very important class of models in condensed matter physics. To see what these are about
let us consider a crystal lattice, in which the ion cores are separated by a distance that is large compared to
the Bohr radius of the valence electrons. In this atomic limit the electron wave functions are substantially
different from zero only in close vicinity of the lattice sites Rn , i.e. the positions of the ion cores (which we
take to be constant for simplicity). It is convenient to use a basis of atomic eigenstates |ψR,n i

5 Application I: The Ideal Fermi Gas


Consider an ideal gas of spin-1/2 fermions. The creation operators in the momentum representation (in the
infinite volume) are
c†σ (p) , σ =↑, ↓ . (236)
They fulfil canonical anticommutation relations

{cσ (p), cτ (k)} = 0 = {c†σ (p), c†τ (k)} , {cσ (p), c†τ (k)} = δσ,τ (2π~)3 δ (3) (k − p). (237)

The Hamiltonian, in the grand canonical ensemble, is

d3 p
Z  2  X
p
H − µN̂ = −µ c†σ (p)cσ (p). (238)
(2π~)3 2m
| {z } σ=↑,↓
(p)

28
Here µ > 0 is the chemical potential. As c†σ (p)cσ (p) = n̂σ (p) is the number operator for spin-σ fermions
with momentum p, we can easily deduce the action of the Hamiltonian on states in the Fock space:
h i
H − µN̂ |0i = 0 ,
h i
H − µN̂ c†σ (p)|0i = (p) c†σj (p)|0i ,
n
" n # n
h iY X Y
H − µN̂ c†σ (pj )|0i = (pk ) c†σj (pj )|0i . (239)
j=1 k=1 j=1

5.1 Quantization in a large, finite volume


In order to construct the ground state and low-lying excitations, it is convenient to work with a discrete set
of momenta. This is achieved by considering the gas in a large, periodic box of linear size L. Momentum
eigenstates are obtained by solving the eigenvalue equation e.g. in the position representation

p̂ψk (r) = −i~∇ψk (r) = kψk (r). (240)

The solutions are plane waves


i
ψk (r) = e ~ k·r . (241)
Imposing periodic boundary conditions (ea is the unit vector in a direction)

ψk (r + Lea ) = ψk (r) for a = x, y, z, (242)

gives quantization conditions for the momenta k


i 2π~na
e ~ Lka = 1 ⇒ ka = , a = x, y, z. (243)
L
To summarize, in a large, periodic box the momenta are quantized as
 
n
2π~  x 
k= ny (244)
L
nz

Importantly, we can now normalize the eigenstates to 1, i.e.


1 i
ψk (r) = 3 e ~ k·r . (245)
L 2

Hence Z
hk|k0 i = d3 rψk∗ (r)ψk0 (r) = δk,k0 . (246)

As a consequence of the different normalization of single-particle states, the anticommutation relations of


creation/annihilation operators are changed and now read

{cσ (p), cτ (k)} = 0 = {c†σ (p), c†τ (k)} , {c†σ (p), cτ (k)} = δσ,τ δk,p . (247)

The Hamiltonian is
X X
H − µN̂ = (p) c†σ (p)cσ (p).
p σ=↑,↓
(248)
We define a Fermi momentum by
p2F
= µ. (249)
2m

29
5.1.1 Ground State
Then the lowest energy state is obtained by filling all negative energy single-particle states, i.e.
Y
|GSi = c†σ (p)|0i.
|p|<pF ,σ
(250)
The ground state energy is X X
EGS = (p). (251)
σ |p|<pF

This is extensive (proportional to the volume) as expected. You can see the advantage of working in a finite
volume: the product in (250) involves only a finite number of factors and the ground state energy is finite.
The ground state momentum is X X
PGS = p = 0. (252)
σ |p|<pF

The ground state momentum is zero, because is a state with momentum p contributes to the sum, then so
does the state with momentum −p.
ε (p)

Figure 6: Ground state in the 1 dimensional case. Blue circles correspond to “filled” single-particle states.

5.1.2 Excitations
• Particle excitations
c†σ (k)|GSi with |k| > pF . (253)
Their energies and momenta are
E = EGS + (k) > EGS , P = k. (254)

• Hole excitations
cσ (k)|GSi with |k| < pF . (255)
Their energies and momenta are
E = EGS − (k) > EGS , P = −k. (256)

• Particle-hole excitations
c†σ (k)cτ (p)|GSi with |k| > pF > |p|. (257)
Their energies and momenta are
E = EGS + (k) − (p) > EGS , P = k − p. (258)

30
ε (p) ε (p) ε (p)

p p p

Figure 7: Some simple excited states: (a) particle (b) hole (c) particle-hole.

5.1.3 Density Correlations


Consider the single-particle operator
o = |rihr| (259)
It represents the particle density at position |ri as can be seen by acting on position eigenstates. In second
quantization it is
XZ X
ρ(r) = d3 r0 d3 r00 hr0 |o|r00 i c†σ (r0 )cσ (r00 ) = c†σ (r)cσ (r). (260)
σ σ

1. One-point function.
We now want to determine the expectation value of this operator in the ground state
X
hGS|ρ(r)|GSi = hGS|c†σ (r)cσ (r)|GSi. (261)
σ

A crucial observation is that the ground state has a simple description in terms of the Fock space built
from momentum eigenstates. Hence what we want to do is to work out the momentum representation
of ρ(r). We know from our general formula (185) that
X
cσ (r) = hr|pi cσ (p). (262)
| {z }
p i
L−3/2 e ~ p·r

Substituting this as well as the analogous expression for the creation operator into (261), we obtain
X 1 X i 0
hGS|ρ(r)|GSi = 3
e ~ (p−p )·r hGS|c†σ (p0 )cσ (p)|GSi. (263)
σ
L 0 p,p

For the expectation value hGS|c†σ (p0 )cσ (p)|GSi to be non-zero, we must have that c†σ (p0 )cσ (p)|GSi
reproduces |GSi itself. The only way this is possible is if |p| < pF (so that the c pokes a hole in the
Fermi sea) and p0 = p (so that the c† precisely fills the hole made by the c). Hence
X 1 X i 1 X N
(p−p0 )·r
hGS|ρ(r)|GSi = 3
e ~ δp,p0 θ(pF − |p|) = 2
|{z} L3 θ(pF − |p|) = . (264)
σ
L 0 p
L
p,p spin

So our expectation value gives precisely the particle density. This is expected because our system is
translationally invariant and therefore hGS|ρ(r)|GSi cannot depend on r.

2. Two-point function.

31
Next we want to determine the two-point function
X 1 XX i 0 i 0 0
hGS|ρ(r)ρ(r0 )|GSi = 6
e ~ (p−p )·r e ~ (k−k )·r hGS|c†σ (p0 )cσ (p)c†σ (k0 )cσ (k)|GSi. (265)
0
L 0 0
σ,σ p,p k,k

The expectation value hGS|c†σ (p0 )cσ (p)c†σ (k0 )cσ (k)|GSi can be calculated by thinking about how the
creation and annihilation operators act on the ground state, and then concentrating on the processes
that reproduce the ground state itself in the end (see Fig. 8).

Figure 8:

The result is

hGS|c†σ (p0 )cσ (p)c†σ (k0 )cσ (k)|GSi = δk,k0 δp,p0 θ(pF − |p|)θ(pF − |k|)
+δσ,τ δp,k0 δk,p0 θ(|k0 | − pF )θ(pF − |k|). (266)

Substituting this back in to (265) gives


X 1 X
hGS|ρ(r)ρ(r0 )|GSi = θ(pF − |k|)θ(pF − |p|)
0
L6
σ,σ k,p
X 1 X i 0 0
+ 6
θ(|k| − pF )θ(pF − |k0 |)e ~ (k−k )·(r−r )
σ
L 0 k,k
1 X i k·(r−r0 ) 1 X − i k0 ·(r−r0 )
= hGS|ρ(r)|GSihGS|ρ(r0 )|GSi + 2 e~ e ~ . (267)
L3 L3 0
|k|>pF |k |<pF

32
Evaluting the k sums for large L: The idea is to turn sums into integrals
Z ∞ Z π Z 2π
d3 k θ(pF − ~p) ip|R| cos ϑ
Z
1 X i k·R i
k·R 2
e ~ −→ θ(pF − |k|)e ~ = dpp dϑ sin ϑ dϕ e
L3 (2π~)3 0 0 0 (2π)3
|k|<pF
Z pF /~
dp 2 2 sin(p|R|)
= p . (268)
0 (2π)2 p|R|

Here we have introduced spherical polar coordinates such that the z-axis of our co-ordinate system is along
the R direction, and

kx = ~p sin ϑ cos ϕ ,
ky = ~p sin ϑ sin ϕ ,
kz = ~p cos ϑ. (269)

The other sum works similarly


1 X i k·R 1 X i k·R 1 X i k·R
e ~ = e ~ − e~ . (270)
L3 L3 L3
|k|>pF k |k|<pF

The second part is evaluated above, while the first part is


1 X i k·R
e~ = δ (3) (R). (271)
L3
k

The equality can be proved by multiplying both sides by a test-function f (R) and then integrating over R:
Z Z
3 1 X i k·R 1 X i 1 X˜
d R 3 e~ f (R) = 3 d3 Re ~ k·R f (R) = 3 fk = f (0). (272)
L L L
k k k

Here we have used standard definitions for Fourier series, cf Riley/Hobson/Bence 12.7.

Using these simplifications for large L we arrive at our final answer

⟨GS|ρ(r)ρ(r′)|GS⟩ = ⟨GS|ρ(r)|GS⟩² + ⟨GS|ρ(r)|GS⟩ δ^(3)(r − r′) − 2 [ ∫_0^{pF/ħ} dp/(2π)² p² · 2 sin(p|r − r′|)/(p|r − r′|) ]² .   (273)
The first two terms are the same as for a classical ideal gas, while the third contribution is due to the
fermionic statistics (Pauli exclusion: “fermions don’t like to be close to one another”).
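As a quick numerical illustration of (273) — a minimal sketch that is not part of the notes, written with the assumed conventions ħ = 1 and pF = 1 — one can evaluate the bracketed integral and watch the exchange hole develop:

# Minimal numerical check of Eq. (273) for the free Fermi gas; a sketch, not part
# of the notes. Assumed units: hbar = 1, p_F = 1, so rho = p_F^3/(3 pi^2).
import numpy as np
from scipy.integrate import quad

pF = 1.0
rho = pF**3 / (3 * np.pi**2)              # ground-state density N/L^3 (two spin states)

def F(R):
    # bracketed integral in (273): int_0^pF dp p^2/(2 pi)^2 * 2 sin(pR)/(pR)
    return quad(lambda p: 2*p*np.sin(p*R)/(R*(2*np.pi)**2), 0.0, pF)[0]

def F_closed(R):
    # closed form of the same integral: [sin(pF R) - pF R cos(pF R)] / (2 pi^2 R^3)
    x = pF * R
    return (np.sin(x) - x*np.cos(x)) / (2 * np.pi**2 * R**3)

for R in [0.1, 1.0, 5.0, 20.0]:
    g = rho**2 - 2*F(R)**2                # (273) without the delta-function term
    print(R, abs(F(R) - F_closed(R)), g/rho**2)
# F agrees with its closed form; g/rho^2 rises from ~1/2 at small R (exchange hole)
# towards 1 at large distances.

The last column shows the correlator (without the delta-function contribution) increasing from ρ²/2 at short distances to ρ² at large distances, which is the quantitative content of the statement above.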

6 Application II: Weakly Interacting Bosons


As you know from Statistical Mechanics, the ideal Bose gas displays the very interesting phenomenon of
Bose condensation. This has been observed in systems of trapped Rb atoms and led to the award of the
Nobel prize in 2001 to Ketterle, Cornell and Wieman. The atoms in these experiments are bosonic, but the
atom-atom interactions are not zero. We now want to understand the effects of interactions in the framework
of a microscopic theory. The kinetic energy operator is expressed in terms of creation/annihilation operators
of single-particle momentum eigenstates as
T̂ = Σ_p p²/(2m) c†(p)c(p).   (274)

Here we have assumed that our system is enclosed in a large, periodic box of linear dimension L. The
boson-boson interaction is most easily expressed in position space
V̂ = (1/2) ∫ d³r d³r′ c†(r) c†(r′) V(r, r′) c(r′) c(r).   (275)
A good model for the potential V(r, r′) is to take it to be of the form

V(r, r′) = U δ^(3)(r − r′),   (276)

i.e. bosons interact only if they occupy the same point in space. Changing to the momentum space
description
c(r) = (1/L^{3/2}) Σ_p e^{(i/ħ)p·r} c(p),   (277)

we have
V̂ = (U/2L³) Σ_{p1,p2,p3} c†(p1) c†(p2) c(p3) c(p1 + p2 − p3).   (278)

6.1 Ideal Bose Gas


For U = 0 we are dealing with an ideal Bose gas and we know that the ground state is a condensate: all
particles occupy the lowest-energy single-particle state, i.e. the zero-momentum state
|GS⟩₀ = (1/√(N!)) (c†(p = 0))^N |0⟩.   (279)
So p = 0 is special, and in particular we have

₀⟨GS|c†(p = 0) c(p = 0)|GS⟩₀ = N.   (280)

6.2 Bogoliubov Approximation


For small U > 0 we expect the Bose-Einstein condensate to persist, i.e. we expect

⟨GS|c†(p = 0) c(p = 0)|GS⟩ = N0 ≫ 1.   (281)

However,
[c† (0)c(0), V̂ ] 6= 0, (282)
so that the number of p = 0 bosons is not conserved, and the ground state |GSi will be a superposition of
states with different numbers of p = 0 bosons. However, for the ground state and low-lying excited states
we will have
hΨ|c† (0)c(0)|Ψi ' N0 , (283)
where N0 , crucially, is a very large number. The Bogoliubov approximation states that for such states we in
fact have
c†(0) ≃ √N0 ,   c(0) ≃ √N0 ,   (284)
i.e. creation and annihilation operators are approximately diagonal. We then may expand H in inverse
powers of N0
H = Σ_p p²/(2m) c†(p)c(p)
    + (U/2L³) N0² + (U N0/2L³) Σ_{k≠0} [ 2c†(k)c(k) + 2c†(−k)c(−k) + c†(k)c†(−k) + c(−k)c(k) ]
    + . . .   (285)

Now use that
N0 = c†(0)c(0) = N − Σ_{p≠0} c†(p)c(p),   (286)
where N is the (conserved) total number of bosons, and define
ρ = N/L³ = density of particles.   (287)
Then our Hamiltonian becomes
H = (Uρ/2) N + Σ_{p≠0} { (p²/2m + Uρ) c†(p)c(p) + (Uρ/2) [ c†(p)c†(−p) + c(−p)c(p) ] } + . . . ,   (288)
where we denote ε(p) ≡ p²/2m + Uρ.
The Bogoliubov approximation has reduced the complicated four-boson interaction to two-boson terms.
The price we pay is that we have to deal with the “pairing” terms quadratic in creation/annihilation operators.

6.3 Bogoliubov Transformation


Consider the creation/annihilation operators defined by
b(p) = cosh(θ_p) c(p) + sinh(θ_p) c†(−p) ,   b†(−p) = sinh(θ_p) c(p) + cosh(θ_p) c†(−p) .   (289)
It is easily checked that for any choice of Bogoliubov angle θp
[b(p), b(q)] = 0 = [b† (p), b† (q)] , [b(p), b† (q)] = δp,q . (290)
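As a small aside (not in the notes), the check can be phrased as a matrix identity: collecting (c(p), c†(−p)) into a two-component vector, the commutation relations are encoded in η = diag(1, −1), and (290) holds for the transformed operators provided the 2×2 matrix of (289) obeys M η Mᵀ = η, which follows from cosh²θ_p − sinh²θ_p = 1. A minimal numerical verification (the value of θ is arbitrary):

# Sanity check (not in the notes) that (289) preserves the commutators (290):
# the transformation matrix must satisfy M eta M^T = eta with eta = diag(1, -1).
import numpy as np

theta = 0.37                                        # arbitrary Bogoliubov angle
M = np.array([[np.cosh(theta), np.sinh(theta)],
              [np.sinh(theta), np.cosh(theta)]])
eta = np.diag([1.0, -1.0])

print(np.allclose(M @ eta @ M.T, eta))              # True for any theta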
In terms of the Bogoliubov bosons the Hamiltonian becomes
H = const + (1/2) Σ_{p≠0} { [ (p²/2m + Uρ) cosh(2θ_p) − Uρ sinh(2θ_p) ] [ b†(p)b(p) + b†(−p)b(−p) ]
                          − [ (p²/2m + Uρ) sinh(2θ_p) − Uρ cosh(2θ_p) ] [ b†(p)b†(−p) + b(−p)b(p) ] } + . . .   (291)
Now we choose
tanh(2θ_p) = Uρ / (p²/2m + Uρ) ,   (292)
as this removes the b† b† + bb terms, and leaves us with a diagonal Hamiltonian
H = const + Σ_{p≠0} E(p) b†(p)b(p) + . . .   (293)
where
E(p) = √[ (p²/2m + Uρ)² − (Uρ)² ] .   (294)
We note that
E(p) −→ p²/2m   for |p| → ∞,   (295)
which tells us that at high momenta (and hence high energies) we recover the quadratic dispersion. In this
limit θp → 0, so that the Bogoliubov bosons reduce to the “physical” bosons we started with. On the other
hand
E(p) −→ √(Uρ/m) |p|   for |p| → 0.   (296)
So here we have a linear dispersion.
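A short numerical illustration of this crossover (not in the notes; units chosen so that m = 1 and Uρ = 1):

# Illustration of the Bogoliubov dispersion (294); a sketch, not in the notes.
# Assumed units: m = 1 and U*rho = 1.
import numpy as np

m, Urho = 1.0, 1.0

def E(p):
    eps = p**2/(2*m) + Urho                  # epsilon(p) as defined below (288)
    return np.sqrt(eps**2 - Urho**2)

for p in [0.01, 0.1, 1.0, 10.0]:
    print(p, E(p), np.sqrt(Urho/m)*p, p**2/(2*m))
# For p << 1 the dispersion follows the linear (sound-like) form of (296),
# for p >> 1 the free quadratic form of (295).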

6.4 Ground State and Low-lying Excitations
We note that the Hamiltonian (293) involves only creation/annihilation operators with p ≠ 0. Formally,
the Bogoliubov transformation (289) tells us that
b(0) = c(0) , (297)
because θp=0 = 0. Let us now define the vacuum state |0̃i by
b(p)|0̃i = 0 . (298)
Clearly, for p ≠ 0 we have E(p) > 0, and hence no Bogoliubov quasiparticles will be present in the ground
state. On the other hand, our basic assumption was that
b(0)|GS⟩ ≃ √N0 |GS⟩ ,   b†(0)|GS⟩ ≃ √N0 |GS⟩.   (299)
In practice that is all we need to know, as it allows us to calculate correlation functions etc. However, in
order to get an idea what the ground state really looks like, let us express it in the form

α n
√ n b† (0) |0̃i ,
X
|GSi = (300)
n=0 n!

where the normalization condition imposes Σ_{n=0}^∞ |α_n|² = 1. The approximate relations (299) then imply
that
√n α_n ≃ √N0 α_{n−1} ,   α_n ≃ √(N0/(n+1)) α_{n+1} ,   (301)
which tell us that αn ≈ 0 unless n ≈ N0 . So in the zero-momentum sector we obtain an (approximately)
equal-amplitude linear superposition of eigenstates with approximately N0 p = 0 bosons.
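To make this concrete, here is a small numerical sketch (not in the notes): reading the first relation in (301) as a recursion, α_n = √(N0/n) α_{n−1}, and normalizing gives Poissonian weights |α_n|² = e^{−N0} N0^n/n!, which are negligible except in a window of width ∼ √N0 around n ≈ N0; in that window the second relation in (301) is satisfied as well.

# Sketch (not in the notes): iterate alpha_n = sqrt(N0/n) alpha_{n-1} from (301),
# normalize, and inspect the resulting weights |alpha_n|^2 (a Poisson distribution).
import numpy as np

N0 = 400
nmax = 2 * N0
log_w = np.zeros(nmax + 1)                   # log|alpha_n|^2, to avoid overflow
for n in range(1, nmax + 1):
    log_w[n] = log_w[n - 1] + np.log(N0 / n)
w = np.exp(log_w - log_w.max())
w /= w.sum()                                 # normalized |alpha_n|^2

n = np.arange(nmax + 1)
mean = (n * w).sum()
width = np.sqrt(((n - mean)**2 * w).sum())
print(mean, width)                           # mean ~ N0 = 400, width ~ sqrt(N0) = 20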
Low-lying excited states can now be obtained by creating Bogoliubov quasiparticles, e.g.
b† (q)|GSi, (302)
is a particle-excitation with energy E(q) > 0.

6.5 Depletion of the Condensate


We started out by asserting that for small interactions U > 0 we retain a Bose-Einstein condensate, i.e. the
condensate fraction N0/N remains large. We can now check that this assumption is self-consistent. We have
N0 = N − Σ_{p≠0} c†(p)c(p).   (303)

Thus in the ground state


N0/N = 1 − (1/N) Σ_{p≠0} ⟨0̃|c†(p)c(p)|0̃⟩.   (304)
Inverting the Bogoliubov transformation we have
c(p)|0̃⟩ = [ cosh(θ_p) b(p) − sinh(θ_p) b†(−p) ] |0̃⟩ = − sinh(θ_p) b†(−p)|0̃⟩.   (305)

Using this and its hermitian conjugate equation we find

N0/N = 1 − (1/N) Σ_{p≠0} sinh²(θ_p) = 1 − (1/2N) Σ_{p≠0} [ 1/√(1 − tanh²(2θ_p)) − 1 ] = 1 − (1/2N) Σ_{p≠0} [ ε(p)/E(p) − 1 ]
     = 1 − (1/2N) Σ_{p≠0} [ 1/√(1 − (Uρ/ε(p))²) − 1 ] .   (306)
We again turn this into an integral and evaluate it in spherical polar coordinates, which gives

N0/N ≈ 1 − (π/ρ) ∫_0^∞ dp/(2πħ)³ p² (Uρ)² / (p²/2m + Uρ)² .   (307)

The integral is proportional to U^{3/2} and thus indeed small for small U.
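A quick numerical check of this scaling (not in the notes; assumed units ħ = m = ρ = 1): substituting p = √(2mUρ) x in the integral gives (π/4)(2mUρ)^{3/2}, so the integral divided by U^{3/2} should be independent of U.

# Check of the U^{3/2} scaling of the integral in (307); a sketch, not in the notes.
# Assumed units: hbar = m = rho = 1.
import numpy as np
from scipy.integrate import quad

m = rho = 1.0

def I(U):
    # int_0^infty dp p^2 (U rho)^2 / (p^2/2m + U rho)^2
    return quad(lambda p: p**2*(U*rho)**2/(p**2/(2*m) + U*rho)**2, 0.0, np.inf)[0]

for U in [0.01, 0.1, 1.0]:
    print(U, I(U), I(U)/U**1.5)              # the ratio is U-independent
print(np.pi/4 * (2*m*rho)**1.5)              # analytic value of that ratio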

7 Application III: Spinwaves in a Ferromagnet


Consider the following model of a magnetic insulator: at each site r of a D-dimensional lattice with N sites
we have a magnetic moment. In QM such magnetic moments are described by three spin operators

S^α_r ,   α = x, y, z ,   (308)

which fulfil the angular momentum commutation relations

[S^α_r , S^β_{r′}] = i δ_{r,r′} ε^{αβγ} S^γ_r .   (309)

We will assume that the spins are large in the sense that

S²_r = Σ_α (S^α_r)² = s(s + 1) ≫ 1.   (310)

An appropriate Hamiltonian for such systems was derived by Heisenberg


H = −J Σ_{⟨r,r′⟩} S_r · S_{r′} .   (311)

Here hr, r0 i denote nearest-neighbour pairs of spins and we will assume that J > 0. The model (311) is
known as the ferromagnetic Heisenberg model. Let us begin by constructing a basis of states. At each site
we have 2s + 1 states
|s − n⟩_r = (1/N) (S⁻_r)^n |↑⟩_r ,   n = 0, 1, . . . , 2s,   (312)
where N is a normalization constant and

Srz | ↑ir = s| ↑ir , (313)

and S⁻_r = S^x_r − i S^y_r is the spin lowering operator on site r. A basis of states is then given by
∏_r |s_r⟩_r ,   −s ≤ s_r ≤ s   (spin on site r).   (314)

One ground state of H is given by


|GS⟩ = ∏_r |↑⟩_r .   (315)

Its energy is
H|GS⟩ = −J Σ_{⟨r,r′⟩} s² |GS⟩ = −Js²zN |GS⟩,   (316)

where zN is the total number of bonds in our lattice. The total spin lowering operator S⁻ = Σ_r S⁻_r
commutes with H, i.e. [S⁻, H] = 0, and hence
|GS, n⟩ = (1/N) (S⁻)^n |GS⟩ ,   0 ≤ n ≤ 2sN   (317)

are ground states as well (as they have the same energy). Here N is a normalization.

Proof that |GSi is a ground state:

2Sr · Sr0 = (Sr + Sr0 )2 − S2r − S2r0 = J2 − 2s(s + 1). (318)

Here J2 is the total angular momentum squared. Its eigenvalues follow from the theory of adding angular
momenta to be
J2 |j, mi = j(j + 1)|j, mi , j = 2s, 2s − 1, . . . , 1, 0. (319)
This tells us that the maximal eigenvalue of J² is 2s(2s + 1), and by expanding |ψ⟩ in a basis of eigenstates
of J² we can easily show that
⟨ψ|J²|ψ⟩ = Σ_{j,m,j′,m′} ⟨ψ|j, m⟩⟨j, m|J²|j′, m′⟩⟨j′, m′|ψ⟩
         = Σ_{j,m} |⟨ψ|j, m⟩|² j(j + 1) ≤ 2s(2s + 1) Σ_{j,m} |⟨ψ|j, m⟩|² = 2s(2s + 1).   (320)

This tells us that


hψ|Sr · Sr0 |ψi ≤ s2 . (321)
This provides us with a bound on the eigenvalues of the Hamiltonian, as

⟨ψ|H|ψ⟩ ≥ −J Σ_{⟨r,r′⟩} s² = −Js²zN.   (322)

The state we have constructed saturates this bound, so must be a ground state.
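The bound (321) can also be confirmed numerically for a single pair of spins (a sketch, not part of the notes): building the spin-s matrices explicitly, the largest eigenvalue of S_r · S_{r′} comes out as s².

# Numerical confirmation of the bound (321) for two spins of length s; a sketch,
# not part of the notes.
import numpy as np

def spin_matrices(s):
    m = np.arange(s, -s - 1, -1)                               # S^z eigenvalues s, ..., -s
    Sz = np.diag(m)
    Sp = np.diag(np.sqrt(s*(s+1) - m[1:]*(m[1:] + 1)), k=1)    # raising operator S^+
    Sx, Sy = (Sp + Sp.T)/2, (Sp - Sp.T)/2j
    return Sx, Sy, Sz

s = 3/2
Sx, Sy, Sz = spin_matrices(s)
Id = np.eye(int(2*s) + 1)
SdotS = sum(np.kron(A, Id) @ np.kron(Id, A) for A in (Sx, Sy, Sz))
print(np.linalg.eigvalsh(SdotS).max(), s**2)                   # both equal s^2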

Let us now consider thermal expectation values of observables O


⟨O⟩_β = (1/Z(β)) Tr[ e^{−βH} O ] ,   (323)

where Z(β) = Tr[e−βH ] is the partition function and β = 1/kB T . In the T → 0 limit we have
⟨O⟩_∞ = (1/(2sN+1)) Σ_{n=0}^{2sN} ⟨GS, n|O|GS, n⟩,   (324)

i.e. we average over all ground states. This average is independent of the choice of quantization axis,
reflecting the presence of a global spin rotation symmetry. Indeed, in a rotated co-ordinate system we have
2sN
X
hO0 i∞ = hGS, n|eiα·S Oe−iα·S |GS, ni, (325)
n=0
P
where S = r Sr are the global spin operators. Now, {|GS, ni} form an orthonormal basis of the subspace
H0 of degenerate ground states of H. As e−iα·S is a unitary operator that commutes with H, we must have
2sN
X 2sN
X
−iα·S ∗
e |GS, ni = Unm |GS, mi , Unm Uml = δn,l . (326)
m=0 m=0

Substituting this back, we see that indeed hO0 i∞ = hOi∞ .


In a real system, the 2sN + 1-fold ground state degeneracy is usually broken through imperfections. In
particular, these will lead to spontaneous symmetry breaking of the spin rotational SU(2) symmetry and
the emergence of a spontaneous magnetization. A convenient mathematical description of this effect is as

follows. Imagine adding an infinitesimal magnetic field term −ε Σ_r S^z_r to the Hamiltonian. This will break the
degeneracy of the ground states, which now will have energies

E_{GS,n} = −Js²zN − εs(N − 2n).   (327)

Now consider the sequence of limits

lim_{ε→0} lim_{N→∞} [E_{GS,n} − E_{GS,0}] = 0   if lim_{N→∞} n/N = 0 ,   and = ∞ otherwise.   (328)

This means that if we define the thermodynamic limit in the above way, then the only surviving ground
states will have magnetization per site s, i.e. contain only a non-extensive number of spin flips. In all of
these remaining ground states the spin rotational symmetry has been broken.
We succeeded in finding the ground states of H because of their simple structure. For more general spin
Hamiltonians, or even the Hamiltonian (311) with a negative value of J, this will no longer work and we need
a more general, but approximate, way of dealing with such problems.

8 Holstein-Primakoff Transformation
Spin operators can be represented in terms of bosonic creation and annihilation operators in the form
S^z_r = s − a†_r a_r ,   S^+_r = S^x_r + i S^y_r = √(2s) √(1 − a†_r a_r/2s) a_r .   (329)
You can check that the bosonic commutation relations

[a_r , a†_{r′}] = δ_{r,r′}   (330)

imply that
[S^α_r , S^β_{r′}] = i δ_{r,r′} ε^{αβγ} S^γ_r .   (331)
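A direct matrix check of these statements (not in the notes): representing a_r on the truncated occupation space n = 0, . . . , 2s (the physical subspace discussed in the next paragraph), the operators defined in (329) satisfy [S^z, S^±] = ±S^± and [S^+, S^−] = 2S^z.

# Matrix check of (329)-(331) on a single site; a sketch, not part of the notes.
import numpy as np

s = 3/2
dim = int(2*s) + 1                                   # boson occupations n = 0, ..., 2s

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)         # truncated annihilation operator
n = a.conj().T @ a                                   # number operator diag(0, ..., 2s)

Sz = s*np.eye(dim) - n
Sp = np.sqrt(2*s) * np.diag(np.sqrt(1 - np.diag(n)/(2*s))) @ a   # sqrt(1 - n/2s) a
Sm = Sp.conj().T

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(Sz, Sp), Sp),
      np.allclose(comm(Sz, Sm), -Sm),
      np.allclose(comm(Sp, Sm), 2*Sz))               # True True True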
However, there is a caveat: the spaces of QM states are different! At site r we have
(S⁻_r)^n |↑⟩_r ,   n = 0, . . . , 2s   (332)

for spins, but for bosons there are infinitely many states
(a†_r)^n |0⟩_r ,   n = 0, . . . , ∞.   (333)

To make things match, we must impose the constraint that there are at most 2s bosons per site. Now we take
advantage of the fact that we have assumed s to be large: in the ground state there are no bosons present,
because
hGS|Srz |GSi = s = hGS|s − a†r ar |GSi. (334)
Low-lying excited states will only have a few bosons, so for large enough s we don’t have to worry about
the constraint. Using the Holstein-Primakoff transformation, we can rewrite H in a 1/s expansion
H = −J Σ_{⟨r,r′⟩} [ s² − s ( a†_r a_r + a†_{r′} a_{r′} − a†_r a_{r′} − a†_{r′} a_r ) ] + . . .   (335)

Here the dots indicate terms proportional to s0 , s−1 , etc. Once again using that s is large, we drop these
terms (for the time being). We then can diagonalize H by going to momentum space
a(k) = (1/√N) Σ_r e^{ik·r} a_r ,   [a(k), a†(p)] = δ_{k,p} ,   (336)

which gives
H = −Js²zN + Σ_q ε(q) a†(q)a(q) + . . .   (337)
For a simple cubic lattice the energy is

ε(q) = 2Js [3 − cos q_x − cos q_y − cos q_z] .   (338)

For small wave numbers this is quadratic

ε(q) ≈ Js |q|²   for |q| → 0.   (339)

In the context of spontaneous symmetry breaking these gapless excitations are known as Goldstone modes.
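A short numerical check of this statement (not in the notes; lattice spacing and Js set to 1, an arbitrary choice of units):

# Check of the small-|q| behaviour of the spin-wave dispersion (338); a sketch,
# not part of the notes. Assumed units: J*s = 1 and lattice spacing 1.
import numpy as np

def eps(q):
    return 2.0 * (3.0 - np.cos(q).sum())

for qx in [1.0, 0.3, 0.1, 0.03]:
    q = np.array([qx, 0.0, 0.0])
    print(qx, eps(q), eps(q)/qx**2)       # ratio -> 1, i.e. eps ~ |q|^2
print(eps(np.zeros(3)))                   # exactly 0: the mode is gapless (Goldstone)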
