Landauer 3D
Landauer 3D
Landauer 3D
ii
Chapter 1
1.1
Quantum currents
1.1.1
Probability currents
i
h
2m
x
x
(1.1)
(1.2)
describes the change in the probability Pab (t) of finding a particle in the region a < x < b at
time t, where
Z b
(1.3)
Pab (t) =
|(x, t)|2 dx,
a
provided the wave function (x, t) describing the particle is normalized. Proving this is trivial:
Z b
dPab (t)
(x, t) (x, t)
=
+
(x, t) dx
(x, t)
dt
t
t
a
Z
i
h b
2 (x, t) 2 (x, t)
(x, t)
=
(x, t) dx
2m a
x2
x2
b
i
h
(x, t) (x, t)
=
(x, t) .
(x, t)
2m
x
x
a
2
h (x,t)
= 2m
+
Going from the first to the second line one uses the Schrodinger equation i
h (x,t)
t
x2
V (x)(x, t) and its complex conjugate (the potential terms cancel), and from the second to the
third line one uses partial integration. Setting b = a + dx with dx infinitesimal, allows one to
write Eq. 1.2 as
J(x, t)
(x, t)
=
,
t
x
(1.4)
with (x, t) = |(x, t)|2 the probability density. You might recognize Eq. 1.4 as a continuity
equation, which describes the relation between a density and a current.
Probability currents may seem rather abstract, but they are easily related to something
more familiar. Suppose the particle has a charge q, then the expected charge found in the region
a < x < b at time t is Qab (t) = qPab (t).5 Defining the electrical current as I(x, t) = qJ(x, t),
Eq. 1.2 can be rewritten as
dQab (t)
= I(a, t) I(b, t).
dt
(1.5)
This makes sense; the rate of change of charge is given by the dierence between the current
flowing in from one side minus the current flowing out from the other side.
A nice thing is that, even if the wave function cannot be normalized, like the wave function of
a free particle, the probability current according to Eq. 1.1 is still a well-defined quantity. Free
particles often enter in scattering problems, where we are interested in quantities like reflection
and transmission coecients. Since the latter can be directly defined in terms of probability
currents, we can get away with using non-normalizable wave functions.6
1.1.2
Stationary states
(x, t) = (x)e h Et .
(1.6)
5
This is an expectation value in the quantum mechanical sense. One starts the wave at time 0 and at time t
one measures whether the particle is in the region a < x < b. By repeating this experiment over and over, one
can calculate the probability Pab (t). Qab (t) is then the average charge found in this region from these repeated
experiments. If you have problems imagining this then think of a particle emitter that sends out a pulse of
many (independent) particles. The averaging is done automatically and the average charge is what you measure.
6
For quantum purists: one can work with normalizable wave packets. The math then becomes ugly, and in
the proper limit the physical results will be the same as when using plane waves.
(1.7)
(1.8)
(1.9)
Since Pab is the probability of finding the particle in the interval between x = a and x = b, i.e.
an interval of length b a, we can interpret |A|2 as the probability density per unit length. It
is also called the particle density7
= |A|2 .
(1.10)
The probability current is easily calculated from its definition, Eq. 1.1
J=
k
h
hk
.
|A|2 =
m
m
(1.11)
k
h
p
=
m
m
(1.12)
(1.13)
which is the usual definition of an electrical current, namely chargevelocitydensity. For the
wave function of Eq. 1.8 both velocity and density are constant, so the wave function describes
a uniform current. Suppose q > 0; then if k > 0 the current flows to the right, if k < 0, the
current flows to the left. From now on we assume that k > 0.
Now lets go to the more complicated wave function
(x) = Aeikx + Beikx ,
(1.14)
k
h
k
h
|A|2
|B|2 ,
m
m
(1.15)
which is interpreted as a right going current minus a left going current. In a scattering problem
one would interpret the first term on the right handside of Eq. 1.14 as the incident wave and
the second term as the reflected wave. Eq. 1.15 is then interpreted as the dierence between
incident and reflected currents
J = Jin JR .
(1.16)
V ( x)
Aeik L x
Be-ikL x
FeikR x
left region
middle region
right region
x
Figure 1.1: Cartoon representing a general one-dimenssional scattering problem. In the left
region the potential is a constant V (x) = VL , in the middle region the potential V (x) can be
anything, and in the right region the potential is a constant V (x) = VR . The middle region is
called the scattering region. The left and right regions are called the left and right leads.
In the left lead we have an incoming wave AeikL x and a reflected wave BeikL x and in the right
lead we have a transmitted wave F eikR x .
The reflection coecient R is defined as the ratio between reflected and incident currents
R=
|B|2
JR
=
.
Jin
|A|2
(1.17)
Consider the scattering problem shown in Fig. 1.1. In the left region we assume that the
potential is constant V (x) = VL , in the middle region the potential can have any shape V (x),
and in the right region the potential is again constant V (x) = VR . The solution in the left region
is given by Eq. 1.14 with k replaced by kL
p
2m(E VL )
.
(1.18)
kL =
h
(1.19)
p
2m(E VR )
.
kR =
h
(1.20)
with
kR
h
|F |2 .
m
(1.21)
The transmission coecient T is defined as the ratio between transmitted and incident currents
T =
7
JT
vR |F |2
=
,
Jin
vL |A|2
(1.22)
Using a beam of N particles in which each particle is independent of the others and is described by the wave
function of Eq. 1.8, the particle density is N |A|2 , which is the number of particles to be found per unit length.
Jin = JR + JT .
(1.23)
This relation expresses the conservation of current, or: current in = current out (reflected
plus transmitted). No matter how weird the potential in the middle region is, the current going
into it has to be equal to the total current coming out of it. No particles magically appear or
disappear in the middle region. From the definitions of Eqs. 1.17 and 1.22 it is shown that Eq.
1.23 is equivalent to
1 = R + T,
(1.24)
i.e. the reflection and transmission coecients add up to 1. Since these coecients denote the
probabilities that a particle is reflected or transmitted, this simply states that particles are either
reflected or transmitted.
1.2
1.2.1
Quantum conductance
Tunnel junction
The device shown in Fig. 1.2 is called a tunnel junction. The left and right regions consist of
metals and the middle region consists of an insulator material, usually a metal-oxide.8 Such
devices can be made in a very controlled way with the middle region having a thickness of a
few nm. One is interested in electrical currents, i.e. the transport of electrons through such
junctions, or more generally in the current-voltage characteristics of such a device.9 On this
small, nanometer length scale electrons have to be considered as waves and quantum tunneling
is important. Nano-electronics is the general name of the field where one designs and studies
special devices that make use of this electron wave behavior.
We start with the simplest possible one-dimensional model of a tunnel junction. The atoms
of a material attract electrons by their nuclear Coulomb potential. The electrons in low lying
energy levels are localized around the atomic nuclei and form the atomic cores. The atomic
valence electrons experience a much weaker eective atomic core potential, which is the sum
of attractive nuclear and repulsive core electron terms. If the atoms are closely packed and
the material is suciently simple, all these atomic potentials add up to a total potential that
is relatively constant in space. As a qualitative level the potential for electrons in a material
can be approximated by a constant, which is what we are going to do in the following.10 The
constant potential depends on the sort of atoms a material is composed of, so it is dierent for
every material. The potential in the tunnel junction of Fig. 1.2 along the transport direction
can then be represented by a square barrier, as shown in Fig. 1.3.
8
Scanning tunneling microscopy (STM) uses a tunnel junction between the probe tip and a surface, where the
middle region is simply vacuum.
9
The device is called MIM, which stands for metal-insulator-metal. Using magnetic metals the device can be
applied as a magnetic field sensor, or in MRAMs (magnetic random access memories).
10
This approximation is often used for simple metals such as the alkalis, aluminium, silver and gold. The
constant potential approximation is also called the jellium approximation. It does not hold for complicated
metals such as the transition metals or for covalently bonded materials, such as silicon or carbon. To be fair, it
doesnt even hold very well for simple metals if one is interested in quantitative results.
incoming wave
transmitted wave
reflected wave
left region
middle
region
right region
Figure 1.2: Schematic representation of a tunnel junction. The yellow balls represent atoms
of a metal, the blue balls represent atoms of an insulator. The left and right regions stretch
macroscopically far into the left and right, respectively. The electron waves in the metal are
reflected or transmitted by the insulator in the middle region
V ( x)
Aeikx
Feikx
Be-ikx
left region
middle region
right region
x
Figure 1.3: Simple approximation of the potential along the transport direction of a tunnel
junction, see Fig. 1.2. In the metal (left and right regions) the potential is constant, V (x) = V1 .
In the insulator the potential is also constant, V (x) = V0 , where V0 > V1 . The incoming,
reflected and transmitted waves are given by Aeikx , Beikx and F eikx .
V0
V ( x)
V0 V
V1
left region
middle region
right region
V1 V
x
Figure 1.4: The potential when a bias voltage U is applied between the left and right leads. This
changes the potential of the right region by V = eU to V1 V with respect to the left
region; see Fig. 1.3. The voltage drop is indicated schematically. If the bias voltage is small, i.e.
V V0 V1 , then we can still use the transmission coecient T calculated for the unbiased
square barrier (given by the dashed line).
1.2.2
(1.25)
using the definition I = qJ (the charge q of an electron is e). The incoming current Iin is
given by Eq. 1.13 as
Iin = ev.
(1.26)
How large are the velocity v and density of the incoming electrons? To answer that we must
ask the more basic question: how is the incoming current in a device created? In an experimental
setup this is done by applying a voltage dierence U between the left and right regions. The left
and right regions are metals, which can be connected to the two ends of a battery, for instance.
This results in a potential drop V = eU between left and right regions, as shown in Fig.
1.4. We suppose that the temperature is zero and the metals are non-magnetic, so we have spin
degeneracy. Then v is given by the simple expression
v =
eU
V
= .
(1.27)
e2
U T,
(1.28)
One can calculate the transmission coecient T for the tilted barrier of Fig. 1.4, but this calculation is more
complicated, and for small V the result is almost the same as for the square barrier.
e2
IT
=
T.
U
(1.29)
1.2.3
I will give a very simple derivation of Eq. 1.27. We have to do a little bit of solid state physics,
but I use only simple introductory quantum mechanics language. Spin degeneracy means that
each energy level can be filled with two electrons. The non spin degenerate case is relevant for
magnetic materials. I let you work out that case yourself.
The Pauli exclusion principle and the Fermi energy
The left and right regions of a tunnel junction consist of metal wires, see Fig. 1.2. These wires
are supposed to be very, very long compared to the size of the middle region. In a simple-minded
model the potential of a metal wire looks like Fig. 1.5. The potential is approximately constant
inside the wire and it has steps at the beginning and end of the wire to keep the electrons in.
The energy levels of this square well potential are,14
En =
n2 2
h2
2 kn2
h
=
.
2m
2mL2
(1.30)
The spacing between the energy levels, En En1 , scales as 1/L2 with the length L of the
wire. If L is large, the spacing becomes very small, so from a distance the energy level spectrum
almost looks like a continuum, as illustrated by Fig. 1.5.
The wave functions are given by
r
2
1 ikn x
sin kn x =
(1.31)
e
n (x) =
eikn x .
L
i 2L
These are not exactly what we need, because they correspond to standing waves, whereas we
need traveling waves to describe currents,
see Eq. 1.8. For the incoming current we only need
the exp(ikn x) part. Setting A = 1/(i 2L), the corresponding electron density according to Eq.
1.10 is
= |A|2 =
12
1
.
2L
(1.32)
If you are more used to working with resistances, the resistance R is the inverse of the conductance, i.e.
R = 1/G, so the quantum of resistance is
h/e2 12.9k.
13
R. Landauer, Philosophical Magazine 21, 863 (1970).
14
This is actually the solution for an infinitely deep square well, whereas you might think that we need the
solution for a finite square well. However if the well is very wide and not too shallow, the infinite well is an
extremely good approximation.
9
V ( x)
L
2
L
2
EF
Figure 1.5: Schematic drawing of the potential and the energy levels of a long wire. The points
L/2 and L/2 mark the beginning and the end of the wire. The spacing between the energy
levels is so small that the energy spectrum almost looks like a continuum. EF marks the Fermi
energy, i.e. the highest level that is occupied in the ground state by an electron.
The wire is full of electrons since each of the atoms in the wire brings at least one electron
with it. Filling the energy levels according to the Pauli principle, and having N electrons in
total, the highest occupied level is E N . The highest occupied level in the ground state is called
2
vn .
(1.33)
EF V <En <EF
The factor of 2 is there because there are two electrons in each level.
15
In an ordinary metal the Fermi energy is of order 10 eV with respect to the lowest energy level of the valence
electrons.
16
One can also reverse the argument. If the Fermi energies on the left and right side would be dierent, then
a current would flow. However this current would be short-lived. By sending electrons from left to right one
occupies a level on the right side, and de-occupies a level on the left side. This would go on until the highest
occupied levels on the left and right are the same; in other words until an equilibrium is reached.
17
Again, this is valid in the linear response regime.
10
V ( x)
EF
left region
EF
middle region
right region
Figure 1.6: Tunnel junction where left and right regions are filled with electrons. The Fermi
energies EF on the left and right side are identical. The exclusion principle forbids electrons to
trespass from left to right or vice versa.
This sum in Eq. 1.33 is rather awkward, but by a trick we can turning it into an integral
X
LX
LX
vn =
vn k
vn =
Z
L
vdk,
(1.34)
where
,
(1.35)
L
see Eq. 1.30. Turning the sum into an integral is allowed because L is very large, so k is
tiny. The lower and upper bound of the integral in Eq. 1.34 should correspond to the energies
EF V and EF , whereas the integral is over dk, which is again awkward. We can however
turn it into an integral over dE, using the following trick
2 2
h k
d
2m
1
1 dE
hk
=
=
.
(1.36)
v=
m
h
dk
h dk
Collecting Eqs. 1.37, 1.34 and Eq. 1.32 in Eq. 1.33, we find for the incoming current
k = kn kn1 =
V
.
(1.38)
h
This is the required expression for the incoming current, see Eqs. 1.26 and 1.27. The tunnel
current is then given by
Iin = e
eV
T,
(1.39)
h
where the transmission coecient T needs to be calculated for the energy E = EF . Note that
with V = eU , where U is the potential dierence (in Volts), this corresponds to Eq. 1.28.
The Landauer formula, Eq. 1.29, is then derived straightforwardly.
IT = Iin T =
11
V0
V ( x)
V0 V
EF
left region
E F V
middle region
right region
Figure 1.7: Tunnel junction with an applied bias voltage. All the levels occupied by electrons in
the left region with an energy EF V < En < EF correspond to empty energy levels in the
right region. The electrons in these levels can tunnel from though the barrier from left to right.
1.3
The Landauer formula expresses the conductance in terms of a transmission coecient. In other
words, the problem of finding the conductance becomes a problem of solving the scattering
problem. In this section I review some basic elements of scattering theory in one dimension.
You probably know what a scattering matrix is, but read through the section anyway, if only to
refresh your memory.
1.3.1
Again I explain the concepts using elementary quantum mechanics only and the rectangular
barrier as an example. Consider Fig. 1.8 and let the middle region run from x = a to x = a.
We are trying to find a solution to the time-independent Schrodinger equation
E(x) +
2 d2 (x)
h
V (x)(x) = 0.
2m dx2
x < a
Aeikx + Beikx ;
x
x
Ce + De ; a < x < a
(x) =
x>a
F eikx + Geikx ;
(1.40)
(1.41)
with
k=
2mE
, =
h
2m(V0 E)
.
h
(1.42)
For 0 < E < V0 , both k and are real numbers.18 Provided we choose the constants A-G such,
that the function (x) is continuous and dierentiable everywhere, it will be a solution to the
Schrodinger equation for all x, including the boundaries x = a. Continuity of (x) at x = a
gives the relation
Aeika + Beika = Cea + Dea ,
18
Most of the equations actually also hold for 0 < V0 < E (scattering over the barrier) or for V0 < 0 < E
(scattering from a potential well). In those cases will be a purely imaginary number.
12
V ( x)
V0
Aeikx
Ge-ikx
Be-ikx
Feikx
0
left region
middle region
-a
right region
Figure 1.8: Scattering from a rextangular barrier. The transfer matrix M relates the coecients
of the waves at the right to the waves at the left of the barrier.
whereas continuity of
d
dx (x)
at x = a gives
A
C
(1)
= M
B
D
ik+
ik
(ik)a
(ik+)a
e
e
2ik
.
2ik
M(1) = ik
ik+
(ik+)a
(ik)a
e
e
2ik
2ik
d
dx (x)
C
D
(1.43)
=M
F
G
(1.44)
where the matrix M(2) can be obtained from M(1) by replacing a by a, ik by and by ik.
Combining Eqs. 1.43 and 1.44 gives an equation of the form
A
F
= M
(1.45)
B
G
M = M(1) M(2) ,
where the 2 2 matrix M is the product of the two matrices of Eqs. 1.43 and 1.44.
In a scattering problem we set our boundary conditions by hand, or, if you wish, by choosing
specific experimental conditions. We define an incoming wave Aeikx , with A a fixed value that
gives the density of particles corresponding to our experimental source, see Eq. 1.10. At the
same time we set G = 0, which means that we have no incoming wave Geikx from the right
(experimentally this is easy, even for a theoretician). Comparing Eqs. 1.22 and 1.45 we find for
the transmission coecient
vR 1 2
T =
.
(1.46)
vL M11
13
potential barrier
V ( x)
Aeik L x
Be-ikL x
FeikR x
left region
right region
x1
xi
xN
Figure 1.9: Approximating a barrier of any shape by a series of steps. The transfer matrix M
is given by M = M1 M1 MN .
Since the potential is symmetric, we actually have vR = vL here, but we let the more general
form of Eq. 1.46 stand for later. It is a simple exercise to work out the matrix element M11 and
show that one obtains the usual textbook expression for the transmission coecient
2
1
k + 2
T = 1+
.
(1.47)
sinh2 (2a)
2k
Comparing Eqs. 1.17 and 1.45 gives the reflection coecient as
R=
1.3.2
|M21 |2
.
|M11 |2
(1.48)
The algorithm formulated in the previous section can be extended in order to describe the
transmission through a barrier of a more general shape. The matrix M as in Eq. 1.45 is called
the transfer matrix. It relates the modes in the right region to the modes in the left region.
Once we know the transfer matrix, we can calculate the reflection and transmission coecients,
see Eqs. 1.48 and 1.46. For a single step in the potential, the transfer matrices are given by
Eqs. 1.43 and 1.44 for a step up and a step down, respectively. Since the square barrier can be
viewed as a step up (at x = a), followed by step down (at x = a), its transfer matrix is simply
a product of the transfer matrices of the individual steps, as is expressed by Eq. 1.45. The idea
can be applied to a potential of any shape, approximating the potential by a series of steps, as
is illustrated in Fig. 1.9.
The transfer matrix M is given by the product of the transfer matrices of each step
M = M1 M1 MN ,
where each Mi can be calculated by defining
p
2m [V (xi ) E]
i =
,
h
(1.49)
(1.50)
and using a properly modified Eq. 1.43 or Eq. 1.44. I leave the details up to you. In essence this
is the transfer matrix algorithm. It is used quite frequently in numerical calculations solving
14
scattering problems in various branches of physics; quantum mechanics, optics, acoustics, etc..
Obviously it is best suited for layered materials, i.e. systems that consist of layers of dierent
materials stacked on top of one another. If the i get large, which is for high barriers and/or
low energies, one might get into numerical problems because of the exponentials in Eqs. 1.43
and 1.44. There are some special tricks to handle these, but I wont go into details here. For
systems described in terms of real atoms, the techniques discussed in the following sections are
better suited.
1.3.3
The transfer matrix M is closely related to an elementary concept in scattering theory, called
the scattering matrix. Whereas the transfer matrix relates the modes on the right to the modes
on the left, the scattering matrix relates outgoing modes to incoming modes. For the example
shown in Fig. 1.8, the modes that are coming into the barrier are Aeikx (from the left) and
Geikx (from the right) and the modes that are going out from the barrier are Beikx (to the
left) and F eikx (to the right). The scattering matrix is defined by the relation
B
A
0
=S
.
(1.51)
F
G
Comparison to Eq. 1.45 gives for its matrix elements
0
=
S11
0
=
S21
M12
M21
0
; S12
= M22 M21
;
M11
M11
M12
1
0
; S22
=
.
M11
M11
(1.52)
I have put a prime on S0 because the scattering matrix S as it ordinarily used is defined as
r
vL 0
0
S ;
S11 = S11 ; S12 =
vR 12
r
vR 0
0
S ; S22 = S22
.
(1.53)
S21 =
vL 21
The reason for introducing the v factors is because one wants the scattering matrix S to
reflect a basic conservation law, namely the conservation of current. In a stationary problem
the current going out must be the same as the current coming in, since otherwise one would get
an accumulation or depletion of particles, as discussed Sec. 1.1.
2
Jout = Jin
v
B
v
A
L
L
=
.
vR F
vR G
(1.54)
S S = SS = 1.
(1.55)
From Eqs. 1.51 and 1.53 it is easily shown that this holds only if
In other words, conservation of current means that the scattering matrix is unitary.
It is custom to write the scattering matrix as
r t0
S=
.
(1.56)
t r0
15
potential barrier
V ( x)
AeikL x
Be-ikL x
FeikR x
left region
right region
x0 x1
xi
xN xN+1
(1.57)
hence the name transmission and reflection amplitudes for t and r, respectively. Note that the
relation 1 = R + T , see Eq. 1.24, then simply reflects the unitarity of the scattering matrix.
1.4
Mode matching
Although the transfer matrix algorithm is quite general, it becomes quite tedious if the potential
in the leads is not constant, which is when we want to make an atomistic model of a tunnel
junction, see Fig. 1.2. In this section I will discuss a technique that is easier to generalize, called
mode matching. By means of an introduction, I will discuss this technique in its simplest
form and take Fig. 1.10 as starting point. The potential is discretized as before, but the grid is
extended by one point into the left and right regions.
The basic idea is to discretize the whole Schr
odinger equation, Eq. 1.40, including the kinetic
energy. Approximating the second derivative by a simple, first order finite dierence one obtains
(
)
i+1 i i i1
h2
(1.58)
Vi i = 0,
E i +
2m
2
where i and Vi are shorthand notations for (xi ) and V (xi ), and = xi+1 xi . I only consider
equidistant grids here. As is usual in a scattering problem, i runs from to , so we have
an infinite number of equations. This is not only awkward, but also unnecessary. The potential
is localized in space, i.e. V (x) diers from a constant only for x1 x xN . For the left and
right regions, x < x1 and x > xN , we already know the solutions to the Schrodinger equation.
They are simple plane waves, as indicated in Fig. 1.10, with
p
p
2m(E VL )
2m(E VR )
; kR =
.
(1.59)
kL =
h
16
We need a way of matching these modes to the wave function in the region of the potential
barrier (the scattering region).
Lets start with x0 , and put the origin there, so x0 = 0. The finite dierence Schrodinger
equation for i = 0 is
E0 +
2
h
1 2 0 + 1 V0 0 = 0.
2
2m
(1.60)
We know that for x < 0 the wave function has the form (x) = AeikL x + BeikL x , so
1 = AeikL + BeikL = AeikL + ( 0 A) eikL ,
(1.61)
where the last result follows from the fact that the wave function has to be continuous at x = 0.
Remember that in a scattering problem we assume that we know the incoming wave, so A is
fixed. Eq. 1.60 can then be rewritten as
E 0 +
n
o
o
2 n
h
h2
ikL
ikL
ikL
+
e
=
A
e
V
0
1
0
0
0
2m2
2m2
(1.62)
The term on the left handside now only contains i with i 0, and the terms at the right
handside can be considered as the source of the incoming wave. This takes care of the left
boundary.
Now focus upon the right boundary; for i = N + 1 we have
E N+1 +
We now use
2
h
VN+1 N +1 = 0.
2
N+2
N+1
N
2m2
N +2 = F eikR (N +2) = N+1 eikR
(1.63)
(1.64)
to get
E N+1 +
o
2 n
h
ikR
e
+
2
N+1
N+1
N VN+1 N+1 = 0.
2m2
(1.65)
Note that we have again used our knowledge of the scattering boundary conditions. We have
assumed only a transmitted wave in Eq. 1.64 (no incoming wave from this side) and wave
function continuity at the boundary. The term on the left handside of Eq. 1.65 only contains
i with i N + 1. This takes care of the right boundary.
For i = 1, . . . , N we have no problems and we can use Eq. 1.58. We can collect these
equations together with Eqs. 1.62 and 1.65 and summarize the problem as
(1.66)
EI H0 = q.
n
o
2
h
ik
ik
A
e
,
e
2m2
(1.67)
2
h
.
2m2
(1.68)
17
All diagonal matrix elements are identical to that of the original finite dierence Hamiltonian
0
=
Hi,i
2
h
+ Vi ,
m2
(1.69)
except the first and the last one, which are modified to
2
h
+ V0 + L (E),
m2
h2
=
+ VN+1 + R (E).
m2
0
=
H0,0
0
HN+1,N+1
(1.70)
with
2 ikL
h
e
,
2m2
h2 ikR
e
.
R (E) =
2m2
L (E) =
(1.71)
The quantities L/R (E) are called the self-energies of the left and right leads.19 They take
care of the proper coupling of the potential barrier to the outer regions, and contain all the
information we require about the outer regions. The self-energy depends upon the energy of the
incoming and scattered waves, see Eq. 1.59. Note that, while Eq. 1.58 represents an infinite
dimensional problem, by introducing the self-energy (and the source), we have reduced it to a
finite, N + 2 dimensional problem, Eq. 1.66. That can be solved using standard algorithms
for solving linear equations.20
Once you have solved Eq. 1.66, the only thing remaining is to extract the transmission and
reflection amplitudes. The transmission amplitude is simply given by the wave function at the
right side of the barrier, normalized to the incoming wave, and normalized with the velocities
(to attain a unitary scattering matrix, see Eq. 1.53)
r
vR N+1
.
(1.72)
t=
vL A
The reflection amplitude is similarly determined from the wave function on the left side minus
the incoming wave, normalized to the incoming wave
r=
0 A
.
A
(1.73)
Some care should be taken in determining the velocities. Since we have discretized the
Schrodinger equation, it is consistent to discretize the expression for the current, Eq. 1.1, in a
similar way
i
i
h
i i+1
.
(1.74)
J=
i i+1
2m
This should actually give a position independent result, since we are considering a stationary
problem. For a simple plane wave Aeikx this expression gives
J=
19
20
i
h |A|2 ik
e
eik ,
2m
18
i
h ik
e
eik .
2m
(1.75)
i
hA
vL .
(1.76)
Note that the continuum limit lim0 of this expression gives Eq. 1.12.
The source term, Eq. 1.67, can then be simplified to
q0 =
In addition, from Eqs. 1.75 and 1.71 one can relate the velocity to the self-energy
vL/R =
1.4.1
2
Im L/R (E).
h
(1.77)
Using Green functions the mode matching results can be put into a very compact, albeit somewhat obscure, form. Define a matrix
1
.
(1.78)
G(E)= EI H0
It is called a Green-function matrix. Note that it has a dimension N + 2, as has the modified
Hamiltonian matrix H0 . One can also define the infinite dimensional retarded Green-function
matrix related to the original infinite dimensional Hamiltonian
Gr (E)= [(E + i)I H]1 ,
(1.79)
where is (real, positive) infinitesimal. The advanced Green-function matrix is defined as21
Ga (E) = [Gr (E)] .
(1.80)
For z a complex number in the lower half plane, the matrix elements of G and Gr in the
scattering region are identical.
Gi,j (z) = Gri,j (z) ; i, j = 0, . . . , N + 1
(1.81)
A proof of this can be found in the literature. Note that the modified Hamiltonian matrix H0
is non-Hermitian, essentially because the self-energy is not real, see Eqs. 1.70 and 1.71. One
can show that the eigenvalues of H0 are not real and lie in the upper half complex plane. Thus,
G(E) is a well-defined quantity for real energies E (unlike Gr ). It has the retarded boundary
condition build into it and one does not need the +i trick.22
The definition of G allows us to write,
N+1 = GN +1,0 (E)q0 ,
21
(1.82)
Since E is an eigenvalue of H, EI H is singular and its inverse does not exist for real energies E.
Adding/subtracting an imaginary i avoids the singularities. In textbooks on scattering theory it is shown that
this has a physical meaning. Gr can be used to construct the retarded wave function, which consists of a wave
coming in from the source to the target, and waves scattered out from the target. This is the physical solution.
Ga gives the reverse, i.e. waves scattered into the target, plus waves going into the source. This is unphysical,
but can sometimes be useful in formal mathematical manipulations. After having constructed the wave function,
the linit lim0 can be taken, so only serves as an intermediate to reflect the boundary conditions.
22
Mathematicians would call G(E) an analytical continuation of Gr (E). All the poles of G(E) are in the upper
half plane, which makes it analytical on the real axis, and a retarded form.
19
see Eq. 1.66. Eqs. 1.72, 1.76 and 1.82 then lead to a compact expression for the transmission
amplitude
t=
i
h
vR vL GN+1,0 (E).
(1.83)
An expression of this type is called a Fisher-Lee expression. It relates matrix elements of the
scattering matrix to matrix elements of the Green function.
For Green function junkies, we can even make it fancier and write, using Eq. 1.77
p
p
(1.84)
t = 2i Im R GN+1,0 (E) Im L .
(1.85)
with all quantities evaluated at the fixed energy E. This expression is known as the Caroli
expression.
Introducing a Green function to tackle this simple one-dimensional problem is like using a
sledgehammer to crack a nut. Green functions expressions can however be modified to include
multiple dimensions, a large bias voltage, interactions with vibrations and/or between electrons,
etc., where they give compact (though not necessarily practical) expressions. In case of a large
bias voltage, the Caroli expression is also known as the NEGF expression, where NEGF stands
for non-equilibrium Green function.23
1.4.2
If you prefer dierential equations over linear algebra, we can take the continuum limit of Eq.
1.66, i.e. lim0 . It is tricky, but straightforward to obtain
d(x)
h2 d2 (x)
d(x)
+
(x)
E(x) +
(x
a)
(1.86)
2m
dx2
dx
dx
hvL A(x),
V (x)(x) {(x)L (E) + (x a)R (E)} (x) = i
where I have put the left boundary at x = 0 and the right boundary at x = a. The s have the
form
L/R (E) =
i
hvL/R
.
2
(1.87)
The term on the right handside of Eq. 1.86 describes the source term, which only exists at the
left boundary. On the left handside there are -function terms that only exists at the boundaries
of the barrier with the outside regions. They take care of the coupling to the outside region.
d
terms in the kinetic energy ensure a continuous derivative across the boundaries. The
The dx
potential term between { } is called the embedding potential. The formalism is known in the
literature as the embedding formalism.24 The dierential equation needs to be solved for (x),
but only over the finite domain 0 x a. The coupling to the outside regions is taken care
of by the boundary terms. Eqs. 1.72 and 1.73 give the transmission and reflection amplitudes
with (a) replacing N+1 and (0) replacing 0 .
23
This is in contrast to the linear response regime, where the Green functions are ordinary, i.e. equilibrium
Green functions.
24
Be careful however, as in dierent fields the phrase embedding is attached to dierent things.
20
Alternatively, one may use Green function expressions also in this case. Define the Green
function G(x, x0 , E) as the solution of the equation
b 0 G(x, x0 , E) = (x x0 ),
(1.88)
EH
b 0 is the operator of Eq. 1.86 including all potential and boundary terms. One can prove
where H
that G(x, x0 , E) corresponds to the usual retarded Green function Gr for 0 x, x0 a.25 The
continuum equivalents of Eqs. 1.82-1.85 are
(a) = G(a, 0, E)i
hvL A,
p
p
t = i
h vR vL G(a, 0, E) = 2i Im R G(a, 0, E) Im L ,
T
(1.89)
I prefer linear algebra over dierential equations and I dont have experience with the embedding formalism, so it wont be discussed further. It has been proven however that the formalism
can be extended into a practical method for calculating transport through small systems, including all atomic details.
1.5
Tight-binding
In previous sections we have analyzed one-dimensional scattering problems starting from potentials and wave functions as functions of a continuous coordinate. We have discretized this
representation to show how the scattering problem can be solved in a practical way. This approach is natural if the potential variation is confined to the scattering region and outside this
region the potential is constant. The incoming, reflected and transmitted waves are then simply
plane waves. With the tunnel junction in mind, see Fig. 1.2, we would like to consider the
situation in which the whole space is filled with atoms of one kind or another. In one dimension
this gives an atomic wire, with atomic potentials everywhere along the wire, as is illustrated in
Fig. 1.11.
The usual approach is to construct a representation on a basis of atomic orbitals, i.e. expand
the wave functions in fixed atomic orbitals i (x)
X
(x) =
ci i (x Xi ),
(1.90)
i
where Xi denote the positions of the atoms. In chemistry this is known as the LCAO representation (linear combination of atomic orbitals), in physics it is usually called the tight-binding
representation. The wave function is represented by the column vector of the coecients
..
.
ci1
.
c
(1.91)
=
i
ci+1
..
.
Since we want to solve a scattering problem, the atomic orbitals should cover all of (oneb
dimensional) space, so the vector has an infinite dimension, i.e. i = , . . . , . Operators A
1.5. TIGHT-BINDING
21
V ( x)
left region
potential barrier
right region
x
Figure 1.11: Top: atomic chain. The leads (left and right region) are periodic chains of identical atoms. The middle region contains dierent atoms and/or disorder. Bottom: schematic
representation of the potential along the chain.
b j i and
are represented by infinite dimensional matrices A with matrix elements Ai,j = hi |A|
the Schrodinger equation becomes
(EI H) = 0,
(1.92)
with H the Hamiltonian matrix.26 To simplify the algebra we take just a single atomic orbital
per atomic site. Moreover, we assume that the diagonal elements of H are Hi,i = hi ; its odiagonal elements are Hi+1,i = i and all elements Hj,i = 0 for j > i + 1. The other matrix
elements are then set by demanding that the Hamiltonian matrix is Hermitian, i.e. H = H.
..
0
0
..
. hi1 i1
0
0
i
0
(1.93)
H=
.
0 i1 hi
..
0
0
i hi+1 .
..
.
0
0
The model is called the nearest neighbor tight-binding approximation by physicists and among
chemists it is known as the H
uckel approximation. The approximation is not essential and the
formalism that I will explain below can be made to work for any LCAO representation, but here
I want to keep the expressions as simple as possible.27
We divide our system into three parts: a left lead, a scattering region, and a right lead. The
left and right leads are perfect materials. They consist of identical atoms at equal distances
aL (for the left lead) and aR (for the right lead). In other words, the leads have translational
symmetry. Matrix elements in the leads must be identical, i.e. hi = hL/R and i = L/R for the
left/right leads. Only in the scattering region do we have site dependent matrix elements. The
basic idea is illustrated in Fig. 1.12.
26
I want to keep things as simple as possible, so I am using an orthogal basis. If the atomic orbital basis is
non-orthogonal, then introduce an overlap matrix S, with matrix elements Si,j = hi |j i, and substitute I by S.
27
In one-dimensional tight-binding one can assume that the hopping parameters i are real, without loss of
generality. However, in three dimensions or in magnetic fields this need no longer be true. Therefore, to be
prepared for the more general case, I keep complex i s for the moment.
22
V ( x)
L
hL
L
hL
hi
L
hL
hL
hR
left region
potential barrier
hR
hR
R
hR
right region
x
Figure 1.12: Nearest neighbor tight-binding model of an atomic chain. The periodic left and
right regions are characterized by the on-site and hopping matrix elements hL/R and L/R . The
scattering region has site dependent matrix elements hi and i .
1.5.1
(1.94)
with i running from to . These equations have the same mathematical form as the
discretized Schr
odinger equation of Eq. 1.58 if we make the substitutions
i ci ;
2
h
h2
+
V
;
h
i.
i
i
m2
2m2
We are going to solve the scattering problem by mode matching. First we have to find the modes
of the ideal leads. For sites i in the left lead the matrix elements are site independent and Eq.
1.94 becomes
L ci1 + (E hL )ci L ci+1 = 0.
(1.95)
A little pondering shows that this equation is mathematically the same as a discretized Schrodinger
equation for a particle in a constant potential. Since the same equations have the same solutions, the solutions must be (discretized) plane waves, i.e.
cn = AeikL naL ,
(1.96)
where the distance between the atoms aL is the discretization step. The same holds for the
right lead, replacing the subscript L by R.
One can give it a somewhat more mathematical flavor. The Bloch-Floquet theorem states
that functions in consecutive cells of a periodic system are related by a constant amplitude/phase
factor , i.e.28
if ci1 = c then ci = c and ci+1 = 2 c.
28
(1.97)
In the physics literature this is known as the Bloch theorem, and in the mathematics literature as the
Floquet theorem. Google a bit if you want to know the history.
1.5. TIGHT-BINDING
23
Now is the time where we simplifying things a little bit by choosing real,29 so Eq. 1.95 becomes
+ (E h) 2 = 0
"
#1
2
Eh 2
Eh
=
1 ,
2
2
(1.98)
Eh
more familiar form. For 2 1 we define a wave number k by30
cos(ka) =
Eh
,
2
(1.99)
(1.100)
is called the Bloch factor. Using Eq. 1.97 recursively, i.e. cn = n c0 , then leads to Eq.
1.96, with A = c0 , as expected. It describes propagating waves, where + describes a wave
propagating
to
the right, and a wave propagating to the left.
Eh
For 2 > 1 one can define by
E h
,
cosh(a) =
2
and obtain
Eh
> 1;
2
Eh
< 1.
if
2
(1.101)
= +ea if
= ea
(1.102)
Both these cases describe states that decay either to the right or to the left. These are called
evanescent states. They are not acceptable as solutions to the one-dimensional Schrodinger
equation because one cannot normalize them (not even in the wave packet sense). However, we
will have a use for them later on in three-dimensional problems.
1.5.2
Mode matching
We have the modes of the ideal leads, so we can match them to the scattering region, where the
matrix elements hi and i in Eq. 1.94 are site dependent. We assume that the scattering region
is localized in space, so i runs from 1 to N . The procedure we have used to solve the discretized
Schrodinger problem in Sec. 1.4 can be copied with only some small modifications. Using Eq.
1.100, Eqs. 1.61 and 1.64 read
1
1
1
c1 = A1
L,+ + BL, = AL,+ + (c0 A) L, ;
(1.103)
29
which is always possible in one-dimensional systems, provided one has spin degeneracy and time-inversion
symmetry, which we have here.
30
You may recognize E(k) = h + 2 cos ka as the dispersion relation describing an electronic band in the nearest
neighbor tight-binding model.
24
The convention is to write ci+n = n ci , where n is an integer (positive or negative), and let the
indicate waves propagating to the left and right, respectively.31 These relations can be used
to substitute Eq. 1.92 by
EI H0 = q,
(1.104)
similar to Eq. 1.66. is a finite dimensional vector that contains the coecients ci in the
scattering region plus those of the two boundaries, i.e. i = 0, . . . , N + 1. q is the source
vector of length N + 2, whose coecients are zero, except the first one
n
o
1
(1.105)
q0 = L A 1
L,
L,+ .
H0 is a finite (N + 2) (N + 2) Hamiltonian matrix. All its matrix elements are identical to that
of the original Hamiltonian matrix, Eq. 1.93, except for the first and the last diagonal element,
which are modified to
0
= h0 + L (E);
H0,0
0
= hN+1 + R (E),
HN+1,N+1
(1.106)
with
L (E) = L 1
L, ;
R (E) = R R,+ .
(1.107)
These self-energies contain all the information concerning the coupling of the scattering region
to the leads. As before, they are complex and energy dependent through Eqs. 1.99 and 1.100.
We have substituted an infinite dimensional problem, Eq. 1.92, by a finite dimensional one,
Eq. 1.104!! 32 Mathematically Eq. 1.104 is the same as Eq. 1.66. Again the same equations
have the same solutions, so according to Eq. 1.72, the transmission amplitude becomes
r
vR cN+1
(1.108)
t=
vL A
with, as in Eq. 1.77, the velocities given by
vL/R =
2aL/R
Im L/R (E).
h
(1.109)
In addition, the Green function expressions given in Sec. 1.4.1 remain valid.
31
In the one-dimensional case, one always has + = 1/ . In the three-dimensional case, this relation does not
necessarily hold. Thats why it is important to keep separate track of the powers of and the indices.
32
For those of you who have some background in this, you might suspect that the technique we are using here
has something to do with a technique known as partitioning. This suspicion is appropriate. Partitioning is
usually applied to operators and, in particular, to Green functions. I am applying it to wave functions here.
Chapter 2
2.1
2.1.1
Landauer in 3D
Conductance of model interfaces
We start with a generalization of the square barrier potential of Fig. 1.3. A two-dimensional
example is shown in Fig. 2.1. The potential V (x, y) is separable; the barrier is in the x-direction
and the potential is independent of y. The straightforward extension to three dimensions is a
potential that is independent of y and z. Such a potential landscape is a simple model for a thin
layer of an insulator sandwiched between two metals, i.e. a tunnel junction.
The Schrodinger equation
E(r) +
2 2
h
(r) V (r)(r) = 0
2m
25
(2.1)
26
Aeikr
V ( x, y)
Feikr
Be-ikr
y
left region
middle region
right region
x
Figure 2.1: A square barrier in two dimensions. In the left and right regions the potential is
constant, V (x, y) = V1 . In the middle region the potential is also constant, V (x, y) = V0 , where
0
V0 > V1 . The incoming, reflected and transmitted waves are given by Aeikr , Beik r and F eikr ,
see Fig. 2.2.
is separable in Cartesian coordinates
2 2
h d
h2 d2
2 d2
h
+
(x, y, z) = 0,
V
(x)
+
E+
2m dx2
2m dy 2 2m dz 2
and its solutions can be written as
(x, y, z) = (x)eiky y eikz z
0
= (x)eikk r ; kk = ky ,
kz
(2.2)
where eikk r describes a free particle in the direction parallel to the barrier. The function (x)
is the solution of the one-dimensional scattering problem
2 2
h d
Ex +
V (x) (x) = 0,
(2.3)
2m dx2
with the energy
Ex = E
2 kk2
h
2m
..
(2.4)
Note that kk is a good quantum number1 , which together with the energy E fixes the wave
function. We say that kk defines a mode of the
system.
A view of the scattering geometry in the kx , kk plane is given in Fig. 2.2. In the left and
right regions the wave function is
h
i
A eikr + rk (E)eik0 r ; r in left region
k
i
h
kk (r) =
(2.5)
2.1. LANDAUER IN 3D
27
kx
k ||
k
left region
k x
middle region
k ||
right region
kx
k ||
k
k'
Figure 2.2: Scattering from a planar square barrier as viewed in the (kx , kk ) plane.
with k = (kx , kk ) and k0 = (kx , kk ). I leave you the job of finding the expressions for the
transmission and reflection amplitudes tkk (E) and rkk (E). In three dimensions the probability
current is a vector
J(r, t) =
i
h
[(r, t) (r, t) (r, t)(r, t)]
2m
(2.6)
i
As in the one-dimensional case, for a stationary problem, (r, t) = (r)e h Et , the current is
constant. In three dimensions it is actually more appropriate to use the phrase current density
for J (as in classical electrodynamics), but I keep on using the phrase current for short.
The key point
For the experiments we are interested in, only Jx matters, i.e. the x-component of the current.
For devices like that shown in Fig. 2.1 (and all other devices that we will consider), one attaches
macroscopically large electrodes to the left and right regions and applies a bias voltage between
those electrodes. Only the current along the x-direction is then measured.2,3 So we are interested
in
i
h
(r)
(r)
Jx =
(r)
(r)
.
(2.7)
2m
x
x
From Eq. 2.5 one can easily show that the transmitted current carried by one mode kk is given
by
(2.8)
kx
with the density = |A|2 and the velocity in the x-direction vx = hm
. Deriving an expression
for the conductance follows the same steps as in Sec. 1.2. Applying a small bias V = eU
2
For those of you who are experts in scattering phenomena, note that this is dierent from what you are used
to in angle resolved three-dimensional scattering experiments in free space. There one is usually interested in all
three components of the current. In addition, if the scatterer is localized in space, one applies a transformation
to spherical coordinates.
3
More complicated measurements are possible. In multiprobe experiments more than two electrodes are
attached to the device. This is often done in combination with applying an external magnetic field, as in measuring
the Hall eect. The Landauer formalism can be extended to include multiprobe measurements. This extension is
due to B
uttiker.
28
between left and right regions, the transmitted current is carried by all modes that have an
energy Ek in the range from EF V to EF , see Fig. 1.7.
2
X
(2.9)
JT = 2
tkk (E) vx ,
EF V <Ek <EF
where the factor 2 accounts for the degeneracy. We can use the same tricks as in Sec. 1.2.
Using
states that areP
normalized
in a 3D, we have = 2L1 3 ; compare to Eq. 1.32.4 We write
P
P
kk
EF V <Ek <EF =
kx , where one has to sum only over those states that have their
P
x
energy in the indicated interval. Convert kx into a one-dimensional integral, use vx = h1 dE
dkx ,
and assume that tkk (E) is independent of the energy in the small energy range V . The algebra
is
Z
2
2
1 X X
1 X L
t
JT =
(E)
v
=
(E)
t
vx dkx
x
k
k
k
k
3
3
L
L
kk
kx
kk
h
2 k2
k
Z
Z
2 1 dE
1 1 X
1 1 X EF 2m
x
dkx = 2
tkk (E)
h
2 k2
k
L2
h dkx
L
h
EF V
kk
2
1 V X
tkk (EF ) .
2
L
h
kk
G=
kk
2m
2 kk2
h
) dEx
tkk (Ex +
2m
(2.10)
T (EF ).
tkk (EF ) =
(2.11)
kk
The expression is the Landauer formula, see Eq. 1.29, with the total transmission T expressed
as a sum over the transmissions of the individual modes kk . One has to sum over the kk that
contribute to the transmission at the Fermi energy EF .
The Fermi surface
The Fermi surface is defined by the relation Ek = EF , the Fermi energy being a materials
constant. It can be visualized as a surface in reciprocal space, i.e. in three-dimensional k-space.
For free electrons or electrons in a constant potential
Ek =
2 2
h
2 2
h
kx + ky2 + kz2 =
kx + kk2 = EF .
2m
2m
the Fermi surface is the surface of a sphere, as shown in Fig. 2.3(a). The Fermi surface can
help to visualize which modes contribute to the transmission. The latter can be enumerated by
projecting the part of Fermi surface with kx > 0 (a hemisphere in this case) onto the kk = (ky , kz )
plane, as shown in Fig. 2.3(b). All the modes kk within this projection exist at the Fermi energy
EF and they contribute to the transmission in Eq. 2.11. The scattering geometry in real space
can be deduced from Fig. 2.2. kk = (0, 0)5 corresponds to a wave with normal incidence to the
4
The function L12 eikk r is normalized in a 2D box of size L, as you can easily check yourself. The function
1
. The product gives the normalization factor
(x) is treated as in Eq. 1.32 and gets the normalization factor 2L
of (x, y, z), see Eq. 2.2.
5
called the -point in solid state physics folklore.
2.1. LANDAUER IN 3D
Ek = EF
29
kz
kz
ky
kx
kz
ky
(a)
(b)
ky
(c)
Figure 2.3: (a) The Fermi surface, as defined by Ek = EF , of electrons in a constant potential
is a sphere. (b) The projection of the surface in the kk = (ky , kz ) plane. The shaded area
2
denotes all the kk modes that contribute to the transmission at EF . (c) The transmission tkk
as function of kk . Red indicates a high transmission and blue a low transmission. The highest
transmission is for kk = (0, 0). It decreases to 0 towards the edge of the circle.
barrier. The larger kk = kk , the more glancing the incidence of the corresponding wave. At
h2 2
2 kk2
h
),
tkk (EF ) = T1D (Ex ) = T1D (EF
2m
(2.12)
with T1D given by Eqs. 1.47 and 1.42. The result is visualized in Fig. 2.3(c). Since T1D is a
monotonically increasing function of the energy, the maximal transmission is for kk = (0, 0), i.e.
for normal incidence. The transmission decreases monotonically with increasing kk , i.e. with
the angle of incidence, until it is zero for parallel incidence. Such a simple Fermi surface and a
simple transmission are typical for free electrons. For real materials both the Fermi surface and
the transmission are much more complex. I will come back to in Sec. 2.1.4.
2.1.2
The way we have derived Eq. 2.11 means it can easily be generalized to any system whose
Hamiltonian is separable in an x-term and a yz-term. For instance, the wave functions of the
wire shown in Fig. 2.4 can be written as
(x, r ) = (x) n (r ),
(2.13)
where n is a set of two quantum numbers labeling the modes; see Eq. 2.2. The conductance of
this quantum wire can be expressed as the sum of the transmissions of the individual modes
G=
e2 X
|tn (EF )|2 .
h n
(2.14)
Suppose we are dealing with an ideal wire in which all the modes are fully transmitted.6
6
30
r
x
e2
M (EF ),
(2.15)
h
where M (EF ) is the number of modes at the Fermi energy, supported by the wire. Gbal is called
the ballistic conductance. The number of modes at fixed energy in a wire of finite cross
section is finite. So even for a perfect wire the conductance is finite!! If the cross section is
macroscopically large, the number of modes is very large. Measuring the ballistic conductance
experimentally is then impossible. A real wire always contains impurities and imperfections (lattice defects, grain boundaries, impurity atoms, etc.), and scattering at these defects dominates
the conductance. However, thin and small wires can be made without any defects. Note that,
since the number of modes is an integer, the conductance of Eq. 2.15 is quantized.
A very clear example of quantized conductance is presented by the experiment of van Wees
et al. on the quantum conductance of a channel in a two-dimensional electron gas (2DEG). A
2DEG is formed at an interface between two well-chosen semiconductors. Putting electrodes on
top of the 2DEG it is possible do define a one-dimensional channel (the wire) by a suitable
electrostatic potential profile, as illustrated in Fig. 2.5. The width of the channel determines
the number of modes at the Fermi energy M (EF ). Upon widening the channel, which is done by
making the gate less repulsive for electrons, the number of modes increases and the conductance
e2
increases. However, it increases in steps of
h . This is demonstrated by the experimental results,
shown in Fig. 2.5.
Gbal =
2.1.3
In the real world interfaces nor wires are infinite. This leads to Hamiltonians that are not
separable in an x-term and a yz-term. A two-dimensional example is shown in Fig. 2.6, which
represents a canyon through a 2D square barrier. It is a simple model for a finite wire connected
to two electrodes (the left and right regions). An incoming wave Aeikr with k = (kx , kk ) can be
0
scattered into any other wave Ceik r with k0 = (kx0 , k0k ) provided the energy is conserved, i.e.
2 (kx2 + kk2 )
h
=E=
2 (kx02 + kk02 )
h
.
2m
2m
The idea is shown in Fig. 2.7. The wave function of Eq. 2.5 is generalized to
(2.16)
00
P
r
ikr
ik
kk (r) =
P
0 t
0 (E)e
A
k k ,k
k
(2.17)
00
where k0 labels the transmitted waves, i.e. kx0 > 0, and k00 labels the reflected waves, i.e. kx < 0,
all at the same energy. The reflection and transmission amplitudes rk ,k00 and tk ,k0 indicate
00
2.1. LANDAUER IN 3D
31
Figure 2.5: Quantized conductance of a ballistic waveguide Left: top view of the device. The
current flows from the source to the drain electrode. The gate has a negative potential and
repels the electrons towards the middle. This leaves an eective channel for the electrons. At
a small gate potential the channel is wide (indicated in red), and at a large gate potential the
channel is narrow (indicated in blue). Right: measured conductance as function of the gate
h, as shown by the plateaus; M is the
potential. The conductance is quantized in units of e2 /
number of modes at EF contributing to the conductance. See: B. J. van Wees et al., Phys. Rev.
Lett. 60, 848 (1988).
The derivation of the Landauer formula follows the same steps as in the previous section.
For instance, the current in the x-direction, carried by kk (r) is determined from Eq. 2.7
2
X
0
0 (E) v ,
t
(2.18)
JT,kk =
k ,k
x
0
kk
kx0
h
m ;
0 (E)
0 (E) v ,
t
v
=
2
x
x
k ,k
k ,k
EF V <Ek <EF k0
k
0
kk ,kk
kx0
(2.19)
where the sum over the energies has been replaced by a sum over the states that have their
0
energy in the right interval. Note that one does not have to sum over kx . By fixing kk , kk and
kx0 one has fixed kx , because of Eq. 2.16. One can follow the steps of Eq. 2.10 and obtain an
expression for the conductance which generalizes Eq. 2.11
2
e2 X
0 (EF ) .
t
(2.20)
G=
kk ,kk
h
0
kk ,kk
0
k ,kk
h
This is the Landauer formula for non-uniform wires and interfaces.
(2.21)
32
Aeikr
Feik 'r
V ( x, y)
Beik ''r
left region
middle region
right region
Figure 2.6: A simple model for a wire connected to two electrodes (the left and right regions)
as a canyon through a square barrier. The incoming wave Aeikr can be reflected to any wave
00
0
Beik r or transmitted to any wave F eik r , provided the energy is conserved.
kx
k'
k ||
k '||
k
k x'
left region
k"||
k"
k x"
middle region
right region
Figure 2.7: Scattering from a finite wire between two electrodes as viewed in the kx , kk plane.
Only one of the possible reflected kx00 , k00k waves and one of the transmitted kx0 , k0k waves are
indicated.
2.1.4
We derived the Landauer formula, Eq. 2.21, using a model in which the incoming, reflected
and transmitted waves in the asymtotic regions are plane waves eikr .7 One might wonder how
much of this derivation depends on having plane waves. Not too much, actually. We need to
know what the waves in the asymtotic region look like, but they do not need to be plane waves.
Consider the example shown in Fig. 2.8. It shows a mono-atomic wire of three atoms between
two planar electrodes. Such wires have been made of gold and other metals (albeit with a less
ideal structure).8 The scattering region comprises the wire and a region around the wire in
which the atomic potentials are dierent from bulk atomic potentials. Exactly how large that
region is, depends somewhat on the system, but it will be clear that far into the electrodes, i.e.
far into the left or right leads, the atoms are bulk metal atoms.9
7
The regions outside the scattering region, i.e. the left and right regions are called the asymtotic regions.
by a technique called mechanical break junctions, for instance, see J. van Ruitenbeek.
9
For metallic electrodes the potential at a few atomic layers from the surface is usually indistinguishable from
the bulk potential, because screening eects in metals are large.
8
2.1. LANDAUER IN 3D
33
Figure 2.8: Schematic mono-atomic wire consisting of three atoms between two metal electrodes.
Outside the scattering region one considers the electrodes as bulk metals, whose electronic states
are represented by Bloch waves.
Bloch modes
Assuming that the bulk leads consist of perfect crystalline material, the Bloch-Floquet
theorem tells us that the electronic states are represented by Bloch waves, which have the
form
\psi_{\mathbf{k}n}(\mathbf{r}) = u_{\mathbf{k}n}(\mathbf{r})\, e^{i\mathbf{k}\cdot\mathbf{r}}.   (2.22)
Here ukn (r) is a periodic function in three dimensions, and the primitive unit corresponding to
the three periods is called the unit cell. k is a wave vector within the reciprocal unit cell (called
the first Brillouin zone) and n is the band index (a quantum number that distinguishes between
states with the same wave vector). Proof of this can be found in any book on solid state physics.
These Bloch waves replace the plane waves as modes in the scattering formalism.
The relation between energy and wave vector, E_{kn}, is called a dispersion relation. It can
be obtained by solving the Schrödinger equation for electrons in the periodic crystal potential.
As before, the Fermi surface is defined by the relation E_{kn} = E_F, which gives a surface in
reciprocal space, i.e. in k-space, as in Fig. 2.3(a). Actually, it gives a surface for each n, called a
sheet of the Fermi surface.
Fig. 2.9 gives a few examples of Fermi surfaces for different metals. The Fermi surface of
some metals resembles that of free electrons; compare Figs. 2.9(a) and 2.3(a). These are called
simple metals; they usually involve s- and/or p-electrons only. In other cases the Fermi surface
can be quite complicated. It can consist of more than one sheet and have a shape that is far from
free-electron-like (i.e. spherical), as is shown in Fig. 2.9(b). This is common among metals involving
d-electrons.
The Landauer formula is still valid, however. The conservation of energy, Eq. 2.16, has to
be replaced by the more general relation

E_{\mathbf{k}n} = E = E_{\mathbf{k}'n'},   (2.23)

which basically states that one can scatter from a state kn to any state k′n′, provided both points
are on the same energy surface (the Fermi surface). In Eq. 2.17 one has to replace the plane
waves e^{ik·r}, e^{ik′·r}, e^{ik″·r} by Bloch waves ψ_{kn}(r), ψ_{k′n′}(r), ψ_{k″n″}(r), and the scattering amplitudes
now also contain the band index, i.e. r_{k∥n,k″∥n″} and t_{k∥n,k′∥n′}. Another vital ingredient needed
Figure 2.9: (a) The Fermi surface of Cu (copper). The wireframe indicates the first Brillouin
zone (the reciprocal unit cell). The Fermi surface is almost spherical with a few holes punched
through it. In Cu there is only one s-like band crossing the Fermi energy, which gives rise to
one sheet. (b) The Fermi surface of fcc Co (Cobalt) for the minority spin electrons. Co is a
ferromagnetic material. Only one s-like majority spin band crosses the Fermi energy, which
gives rise to a Fermi surface for majority spin electrons that resembles that of Cu. Three d-like
minority spin bands cross the Fermi energy, giving rise to three sheets of the Fermi surface that
are far from spherical, as is shown here.
is an expression for the velocity v′_x. In solid state textbooks it is shown that the Bloch velocity
is given by

v'_x = \frac{1}{\hbar}\, \frac{\partial E_{\mathbf{k}'n'}}{\partial k'_x}.   (2.24)
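As a small numerical aside (not part of the notes' derivation), Eq. 2.24 is easy to evaluate once a dispersion is known; the cosine band below is invented and ħ = 1 is an arbitrary choice of units:

import numpy as np

hbar = 1.0   # illustrative choice of units

def E_band(kx, ky, kz):
    # made-up dispersion standing in for E_{k'n'} from a band-structure calculation
    return -2.0 * (np.cos(kx) + np.cos(ky) + np.cos(kz))

def vx(kx, ky, kz, dk=1e-5):
    # Bloch velocity of Eq. 2.24 via a central finite difference in k_x
    return (E_band(kx + dk, ky, kz) - E_band(kx - dk, ky, kz)) / (2.0 * dk * hbar)

print(vx(0.3, 0.0, 0.0))   # analytic value: 2*sin(0.3) = 0.5910...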
A little pondering shows that this is just the relation we need to complete the derivation of the
Landauer formula.
G = \frac{e^2}{h} \sum_{\mathbf{k}_\parallel n, \mathbf{k}'_\parallel n'} \left| t_{\mathbf{k}_\parallel n,\mathbf{k}'_\parallel n'}(E_F) \right|^2 = \frac{e^2}{h}\, \mathrm{Tr}\!\left[ t^\dagger t \right].   (2.25)
The Landauer formula can be generalized to include the spin states explicitly in a straightforward manner
G = \frac{e^2}{h} \sum_{\sigma,\sigma'} \sum_{\mathbf{k}_\parallel n, \mathbf{k}'_\parallel n'} \left| t_{\mathbf{k}_\parallel n \sigma,\mathbf{k}'_\parallel n' \sigma'}(E_F) \right|^2,   (2.26)
where σ, σ′ = −1/2, +1/2. In absence of spin-orbit coupling the scattering potential does not flip the
spin and the transmission matrix is diagonal in the spin, t_{k∥nσ,k′∥n′σ′} = t^σ_{k∥n,k′∥n′} δ_{σσ′}. This is
illustrated in Fig. 2.10 for a Cu/Co (111) interface.
t_{k∥1,k∥1} describes the transmission from the state k∥1 on the projected Fermi surface of
Cu (indicated by a black dot in Fig. 2.10(a)) to a state k∥1 on the projected majority spin
Fermi surface of Co (indicated by a black dot in Fig. 2.10(b)).10 The calculated transmission
|t_{k∥1,k∥1}|² is shown in Fig. 2.10(d) in the k∥ plane. The transmission is a number between
0 and 1. Obviously the state k∥1 has to exist on the Cu Fermi surface; otherwise one has no
electrons to start with. Around k∥ = (0, 0) the Cu Fermi surface has a hole where there are
no states, which is indicated in white in Figs. 2.10(a) and (d). Then the state k∥1 has to exist on the
Co Fermi surface; otherwise there is no state to transport the electrons to and the transmission
is 0. These are the blue areas in Fig. 2.10(d). The remaining area is almost entirely red, which
means that there the transmission is close to 1.
Fig. 2.10(e) shows the calculated transmission \sum_{n'=1}^{3} |t_{\mathbf{k}_\parallel 1,\mathbf{k}_\parallel n'}|^2 between the minority spin
modes of Cu and Co. As one can observe, the transmission pattern is much more complicated
than in the majority spin case, with a fine-grained variation of the transmission between 0 and
1. Now compare Figs. 2.10(d) and (e) to Fig. 2.3(c). This represents the difference between using
real metals and using jellium.11
Integrating over the area shown in Fig. 2.10(d) gives the majority spin conductance; the
calculated number is G_maj = 0.73 × 2e²/h. Integrating over the area shown in Fig. 2.10(e) gives
the minority spin conductance G_min = 0.66 × 2e²/h. The difference in the conductance between the
majority and the minority spin modes is about 10%. One can enlarge this difference by making a
multilayer of alternating Cu and Co layers, since such a multilayer then contains many Cu/Co
interfaces. The majority/minority difference in the conductance essentially gives the so-called
GMR (giant magneto-resistance) effect that is measured in such multilayers.12
10 n = 1 and n′ = 1, since both Fermi surfaces have only one sheet.
11 Jellium also does not give magnetism at these electronic densities, but I don't want to make a list of what's
wrong with jellium.
12 The GMR effect is used to make sensitive sensors for magnetic fields, which are used in the heads of magnetic
hard drives. It contributed to the rapid development of hard disks to the >10² Gb disks we have today.
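To make the phrase "integrating over the area" concrete, here is a hedged sketch that sums an invented k∥-resolved transmission over a uniform grid; the square zone, the Gaussian profile and the grid size are arbitrary illustrative choices, not Cu/Co data:

import numpy as np

e2_over_h = 3.874045865e-5   # e^2/h in siemens

# uniform k_parallel grid; a square zone is used purely for illustration
nk = 201
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, nk),
                     np.linspace(-np.pi, np.pi, nk))

# invented k-resolved transmission between 0 and 1, standing in for Fig. 2.10(d)/(e)
T_k = np.exp(-(kx**2 + ky**2))

# the conductance of one spin channel is e^2/h times the sum of T over the k_par grid;
# dividing by the number of grid points gives the conductance per k_par channel
G_per_channel = e2_over_h * T_k.mean()
print("average transmission per k_par point:", T_k.mean())
print("conductance per k_par channel:", G_per_channel, "S")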
Figure 2.10: (a) Fermi surface of Cu projected in the (111) direction on the k∥ plane. (b) Fermi
surface of the majority spin of Co projected on the same plane. (c) The three sheets of the Fermi
surface of the minority spin of Co projected on the same plane. Blue, green and red indicate
areas where 1, 2 and 3 states are projected onto the same k∥ point. (d) The transmission across
a (111) interface between Cu and the majority spin of Co, as a function of k∥. The color
scale indicates the transmission from 0 (blue) to 1 (red). (e) As (d), but for the transmission
between Cu and the minority spin of Co. The white areas indicate where there are no states in
Cu.
2.2
As will be clear by now, the key quantity for calculating the conductance is the transmission
matrix t. This is quite a bit more complicated than in the one-dimensional case, but in principle
all the techniques discussed in Secs. 1.4 and 1.5 can be extended to calculate transmission
matrices in three dimensions for systems containing real atoms. Since the tight-binding example
given in Sec. 1.5 is chemically the most intuitive, I use this and focus upon the mode matching
technique.
The principal idea is to divide your system into layers of atoms, normal to the transport
direction. One chooses the layers to be sufficiently thick such that the Hamiltonian matrix only
contains matrix elements that either couple atoms within one layer, or atoms that are in nearest
Figure 2.11: Hamiltonian of a tunnel junction divided into layers. The transport direction is
along the horizontal. The left (L) and right (R) leads are ideal periodic wires containing the
layers i = −∞, . . . , 0 and i = S + 1, . . . , ∞, respectively. The layers i = 1, . . . , S constitute the
scattering region.
neighbor layers.13 The Hamiltonian matrix of Eq. 1.93 then becomes

H = \begin{pmatrix}
\ddots & \ddots & & & \\
\ddots & H_{i-1} & B_{i-1} & 0 & 0 \\
0 & B_{i-1}^\dagger & H_i & B_i & 0 \\
0 & 0 & B_i^\dagger & H_{i+1} & \ddots \\
& & & \ddots & \ddots
\end{pmatrix}.   (2.27)
A schematic representation of the structure of the Hamiltonian is given in Fig. 2.11. The matrices
H_i contain the interactions between atoms within the layer i, with i = −∞, . . . , ∞, whereas
the matrices B_i describe the coupling between the layers i and i + 1. Both of these are N × N
matrices, where N is the total number of atomic orbitals for all atoms in a layer. The scattering
region is localized in the layers i = 1, . . . , S.14
This representation is valid both for systems with a finite cross section (i.e. wires), and for
infinite layered systems that are periodic along the interfaces (i.e. along the layers). In the last
case N refers to the number of atomic orbitals within the lateral unit cell. k∥ is then a good quantum number
and the matrices H_i(k∥) depend on this quantum number. I won't discuss the exact expressions;
they can be found in the literature. In fact, in order to simplify the notation I will omit the
quantum number k∥ from now on. I will use the phrase "wire" to indicate both systems with a
finite cross section and systems with infinite periodic interfaces.
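To fix ideas about the block structure, the following sketch assembles a matrix of the form of Eq. 2.27 for a finite stack of layers; block_tridiagonal is a hypothetical helper and the layer blocks are random placeholders, not a real tight-binding parametrization:

import numpy as np

def block_tridiagonal(H_list, B_list):
    # assemble the block tridiagonal matrix of Eq. 2.27 from diagonal blocks H_i
    # (layer Hamiltonians) and off-diagonal blocks B_i (coupling layer i to i+1)
    N = H_list[0].shape[0]
    L = len(H_list)
    H = np.zeros((L * N, L * N), dtype=complex)
    for i, Hi in enumerate(H_list):
        H[i*N:(i+1)*N, i*N:(i+1)*N] = Hi
    for i, Bi in enumerate(B_list):          # len(B_list) == L - 1
        H[i*N:(i+1)*N, (i+1)*N:(i+2)*N] = Bi
        H[(i+1)*N:(i+2)*N, i*N:(i+1)*N] = Bi.conj().T
    return H

# toy example: 4 layers, 2 orbitals per layer
rng = np.random.default_rng(1)
H_layers = []
for _ in range(4):
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    H_layers.append((A + A.conj().T) / 2)    # Hermitian layer blocks
B_layers = [rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(3)]
H = block_tridiagonal(H_layers, B_layers)
print(np.allclose(H, H.conj().T))            # the assembled Hamiltonian is Hermitian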
The wave function of Eq. 1.91 is generalized to

\psi = \begin{pmatrix} \vdots \\ c_{i-1} \\ c_i \\ c_{i+1} \\ \vdots \end{pmatrix},   (2.28)
13 The formalism can be extended to include interactions of a longer range, but the expressions become a bit
messy. So far there has been no need for this extension.
14 Note that compared to Secs. 1.4 and 1.5 I have a slight change in notation here. Now N is the dimension of
the basis within one layer, and S is the number of layers in the scattering region. Sorry about that; it's what you
get if you merge texts that have a different origin.
where c_i is the N-vector of coefficients of the atomic orbitals of all atoms in a layer. As in Sec.
1.5 we divide our system into three parts, with i = −∞, . . . , 0 corresponding to the left lead
(L), i = 1, . . . , S to the scattering region (S) and i = S + 1, . . . , ∞ to the right lead (R). Using
the mode matching technique the scattering problem is solved in two steps. In the first step the
Bloch modes of the leads are calculated and in the second step these are then matched to the
scattering region.
2.2.1
The leads are assumed to be ideal wires characterized by a periodic potential. It is then appropriate to identify a layer with a translational period along the wire. By construction, the
Hamiltonian matrix then is the same for each layer in the leads, i.e. Hi = HL/R and Bi = BL/R
for the left/right leads, see Fig. 2.11. Eq. 1.95 is generalized to
-B_{L/R}^\dagger\, c_{i-1} + (E\mathbf{1} - H_{L/R})\, c_i - B_{L/R}\, c_{i+1} = 0,   (2.29)

where 1 is the N × N identity matrix.15 We make the same ansatz as in Eq. 1.97, namely that
the coefficients in successive layers are connected by a Bloch factor λ,

c_{i-1} = c; \qquad c_i = \lambda c; \qquad c_{i+1} = \lambda c_i = \lambda^2 c.   (2.30)
Substituting this ansatz into Eq. 2.29 gives

-B^\dagger c + \lambda (E\mathbf{1} - H)\, c - \lambda^2 B\, c = 0,   (2.31)

where the subscripts L/R have been omitted to simplify the notation. Remember that we work
at a fixed energy E, so Eq. 2.31 is a quadratic eigenvalue equation in λ of dimension N. There
are standard tricks to solve such equations. For instance, by defining d = λc one can convert it
into

\left[ \begin{pmatrix} 0 & \mathbf{1} \\ -B^\dagger & E\mathbf{1} - H \end{pmatrix} - \lambda \begin{pmatrix} \mathbf{1} & 0 \\ 0 & B \end{pmatrix} \right] \begin{pmatrix} c \\ d \end{pmatrix} = 0.   (2.32)
This is a linear (generalized) eigenvalue problem of dimension 2N , which can be solved using
standard numerical routines.
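To make this step concrete, here is a minimal sketch that hands the linearization of Eq. 2.32 to a standard dense eigensolver; the function name lead_modes and the 2 × 2 matrices H and B are illustrative placeholders, not a production implementation:

import numpy as np
from scipy.linalg import eig

def lead_modes(H, B, E):
    # Bloch modes of an ideal lead at energy E: solve the quadratic problem of Eq. 2.31
    # through the 2N-dimensional linearization of Eq. 2.32, with d = lam*c
    N = H.shape[0]
    I = np.eye(N)
    Z = np.zeros((N, N))
    A = np.block([[Z,           I        ],
                  [-B.conj().T, E * I - H]])
    M = np.block([[I, Z],
                  [Z, B]])
    lam, W = eig(A, M)        # generalized eigenproblem of dimension 2N
    c = W[:N, :]              # upper half of each eigenvector holds the layer coefficients
    return lam, c

# toy lead: 2 orbitals per layer, arbitrary Hermitian H and coupling B
rng = np.random.default_rng(0)
A0 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A0 + A0.conj().T) / 2
B = 0.5 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
lam, c = lead_modes(H, B, E=0.2)
print(np.abs(lam))            # |lam| < 1, |lam| > 1 pairs; |lam| = 1 would signal propagating modes

Sorting the 2N eigenvalues by |λ| (and, for |λ| = 1, by the velocity sign of Eq. 2.34 below) then yields the right- and left-going sets discussed next.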
It can be shown that this equation indeed generally has 2N solutions, which can be divided
into N right-going modes and N left-going modes, labelled by a + and a − subscript as
in Eqs. 1.100 and 1.102. Right-going modes are either evanescent waves that are decaying to
the right, or waves of constant amplitude that are propagating to the right, whereas left-going
modes are decaying or propagating to the left. Figs. 2.12 and 2.13 give you a simple idea.
In contrast to the 1D case, in three dimensions we find at a fixed energy in general both
evanescent and propagating modes. We denote the eigenvalues and eigenvectors of Eq. 2.31 by16

\lambda_{\pm,n};\; u_{\pm,n}; \qquad n = 1, \ldots, N.   (2.33)

Together these states form a complete basis set. In the following we assume that the vectors
u_{±,n} are normalized. However, note that in general they are not orthogonal.17 One can easily
15 Again, for non-orthogonal basis sets one can substitute 1 by S, the overlap matrix.
16 Again I remind you that I have omitted the k∥ quantum number. So if you see the index n to denote a mode
in the following, you can substitute it by k∥n to denote the full mode.
17 This follows from the quadratic eigenvalue problem, Eq. 2.31, or the equivalent generalized linear one, Eq. 2.32.
Figure 2.12: Right-going, i.e. +, modes. Black curve: propagating mode, |λ+| = 1. Blue and
red curves: examples of evanescent modes, |λ+| < 1.
Figure 2.13: Left-going, i.e. −, modes. Black curve: propagating mode, |λ−| = 1. Blue and
red curves: examples of evanescent modes, |λ−| > 1.
distinguish right- from left-going evanescent modes on the basis of their eigenvalues. Right-going
evanescent modes, which decay to the right, have |λ+,n| < 1, and left-going evanescent modes,
which decay to the left, have |λ−,n| > 1; see Eq. 2.30 and Figs. 2.12 and 2.13. Propagating
modes per definition have |λ±,n| = 1, so here one has to determine the Bloch velocity in the
propagation direction and use its sign to distinguish right from left propagation. One can show
that for a tight-binding form of the Hamiltonian, the general expression for the Bloch velocities
of Eq. 2.24 becomes

v_{\pm,n} = -\frac{2a}{\hbar}\, \mathrm{Im}\!\left[ \lambda_{\pm,n}\; u_{\pm,n}^\dagger\, B\, u_{\pm,n} \right],   (2.34)

where a is the thickness of the layer. The derivation of this expression is rather technical and
can be found in the literature. In addition one can show that the Bloch velocity is non-zero only
for propagating modes, i.e. evanescent states have a Bloch velocity equal to zero.18
Since the eigenvectors are non-orthogonal, it is convenient to define dual vectors ũ±,n by

\tilde{u}_{\pm,n}^\dagger\, u_{\pm,m} = \delta_{n,m}; \qquad u_{\pm,n}^\dagger\, \tilde{u}_{\pm,m} = \delta_{n,m}.   (2.35)

18 Which is what you might expect. Evanescent modes contribute zero current, i.e. no particles, energy, or
anything else is transported by evanescent modes.
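The bookkeeping just described can be sketched as follows; classify_modes is a hypothetical helper, the velocity sign follows the convention of Eq. 2.34 as reconstructed above, and the single-orbital chain (on-site energy 0, hopping −1) at the end is a toy check whose answer is known analytically:

import numpy as np

def classify_modes(lam, U, B, a=1.0, hbar=1.0, tol=1e-8):
    # split Bloch modes into right-going (+) and left-going (-) sets;
    # lam: eigenvalues, U: columns are the normalized eigenvectors u_n, B: lead coupling matrix
    right, left = [], []
    for n in range(len(lam)):
        u = U[:, n]
        v = -(2.0 * a / hbar) * np.imag(lam[n] * (u.conj() @ B @ u))   # Eq. 2.34
        if abs(abs(lam[n]) - 1.0) < tol:       # propagating: |lam| = 1, sort by velocity sign
            (right if v > 0 else left).append(n)
        elif abs(lam[n]) < 1.0:                # evanescent, decaying to the right
            right.append(n)
        else:                                  # evanescent, decaying to the left
            left.append(n)
    return right, left

# one-orbital chain: H = 0, B = -1, energy inside the band E = -2cos(ka)
E, B = 0.5, np.array([[-1.0]])
lam = np.roots([1.0, E, 1.0]).astype(complex)   # quadratic lam^2 + E*lam + 1 = 0 for this chain
U = np.ones((1, 2), dtype=complex)              # normalized 1-component eigenvectors
print(classify_modes(lam, U, B))                # one right-going and one left-going propagating mode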
Any wave function in the leads can be expressed as a linear combination of the lead modes. This
can be done in a very compact way by defining the two N × N Bloch matrices for right- and
left-going modes

F_\pm = \sum_{n=1}^{N} \lambda_{\pm,n}\, u_{\pm,n}\, \tilde{u}_{\pm,n}^\dagger.   (2.36)

These are the generalizations of the Bloch factors of the one-dimensional case, see Eqs. 1.100
and 1.102. Note that one can easily construct powers of Bloch matrices,

F_\pm^{\,i} = \sum_{n=1}^{N} \lambda_{\pm,n}^{\,i}\, u_{\pm,n}\, \tilde{u}_{\pm,n}^\dagger.   (2.37)

This is valid for any integer i, due to Eq. 2.35. A general solution in the leads can now be
expressed as a recursion relation

c_i = c_{+,i} + c_{-,i} = F_+^{\,i-j}\, c_{+,j} + F_-^{\,i-j}\, c_{-,j}.   (2.38)
In a scattering problem one usually fixes the coecients in one layer using boundary conditions.
By using Eq. 2.38 one can then determine the solution in all the layers of the leads.
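Once the eigenpairs of one direction have been collected, Eq. 2.36 is a one-line spectral construction, since by Eq. 2.35 the rows of the inverse of the eigenvector matrix are exactly the dual vectors. A small sketch with invented right-going eigenvalues and eigenvectors:

import numpy as np

def bloch_matrix(lam, U):
    # F = sum_n lam_n u_n \tilde{u}_n^dagger (Eq. 2.36); with the duals of Eq. 2.35
    # the rows of inv(U) are the \tilde{u}_n^dagger, so F = U diag(lam) U^{-1}
    return U @ np.diag(lam) @ np.linalg.inv(U)

# toy data: 2 right-going modes of a 2-orbital lead (numbers are made up)
lam_plus = np.array([0.6 + 0.2j, 0.9 * np.exp(1j * 0.8)])
U_plus = np.array([[1.0, 0.3 + 0.1j],
                   [0.2j, 1.0]], dtype=complex)
U_plus /= np.linalg.norm(U_plus, axis=0)     # normalize the columns

F_plus = bloch_matrix(lam_plus, U_plus)
# integer powers (Eq. 2.37) follow from the same spectral decomposition:
print(np.allclose(np.linalg.matrix_power(F_plus, 3),
                  U_plus @ np.diag(lam_plus**3) @ np.linalg.inv(U_plus)))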
2.2.2
Mode matching
One can use these recursion relations to set up a set of equations that properly match the leads
to the scattering region. The scattering region is defined by the layers i = 1, . . . , S, see Fig.
2.11. Immediately left of the scattering region one has the recursion relation
c_{-1} = F_{L,+}^{-1}\, c_{+,0} + F_{L,-}^{-1}\, c_{-,0} = \left[ F_{L,+}^{-1} - F_{L,-}^{-1} \right] c_{+,0} + F_{L,-}^{-1}\, c_0,   (2.39)

where in the second step c_0 = c_{+,0} + c_{−,0} has been used. The scattering boundary condition is
introduced in the following way. The vector c_{+,0} is treated as the source, i.e. as the incoming
wave from the left lead, e.g. a specific (propagating) mode of the left lead,

c_{+,0} = u_{L,+,m}.   (2.40)

In the right lead there is only a transmitted, right-going wave, so immediately to the right of the
scattering region one has

c_{S+2} = F_{R,+}\, c_{S+1}.   (2.41)

The relations Eqs. 2.39 and 2.41 can now be used to simplify the tight-binding equations

-B_{i-1}^\dagger\, c_{i-1} + (E\mathbf{1} - H_i)\, c_i - B_i\, c_{i+1} = 0.   (2.42)
For the layers bordering the leads, i = 0 and i = S + 1, this gives

\left( E\mathbf{1} - H_L - B_L^\dagger F_{L,-}^{-1} \right) c_0 - B_0\, c_1 = B_L^\dagger \left[ F_{L,+}^{-1} - F_{L,-}^{-1} \right] u_{L,+,m},   (2.43)

-B_S^\dagger\, c_S + \left( E\mathbf{1} - H_R - B_R F_{R,+} \right) c_{S+1} = 0.   (2.44)
Eq. 2.42, for i = 1, . . . , S, and Eqs. 2.43 and 2.44 can be collected into

(E\mathbf{1} - H')\, \psi = Q_{L,+,m},   (2.45)

with

H' = \begin{pmatrix}
H_L + \Sigma_L(E) & B_0 & & & 0 \\
B_0^\dagger & H_1 & B_1 & & \\
& \ddots & \ddots & \ddots & \\
& & B_{S-1}^\dagger & H_S & B_S \\
0 & & & B_S^\dagger & H_R + \Sigma_R(E)
\end{pmatrix},   (2.46)

and

\psi = \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_S \\ c_{S+1} \end{pmatrix}; \qquad
Q_{L,+,m} = \begin{pmatrix} B_L^\dagger \left[ F_{L,+}^{-1} - F_{L,-}^{-1} \right] u_{L,+,m} \\ 0 \\ \vdots \\ 0 \\ 0 \end{pmatrix}.   (2.47)

The quantities

\Sigma_L(E) = B_L^\dagger F_{L,-}^{-1}; \qquad \Sigma_R(E) = B_R F_{R,+},   (2.48)
are called the self-energies of the left and right leads, respectively, just as in Eq. 1.107. They
contain all the information about the coupling of the scattering region to the leads, as well as the
information about the scattering boundary conditions. Note that the self-energies depend upon
the energy E, since they are expressed in the Bloch matrices and thus in the lead modes, and the
latter have been determined at a fixed energy E. Moreover, the self-energies are non-Hermitian,
which makes the Hamiltonian H′ non-Hermitian.19 An important observation is that Eq. 2.45
has a finite dimension!20
19 This is logical if you know your quantum mechanics. A particle in the scattering region has a finite probability
to leak into the leads, i.e. to disappear from the scattering region. In other words, the particle has a finite lifetime.
This can only be expressed by non-Hermitian Hamiltonians, as Hermitian ones would only give real energies
and time factors e^{−iEt/ħ}, yielding constant probabilities, i.e. infinite lifetimes.
20 If you suspect that the partitioning technique is behind this, you are right.
Eq. 2.45 represents a set of linear equations of dimension (S + 2)N. These can be solved
using the common techniques, e.g. Gaussian elimination (or LU decomposition, as it is also
known), followed by back substitution. Since the matrix H′ has a block tridiagonal form (the
blocks being of dimension N), one can make use of this special form to make the Gaussian
elimination algorithm efficient. The details can be found in the literature. Transmission
matrix elements are obtained by expanding the wave function in the right lead into modes,21

c_{S+1} = \sum_{n=1}^{N} t'_{n,m}\, u_{R,+,n} \quad\Longrightarrow\quad t'_{n,m} = \tilde{u}_{R,+,n}^\dagger\, c_{S+1},   (2.49)

and by normalizing them with the velocities to ensure a unitary scattering matrix; see Eqs. 1.53 and
1.72,

t_{n,m} = \sqrt{\frac{v_{L,+,m}}{v_{R,+,n}}}\; \tilde{u}_{R,+,n}^\dagger\, c_{S+1}.   (2.50)
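To see Eqs. 2.39-2.50 working together, here is a deliberately minimal single-orbital (N = 1) toy version; the helper name transmission_1d, the chain parameters (lead on-site energy 0, hopping −1) and the sign conventions follow the reconstruction above and are illustrative only:

import numpy as np

def transmission_1d(E, eps):
    # mode matching for a single-orbital chain (hopping -1, lead on-site 0) with a
    # scattering region of S sites with on-site energies eps, following Eqs. 2.43-2.50
    k = np.arccos(-E / 2.0)                  # lead dispersion E = -2 cos(ka), a = 1
    lam_p, lam_m = np.exp(1j * k), np.exp(-1j * k)   # right-/left-going Bloch factors
    sigma_L = -lam_p                         # Sigma_L = B^dag F_{L,-}^{-1}, with B = -1
    sigma_R = -lam_p                         # Sigma_R = B F_{R,+}
    S = len(eps)
    n = S + 2                                # layers 0 (left surface) ... S+1 (right surface)
    H = np.zeros((n, n), dtype=complex)
    H += np.diag([sigma_L] + list(eps) + [sigma_R])
    H += np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)   # couplings B = -1
    Q = np.zeros(n, dtype=complex)
    Q[0] = -(1.0 / lam_p - 1.0 / lam_m)      # B^dag [F_{L,+}^{-1} - F_{L,-}^{-1}] u_{L,+,m}
    psi = np.linalg.solve(E * np.eye(n) - H, Q)      # Eq. 2.45
    t = psi[-1]                              # Eqs. 2.49/2.50; the velocities cancel here
    return abs(t) ** 2

print(transmission_1d(E=0.5, eps=[]))            # perfect chain: T = 1
print(transmission_1d(E=0.5, eps=[0.3, -0.2]))   # two "impurity" layers: T < 1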
2.2.3
For those of you who cannot live without Green functions, I can give you the Landauer formalism
expressed in Green functions. The derivations are rather technical, so I will skip most of them.
You can find them in the literature. With respect to the Hamiltonian matrix of Eq. 2.46 a finite
Green function matrix can be defined as
G(E) = (E\mathbf{1} - H')^{-1}.   (2.51)
It can be calculated by matrix inversion using essentially the same block Gaussian elimination
scheme as in solving the set of linear equations, Eq. 2.45. The matrix H′ is non-Hermitian; its
eigenvalues are not real, so the Green function matrix can be evaluated for real energies.24
With respect to the original infinite Hamiltonian H of Eq. 2.27 one can define the usual
retarded (infinite) Green function matrix

G^r(E) = \left[ (E + i\eta)\mathbf{1} - H \right]^{-1},   (2.52)
where one needs the infinitesimal η to avoid the poles on the real axis. The advanced Green
function G^a(E) can be obtained from Eq. 1.80. One can show that for layers in the scattering
region one has

G_{i,j}(E) = G^r_{i,j}(E); \qquad i, j = 0, \ldots, S + 1,   (2.53)

if one properly takes the limit η → 0. In terms of the Green function, the wave function in the
scattering region can be written as

\psi = G(E)\, Q_{L,+,m},   (2.54)

and the transmission amplitudes of Eq. 2.50 become

t_{n,m} = \sqrt{\frac{v_{L,+,m}}{v_{R,+,n}}}\; \tilde{u}_{R,+,n}^\dagger\, G_{S+1,0}(E)\, Q_{L,+,m}.   (2.55)
After some manipulation one can obtain a Fisher-Lee expression that generalizes the 1D
expression of Eq. 1.83
t_{n,m} = i\hbar\, \sqrt{\frac{v_{R,+,n}\, v_{L,+,m}}{a_L\, a_R}}\;\; \tilde{u}_{R,+,n}^\dagger\, G_{S+1,0}(E)\, \tilde{u}'_{L,+,m}.   (2.56)
The notation is a bit messy due to the fact that the modes u_{L/R,+,m/n} are not orthogonal; see Sec.
2.2.1. If they were orthogonal, then the expression would simply contain u_{R,+,n}^† G_{S+1,0}(E) u_{L,+,m}.
This indicates that the transmission amplitude from mode u_{L,+,m} in the left lead to mode u_{R,+,n}
in the right lead is determined by the Green function matrix that brings you from layer i = 0
to layer i = S + 1, i.e. across the scattering region. Proper orthogonalization of the modes
leads to expressions for ũ_{R,+,n} and ũ′_{L,+,m}. The details are a bit messy and can be found in the
literature. The Bloch velocities v_{L/R,+,m/n} serve to make the scattering matrix unitary, as before.
Since v_{L/R,+,m/n} ≠ 0 only for propagating states, Eq. 2.56 expresses explicitly that the transmission
is zero whenever n or m describes an evanescent mode. The layer thicknesses a_{L/R} are
just normalization factors, since our modes are normalized within a layer.
After some algebra the Green function expressions we obtained in the one-dimensional case
can be generalized to three dimensions. The velocity of Eq. 1.77 becomes a velocity matrix
V_{L/R} = \frac{2 a_{L/R}}{\hbar}\, \mathrm{Im}\, \Sigma_{L/R}.   (2.57)

It is a diagonal matrix of dimension N (the total number of modes). The diagonal matrix
elements are the mode velocities v_{L/R,±,n} of Eq. 2.34. Since v_{L/R,±,n} = 0 for evanescent states,
this means that in general the velocity matrices are singular. The transmission amplitudes of
Eq. 2.56 can be assembled in a transmission matrix t, which using the velocity matrices can be
expressed as

t = 2i\, \sqrt{\mathrm{Im}\, \Sigma_R}\;\; G_{S+1,0}(E)\; \sqrt{\mathrm{Im}\, \Sigma_L}.   (2.58)

Note that this generalizes Eq. 1.84. Finally, the total transmission can be expressed as

T = \mathrm{Tr}\!\left[ t^\dagger t \right] = 4\, \mathrm{Tr}\!\left[ \mathrm{Im}\, \Sigma_R\; G^r_{S+1,0}(E)\; \mathrm{Im}\, \Sigma_L\; G^a_{0,S+1}(E) \right],   (2.59)

which generalizes Eq. 1.85. This is known as the Caroli expression or the NEGF expression.25
25 NEGF = non-equilibrium Green function. In the linear response regime one can use equilibrium Green
functions, however, which is what we are doing here.
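As a closing check, the Caroli expression can be compared with the mode-matching result on the same single-orbital toy chain used in the sketch above; for 1 × 1 blocks Eq. 2.59 reduces to 4 Im Σ_R |G^r_{S+1,0}|² Im Σ_L. Again a hedged sketch, not a production NEGF code:

import numpy as np

def caroli_transmission(E, eps):
    # evaluate Eq. 2.59 for the single-orbital toy chain (hopping -1, lead on-site 0);
    # sigma is the 1D lead self-energy, identical for the left and right leads
    k = np.arccos(-E / 2.0)
    sigma = -np.exp(1j * k)
    S = len(eps)
    n = S + 2
    H = np.diag([sigma] + list(eps) + [sigma]).astype(complex)
    H += np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
    G = np.linalg.inv(E * np.eye(n) - H)     # Eq. 2.51; no +i*eta needed, H' is non-Hermitian
    G_r = G[-1, 0]                           # G^r_{S+1,0}(E)
    return 4.0 * sigma.imag * sigma.imag * abs(G_r) ** 2   # Eq. 2.59 for 1x1 blocks

print(caroli_transmission(E=0.5, eps=[]))            # perfect chain: T = 1
print(caroli_transmission(E=0.5, eps=[0.3, -0.2]))   # agrees with the mode-matching sketch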