Quantum Measurement and Control
The control of individual quantum systems promises a new technology for the twenty-first
century: quantum technology. This book is the first comprehensive treatment of modern
quantum measurement and measurement-based quantum control, which are vital elements
for realizing quantum technology.
Readers are introduced to key experiments and technologies through dozens of recent
experiments in cavity QED, quantum optics, mesoscopic electronics and trapped particles,
several of which are analysed in detail. Nearly 300 exercises help build understanding, and
prepare readers for research in these exciting areas.
This important book will interest graduate students and researchers in quantum information, quantum metrology, quantum control and related fields. Novel topics covered include
adaptive measurement; realistic detector models; mesoscopic current detection; Markovian,
state-based and optimal feedback; and applications to quantum information processing.
Howard M. Wiseman is Director of the Centre for Quantum Dynamics at Griffith
University, Australia. He has worked in quantum measurement and control theory since
1992, and is a Fellow of the Australian Academy of Science (AAS). He has received the
Bragg Medal of the Australian Institute of Physics, the Pawsey Medal of the AAS and the
Malcolm Macintosh Medal of the Federal Science Ministry.
Gerard J. Milburn is an Australian Research Council Federation Fellow at the University of Queensland, Australia. He has written three previous books, on quantum optics,
quantum technology and quantum computing. He has been awarded the Boas Medal of the
Australian Institute of Physics and is a Fellow of the Australian Academy of Science and
the American Physical Society.
An outstanding introduction, at the advanced graduate level, to the mathematical description of quantum
measurements, parameter estimation in quantum mechanics, and open quantum systems, with attention to
how the theory applies in a variety of physical settings. Once assembled, these mathematical tools are
used to formulate the theory of quantum feedback control. Highly recommended for the physicist who
wants to understand the application of control theory to quantum systems and for the control theorist who
is curious about how to use control theory in a quantum context.
Carlton Caves, University of New Mexico
A comprehensive and elegant presentation at the interface of quantum optics and quantum measurement
theory. Essential reading for students and practitioners, both in the growing quantum technologies
revolution.
Howard Carmichael, The University of Auckland
Quantum Measurement and Control provides a comprehensive and pedagogical introduction to critical
new engineering methodology for emerging applications in quantum and nano-scale technology. By
presenting fundamental topics first in a classical setting and then with quantum generalizations, Wiseman
and Milburn manage not only to provide a lucid guide to the contemporary toolbox of quantum
measurement and control but also to clarify important underlying connections between quantum and
classical probability theory. The level of presentation is suitable for a broad audience, including both
physicists and engineers, and recommendations for further reading are provided in each chapter. It would
make a fine textbook for graduate-level coursework.
Hideo Mabuchi, Stanford University
This book presents a unique summary of the theory of quantum measurements and control by pioneers in
the field. The clarity of presentation and the varied selection of examples and exercises guide the reader
through the exciting development from the earliest foundation of measurements in quantum mechanics to
the most recent fundamental and practical developments within the theory of quantum measurements and
control. The ideal blend of precise mathematical arguments and physical explanations and examples
reflects the authors' affection for the topic to which they have themselves made pioneering contributions.
Klaus Mølmer, University of Aarhus
QUANTUM MEASUREMENT
AND CONTROL
HOWARD M. WISEMAN
Griffith University
GERARD J. MILBURN
University of Queensland
ISBN-13 978-0-511-65841-9 eBook (NetLibrary)
ISBN-13 978-0-521-80442-4 Hardback
Contents
Preface
1 Quantum measurement theory
1.1 Classical measurement theory
1.2 Quantum measurement theory
1.3 Representing outcomes as operators
1.4 Most general formulation of quantum measurements
1.5 Measuring a single photon
1.6 Further reading
2 Quantum parameter estimation
2.1 Quantum limits to parameter estimation
2.2 Optimality using Fisher information
2.3
Examples of BC-optimal parameter estimation
2.4
Interferometry other optimality conditions
2.5
Interferometry adaptive parameter estimation
2.6 Experimental results for adaptive phase estimation
2.7 Quantum state discrimination
2.8
Further reading
3 Open quantum systems
3.1 Introduction
3.2 The Born–Markov master equation
3.3 The radiative-damping master equation
3.4 Irreversibility without the rotating-wave approximation
3.5 Fermionic reservoirs
3.6 The Lindblad form and positivity
3.7 Decoherence and the pointer basis
3.8 Preferred ensembles
3.9 Decoherence in a quantum optical system
3.10 Other examples of decoherence
7.9 Adaptive phase measurement and single-rail LOQC
7.10 Further reading
Appendix A: Quantum mechanics and phase space
A.1 Fundamentals of quantum mechanics
A.2 Multipartite systems and entanglement
A.3 Position and momentum
A.4 The harmonic oscillator
A.5 Quasiprobability distributions
Appendix B: Stochastic differential equations
B.1 Gaussian white noise
B.2 Itô stochastic differential calculus
B.3 The Itô–Stratonovich relation
B.4 Solutions to SDEs
B.5 The connection to the Fokker–Planck equation
B.6 More general noise
References
Index
Preface
The twenty-first century is seeing the emergence of the first truly quantum technologies;
that is, technologies that rely on the counter-intuitive properties of individual quantum systems and can often outperform any conventional technology. Examples include quantum
computing, which promises to be much faster than conventional computing for certain problems, and quantum metrology, which promises much more sensitive parameter estimation
than that offered by conventional techniques. To realize these promises, it is necessary to
understand the measurement and control of quantum systems. This book serves as an introduction to quantum measurement and control, including some of the latest developments
in both theory and experiment.
In using the term 'feedback' (or 'feedforward') we are assuming that a measurement step intervenes; see Section 5.8.1 for further
discussion.
We have not attempted to give a full review of research in the field. The following section
of this preface goes some way towards redressing this. The further reading section which
concludes each chapter also helps. Our selection of material is naturally biased towards
our own work, and we ask the forbearance of the many workers in the field, past or present,
whom we have overlooked.
We have also not attempted to write an introduction to quantum mechanics suitable for
those who have no previous knowledge in this area. We do cover all of the fundamentals in
Chapter 1 and Appendix A, but formal knowledge is no substitute for the familiarity which
comes with working through exercises and gradually coming to grips with new concepts
through an introductory course or text-book.
Our book is therefore aimed at two groups wishing to do research in, or make practical
use of, quantum measurement and control theory. The first is physicists, for whom we
provide the necessary introduction to concepts in classical control theory. The second is
control engineers who have already been introduced to quantum mechanics, or who are
introducing themselves to it in parallel with reading our book.
In all but a few cases, the results we present are derived in the text, with small gaps
to be filled in by the reader as exercises. The substantial appendices will help the reader
less familiar with quantum mechanics (especially quantum mechanics in phase space)
and stochastic calculus. However, we keep the level of mathematical sophistication to a
minimum, with an emphasis on building intuition. This is necessarily done at the expense
of rigour; ours is not a book that is likely to appeal to mathematicians.
Historical background
Quantum measurement theory provides the essential link between the quantum formalism
and the familiar classical world of macroscopic apparatuses. Given that, it is surprising how
much of quantum mechanics was developed in the absence of formal quantum measurement
theory: the structure of atoms and molecules, scattering theory, quantized fields, spontaneous emission etc. Heisenberg [Hei30] introduced the idea of the reduction of the wavepacket,
but it was Dirac [Dir30] who first set out quantum measurement theory in a reasonably
rigorous and general fashion. Shortly afterwards von Neumann [vN32] added a mathematician's rigour to Dirac's idea. A minor correction of von Neumann's projection postulate by
Lüders [Lud51] gave the theory of projective measurements that is still used today.
After its formalization by von Neumann, quantum measurement theory ceased to be of
interest to most quantum physicists, except perhaps in debates about the interpretation of
quantum mechanics [Sch49]. In most experiments, measurements were either made on a
large ensemble of quantum particles, or, if they were made on an individual particle, they
effectively destroyed that particle by detecting it. Thus a theory of how the state of an
individual quantum system changed upon measurement was unnecessary. However, some
mathematical physicists concerned themselves with generalizing quantum measurement
theory to describe non-ideal measurements, a programme that was completed in the 1970s
by Davies [Dav76] and Kraus [Kra83]. Davies in particular showed how the new formalism
could describe a continuously monitored quantum system, specifically for the case of
quantum jumps [SD81].
By this time, experimental techniques had developed to the point where it was possible
to make quantum-limited measurements on an individual quantum system. The prediction
[CK85] and observation [NSD86, BHIW86] of quantum jumps in a single trapped ion was
a watershed in making physicists (in quantum optics at least) realize that there was more
to quantum measurement theory than was contained in von Neumann's formalization. This
led to a second watershed in the early 1990s when it was realized that quantum jumps could
be described by a stochastic dynamical equation for the quantum state, giving a new numerical simulation method for open quantum systems [DCM92, GPZ92, Car93]. Carmichael
[Car93] coined the term quantum trajectory to describe this stochastic evolution of the
quantum state. He emphasized the relation of this work to the theory of photodetection, and
generalized the equations to include quantum diffusion, relating to homodyne detection.
Curiously, quantum diffusion equations had independently, and somewhat earlier, been
derived in other branches of physics [Bel02]. In the mathematical-physics literature,
Belavkin [Bel88, BS92] had made use of quantum stochastic calculus to derive quantum
diffusion equations, and Barchielli [Bar90, Bar93] had generalized this to include quantum-jump equations. Belavkin had drawn upon the classical control theory of how a probability
distribution could be continuously (in time) conditioned upon noisy measurements, a process called filtering. He thus used the term quantum filtering equations for the quantum
analogue. Meanwhile, in the quantum-foundations literature, several workers also derived
these sorts of equations as attempts to solve the quantum-measurement problem by incorporating an objective collapse of the wavefunction [Gis89, Dio88, Pea89, GP92a, GP92b].
In this book we are not concerned with the quantum measurement problem. By contrast,
Belavkin's idea of making an analogy with classical control theory is very important for
this book. In particular, Belavkin showed how quantum filtering equations can be applied
to the problem of feedback control of quantum systems [Bel83, Bel88, Bel99]. A simpler
version of this basic idea was developed independently by the present authors [WM93c,
Wis94]. Quantum feedback experiments (in quantum optics) actually date back to the mid
1980s [WJ85a, MY86]. However, only in recent years have sophisticated experiments been
performed in which the quantum trajectory (quantum filtering equation) has been essential
to the design of the quantum control algorithm [AAS+02, SRO+02].
At this point we should clarify exactly what we mean by quantum control. Control is,
very roughly, making a device work well under adverse conditions such as (i) uncertainties in
parameters and/or initial conditions; (ii) complicated dynamics; (iii) noise in the dynamics;
(iv) incomplete measurements; and (v) resource constraints. Quantum control is control
for which the design requires knowledge of quantum mechanics. That is, it does not mean
that the whole control process must be treated quantum mechanically. Typically only a
small part (the system) is treated quantum mechanically, while the measurement device,
amplifiers, collators, computers, signal generators and modulators are all treated classically.
As stated above, we are primarily concerned in this book with quantum feedback control.
However, there are other sorts of quantum control in which measurement theory does not
play a central role. Here we briefly discuss a few of these; see Ref. [MK05] for a fuller review
of types of quantum control and Ref. [MMW05] for a recent sample of the field. First, open-loop control means applying control theory to manipulate the dynamics of systems in the
absence of measurement [HTC83, DA07]. The first models of quantum computing were all
based upon open-loop control [Pre98]. It has been applied to good effect in finite quantum
systems in which the Hamiltonian is known to great precision and real-time measurement
is impossible, such as in nuclear magnetic resonance [KGB02, KLRG03]. Second, there
is learning control, which applies to systems in which the Hamiltonian is not known well
and real-time measurement is again impossible, such as chemical reactions [PDR88]. Here
the idea is to try some control strategy with many free parameters, see what results, adjust
these parameters, and try again. Over time, an automated learning procedure can lead to
significant improvements in the performance of the control strategy [RdVRMK00]. Finally,
general mathematical techniques developed by control theorists, such as semi-definite
programming and model reduction, have found application in quantum information theory.
Examples include distinguishing separable and entangled states [DPS02] and determining
the performance of quantum codes [RDM02], respectively.
The structure of this book
The structure of this book is shown in Fig. 1. It is not a linear structure; for example,
the reader interested in Chapter 7 could skip most of the material in Chapters 2, 3 and
6. Note that the reliance relation (indicated by a solid arrow) is meant to be transitive.
That is, if Chapter C is indicated to rely upon Chapter B, and likewise Chapter B upon
Chapter A, then Chapter C may also rely directly upon Chapter A. (This convention avoids
a proliferation of arrows.) Not shown in the diagram are the two Appendices. Material in
the first, an introduction to quantum mechanics and phase space, is used from Chapter 1
onwards. Material in the second, on stochastic differential equations, is used from Chapter 3
onwards.
For the benefit of readers who wish to skip chapters, we will explain the meaning of
each of the dashed arrows. The dashed arrow from Chapter 2 to Chapter 7 is for Section 2.5 on adaptive measurements, which is used in Section 7.9. That from Chapter 3
to Chapter 4 is for Section 3.6, on the Lindblad form of the master equation, and Section 3.11, on the Heisenberg picture dynamics. That from Chapter 3 to Chapter 6 is for
Section 3.8 on preferred ensembles. That from Chapter 5 to Chapter 6 is for Section 5.5
on homodyne-based Markovian feedback. Finally, that from Chapter 6 to Chapter 7 is for
the concept of an optimal quantum filter, which is introduced in Section 6.5. Of course,
there are other links between various sections of different chapters, but these are the most
important.
Our book is probably too long to be covered in a single graduate course. However,
selected chapters (or selected topics within chapters) could be used as the basis of such a
course, and the above diagram should aid a course organizer in the selection of material.
Fig. 1 The structure of this book. A solid arrow from one chapter to another indicates that the latter
relies on the former. A dashed arrow indicates a partial reliance.
Here are some examples. Chapters 1, 3 and 4 could be the text for a course on open quantum
systems. Chapters 1, 4 and 6 (plus selected other sections) could be the text for a course on
state-based quantum control. Chapters 1 and 2 could be the text for a course on quantum
measurement theory.
Acknowledgements
This book has benefited from the input of many people over the years, both direct (from
reading chapter drafts) and indirect (in conversation with the authors). Andy Chia deserves
particular thanks for his very thorough reading of several chapters and his helpful suggestions for improvements and additions. Prahlad Warszawski also deserves mention for
reading and commenting upon early drafts of the early chapters. The quality of the figures
is due largely to the meticulous labour of Andy Chia. Much other painstaking work was
assisted greatly by Nadine Wiseman, Jay Gambetta, Nisha Khan, Josh Combes and Ruth
Forrest. We are grateful to all of them for their efforts.
In compiling a list of all those who deserve thanks, it is inevitable that some will be inadvertently omitted. At the risk of offending these people, we would also like to acknowledge
scientific interaction over many years with (in alphabetical order) Slava Belavkin, Andy
Berglund, Dominic Berry, Luc Bouten, Sam Braunstein, Zoe Brady, Howard Carmichael,
Carlton Caves, Anushya Chandran, Yanbei Chen, Andy Chia, Josh Combes, Lajos Diosi,
Andrew Doherty, Jay Gambetta, Crispin Gardiner, J. M. Geremia, Hsi-Sheng Goan, Ramon
van Handel, Kurt Jacobs, Matt James, Sasha Korotkov, Navin Khaneja, P. S. Krishnaprasad,
Erik Lucero, Hans Maasen, Hideo Mabuchi, John Martinis, Ahsan Nazir, Luis Orozco, Neil
Oxtoby, Mohan Sarovar, Keith Schwab, John Stockton, Laura Thomsen and Stuart Wilson.
Chapter 6 requires a special discussion. Sections 6.3–6.6, while building on Ref. [WD05],
contain a large number of hitherto unpublished results obtained by one of us (H.M.W.) in
collaboration with Andrew Doherty and (more recently) Andy Chia. This material has
circulated in the community in draft form for several years. It is our intention that much of
this material, together with further sections, will eventually be published as a review article
by Wiseman, Doherty and Chia.
The contributions of others of course take away none of the responsibility of the authors
for errors in the text. In a book of this size, there are bound to be very many. Readers are
invited to post corrections and comments on the following website, which will also contain
an official list of errata and supplementary material:
www.quantum-measurement-and-control.org
1
Quantum measurement theory
This space is often called 'phase space', with 'configuration space' referring only to the space of positions. We will not use
'configuration space' with this meaning.
conscientiously for the first two chapters, but in subsequent chapters we will become more
relaxed about such issues in order to avoid undue notational complexity.
The system state, as we have defined it, represents an observers knowledge about the
system variables. Unless the probability distribution is non-zero only for a single configuration, we say that it represents a state of uncertainty or incomplete knowledge. That is,
in this book we adopt the position that probabilities are subjective: they represent degrees of
certainty rather than objective properties of the world. This point of view may be unfamiliar
and lead to uncomfortable ideas. For example, different observers, with different knowledge
about a system, would in general assign different states to the same system. This is not a
problem for these observers, as long as the different states are consistent. This is the case
as long as their supports on configuration space are not disjoint (that is, as long as they all
assign a non-zero probability to at least one set of values for the system variables). This
guarantees that there is at least one state of complete knowledge (that is, one configuration)
that all observers agree is a possible state.
We now consider measurement of a classical system. With a perfect measurement of X,
the observer would simply find out its value, say x′. The system state would then be a state
of complete knowledge about this variable. For discrete variables this is represented by the
Kronecker δ-function ℘(x) = δ_{x,x′}, whereas for a continuous variable it is represented by
the Dirac δ-function ℘(x) = δ(x − x′). For comparison with the quantum case (in following
sections), it is more enlightening to consider imperfect measurements. Suppose that one only
has access to the values of the system variables indirectly, through an apparatus variable
Y. The state of the apparatus is also specified by a probability distribution ℘(y). By some
physical process, the apparatus variable becomes statistically dependent on the system
variable. That is, the configuration of the apparatus is correlated (perhaps imperfectly) with
the configuration of the system. If the apparatus variable is observed, {℘(y): ∀y} is simply
the probability distribution of measurement outcomes.
One way of thinking about the system–apparatus correlation is illustrated in Fig. 1.1.
The correlation is defined by a functional relationship among the readout variable, Y, the
system variable, X, before the measurement, and a random variable, Ξ, which represents
extra noise in the measurement outcome. We can specify this by a function

Y = G(X, Ξ),    (1.1)

together with a probability distribution ℘(ξ) for the noise. Here, the noise is assumed to
be independent of the system, and is assumed not to affect the system. That is, we restrict
our consideration for the moment to non-disturbing measurements, for which X after the
measurement is the same as X before the measurement.
[Fig. 1.1: a noisy measurement. The system variable X and the noise Ξ together determine the readout Y = G(X, Ξ) of the measuring apparatus.]
we have introduced another abuse of notation, namely that ℘(x := a) means ℘(x) evaluated
at x = a, where a is any number or variable. In other words, it is another way of writing
Pr[X = a] (for the case of discrete variables). The convenience of this notation will become
evident.
We assume that the apparatus and noise are also described by binary variables with values
0 and 1. We take the output variable Y to be the binary addition (that is, addition modulo
2) of the system variable X and the noise variable Ξ. In the language of binary logic, this
is called the 'exclusive or' (XOR) of these two variables, and is written as

Y = X ⊕ Ξ.    (1.2)

We take the noise to be described by ℘(ξ := 0) = μ, so that the readout probabilities are

℘(y := 1) = μ℘(x := 1) + (1 − μ)℘(x := 0),    (1.3)
℘(y := 0) = μ℘(x := 0) + (1 − μ)℘(x := 1).    (1.4)
This may be written more succinctly by inverting Eq. (1.2) to obtain Ξ = X ⊕ Y and
writing

℘(y) = Σ_{x=0}^{1} ℘(ξ := x ⊕ y)℘(x).    (1.5)
(1.5)
In the case of a binary variable X with distribution ℘(x), it is easy to verify that the mean
is given by E[X] = ℘(x := 1). Here we are using E to represent 'expectation of'. That is,
in the general case,

E[X] = Σ_x x Pr[X = x] = Σ_x x℘(x).    (1.6)

More generally,

E[f(X)] = Σ_x f(x)Pr[X = x] = Σ_x f(x)℘(x).    (1.7)

The variance of a variable is defined as

Var[X] ≡ E[X²] − (E[X])².    (1.8)

For the binary measurement model above, the mean and variance of the readout variable
Y evaluate to

E[Y] = (2μ − 1)E[X] + (1 − μ),    (1.9)
Var[Y] = μ(1 − μ) + (2μ − 1)² Var[X].    (1.10)
Equation (1.9) shows that the average measurement result is the system variable mean,
scaled by a factor of 2μ − 1, plus a constant offset of 1 − μ. The scaling factor also appears
in the variance equation (1.10), together with a constant (the first term) due to the noise
added by the measurement process. When the measurement is ideal (μ = 1), the mean and
variance of the readout variable directly reflect the statistics of the measured system state.
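These moments can be checked by brute-force enumeration of the binary model. The following Python sketch (the function name and parameter values are ours, purely for illustration) builds the joint distribution of X and Ξ and computes the readout moments:

```python
import itertools

def readout_stats(p_x1, mu):
    """Moments of Y = X XOR Xi for the binary measurement model.

    p_x1 is Pr[X = 1]; mu is Pr[Xi = 0] (no noise flip).
    Returns (E[Y], Var[Y]) by enumerating the joint distribution.
    """
    mean = 0.0
    mean_sq = 0.0
    for x, xi in itertools.product((0, 1), repeat=2):
        # Noise is independent of the system, so probabilities factorize.
        p = (p_x1 if x else 1 - p_x1) * (1 - mu if xi else mu)
        y = x ^ xi
        mean += p * y
        mean_sq += p * y * y
    return mean, mean_sq - mean**2

# Agreement with E[Y] = (2*mu - 1)*E[X] + (1 - mu) and
# Var[Y] = mu*(1 - mu) + (2*mu - 1)**2 * Var[X]:
p, mu = 0.3, 0.8
m, v = readout_stats(p, mu)
assert abs(m - ((2*mu - 1)*p + (1 - mu))) < 1e-12
assert abs(v - (mu*(1 - mu) + (2*mu - 1)**2 * p*(1 - p))) < 1e-12
```

An ideal measurement (μ = 1) reproduces the system statistics exactly, while μ = 1/2 yields a readout carrying no information about X.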
Conditioning upon the result of a measurement is described by Bayes' theorem,

Pr[A|B] = Pr[A ∩ B] / Pr[B],    (1.11)

where A and B are events, A ∩ B is their intersection and A|B is to be read as 'A given B'.
In an obvious generalization of this notation from events to the values of system variables,
Bayes' theorem says that the conditional system state may be written in terms of the a-priori
(or prior) system state ℘(x) as

℘′(x|y) = ℘(y|x)℘(x) / ℘(y).    (1.12)
Here the prime emphasizes that this is an a-posteriori state, and (sticking to the discrete
case as usual)

℘(y) = Σ_x ℘(y|x)℘(x).    (1.13)
The conditional probability ℘(y|x) follows from the measurement model as

℘(y|x) = Σ_ξ ℘(y|x, ξ)℘(ξ) = Σ_ξ δ_{y,G(x,ξ)} ℘(ξ).    (1.14)

Here ℘(y|x, ξ) means the state of y given the values of x and ξ. If the output function
Y = G(X, Ξ) is invertible in the sense that there is a function G⁻¹ such that Ξ = G⁻¹(X, Y),
then we can further simplify this as

℘(y|x) = Σ_ξ δ_{ξ,G⁻¹(x,y)} ℘(ξ) = ℘(ξ := G⁻¹(x, y)).    (1.15)

The conditional state is then

℘′(x|y) = ℘(ξ := G⁻¹(x, y))℘(x) / ℘(y).    (1.16)
Exercise 1.2 If you are unfamiliar with probability theory, derive the first equality in
Eq. (1.14).
As well as defining the conditional post-measurement system state, we can define an
unconditional posterior state by averaging over the possible measurement results:

℘′(x) = Σ_y ℘′(x|y)℘(y) = Σ_y ℘(ξ := G⁻¹(x, y))℘(x).    (1.17)
The terms conditional and unconditional are sometimes replaced by the terms selective and
non-selective, respectively. In this case of a non-disturbing measurement, it is clear that

℘′(x) = ℘(x).    (1.18)

That is, the unconditional posterior state is always the same as the prior state. This is the
counterpart of the statement that the system variable X is unaffected by the measurement.
Exercise 1.3 Determine the posterior conditional states ℘′(x|y) in the above binary example for the two cases y = 0 and y = 1. Show that, in the limit μ → 1, ℘′(x|y) → δ_{x,y}
(assuming that ℘(x := y) ≠ 0), whereas, in the case μ = 1/2, ℘′(x|y) = ℘(x). Interpret
these results.
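The Bayesian update for this binary example can likewise be written out in a few lines of Python (again, the function name is an illustrative choice of ours; the distribution ℘ is represented as a plain pair of probabilities):

```python
def posterior(prior_x1, mu, y):
    """Pr[X = 1 | Y = y] for the binary model Y = X XOR Xi, Pr[Xi = 0] = mu."""
    def likelihood(x):
        # wp(y | x) = wp(xi := x XOR y), from inverting Y = X XOR Xi.
        return mu if (x ^ y) == 0 else 1 - mu
    prior = {0: 1 - prior_x1, 1: prior_x1}
    unnorm = {x: likelihood(x) * prior[x] for x in (0, 1)}
    p_y = sum(unnorm.values())      # wp(y), the normalization
    return unnorm[1] / p_y

# mu = 1/2: the result is uninformative and the posterior equals the prior.
assert abs(posterior(0.3, 0.5, 1) - 0.3) < 1e-12
# mu close to 1: the posterior concentrates on x = y.
assert posterior(0.3, 0.99, 1) > 0.97
```

Letting μ → 1 recovers the δ-function posterior discussed in Exercise 1.3.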
For the continuous case we take the example of a simple additive readout,

Y = G(X, Ξ) = X + Ξ,    (1.19)

so that Ξ = G⁻¹(X, Y) = Y − X. We must also specify the probability density for the
noise variable Ξ, and a common choice is a zero-mean Gaussian with a variance σ²:

℘(ξ) = (2πσ²)^{−1/2} e^{−ξ²/(2σ²)}.    (1.20)
The post-measurement apparatus state is given by the continuous analogue of Eq. (1.13),

℘(y) = ∫ ℘(y|x)℘(x)dx.    (1.21)
Exercise 1.4 Show that the mean and variance of the state ℘(y) are E[X] and Var[X] + σ²,
respectively. This clearly shows the effect of the noise.
Finding the conditional states in this case is difficult in general. However, it is greatly
simplified if the a-priori system state is Gaussian:

℘(x) = (2πσ₀²)^{−1/2} exp[−(x − x̄)²/(2σ₀²)],    (1.22)

because then the conditional states are still Gaussian.
Exercise 1.5 Verify this, and show that the conditional mean and variance given a result
y are, respectively,

x̄′ = (σ₀²y + σ²x̄)/(σ² + σ₀²),    (σ₀′)² = σ²σ₀²/(σ² + σ₀²).    (1.23)

Hence show that, in the limit σ → 0, the conditional state ℘′(x|y) converges to δ(x − y),
and an ideal measurement is recovered.
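The Gaussian conditioning formulae can be verified numerically by discretizing Bayes' rule on a grid. In this Python sketch the symbols s0 (prior width) and sigma (noise width) mirror those used above, while the grid parameters are arbitrary choices of ours:

```python
import numpy as np

def gaussian_posterior_moments(xbar, s0, sigma, y, grid_half=10.0, n=20001):
    """Condition a Gaussian prior (mean xbar, width s0) on a reading y
    corrupted by Gaussian noise of width sigma (model Y = X + Xi),
    by discretizing Bayes' rule on a grid."""
    x = np.linspace(xbar - grid_half, xbar + grid_half, n)
    prior = np.exp(-(x - xbar) ** 2 / (2 * s0**2))
    likelihood = np.exp(-(y - x) ** 2 / (2 * sigma**2))  # wp(y|x)
    post = prior * likelihood
    post /= post.sum()                      # normalize on the grid
    mean = (x * post).sum()
    var = ((x - mean) ** 2 * post).sum()
    return mean, var

xbar, s0, sigma, y = 1.0, 2.0, 0.5, 2.5
mean, var = gaussian_posterior_moments(xbar, s0, sigma, y)
# Agreement with the closed-form conditional moments:
assert abs(mean - (s0**2 * y + sigma**2 * xbar) / (sigma**2 + s0**2)) < 1e-4
assert abs(var - sigma**2 * s0**2 / (sigma**2 + s0**2)) < 1e-4
```

Shrinking sigma pulls the conditional mean towards the reading y, reproducing the ideal-measurement limit of Exercise 1.5.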
result (nothing, or flames) will certainly reveal whether or not there was petrol inside the
can, but the final state of the system after the measurement will have no petrol fumes inside
in either case.
We can generalize Bayes' theorem to deal with this case by allowing a state-changing
operation to act upon the state after applying Bayes' theorem. Say the system state is ℘(x).
For simplicity we will take X to be a discrete random variable, with the configuration
space being {0, 1, . . ., n − 1}. Say Y is the result of the measurement as usual. Then this
state-changing operation is described by an n × n matrix B_y, whose element B_y(x|x′) is the
probability that the measurement will cause the system to make a transition, from the state
in which X = x′ to the state in which X = x, given that the result Y = y was obtained.
Thus, for all x′ and all y,

B_y(x|x′) ≥ 0,    Σ_x B_y(x|x′) = 1.    (1.24)
The complete measurement is then described by first applying Bayes' theorem and then
this back-action, giving the unnormalized conditional state

℘̃′(x|y) = Σ_{x′} B_y(x|x′)℘(y|x′)℘(x′)    (1.25)
        = Σ_{x′} O_y(x|x′)℘(x′),    (1.26)

where we have defined

O_y(x|x′) = B_y(x|x′)℘(y|x′).    (1.27)

Here we are introducing the convention of using a tilde to indicate an unnormalized state,
with a norm of less than unity. This norm is equal to

℘(y) = Σ_x Σ_{x′} O_y(x|x′)℘(x′),    (1.28)
the probability of obtaining the result Y = y. Maps that take states to (possibly unnormalized) states are known as positive maps. The normalized conditional system state is

℘′(x|y) = Σ_{x′} O_y(x|x′)℘(x′)/℘(y).    (1.29)
From the properties of O_y, it follows that it is possible to find an n-vector E_y with positive
elements E_y(x), such that the probability formula simplifies:

Σ_x Σ_{x′} O_y(x|x′)℘(x′) = Σ_x E_y(x)℘(x).    (1.30)

Specifically,

E_y(x′) = Σ_x O_y(x|x′),    (1.31)

and the completeness of the probabilities requires that, for all x,

Σ_y E_y(x) = 1.    (1.32)

This is the only mathematical restriction on {O_y: ∀y} (apart from requiring that it be a
positive map).
Exercise 1.6 Formulate the match-in-the-tin measurement technique described above. Let
the states X = 1 and X = 0 correspond to petrol fumes and no petrol fumes, respectively.
Let the results Y = 1 and Y = 0 correspond to flames and no flames, respectively. Determine
the two matrices O_{y:=0} and O_{y:=1} (each of which is a 2 × 2 matrix).
The unconditional system state after the measurement is

℘′(x) = Σ_y Σ_{x′} O_y(x|x′)℘(x′) = Σ_{x′} O(x|x′)℘(x′),    (1.33)

where we have defined

O = Σ_y O_y.    (1.34)
Exercise 1.7 Show that O is the identity if and only if there is no back-action.
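The positive-map formalism translates directly into matrix arithmetic. In the following Python sketch the particular matrices B_y and likelihoods are hypothetical, chosen by us only to satisfy the constraints (the columns of O = Σ_y O_y must sum to one):

```python
import numpy as np

def measure(O_list, prior):
    """Apply a measurement described by positive maps O_y (n x n matrices).

    Returns the outcome distribution wp(y) and the normalized
    conditional states wp'(x|y) = sum_x' O_y(x|x') wp(x') / wp(y).
    """
    prior = np.asarray(prior, dtype=float)
    unnorm = [O @ prior for O in O_list]           # unnormalized tilde-states
    p_y = np.array([u.sum() for u in unnorm])      # norms = Pr[Y = y]
    cond = [u / p for u, p in zip(unnorm, p_y)]
    return p_y, cond

# Hypothetical two-outcome example: O_y(x|x') = B_y(x|x') * wp(y|x').
L = np.array([[0.9, 0.2],      # wp(y = 0 | x')
              [0.1, 0.8]])     # wp(y = 1 | x')
B0 = np.eye(2)                              # outcome 0: no back-action
B1 = np.array([[1.0, 1.0], [0.0, 0.0]])     # outcome 1: reset to x = 0
O_list = [B0 * L[0], B1 * L[1]]

O = sum(O_list)
assert np.allclose(O.sum(axis=0), 1.0)      # completeness: columns sum to 1

p_y, cond = measure(O_list, [0.5, 0.5])
assert np.isclose(p_y.sum(), 1.0)
assert np.allclose(cond[1], [1.0, 0.0])     # after outcome 1, X = 0 for sure
```

With B_0 and B_1 both equal to the identity there is no back-action and O becomes the identity matrix, consistent with Exercise 1.7.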
Exercise 1.8 Show, from this definition, that two different pure states |ψ₁⟩ and |ψ₂⟩ cannot be
consistent states for any system.
Hint: Consider the operator σ̂_j = ρ̂ − |ψ_j⟩⟨ψ_j|. Assuming ρ̂ ≠ |ψ_j⟩⟨ψ_j|, show that

Tr[σ̂_j²] > (Tr[σ̂_j])²

and hence deduce the result.
An observable Λ is represented by an operator Λ̂ with diagonal form

Λ̂ = Σ_λ λ Π̂_λ,    (1.35)

where {λ} are the eigenvalues of Λ̂, which are real and which we have assumed for convenience are discrete. Π̂_λ is called the projection operator, or projector, onto the subspace
of eigenstates of Λ̂ with eigenvalue λ. If the spectrum (set of eigenvalues {λ}) is non-degenerate, then the projector would simply be the rank-1 projector Π̂_λ = |λ⟩⟨λ|. We will
call this special case von Neumann measurements.
In the more general case, where the eigenvalues λ of Λ̂ are N_λ-fold degenerate, Π̂_λ is a
rank-N_λ projector, and can be written as Σ_{j=1}^{N_λ} |λ, j⟩⟨λ, j|. For example, in the simplest
model of the hydrogen atom, if Λ̂ is the energy then λ would be the principal quantum
number n and j would code for the angular-momentum and spin quantum numbers l, m
and s of states with the same energy. The projectors are orthonormal, obeying

Π̂_λ Π̂_λ′ = δ_{λλ′} Π̂_λ.    (1.36)
The existence of this orthonormal basis is a consequence of the spectral theorem (see
Box 1.1).
When one measures Λ, the result one obtains is one of the eigenvalues λ. Say the
measurement begins at time t and takes a time T. Assuming that the system does not evolve
significantly from other causes during the measurement, the probability for obtaining that
particular eigenvalue is

Pr[Λ(t) = λ] = ⟨Π̂_λ⟩ = Tr[ρ(t)Π̂_λ].    (1.37)

After the measurement, the conditional (a-posteriori) state of the system given the result λ
is

ρ_λ(t + T) = Π̂_λ ρ(t) Π̂_λ / Pr[Λ(t) = λ].    (1.38)
That is to say, the final state has been projected by Π̂_λ into the corresponding subspace of
the total Hilbert space. This is known as the projection postulate, or sometimes as state
collapse, or state reduction. The last term is best avoided, since it invites confusion with the
reduced state of a bipartite system as discussed in Section A.2.2 of Appendix A. This process
should be compared to the classical Bayesian update rule, Eq. (1.12). A consequence of
this postulate is that, if the measurement is immediately repeated, then

Pr[Λ(t + T) = λ′|Λ(t) = λ] = Tr[ρ_λ(t + T)Π̂_λ′] = δ_{λ,λ′}.    (1.39)

That is to say, the same result is guaranteed. Moreover, the system state will not be changed
by the second measurement. For a deeper understanding of the above theory, see Box 1.2.
For pure states, (t) = |(t)(t)|, the formulae (1.37) and (1.38) can be more simply
expressed as
|(t)
Pr[(t) = ] = = (t)|
(1.40)
12
and

|\psi_\lambda(t+T)\rangle = \hat\Pi_\lambda|\psi(t)\rangle \Big/ \sqrt{\Pr[\Lambda(t)=\lambda]} .    (1.41)
However, if one wishes to describe the unconditional state of the system (that is, the state if one makes the measurement, but ignores the result) then one must use the state matrix:

\rho(t+T) = \sum_\lambda \Pr[\Lambda(t)=\lambda]\,\rho_\lambda(t+T) = \sum_\lambda \hat\Pi_\lambda \rho(t) \hat\Pi_\lambda .    (1.42)
Thus, if the state were pure at time t, and we make a measurement, but ignore the result,
then in general the state at time t + T will be mixed. That is, projective measurement,
unlike unitary evolution,2 is generally an entropy-increasing process unless one keeps track
of the measurement results. This is in contrast to non-disturbing measurements in classical
mechanics, where (as we have seen) the unconditional a-posteriori state is identical to the
a-priori state (1.17).
Exercise 1.9 Show that a projective measurement of $\Lambda$ decreases the purity ${\rm Tr}[\rho^2]$ of the unconditional state unless the a-priori state $\rho(t)$ can be diagonalized in the same basis as can $\hat\Lambda$.

Hint: Let $p_{\lambda\lambda'} = {\rm Tr}[\hat\Pi_\lambda \rho(t) \hat\Pi_{\lambda'} \rho(t)]$ and show that $\forall\lambda,\lambda'$, $p_{\lambda\lambda'} \geq 0$. Then express ${\rm Tr}[\rho(t)^2]$ and ${\rm Tr}[\rho(t+T)^2]$ in terms of these $p_{\lambda\lambda'}$.

2. Of course unitary evolution can change the entropy of a subsystem, as we will discuss in Chapter 3.
From the above measurement theory, it is simple to show that the mean value for the result $\Lambda$ is

\langle\Lambda\rangle = \sum_\lambda \lambda \Pr[\Lambda = \lambda]
 = \sum_\lambda \lambda\, {\rm Tr}[\rho\hat\Pi_\lambda]
 = {\rm Tr}[\rho\hat\Lambda] .    (1.43)
Here we are using angle brackets as an alternative notation for expectation value when
dealing with quantum observables.
Exercise 1.10 Using the same technique, show that

\langle\Lambda^2\rangle = \sum_\lambda \lambda^2 \Pr[\Lambda = \lambda] = {\rm Tr}\big[\rho\hat\Lambda^2\big] .    (1.44)
Thus the mean value (A.6) and variance (A.8) can be derived rather than postulated, provided that they are interpreted in terms of the moments of the results of a projective measurement of $\hat\Lambda$.
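The moment formulae above are easy to check numerically. The following is a minimal sketch (ours, not from the text): a randomly chosen observable and pure state, with the probabilities of Eq. (1.37), the moments of Eqs. (1.43)–(1.44), and the purity decrease of Exercise 1.9.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Lam = (H + H.conj().T) / 2                    # Hermitian observable Lambda

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())               # pure a-priori state

evals, V = np.linalg.eigh(Lam)                # spectral theorem: Lam = sum_l l Pi_l
probs = np.array([np.abs(V[:, k].conj() @ psi) ** 2 for k in range(d)])
assert np.isclose(probs.sum(), 1)

# <Lambda> = sum_l l Pr[l] = Tr[rho Lam], Eq. (1.43); likewise for <Lambda^2>
assert np.isclose(probs @ evals, np.trace(rho @ Lam).real)
assert np.isclose(probs @ evals ** 2, np.trace(rho @ Lam @ Lam).real)

# Non-selective post-measurement state, Eq. (1.42): purity can only decrease
Pis = [np.outer(V[:, k], V[:, k].conj()) for k in range(d)]
rho_after = sum(P @ rho @ P for P in Pis)
purity = lambda r: np.trace(r @ r).real
assert purity(rho_after) <= purity(rho) + 1e-12
```

The random state is generically not diagonal in the eigenbasis of $\hat\Lambda$, so the purity strictly decreases, as Exercise 1.9 predicts.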
Continuous spectra. The above results can easily be generalized to treat physical quantities with a continuous spectrum, such as the position $X$ of a particle on a line. Considering this non-degenerate case for simplicity, the spectral theorem becomes

\hat X = \int x\,\hat\Pi(x)\,{\rm d}x = \int x\,|x\rangle\langle x|\,{\rm d}x .    (1.45)

Here the projector densities obey

\hat\Pi(x)\hat\Pi(x') = \delta(x - x')\,\hat\Pi(x) ,    (1.46)

and the result of a measurement is distributed according to the probability density

\wp(x) = {\rm Tr}[\rho\,\hat\Pi(x)] .    (1.47)
If the results are binned into intervals $[x_j, x_{j+1})$, the corresponding (dimensionless) projectors are

\hat\Pi_j = \int_{x_j}^{x_{j+1}} \hat\Pi(x)\,{\rm d}x ,    (1.49)
for some $\phi$. By considering the norm of these two vectors (which must be equal) it can be seen that $e^{i\phi}$ must equal unity.
Exercise 1.12 Prove this, and hence that

\forall a, b,\quad [\hat\Pi_a, \hat\Pi_b] = 0 .    (1.51)

This is equivalent to the condition that $[\hat A, \hat B] = 0$, and means that there is a basis, say $\{|k\rangle\}$, in which both operators are diagonal, where the eigenvalues $a_k$ may be degenerate (that is, there may exist $k$ and $k'$ such that $a_k = a_{k'}$) and similarly for the $b_k$. Thus, one way of making a simultaneous measurement of $A$ and $B$ is to make a measurement of $K = \sum_k k|k\rangle\langle k|$, and from the result $k$ determine the appropriate values $a_k$ and $b_k$ for $A$ and $B$.
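This recipe is easily illustrated numerically. The sketch below (ours, not from the text) builds two commuting observables sharing a random eigenbasis, measures the auxiliary observable $K$, and reads off both $a_k$ and $b_k$ from the single result $k$.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
a = np.array([1.0, 1.0, 2.0])        # degenerate eigenvalues are allowed
b = np.array([0.0, 3.0, 3.0])
A = U @ np.diag(a) @ U.conj().T      # A and B share the eigenbasis {|k>}
B = U @ np.diag(b) @ U.conj().T
assert np.allclose(A @ B, B @ A)     # Eq. (1.51) in operator form: [A, B] = 0

psi = U[:, 0]                        # prepare the eigenstate |k = 0>
pk = np.abs(U.conj().T @ psi) ** 2   # Pr[K = k], a projective measurement of K
k = int(np.argmax(pk))
assert (a[k], b[k]) == (1.0, 0.0)    # result k yields both a_k and b_k
```

Note that the map $k \mapsto (a_k, b_k)$ resolves the degeneracy in $a$ using $b$ and vice versa, which a direct measurement of $A$ or $B$ alone could not do.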
made in such a way that the apparatus adds no classical noise to the measurement result.
A more interesting reason is that there are many measurements in which the a-posteriori
conditional system state is clearly not left in the eigenstate of the measured quantity corresponding to the measurement result. For example, in photon counting by a photodetector,
at the end of the measurement all photons have been absorbed, so that the system (e.g.
the cavity that originally contained the photons) is left in the vacuum state, not a state containing the number n of photons counted. Another interesting reason is that non-projective
measurements allow far greater flexibility than do projective measurements. For example,
the simultaneous measurement of position and momentum is a perfectly acceptable idea,
so long as the respective accuracies do not violate the Heisenberg uncertainty principle, as
we will discuss below.
The fundamental reason why projective measurements are inadequate for describing real
measurements is that experimenters never directly measure the system of interest. Rather,
the system of interest (such as an atom) interacts with its environment (the continuum of
electromagnetic field modes), and the experimenter observes the effect of the system on
the environment (the radiated field). Of course, one could argue that the experimenter does
not observe the radiated field, but rather that the field interacts with a photodetector, which
triggers a current in a circuit, which is coupled to a display panel, which radiates more
photons, which interact with the experimenter's retina, and so on. Such a chain of systems is
known as a von Neumann chain [vN32]. The point is that, at some stage before reaching the
mind of the observer, one has to cut the chain by applying the projection postulate. This cut,
known as Heisenberg's cut [Hei30], is the point at which one considers the measurement
as having been made.
If one were to apply a projection postulate directly to the atom, one would obtain wrong
predictions. However, assuming a projective measurement of the field will yield results
negligibly different from those obtained assuming a projective measurement at any later
stage. This is because of the rapid decoherence of macroscopic material objects such as
photodetectors (see Chapter 3). For this reason, it is sufficient to consider the field to be
measured projectively. Because the field has interacted with the system, their quantum
states are correlated (indeed, they are entangled, provided that their initial states are pure
enough). The projective measurement of the field is then effectively a measurement of the
atom. The latter measurement, however, is not projective, and we need a more general
formalism to describe it.
Let the initial system state vector be $|\psi(t)\rangle$, and say that there is a second quantum system, which we will call the meter, or apparatus, with the initial state $|\theta(t)\rangle$. Thus the initial (unentangled) combined state is

|\Psi(t)\rangle = |\theta(t)\rangle|\psi(t)\rangle .    (1.53)

Let these two systems be coupled together for a time $T_1$ by a unitary evolution operator $\hat U(t+T_1, t)$, which we will write as $\hat U(T_1)$. Thus the combined system-meter state after this coupling is

|\Psi(t+T_1)\rangle = \hat U(T_1)|\theta(t)\rangle|\psi(t)\rangle .    (1.54)
Now let the meter be measured projectively in some basis $\{|r\rangle\}$. The conditioned combined state, given the result $r$, is

|\Psi_r(t+T)\rangle = \big(|r\rangle\langle r| \otimes \hat 1\big)|\Psi(t+T_1)\rangle \Big/ \sqrt{\wp_r} ,    (1.55)

where the probability of the result is

\wp_r = \langle\Psi(t+T_1)|\big(|r\rangle\langle r| \otimes \hat 1\big)|\Psi(t+T_1)\rangle .    (1.56)

The measurement on the meter disentangles the system and the meter, so that the final state (1.55) can be written as

|\Psi_r(t+T)\rangle = |r\rangle\,\frac{\hat M_r|\psi(t)\rangle}{\sqrt{\wp_r}} ,    (1.57)

where $\hat M_r$ is an operator that acts only in the system Hilbert space, defined by

\hat M_r = \langle r|\hat U(T_1)|\theta(t)\rangle .    (1.58)

We call it a measurement operator. The probability distribution (1.56) for $R$ can similarly be written as

\wp_r = \langle\psi(t)|\hat M_r^\dagger \hat M_r|\psi(t)\rangle .    (1.59)
Note that we have used an analogous notation to the classical case, so that $|y := \xi\rangle$ is the apparatus state $|y\rangle$ with $y$ taking the value $\xi$. To make a measurement, the system and apparatus states must become correlated. We will discuss how this may take place physically in Section 1.5. For now we simply postulate that, as a result of the unitary interaction $\hat G$ between the system and the apparatus, we have

|\Psi(t+T_1)\rangle = \hat G|\Psi(t)\rangle    (1.61)
 = \sum_{x,\xi} a_\xi s_x\, |y := G(x, \xi)\rangle|x\rangle ,    (1.62)

where $a_\xi$ and $s_x$ are the amplitudes of the initial apparatus and system states respectively. Note that the interaction between the system and the apparatus has been specified by reference to a particular basis for the system and apparatus, $\{|y\rangle|x\rangle\}$. We will refer to this (for the system, or apparatus, or both together) as the measurement basis.

Exercise 1.13 Show that $\hat G$ as defined is unitary if there exists an inverse function $G^{-1}$ in the sense that, for all $y$, $y = G(x, G^{-1}(x, y))$.
Hint: Show that $\hat G^\dagger \hat G = \hat 1 = \hat G\hat G^\dagger$ using the matrix representation in the measurement basis.
The invertibility condition is the same as we used in Section 1.1.3 for the classical binary
measurement model.
As an example, consider $G(x, \xi) = x \oplus \xi$, as in the classical case, where again $\oplus$ indicates binary addition. In this case $\hat G = \hat G^{-1}$. The system state is unknown and is thus arbitrary. However, the apparatus is assumed to be under our control and can be prepared in a fiducial state. This means a standard state for the purpose of measurement. Often the fiducial state is a particular state in the measurement basis, and we will assume that it is $|y := 0\rangle$, so that $a_\xi = \delta_{\xi,0}$. In this case the state after the interaction is

|\Psi(t+T_1)\rangle = \sum_x s_x\, |y := x\rangle|x\rangle    (1.63)

and there is a perfect correlation between the system and the apparatus. Let us say a projective measurement (of duration $T_2$) of the apparatus state in the measurement basis is made. This will give the result $y$ with probability $|s_y|^2$, that is, with exactly the probability that a projective measurement directly on the system in the measurement basis would have given. Moreover, the conditioned system state at time $t + T$ (where $T = T_1 + T_2$ as above), given the result $y$, is

|\psi_y(t+T)\rangle = |x := y\rangle .    (1.64)
Again, this is as would have occurred with the appropriate projective measurement of
duration T on the system, as in Eq. (1.41).
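The perfect-correlation case is simple enough to verify directly. Below is a minimal numerical sketch (ours, not from the text) of the XOR interaction with the fiducial state $|y := 0\rangle$; the resulting measurement operators of Eq. (1.58) are exactly the projectors onto the system measurement basis, as Eq. (1.64) asserts. The tensor ordering (apparatus first) is our convention.

```python
import numpy as np

# G|y>|x> = |y XOR x>|x>, with joint basis index y*2 + x (apparatus x system)
G = np.zeros((4, 4))
for y in range(2):
    for x in range(2):
        G[(y ^ x) * 2 + x, y * 2 + x] = 1.0
assert np.allclose(G @ G.conj().T, np.eye(4))      # unitary (Exercise 1.13)

theta = np.array([1.0, 0.0])                       # fiducial state |y := 0>
M = []
for y in range(2):
    bra = np.kron(np.eye(2)[y], np.eye(2))         # <y| (x) 1_S, shape (2, 4)
    ket = np.kron(theta[:, None], np.eye(2))       # |theta> (x) 1_S, shape (4, 2)
    M.append(bra @ G @ ket)                        # M_y = <y| G |theta>, Eq. (1.58)

# With this fiducial state, M_y is the rank-1 projector |x := y><x := y|
assert np.allclose(M[0], np.diag([1.0, 0.0]))
assert np.allclose(M[1], np.diag([0.0, 1.0]))
assert np.allclose(M[0] + M[1], np.eye(2))         # completeness
```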
This example is a special case of a model introduced by von Neumann. It would appear
to be simply a more complicated version of the description of standard projective measurements. However, as we now show, it enables us to describe a more general class of
measurements in which extra noise appears in the result due to the measurement apparatus.
Suppose that for some reason it is not possible to prepare the apparatus in one of the measurement basis states. In that case we must use the general result given in Eq. (1.61). Using Eq. (1.58), we find

\hat M_y|\psi(t)\rangle = \langle y|\Psi(t+T_1)\rangle
 = \sum_{x,\xi} \delta_{y, G(x,\xi)}\, a_\xi s_x |x\rangle    (1.65)
 = \sum_x a_{G^{-1}(x,y)}\, s_x |x\rangle ,    (1.66)

so that

\hat M_y = \sum_{x'} a_{G^{-1}(x',y)}\, |x'\rangle\langle x'| .    (1.67)
Returning to the more general form of Eq. (1.65), we find that the probability for the result $y$ is

\wp(y) = \langle\psi(t)|\hat M_y^\dagger \hat M_y|\psi(t)\rangle = \sum_x |s_x|^2\, |a_{G^{-1}(x,y)}|^2 .    (1.68)

If we define

\wp(\xi) = |a_\xi|^2 ,    (1.69)
\wp(x) = |s_x|^2 = \langle x|\rho(t)|x\rangle ,    (1.70)

where $\rho(t) = |\psi(t)\rangle\langle\psi(t)|$ is the system state matrix, then the probability distribution for measurement results may be written as

\wp(y) = \sum_x \wp\big(\xi := G^{-1}(x, y)\big)\,\wp(x) .    (1.71)

This is the same form as for the classical binary measurement scheme; see Eq. (1.13) and Eq. (1.15). Here the noise distribution arises from quantum noise associated with the fiducial (purposefully prepared) apparatus state. It is quantum noise because the initial apparatus state is still a pure state. The noise arises from the fact that it is not prepared in one of the measurement basis states. Of course, the apparatus may be prepared in a mixed state, in which case the noise added to the measurement result may have a classical origin. This is discussed below in Section 1.4.
The system state conditioned on the result $y$ is

|\psi_y(t+T)\rangle = \hat M_y|\psi(t)\rangle\Big/\sqrt{\wp(y)} = \sum_x a_{G^{-1}(x,y)}\, s_x |x\rangle \Big/ \sqrt{\wp(y)} .    (1.72)

If, from this, we calculate the probability $|\langle x|\psi_y(t+T)\rangle|^2$ for the system to have $X = x$ after the measurement giving the result $y$, we find this probability to be given by

\wp(x|y) = \frac{\wp(y|x)\,\wp(x)}{\wp(y)} .    (1.73)

Again, this is the same as the classical result derived using Bayes' theorem. The interesting point is that the projection postulate does this work for us in the quantum case. Moreover, it gives us the full a-posteriori conditional state, from which the expectation value of any observable (not just $X$) can be calculated. The quantum measurement here is thus more than simply a reproduction of the classical measurement, since the conditional state (1.72) cannot be derived from Bayes' theorem.
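The agreement between the quantum probabilities and the classical convolution and Bayes formulae can be checked directly. This sketch (ours, not from the text) uses the binary case $G(x, \xi) = x \oplus \xi$, for which $G^{-1}(x, y) = x \oplus y$; the particular amplitudes are arbitrary choices.

```python
import numpy as np

a = np.array([np.sqrt(0.8), np.sqrt(0.2)])   # apparatus amplitudes a_xi
s = np.array([np.sqrt(0.3), np.sqrt(0.7)])   # system amplitudes s_x
p_xi, p_x = np.abs(a) ** 2, np.abs(s) ** 2

def p_y(y):  # quantum probability, Eq. (1.68)
    return sum(abs(s[x]) ** 2 * abs(a[x ^ y]) ** 2 for x in range(2))

# classical convolution form, Eq. (1.71)
for y in range(2):
    assert np.isclose(p_y(y), sum(p_xi[x ^ y] * p_x[x] for x in range(2)))

# Bayesian posterior, Eq. (1.73), with p(y|x) = p_xi[G^{-1}(x, y)]
y = 0
post = np.array([p_xi[x ^ y] * p_x[x] for x in range(2)]) / p_y(y)

# ... equals the position distribution of the conditioned state, Eq. (1.72)
psi_y = np.array([a[x ^ y] * s[x] for x in range(2)]) / np.sqrt(p_y(y))
assert np.allclose(post, np.abs(psi_y) ** 2)
```

The conditioned state vector $|\psi_y\rangle$ of course carries more information (coherences) than the posterior distribution, which is the point made in the text.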
Exercise 1.14 Consider two infinite-dimensional Hilbert spaces describing a system and a meter. Show that the operator $\hat G$, defined in the joint position basis $|y\rangle|x\rangle$ by

\hat G|y := \xi\rangle|x\rangle = |y := \xi + x\rangle|x\rangle ,    (1.74)

is unitary. Let the fiducial apparatus state be

|\theta\rangle = \int {\rm d}\xi \left[(2\pi\Delta^2)^{-1/2} \exp\!\big(-\xi^2/(2\Delta^2)\big)\right]^{1/2} |y := \xi\rangle .    (1.75)

Following the example of this subsection, show that, insofar as the statistics of $X$ are concerned, this measurement is equivalent to the classical measurement analysed in Section 1.1.4.
|\psi_r(t+T)\rangle = \hat M_r|\psi(t)\rangle \Big/ \sqrt{\wp_r} .    (1.76)
As seen above, the probabilities are given by the expectation of another operator, defined in terms of the measurement operators by

\hat E_r = \hat M_r^\dagger \hat M_r .    (1.77)

These operators are known as probability operators, or effects. The fact that $\sum_r \wp_r$ must equal unity for all initial states gives a completeness condition on the measurement operators:

\sum_r \hat E_r = \hat 1_S .    (1.78)

This restriction, that $\{\hat E_r : r\}$ be a resolution of the identity for the system Hilbert space, is the only restriction on the set of measurement operators (apart from the fact that they must be positive, of course).
The set of all effects $\{\hat E_r : r\}$ constitutes an effect-valued measure, more commonly known as a probability-operator-valued measure (POM, see footnote 3) on the space of results $r$. This simply means that, rather than a probability distribution (or probability-valued measure) over the space of results, we have a probability-operator-valued measure. Note that we have left behind the notion of observables in this formulation of measurement. The possible measurement results $r$ are not the eigenvalues of an Hermitian operator representing an observable; they are simply labels representing possible results. Depending on the circumstances, it might be convenient to represent the result $R$ by an integer, a real number, a complex number, or an even more exotic quantity.
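To illustrate labels that are not eigenvalues, consider a standard example that is ours rather than the text's: the "trine" POM on a qubit, with three results for a two-dimensional system. No Hermitian operator on a qubit has three distinct eigenvalues, yet the effects below form a perfectly valid resolution of the identity, Eq. (1.78).

```python
import numpy as np

# trine effects E_k = (2/3)|phi_k><phi_k|, with |phi_k> separated by 120 degrees
# on the Bloch sphere (i.e. 60 degrees in Hilbert space)
kets = []
for k in range(3):
    th = 2 * np.pi * k / 3
    kets.append(np.array([np.cos(th / 2), np.sin(th / 2)]))
E = [(2 / 3) * np.outer(v, v) for v in kets]

assert np.allclose(sum(E), np.eye(2))                     # completeness, Eq. (1.78)
for Ek in E:
    assert np.all(np.linalg.eigvalsh(Ek) >= -1e-12)       # positivity
    assert not np.allclose(Ek @ Ek, Ek)                   # not a projector
```

The labels $k = 0, 1, 2$ here are just indices; they could as well be colours or vectors.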
If one were making only a single measurement, then the conditioned state $|\psi_r\rangle$ would be irrelevant. However, one often wishes to consider a sequence of measurements, in which case the conditioned system state is vital. In terms of the state matrix $\rho$, which allows the possibility of mixed initial states, the conditioned state is

\rho_r(t+T) = \frac{\mathcal{J}[\hat M_r]\rho(t)}{\wp_r} ,    (1.79)

where

\mathcal{J}[\hat M]\rho \equiv \hat M \rho \hat M^\dagger .    (1.80)

The superoperator

\mathcal{O}_r = \mathcal{J}[\hat M_r]    (1.81)

is known as the operation for the result $r$.

3. The abbreviation POVM is also used, and, in both cases, 'PO' is sometimes understood to denote 'positive operator' rather than 'probability operator'.
\forall \hat A \geq 0 ,\quad (\mathcal{S} \otimes \mathcal{I})\hat A \geq 0 .    (1.82)

The final property deserves some comment. It might have been thought that positivity of a superoperator would be sufficient to represent a physical process. However, it is always possible that a system S is entangled with another system R before the physical process represented by $\mathcal{S}$ acts on system S. It must still be the case that the total state of both systems remains a physical state with a positive state matrix. This gives condition 3.

If a superoperator satisfies these three properties then it is called an operation, and has the Kraus representation [Kra83], or operator sum representation,

\mathcal{S}(\rho) = \sum_j \hat K_j \rho \hat K_j^\dagger ,    (1.83)

for some set of operators $\{\hat K_j\}$ satisfying

\sum_j \hat K_j^\dagger \hat K_j \leq \hat 1 .    (1.84)

An operation can also be constructed from a unitary coupling to an ancilla, followed by projection:

\mathcal{S}(\rho_S) = {\rm Tr}_A\!\left[(\hat 1_S \otimes \hat\pi_A)\,\hat U_{SA}(\rho_S \otimes |\theta\rangle_A\langle\theta|)\hat U_{SA}^\dagger\right] ,    (1.85)

where $\hat\pi_A$ is some projector for the ancilla system A. This is essentially the converse of the construction of operations for measurements from a system-apparatus coupling in Section 1.2.3.
If the measurement were performed but the result $R$ ignored, the final state of the system would be

\rho(t+T) = \sum_r \wp_r\, \rho_r(t+T) = \sum_r \mathcal{J}[\hat M_r]\rho(t) \equiv \mathcal{O}\rho(t) .    (1.86)
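The equivalence between the ancilla construction and the operator-sum form is easy to verify numerically. This is a sketch of ours, under our own conventions (system-before-ancilla tensor ordering, ancilla fiducial state $|0\rangle_A$): the Kraus operators $\hat K_j = \langle j|\hat U|0\rangle_A$ reproduce the partial trace, and they resolve the identity because no result is selected.

```python
import numpy as np

rng = np.random.default_rng(2)
dS, dA = 2, 3
U, _ = np.linalg.qr(rng.normal(size=(dS * dA, dS * dA))
                    + 1j * rng.normal(size=(dS * dA, dS * dA)))

psi = rng.normal(size=dS) + 1j * rng.normal(size=dS)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

theta = np.zeros(dA); theta[0] = 1.0                    # ancilla state |0>_A
joint = U @ np.kron(rho, np.outer(theta, theta)) @ U.conj().T

# non-selective state: partial trace over the ancilla
rho_out = np.trace(joint.reshape(dS, dA, dS, dA), axis1=1, axis2=3)

# Kraus operators K_j = (1_S (x) <j|) U (1_S (x) |0>_A)
Ks = []
for j in range(dA):
    bra = np.kron(np.eye(dS), np.eye(dA)[j][None, :])   # (dS, dS*dA)
    ket = np.kron(np.eye(dS), theta[:, None])           # (dS*dA, dS)
    Ks.append(bra @ U @ ket)

assert np.allclose(sum(K @ rho @ K.conj().T for K in Ks), rho_out)   # Eq. (1.83)
assert np.allclose(sum(K.conj().T @ K for K in Ks), np.eye(dS))      # trace preserved
```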
|\phi\rangle = \big[|0\rangle + e^{i\phi}|1\rangle\big]\big/\sqrt{2} .    (1.87)

In this case the effects are

\hat E_\phi = \frac{1}{\pi}\,|\phi\rangle\langle\phi| ,    (1.88)

which satisfy the completeness condition

\int_0^{2\pi} {\rm d}\phi\, \hat E_\phi = |0\rangle\langle 0| + |1\rangle\langle 1| = \hat 1 .    (1.89)

Although $\hat E_\phi$ is proportional to a projection operator it is not equal to one. It does not square to itself: $(\hat E_\phi\,{\rm d}\phi)^2 = \hat E_\phi\,{\rm d}\phi\,({\rm d}\phi/\pi)$. Neither are different effects orthogonal in general: $\hat E_\phi \hat E_{\phi'} \neq 0$ unless $\phi' = \phi + \pi$. Thus, even if the system is initially in the state $|\phi\rangle$, there is a finite probability for any result to be obtained except $\phi + \pi$.
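A quick numerical sketch (ours) of this continuous phase POM, discretizing the integral over $\phi$: completeness holds, while effects at different phases fail to be orthogonal except at the antipodal phase.

```python
import numpy as np

def E(phi):  # E_phi = |phi><phi| / pi, with |phi> = (|0> + e^{i phi}|1>)/sqrt(2)
    v = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    return np.outer(v, v.conj()) / np.pi

phis = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
dphi = phis[1] - phis[0]

total = sum(E(p) * dphi for p in phis)
assert np.allclose(total, np.eye(2), atol=1e-6)          # Eq. (1.89)

assert not np.allclose(E(0.0) @ E(1.0), 0)               # generally non-orthogonal
assert np.allclose(E(0.0) @ E(np.pi), 0, atol=1e-12)     # orthogonal only at phi + pi
```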
The effects E r need not even be proportional to projectors, as the next example shows.
Example 3. Consider again an infinite-dimensional Hilbert space, but now use the continuous basis $\{|x\rangle\}$ (see Section 1.2.2 and Appendix A), for which $\langle x|x'\rangle = \delta(x - x')$. Define an effect

\hat E_y = \int {\rm d}x\, (2\pi\Delta^2)^{-1/2} \exp\!\big[-(y - x)^2/(2\Delta^2)\big]\, |x\rangle\langle x| .    (1.90)

This describes an imprecise measurement of position. It is easy to verify that the effects are not proportional to projectors by showing that $\hat E_y^2$ is not proportional to $\hat E_y$. Nevertheless, they are positive operators and obey the completeness relation

\int {\rm d}y\, \hat E_y = \hat 1 .    (1.91)

Exercise 1.15 Verify Eq. (1.91). Also show that these effects can be derived from the measurement model introduced in Exercise 1.14.
The previous examples indicate some of the flexibility that arises from not requiring the effects to be projectors. As mentioned above, another example of the power offered by generalized measurements is the simultaneous measurement of position $X$ and momentum $P$. This is possible provided that the two measurement results include a certain amount of error. A simple model for this was first described by Arthurs and Kelly [AK65]. A more abstract description, directly in terms of the resulting probability-operator-valued measure, was given by Holevo [Hol82]. The description given below is based on the discussion in [SM01].
Example 4. The model of Arthurs and Kelly consists of two meters that are allowed to interact instantaneously with the system. The interaction couples one of the meters to position and the other to momentum, encoding the results of the measurement in the final states of the meters. Projective measurements are then made on each of the meter states separately. These measurements can be carried out simultaneously since operators for distinct meters commute. For appropriate meter states, this measurement forces the conditional state of the system into a Gaussian state (defined below). We assume some appropriate length scale such that the positions and momenta for the system are dimensionless, and satisfy $[\hat X, \hat P] = i$.

The appropriate unitary interaction is

\hat U = \exp\!\left[-i\big(\hat X\hat P_1 + \hat P\hat P_2\big)\right] .    (1.92)

Here the subscripts refer to the two detectors, which are initially in minimum-uncertainty states (see Appendix A) $|d_1\rangle$ and $|d_2\rangle$, respectively. Specifically, we choose the wavefunctions in the position representation to be

\langle x_j|d_j\rangle = (2/\pi)^{1/4}\, e^{-x_j^2} .    (1.93)
After the interaction, the detectors are measured in the position basis. The measurement result is thus the pair of numbers $(X_1, X_2)$. Following the theory given above, the measurement operator for this result is

\hat M(x_1, x_2) = \langle x_1|\langle x_2|\hat U|d_1\rangle|d_2\rangle .    (1.94)

With a little effort it is possible to show that $\hat M(x_1, x_2)$ is proportional to a projection operator:

\hat M(x_1, x_2) = \frac{1}{\sqrt{2\pi}}\, |(x_1, x_2)\rangle\langle(x_1, x_2)| .    (1.95)

Here the state $|(x_1, x_2)\rangle$ is a minimum-uncertainty state for the system, with a position probability amplitude distribution

\langle x|(x_1, x_2)\rangle = \left(\frac{1}{\pi}\right)^{1/4} \exp\!\left[i x x_2 - \frac{(x - x_1)^2}{2}\right] .    (1.96)

From Appendix A, this is a state with mean position and momentum given by $x_1$ and $x_2$, respectively, and with the variances in position and momentum equal to $1/2$.
Exercise 1.16 Verify Eq. (1.95).
The corresponding probability density for the observed values, $(x_1, x_2)$, is found from the effect density

\hat E(x_1, x_2)\,{\rm d}x_1\,{\rm d}x_2 = \frac{1}{2\pi}\, |(x_1, x_2)\rangle\langle(x_1, x_2)|\,{\rm d}x_1\,{\rm d}x_2 .    (1.97)

Exercise 1.17 Show that

\int {\rm d}x_1 \int {\rm d}x_2\, \hat E(x_1, x_2) = \hat 1 .    (1.98)
It is then straightforward to show that

{\rm E}[X_1] = \langle\hat X\rangle , \qquad {\rm E}[X_2] = \langle\hat P\rangle ,    (1.99)

{\rm E}[X_1^2] = \langle\hat X^2\rangle + \frac{1}{2} , \qquad {\rm E}[X_2^2] = \langle\hat P^2\rangle + \frac{1}{2} ,    (1.100)

where $\langle\hat A\rangle = {\rm Tr}[\hat A\rho]$ is the quantum expectation, while ${\rm E}$ is a classical average computed by evaluating an integral over the probability density $\wp(x_1, x_2)$. Thus the readout variables $X_1$ and $X_2$ give, respectively, the position and momentum of the system with additional noise.
It is more conventional to denote the state $|(x_1, x_2)\rangle$ by $|\alpha\rangle$, where the single complex parameter $\alpha$ is given by $\alpha = (x_1 + i x_2)/\sqrt{2}$. In this form the states are known as coherent states, and the effect density can be written as

\hat E(x_1, x_2)\,{\rm d}x_1\,{\rm d}x_2 = \frac{1}{\pi}\,|\alpha\rangle\langle\alpha|\,{\rm d}^2\alpha .    (1.101)
As explained there, it is not possible to assign a state vector to the system at time $t + T_1$, because it is entangled with the meter. However, it is possible to assign a state matrix to the system. This state matrix is found by taking the partial trace over the meter:

\rho(t+T_1) = {\rm Tr}_A[|\Psi(t+T_1)\rangle\langle\Psi(t+T_1)|] = \sum_j {}_A\langle j|\Psi(t+T_1)\rangle\langle\Psi(t+T_1)|j\rangle_A ,    (1.102)

where $\{|j\rangle_A : j\}$ is an arbitrary set of basis states for the meter. But this basis can of course be the basis $\{|r\rangle : r\}$ appropriate for a measurement of $R$ on the meter. Thus the reduced system state $\rho(t+T_1)$ is the same as the average system state $\rho(t+T)$ (for $T \geq T_1$) of Eq. (1.86), which is obtained by averaging over the measurement results. That is, the non-selective system state after the measurement does not depend on the basis in which the meter is measured.
Different measurement bases for the meter can be related by a unitary transformation thus:

|r\rangle = \sum_s U_{r,s}^{*}\, |s\rangle ,    (1.103)

where the $U_{r,s}$ are the entries of a unitary matrix, so that the new measurement operators are

\hat M_s = \sum_r U_{r,s}^{*}\, \hat M_r .    (1.104)

Exercise 1.18 Verify that the unconditional final state under the new measurement operators $\{\hat M_s\}$ is the same as that under the old measurement operators $\{\hat M_r\}$.
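The invariance asked for in Exercise 1.18 holds for any unitary recombination of any set of measurement operators, which makes it easy to check numerically. This sketch is ours; the particular operators and state are random and need not even be complete, since only the unitarity of the recombination matters.

```python
import numpy as np

rng = np.random.default_rng(3)
d, R = 2, 3
Ms = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(R)]
V, _ = np.linalg.qr(rng.normal(size=(R, R)) + 1j * rng.normal(size=(R, R)))

# new operators M'_s = sum_r V[s, r] M_r, for unitary V
Ms_new = [sum(V[s, r] * Ms[r] for r in range(R)) for s in range(R)]

rho = np.eye(d) / d + 0.1 * np.array([[0, 1], [1, 0]])   # some valid state matrix
old = sum(M @ rho @ M.conj().T for M in Ms)
new = sum(M @ rho @ M.conj().T for M in Ms_new)
assert np.allclose(old, new)   # non-selective evolution is basis-independent
```

The identity follows from $\sum_s V_{s,r} V_{s,r'}^* = \delta_{r r'}$: the cross terms between different $\hat M_r$ cancel in the sum over $s$.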
The binary example. Although the unconditional system state is the same regardless of how the meter is measured, the conditional system states are quite different. This can be illustrated using the binary measurement example of Section 1.2.4. Consider the simple case in which the fiducial apparatus state is the measurement basis state $|0\rangle_A = |y := 0\rangle$. The measurement basis states are eigenstates of the apparatus operator

\hat Y = \sum_{y=0}^{1} y\,|y\rangle\langle y| .    (1.105)

Then, if the apparatus is measured in the measurement basis, the measurement operators are

\hat M_y = {}_A\langle y|\hat G|0\rangle_A = |x := y\rangle\langle x := y| .    (1.106)
As stated before, these simply project or collapse the system into its measurement basis, the eigenstates of

\hat X = \sum_{x=0}^{1} x\,|x\rangle\langle x| .    (1.107)
Now consider an alternative orthonormal basis for the apparatus, namely the eigenstates of the complementary operator

\hat P_A = \sum_{p=0}^{1} p\,|p\rangle_A\langle p| .    (1.108)

Here the eigenstates are

|p\rangle_A = 2^{-1/2}\big[|y := 0\rangle + e^{i\pi p}|y := 1\rangle\big] ,    (1.109)

and $X$ and $P$ are complementary in the sense that $X$ is maximally uncertain for a system in a $P$-eigenstate, and vice versa. In this case the measurement operators are, in the measurement ($x$) basis,

\hat M_p = 2^{-1/2}\big[|0\rangle\langle 0| + e^{i\pi p}|1\rangle\langle 1|\big] .    (1.110)
Exercise 1.19 Verify that the non-selective evolution is the same under these two different measurements, and that it always turns the system into a mixture diagonal in the measurement basis.

Clearly, measurement of the apparatus in the complementary basis does not collapse the system into a pure state in the measurement basis. In fact, it does not change the occupation probabilities for the measurement basis states at all. This is because the measurement yields no information about the system, since the probabilities for the two results are independent of the system:

\Pr[P_A = p] = \langle\psi(t)|\hat M_p^\dagger \hat M_p|\psi(t)\rangle = 1/2 .    (1.111)

This measurement merely changes the relative phase of these states by $\pi$ if and only if $p = 1$:

\frac{\hat M_p \sum_x s_x|x\rangle}{\sqrt{\Pr[P_A = p]}} = \sum_x s_x e^{i\pi p x}|x\rangle .    (1.112)

That is to say, with probability $1/2$, the relative phase of the system states is flipped. In this guise, the interaction between the system and the apparatus is seen not to collapse the system into a measurement eigenstate, but to introduce noise into a complementary system property: the relative phase.
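These claims are two lines of linear algebra each, so a small sketch (ours) may help. The system amplitudes below are arbitrary; the measurement operators are those of Eq. (1.110).

```python
import numpy as np

s = np.array([0.6, 0.8])                              # arbitrary system amplitudes
Mp = [np.diag([1, np.exp(1j * np.pi * p)]) / np.sqrt(2) for p in (0, 1)]

for p in (0, 1):
    prob = np.linalg.norm(Mp[p] @ s) ** 2
    assert np.isclose(prob, 0.5)                      # Eq. (1.111): no information

post1 = Mp[1] @ s / np.sqrt(0.5)
assert np.allclose(post1, np.array([0.6, -0.8]))      # p = 1 flips the relative phase

# non-selective evolution: a mixture diagonal in the measurement basis (Ex. 1.19)
rho = sum(M @ np.outer(s, s) @ M.conj().T for M in Mp)
assert np.allclose(rho, np.diag([0.36, 0.64]))
```

The occupation probabilities $0.36$ and $0.64$ are untouched; only the coherence between $|0\rangle$ and $|1\rangle$ has been destroyed, by the random phase flip.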
This dual interpretation of an interaction between a system and another system (the meter)
is very common. The non-selective evolution reduces the system to a mixture diagonal in
some basis. One interpretation (realized by measuring the meter in an appropriate way)
is that the system is collapsed into a particular state in that basis, but an equally valid
interpretation (realized by measuring the meter in a complementary way) is that the meter
is merely adding noise into the relative phases of the system components in this basis. In the
following section, we will see how both of these interpretations can be seen simultaneously
in the Heisenberg picture.
Exercise 1.20 Consider the quantum position-measurement model introduced in Exercise 1.14. Show that, if the apparatus is measured in the momentum basis (see Appendix A), then the measurement operators are

\hat M_p = (2\Delta^2/\pi)^{1/4}\, \exp(-\Delta^2 p^2)\exp(-i p\hat X) .    (1.113)

Show also that the non-selective evolution is the same as in Exercise 1.14, and that the selective evolution in this exercise can be described in terms of random momentum kicks on the system.
\langle f(\Lambda)\rangle = {\rm Tr}\big[f(\hat\Lambda)\rho\big] .    (1.114)

That is, if an operator $\hat\Lambda$ represents an observable $\Lambda$, then any function of the result of a measurement of $\Lambda$ is represented by that function of the operator $\hat\Lambda$. Here $\rho$ is the state of the system at the time of the measurement. Clearly, if $\rho$ evolves after the measurement has finished, then the formula (1.114) using this new $\rho$ might no longer give the correct expectation values for the results that had been obtained.
This problem can be circumvented by using the system-meter model of measurement we have presented. Let us assume an entangled system-meter state of the form

|\Psi(t+T_1)\rangle = \sum_\lambda |\lambda\rangle_A\, \hat\Pi_\lambda |\psi(t)\rangle_S ,    (1.115)

where $\{|\lambda\rangle_A : \lambda\}$ is an orthonormal set of apparatus states and $\{\hat\Pi_\lambda : \lambda\}$ is the set of eigenprojectors of the system observable $\hat\Lambda_S$. This is the ideal correlation for the apparatus to measure $\Lambda_S$. The apparatus observable represented by

\hat\Lambda_A = \sum_\lambda \lambda\, |\lambda\rangle_A\langle\lambda|    (1.116)

has identical moments to the system observable $\hat\Lambda_S$ for the original system state $|\psi(t)\rangle$, or indeed for the (mixed) system state at time $t + T_1$ derived from Eq. (1.115).

Exercise 1.21 Show this.
The apparatus operator can then appear in a Hamiltonian such as

\hat H = \hat\Lambda_A \hat F_S ,    (1.117)

where $\hat F_S$ is an Hermitian system operator, and not have to worry about the operator ordering. In fact, insofar as the system is concerned, this Hamiltonian is equivalent to the Hamiltonian

\hat H = \lambda \hat F_S ,    (1.118)

where here $\lambda$ is the measurement result (a random variable) obtained in the projective measurement of the system at time $t$.
Exercise 1.22 Convince yourself of this.
The action of Hamiltonians such as these (a form of feedback) will be considered in greater
detail in later chapters.
This idea of representing measurement results by meter operators is not limited to projective measurements of the system. Say one has the entangled state between system and meter

|\Psi(t+T_1)\rangle = \hat U(T_1)|\theta\rangle_A|\psi(t)\rangle_S ,    (1.119)

and one measures the meter in the (assumed non-degenerate) eigenbasis $\{|r\rangle_A\}$ of the operator

\hat R_A = \sum_r r\,|r\rangle_A\langle r| .    (1.120)

Then the operator $\hat R_A$ represents the outcome of the measurement that, for the system, is described using the measurement operators $\hat M_r = \langle r|\hat U(T_1)|\theta\rangle$. Recall that the results $r$ are just labels, which need not be real numbers, so $\hat R_A$ is not necessarily an Hermitian operator. If the result $R$ is a complex number, then $\hat R_A$ is a normal operator (see Box 1.1). If $R$ is a real vector, then $\hat R_A$ is a vector of commuting Hermitian operators.
It is important to note that R A represents the measurement outcome whether or not
the projective measurement of the apparatus is made. That is, it is possible to represent
a measurement outcome simply by modelling the apparatus, without including the extra
step of apparatus state collapse. In this sense, the von Neumann chain can be avoided, not
by placing the Heisenberg cut between apparatus and higher links (towards the observer's consciousness), but by ignoring these higher links altogether. The price to be paid for this
parsimony is a high one: the loss of any notion of actual outcomes. The measurement
result R remains a random variable (represented by the operator R A ) that never takes any
particular one of its possible values r. Within this philosophical viewpoint one denies the
existence of events, but nevertheless calculates their statistics; in other words, correlations
without correlata [Mer98].
which here is for time $t$, before the measurement interaction between system and apparatus. This interaction, of duration $T_1$, changes $\hat R_A$ to

\hat R_A(t+T) = \hat U(T_1)^\dagger \big[\hat R_A(t) \otimes \hat 1_S\big] \hat U(T_1)    (1.125)
 = \sum_r r\, \hat U(T_1)^\dagger \big(|r\rangle_A\langle r| \otimes \hat 1_S\big)\hat U(T_1) .    (1.126)

Here $T$ is any time greater than or equal to $T_1$, since we are assuming that the measurement interaction ceases at time $t + T_1$ and that $\hat R_A$ is a QND observable for all subsequent evolution of the meter.
It follows trivially from the analysis of Section A.1.3 that the Heisenberg-picture operator $\hat R_A(t+T)$ with respect to $\rho_{\rm total}(t)$ has the same statistics as does the Schrödinger-picture operator with respect to $\rho_{\rm total}(t+T)$, evolved according to the measurement interaction. Hence, if the initial apparatus state is pure,

\rho_A = |\theta\rangle_A\langle\theta| ,    (1.127)

as we assumed, then these statistics are identical to those of the random variable $R$, the result of a measurement on the system with measurement operators $\{\hat M_r\}$.
Being an apparatus operator, $\hat R_A(s)$ commutes with system operators at all times $s$. For $s \leq t$ (that is, before the system and apparatus interact), it is also uncorrelated with all system operators. That is, for $s \leq t$, expectation values factorize:

\big\langle \hat O_S(s) f(\hat R_A(s)) \big\rangle = \big\langle \hat O_S(s) \big\rangle\, \big\langle f(\hat R_A(s)) \big\rangle .    (1.128)

Here $\hat O_S$ is an arbitrary system operator and $f$ is an arbitrary function. For $s > t$, this is no longer true. In particular, for $s = t + T$ the correlation with the system is the same as one would calculate using state collapse, namely

\big\langle \hat O_S(t+T) f(\hat R_A(t+T)) \big\rangle = \sum_r \wp_r\, f(r)\, {\rm Tr}\big[\hat O_S\, \rho_r(t+T)\big] ,    (1.129)
no change in the system variables, but in the quantum case any measurement will necessarily cause changes to the system operators. This quantum back-action is best illustrated
by example, as we will do in the next subsection. The same distinction between quantum
and classical mechanics is also present in the Schrödinger picture, but only in the weaker
form given in Exercise 1.9.
For the binary example, the complementary operators generate cyclic shifts:

\exp\!\big(i\pi k\hat X\big)|p\rangle = |p \oplus k\rangle ,    (1.130)
\exp\!\big(i\pi n\hat P\big)|x\rangle = |x \oplus n\rangle .    (1.131)

It is now easy to see that the measurement interaction between the system and the apparatus may be realized by

\hat G = \exp\!\big(i\pi\, \hat X_S \hat P_A\big) .    (1.132)

Exercise 1.24 Show that this does produce Eq. (1.63).

In the Heisenberg picture, this unitary operator transforms the operators according to $\hat O(t+T_1) = \hat G^\dagger \hat O(t)\hat G$, where $\hat O$ is an arbitrary operator. Thus we find

\hat X_S(t+T_1) = \hat X_S(t) ,    (1.133)
\hat P_S(t+T_1) = \hat P_S(t) \ominus \hat P_A(t) ,    (1.134)
\hat X_A(t+T_1) = \hat X_S(t) \oplus \hat X_A(t) ,    (1.135)
\hat P_A(t+T_1) = \hat P_A(t) .    (1.136)

(For binary variables, $\ominus$ is of course the same operation as $\oplus$.)
If we define

Y = \hat X_A(t+T_1) , \qquad \Xi = \hat X_A(t) ,    (1.138)

then Eq. (1.135) is identical in form and content to the classical Eq. (1.2). The noise term $\Xi$ is seen to arise from the initial apparatus state. Note that $\hat X_S$ is unchanged by the interaction. This quantity is a QND variable and the measurement interaction realizes a QND measurement of $\hat X_S$. However, unlike in the classical case, the system is affected by
the measurement. This quantum back-action is seen in the change in the complementary
system quantity, PS , in Eq. (1.134). The quantum noise added to the system here is PA ,
which is another QND variable. Clearly, if one were to measure PA , one would gain no
information about the system. (Indeed, one gains most information about the system by
measuring the apparatus in the X A basis, which is a basis complementary to the PA basis).
However, by measuring PA , one directly finds out the noise that has affected the system, as
discussed in Section 1.2.6. We see now that, in the Heisenberg picture, both interpretations
of the interaction, namely in terms of gaining information about the system and in terms of
adding noise to the system, can be seen simultaneously.
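All four Heisenberg-picture relations can be verified with 4x4 matrices. This sketch is ours; we take the tensor ordering system-then-apparatus, so that $\hat G$ of Eq. (1.132) is a controlled-NOT with the system as control, and we encode the mod-2 sum of two commuting 0/1-valued operators as $A \oplus B = A + B - 2AB$.

```python
import numpy as np

ket = lambda b: np.eye(2)[b]
# G |x>_S |y>_A = |x>_S |y XOR x>_A, Eq. (1.63)
G = sum(np.outer(np.kron(ket(x), ket(y ^ x)), np.kron(ket(x), ket(y)))
        for x in range(2) for y in range(2))

X = np.diag([0.0, 1.0])                         # X = sum_x x|x><x|, Eq. (1.107)
minus = np.array([1, -1]) / np.sqrt(2)
P = np.outer(minus, minus)                      # P = sum_p p|p><p|, Eq. (1.108)
I2 = np.eye(2)

heis = lambda O: G.conj().T @ O @ G             # O(t + T1) = G^dag O(t) G
xor = lambda A, B: A + B - 2 * A @ B            # mod-2 sum of commuting 0/1 operators

assert np.allclose(heis(np.kron(X, I2)), np.kron(X, I2))                        # (1.133)
assert np.allclose(heis(np.kron(P, I2)), xor(np.kron(P, I2), np.kron(I2, P)))   # (1.134)
assert np.allclose(heis(np.kron(I2, X)), xor(np.kron(X, I2), np.kron(I2, X)))   # (1.135)
assert np.allclose(heis(np.kron(I2, P)), np.kron(I2, P))                        # (1.136)
```

The information gain, Eq. (1.135), and the back-action, Eq. (1.134), are read off from the same conjugation by $\hat G$, which is the point made in the text.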
Exercise 1.25 Analyse the case of a generalized position measurement from Exercise 1.14
in the same manner as the binary example of this subsection.
Hint: First show that $\hat P$ generates displacements of position $X$ and vice versa. That is, $e^{-iq\hat P}|x\rangle = |x + q\rangle$ and $e^{ik\hat X}|p\rangle = |p + k\rangle$ (see Section A.3). Note that, unlike in the binary example, there is no $\pi$ in the exponential.
the measurement operators $\hat M_r$, and use only operations and effects. The operation $\mathcal{O}_r$ for the result $r$ is a completely positive superoperator (see Box 1.3), not restricted to the form of Eq. (1.81). It can nevertheless be shown that an operation can always be written as

\mathcal{O}_r = \sum_j \mathcal{J}[\hat\Omega_{r,j}] ,    (1.139)

for some set of operators $\{\hat\Omega_{r,j} : j\}$.

For a given operation $\mathcal{O}_r$, the set $\{\hat\Omega_{r,j} : j\}$ is not unique. For this reason it would be wrong to think of the operators $\hat\Omega_{r,j}$ as measurement operators. Rather, the operation is the basic element in this theory, which takes the a-priori system state to the conditioned a-posteriori state:

\tilde\rho_r(t+T) = \mathcal{O}_r \rho(t) .    (1.140)
4. It is possible to be even more general by allowing the apparatus to be initially correlated with the system. We do not consider this situation because it removes an essential distinction between apparatus and system, namely that the former is in a fiducial state known to the experimenter, while the latter can be in an arbitrary state (perhaps known to a different experimenter). If the two are initially correlated they should be considered jointly as the system.
The state in Eq. (1.140) is unnormalized. Its norm is the probability $\wp_r$ for obtaining the result $R = r$:

\wp_r = {\rm Tr}[\mathcal{O}_r \rho(t)] ,    (1.141)

so that the normalized conditioned state is

\rho_r(t+T) = \mathcal{O}_r \rho(t)/\wp_r .    (1.142)

The corresponding effect is

\hat E_r = \sum_j \hat\Omega_{r,j}^\dagger \hat\Omega_{r,j} .    (1.143)
In terms of the unitary operator $\hat U(T_1)$ coupling system to apparatus, this operation can also be defined by

\mathcal{O}\rho = {\rm Tr}_A\!\left[\hat U(T_1)(\rho \otimes \rho_A)\hat U(T_1)^\dagger\right] ,    (1.148)

where $\rho_A$ is the initial apparatus state matrix.

Exercise 1.26 By decomposing $\rho_A$ into an ensemble of pure states, and considering an apparatus basis $\{|r\rangle\}$, derive an expression for $\hat\Omega_{r,j}$. Also show the non-uniqueness of the set $\{\hat\Omega_{r,j} : j\}$.
This completes our formal description of quantum measurement theory. Note that the above formulae, from Eq. (1.139) to Eq. (1.147), are exact analogues of the classical formulae from Eq. (1.26) to Eq. (1.34). The most general formulation of classical measurement was achieved simply by adding back-action to Bayes' theorem. The most general formulation of quantum measurement should thus be regarded as the quantum generalization of
Bayes' theorem, in which back-action is an inseparable part of the measurement. This difference arises simply from the fact that a quantum state is represented by a positive matrix, whereas a classical state is represented by a positive vector (i.e. a vector of probabilities). This analogy is summarized in Table 1.1.

Table 1.1. Comparison of the quantum and Bayesian measurement formulae: the initial state (a positive matrix ρ(t), versus a vector of probabilities); the measurement result R; for each r an operation, and an effect, such that Σ_r Ê_r = 1̂; the probability Pr[R = r]; the conditioned state ρ_r(t + T) = ρ̃_r(t + T)/℘(r); and the interpretation (a matter of debate!).
We now give a final example to show how generalized measurements such as these arise in practice, and why the terminology 'inefficient' is appropriate for those measurements for which measurement operators cannot be employed. It is based on Example 1 in Section 1.2.5, which is a description of efficient photon counting if |n⟩ is interpreted as the state with n photons.
Say one has an inefficient photon detector, which has only a probability η of detecting each photon. If the perfect detector would detect n photons, then, from the binomial expansion, the imperfect detector would detect r photons with probability

℘(r|n) = [n!/(r!(n − r)!)] η^r (1 − η)^{n−r}.   (1.149)
Thus, if r photons are counted at the end of the measurement, the probability that n photons would have been counted by the perfect detector is, by Bayes' theorem,

℘(n|r) = ℘(r|n)⟨n|ρ(t)|n⟩ / Σ_m ℘(r|m)⟨m|ρ(t)|m⟩.   (1.150)

Since the operation for the perfect (absorptive) detector of Example 1 is O_n = J[|0⟩⟨n|], the operation for the inefficient detector is

O_r = Σ_n ℘(r|n) J[|0⟩⟨n|],   (1.151)

corresponding to the (non-unique) decomposition of Eq. (1.139) with operators

Ω̂_{r,n} = [℘(r|n)]^{1/2} |0⟩⟨n|.   (1.152)

This operation cannot be written in terms of a single measurement operator; that is, the measurement is inefficient. The conditioned system state is

ρ_r(t + T) = O_r ρ(t)/Tr[ρ(t)Ê_r],   (1.153)
Table 1.2. Eight classes of quantum measurements, with their defining conditions in terms of the operations and effects.

  Name                   Abbreviation  Definition
  Efficient              E             ∀r, ∃M̂_r such that O_r = J[M̂_r]
  Complete               C             ∀ρ, ∀r, O_r ρ ∝ O_r 1̂
  Sharp                  S             ∀r, rank(Ê_r) = 1
  Of an observable X̂     O             ∀r, Ê_r = E_r(X̂)
  Back-action-evading    BAE           O with, ∀ρ, ∀x, Tr[π̂_x Oρ] = Tr[π̂_x ρ], with π̂_x the projectors of X̂
  Minimally disturbing   MD            E with, ∀r, M̂_r = M̂_r†
  Projective             P             MD and O
  von Neumann            VN            P and S
The effects for this measurement are

Ê_r = Σ_n [n!/(r!(n − r)!)] η^r (1 − η)^{n−r} |n⟩⟨n|,   (1.154)

while the a-posteriori state,

ρ_r(t + T) = |0⟩⟨0|,   (1.155)

is independent of the result r, just as for the perfect detector of Example 1.
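A small numerical sketch of this example may be helpful. Assuming a truncated number basis (the cutoff N is our choice, for illustration only), one can verify that the effects (1.154) resolve the identity and that the retrodictive probabilities (1.150) vanish for n < r:

```python
import numpy as np
from math import comb

eta, N = 0.6, 10                       # efficiency; photon-number cutoff (ours)

def p_r_given_n(r, n):
    # binomial detection probability, Eq. (1.149)
    return comb(n, r) * eta**r * (1 - eta)**(n - r) if r <= n else 0.0

# effects of Eq. (1.154) in the truncated number basis
E = [np.diag([p_r_given_n(r, n) for n in range(N + 1)]) for r in range(N + 1)]
assert np.allclose(sum(E), np.eye(N + 1))      # completeness, Eq. (1.147)

# retrodiction by Bayes' theorem, Eq. (1.150), for a flat diagonal a-priori state
r = 3
post = np.array([p_r_given_n(r, n) / (N + 1) for n in range(N + 1)])
post /= post.sum()
assert np.isclose(post.sum(), 1.0)
assert all(post[n] == 0 for n in range(r))     # at least r photons were present
```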
Fig. 1.2 A Venn diagram for the eight classes of quantum measurements described in Table 1.2.
It is only for the class of efficient measurements that one can derive the following powerful theorem [Nie01, FJ01]:

H[ρ(t)] ≥ Σ_r ℘_r H[ρ_r(t + T)].   (1.156)

Here H[ρ] is any measure of the mixedness of ρ that is invariant under unitary transformations of ρ and satisfies

H[w₁ρ₁ + w₂ρ₂] ≥ w₁H[ρ₁] + w₂H[ρ₂]   (1.157)

for all weights w₁, w₂ ≥ 0 with w₁ + w₂ = 1. That is, on average, an efficient measurement can only decrease the mixedness of the state. For inefficient measurements this is not true in general.
An even stronger version of this theorem, using majorization to classify the relative mixedness of two states, has also been
proven [Nie01, FJ01].
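As a minimal numerical check of the theorem (1.156), one can take H to be the linear entropy 1 − Tr[ρ²] (one admissible choice, since it is unitarily invariant and concave) and an arbitrary two-outcome efficient measurement; the measurement operators below are our own illustrative choice:

```python
import numpy as np

def lin_entropy(rho):
    # linear entropy 1 - Tr[rho^2]: unitarily invariant and concave,
    # so it qualifies as a mixedness measure H in Eq. (1.156)
    return 1.0 - np.trace(rho @ rho).real

# an efficient two-outcome measurement: O_r = J[M_r], with E_0 + E_1 = 1
M = [np.diag([0.9, 0.4]) ** 0.5, np.diag([0.1, 0.6]) ** 0.5]
rho = np.array([[0.6, 0.25 - 0.1j], [0.25 + 0.1j, 0.4]])   # a mixed a-priori state

avg = 0.0
for M_r in M:
    out = M_r @ rho @ M_r.conj().T          # unnormalized conditioned state
    p_r = np.trace(out).real
    avg += p_r * lin_entropy(out / p_r)

assert lin_entropy(rho) >= avg - 1e-12      # Eq. (1.156)
```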
Exercise 1.27 Prove the foregoing statement, by finding an example for a binary system.
Hint: This is a classical phenomenon, so the measurement operators and state matrix in
the example can all be diagonal in the same basis.
[C]: Complete measurements. The definition of complete measurements in Table 1.2 implies that, for all results r, the conditioned a-posteriori state

ρ_r(t + T) = O_r ρ(t)/Tr[Ê_r ρ(t)]   (1.158)

is independent of ρ(t). This requires the operation to be of the form

O_r = Σ_{j,k} J[|r_k⟩⟨s_j|],   (1.159)

where |r_k⟩ and |s_j⟩ denote (possibly unnormalized) system states. From this, it is easy to see that the conditioned state, independently of ρ(t), is

ρ_r(t + T) = Σ_k |r_k⟩⟨r_k| / Σ_k ⟨r_k|r_k⟩.   (1.160)
The concept of complete measurements (or, more particularly, incomplete measurements) will be seen to be very useful when discussing adaptive measurements in Section 2.5.
[S]: Sharp measurements. The definition of sharp measurements in Table 1.2 implies that the effects are rank-1 positive operators. That is to say, each effect is of the form Ê_r = |π_r⟩⟨π_r|, for some (possibly unnormalized) state |π_r⟩. This implies that the operations must be of the form

O_r = Σ_k J[|r_k⟩⟨π_r|].   (1.161)

From this it is apparent that sharp measurements are a subclass of complete measurements. Also, it is apparent that, for efficient measurements, sharpness and completeness are identical properties.
The significance of sharpness is that a sharp measurement cannot be an unsharp version of a different measurement [MdM90a, MdM90b]. That is, the results of a sharp measurement cannot be generated by making a different measurement and then rendering it unsharp by classically processing the results. Mathematically, a sharp measurement {Ê_r} is one for which there is no other measurement {Ê′_s : s} such that

Ê_r = Σ_s w_{r|s} Ê′_s,   (1.162)

where the weights w_{r|s} ∈ [0, 1] form a non-trivial conditional probability distribution for r given s.
[O]: Measurements of an observable. From the definition in Table 1.2, a measurement of an observable X̂ is one whose effects are functions of X̂, so that the probability of the result r is

℘_r = Tr[E_r(X̂)ρ(t)] = Σ_x E_r(x) Tr[π̂_x ρ(t)],   (1.163)

where {x} are the (assumed discrete for simplicity) eigenvalues of X̂ and π̂_x the corresponding projectors. If all of the effects are functions of the same operator X̂, then it is
evident that the measurement is equivalent to a (possibly unsharp) measurement of the
observable X. That is, the result R could be obtained by making a projective measurement
of X and then processing the result. Note that this definition places no restriction on the
state of the system after the measurement.
The class labelled O in Fig. 1.2 should be understood to be the class of measurements that are measurements of some observable X̂. Note that, by virtue of the definition here, a measurement in this class may be a measurement of more than one observable. For example, it is obvious from the above definition that any measurement of X̂² is also a measurement of X̂. However, if X̂ has eigenvalues of equal magnitude but opposite sign, then the converse is not true. This is because, for example, it is not possible to write the effects for a projective measurement of X̂, which are

Ê_x = |x⟩⟨x| = π̂_{X̂,x},   (1.164)

as a function of Ŝ = X̂². This is the case even though the projectors for the latter are functions of X̂:

Ê_s = Σ_x δ_{x²,s} |x⟩⟨x| = π̂_{X̂²,s}.   (1.165)
By binning results (corresponding to values of X with the same magnitude), one can convert the measurement of X̂ into a measurement of X̂². However, it is not permissible to allow such binning in the above definition, because then every measurement would be a measurement of any observable: simply binning all the results together gives a single effect Ê = 1̂, which can be written as a (trivial) function of any observable.
[BAE]: Back-action-evading measurements. Consider a measurement of an observable X̂ according to the above definition. A hypothetical projective measurement of X̂ before this measurement will not affect the results of this measurement, because the effects are functions of X̂. A back-action-evading (BAE) measurement of X̂ is one that, in addition, does not disturb the occupation probabilities for X̂, on average. That is, for all a-priori states ρ and all eigenvalues x,

Tr[π̂_x Oρ(t)] = Tr[π̂_x ρ(t)],   (1.166)

where O = Σ_r O_r is the total (non-selective) operation. This condition is related to that defining a quantum non-demolition (QND) measurement of X̂, namely

Û†(T₁)X̂Û(T₁) = X̂,   (1.167)
where U (T1 ) is the unitary operator describing the coupling of the system to the meter, as
in Section 1.3.2, so that X is to be understood as X S 1 A .
The condition for a back-action-evading measurement (1.166) is implied by (and hence is weaker than) that for a quantum non-demolition measurement. To see this, first note that a unitary transformation preserves eigenvalues, so that Eq. (1.167) implies that, for all x,

π̂_x ⊗ 1̂_A = Û†(T₁)(π̂_x ⊗ 1̂_A)Û(T₁).   (1.168)

Now post-multiply both sides of Eq. (1.168) by ρ ⊗ ρ_A, where ρ_A is the initial apparatus state. This gives

(π̂_x ρ) ⊗ ρ_A = Û†(T₁)(π̂_x ⊗ 1̂_A)Û(T₁)(ρ ⊗ ρ_A).   (1.169)

Now pre- and post-multiply by Û(T₁) and Û†(T₁), respectively. This gives

Û(T₁)[(π̂_x ρ) ⊗ ρ_A]Û†(T₁) = (π̂_x ⊗ 1̂_A)Û(T₁)(ρ ⊗ ρ_A)Û†(T₁).   (1.170)

Taking the total trace of both sides then yields Eq. (1.166), from the result in Eq. (1.148).
Often the terms back-action-evading (BAE) measurement and quantum non-demolition
(QND) measurement are used interchangeably, and indeed the authors are not aware of
any proposal for a BAE measurement that is not also a QND measurement. The advantage
of the BAE definition given above is that it is formulated in terms of the operations and
effects, as we required.
It is important not to confuse the non-selective and selective a-posteriori states. The
motivating definition (1.166) is formulated in terms of the non-selective total operation
O. The definition would be silly if we were to replace this by the selective operation Or
(even if an appropriate normalizing factor were included). That is because, if the system
were prepared in a state with a non-zero variance in X, then the measurement would in
general collapse the state of the system into a new state with a smaller variance for X.
That is, the statistics of X would not remain the same. The actual definition ensures that on
average (that is, ignoring the measurement results) the statistics for X are the same after
the measurement as before.
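The non-selective condition can be illustrated with the binary coupling of Section 1.2.6. The sketch below is our own construction; it uses the fact that X̂_S ⊗ P̂_A is a projector, so that exp(iπX̂_S ⊗ P̂_A) = 1̂ − 2X̂_S ⊗ P̂_A, and checks Eq. (1.166) directly:

```python
import numpy as np

ket0, ket1 = np.eye(2)
X_S = np.outer(ket1, ket1)                       # system observable (projector)
P_A = 0.5 * np.outer(ket0 - ket1, ket0 - ket1)   # apparatus operator (projector)
Pi = np.kron(X_S, P_A)
U = np.eye(4) - 2 * Pi          # exp(i*pi*Pi) = 1 - 2*Pi, since Pi is a projector

rho = np.array([[0.55, 0.3 + 0.1j], [0.3 - 0.1j, 0.45]])    # a-priori state
rho_A = np.outer(ket0, ket0)                                # fiducial state |0><0|

total = U @ np.kron(rho, rho_A) @ U.conj().T
O_rho = total.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # Tr_A, cf. Eq. (1.148)

for x in (0, 1):
    pi_x = np.outer(np.eye(2)[x], np.eye(2)[x])
    # Eq. (1.166): the occupation probabilities for X are unchanged on average
    assert np.isclose(np.trace(pi_x @ O_rho).real, np.trace(pi_x @ rho).real)
```

Replacing the non-selective O by a selective O_r here would make the assertion fail for states with non-zero variance in X, which is exactly the point made above.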
[MD]: Minimally disturbing measurements. Minimally disturbing measurements are a subclass of efficient measurements. The polar decomposition theorem says that an arbitrary operator, such as the measurement operator M̂_r, can be decomposed as

M̂_r = Û_r V̂_r,   (1.171)

where Û_r is unitary and V̂_r = √Ê_r is Hermitian and positive. We can interpret these two operators as follows. The Hermitian V̂_r is responsible for generating the necessary back-action (the state collapse) associated with the information gained in obtaining the result r (since the statistics of the results are determined solely by Ê_r, and hence solely by V̂_r). The unitary Û_r represents surplus back-action: an extra unitary transformation independent of the state.
A minimally disturbing measurement is one for which Û_r is (up to an irrelevant phase factor) the identity. That is,

M̂_r = √Ê_r,   (1.172)

so that the only disturbance of the system is the necessary back-action determined by the probability operators Ê_r. The name 'minimally disturbing' can be justified rigorously as follows. The fidelity between an a-priori state of maximal knowledge |ψ⟩ and the a-posteriori state ρ̃_r = O_r |ψ⟩⟨ψ|, averaged over r and ψ, is

F_average = ∫ dμ_Haar(ψ) Σ_r ⟨ψ|ρ̃_r|ψ⟩.   (1.173)

Here dμ_Haar(ψ) is the Haar measure over pure states, the unique measure that is invariant under unitary transformations. For a given POM {Ê_r}, this average fidelity is maximized for efficient measurements with measurement operators given by Eq. (1.172) [Ban01].
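The polar decomposition (1.171) is easily computed. Below is a sketch for a hypothetical qubit measurement operator that flips the qubit as it measures, so that the surplus back-action Û_r comes out as a bit-flip:

```python
import numpy as np

def sqrtm_psd(E):
    # principal square root of a positive matrix, via eigendecomposition
    w, V = np.linalg.eigh(E)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

# hypothetical measurement operator with surplus back-action (a bit-flip)
M_r = np.array([[0.0, np.sqrt(0.3)], [np.sqrt(0.8), 0.0]])

E_r = M_r.conj().T @ M_r            # the effect: fixed by the statistics alone
V_r = sqrtm_psd(E_r)                # the necessary back-action, Eq. (1.171)
U_r = M_r @ np.linalg.inv(V_r)      # the surplus back-action

assert np.allclose(U_r.conj().T @ U_r, np.eye(2))    # U_r is unitary
assert np.allclose(U_r @ V_r, M_r)                   # M_r = U_r sqrt(E_r)
# the minimally disturbing measurement with identical statistics uses V_r alone
assert np.allclose(V_r.conj().T @ V_r, E_r)
```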
Exercise 1.28 Show that, for a given POM and a particular initial state ρ, a minimally disturbing measurement (as defined here) is in general not the one which maximizes the fidelity between the a-priori and a-posteriori states.
Hint: Consider a QND measurement of σ̂_z on the state |ψ⟩ = (|σ_z := 1⟩ + |σ_z := −1⟩)/√2. Compare this with the non-QND measurement of σ̂_z having measurement operators |σ_z := −1⟩⟨σ_z := 1| and |σ_z := 1⟩⟨σ_z := −1|.
For minimally disturbing measurements, it is possible to complement the relation (1.156)
by the following equally powerful theorem:
H[ρ(t + T)] ≥ H[ρ(t)],   (1.174)

where ρ(t + T) = Oρ(t) is the unconditioned a-posteriori state. That is, if the measurement results are ignored, one's information about the system can only decrease. This does not hold for measurements in general; for the measurement in Example 1, the a-posteriori state is the pure state |0⟩ regardless of the a-priori state. However, it does hold for a slightly broader
class than minimally disturbing measurements, namely measurements in which the surplus
back-action Û_r in Eq. (1.171) is the same for all r. These can be thought of as minimally
disturbing measurements followed by a period of unitary evolution.
A minimally disturbing measurement of an observable X is a BAE measurement of
that observable, but, of course, minimally disturbing measurements are not restricted to
measurements of observables. Finally, it is an interesting fact that the class of minimally
disturbing measurements does not have the property of closure. Closure of a class means
that, if an arbitrary measurement in a class is followed by another measurement from the
same class, the total measurement (with a two-fold result) is guaranteed to be still a
member of that class.
Exercise 1.29 Find an example that illustrates the lack of closure for the MD class.
[P]: Projective measurements. These are the measurements with which we began our discussion of quantum measurements in Section 1.2.2. They are sometimes referred to as orthodox measurements, and as Type I measurements (all other measurements being Type II) [Pau80]. From the definition that they are minimally disturbing measurements of an observable, it follows that the measurement operators M̂_r and effects Ê_r are identical and equal to projectors π̂_r.
[VN]: Von Neumann measurements. Sometimes the term 'von Neumann measurement' is used synonymously with 'projective measurement'. We reserve the term for sharp projective measurements (that is, those with rank-1 projectors). This is because von Neumann actually got the projection postulate wrong for projectors of rank greater than 1, as was pointed out (and corrected) by Lüders [Lud51]. Von Neumann measurements are the only measurements which are members of all of the above classes.
state. Any ket containing a complex number (α, β or γ) indicates a coherent state, defined as

|α⟩ = e^{−|α|²/2} Σ_{n=0}^∞ (αⁿ/√(n!)) |n⟩.   (1.175)
See also Appendix A. It is also useful to define the sets E and O, the even and odd counting numbers, respectively. If the result r is denoted n, then the resolution of the identity is Σ_{n=0}^∞ Ê_n. If it is denoted α then it is ∫d²α Ê_α. If it is denoted E, O then it is Ê_E + Ê_O.
We also use the following operators in the list below. The operator D̂_β denotes a displacement operator, defined by how it affects a coherent state:

D̂_β|γ⟩ = |γ + β⟩,   (1.176)

for some non-zero complex number β. The number operator N̂ has the number states as its eigenstates. The two operators π̂_E and π̂_O are defined by

π̂_{E,O} = Σ_{n∈E,O} |n⟩⟨n|.   (1.177)
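These definitions can be checked in a truncated number basis (the cutoff below is ours, for illustration only):

```python
import numpy as np
from math import factorial, sqrt

alpha, N = 1.5 + 0.5j, 40          # coherent amplitude; truncation cutoff (ours)

# coherent-state amplitudes in the number basis, Eq. (1.175)
c = np.array([np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / sqrt(factorial(n))
              for n in range(N)])

assert np.isclose(np.vdot(c, c).real, 1.0)                      # normalized
n = np.arange(N)
assert np.isclose((n * np.abs(c) ** 2).sum(), abs(alpha) ** 2)  # <N> = |alpha|^2

# the parity projectors of Eq. (1.177) resolve the identity
pi_E = np.diag((n % 2 == 0).astype(float))
pi_O = np.diag((n % 2 == 1).astype(float))
assert np.allclose(pi_E + pi_O, np.eye(N))
```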
The operations in the list include, among the recoverable entries:

O_n = J[|n⟩⟨n|];
O_n = Σ_m ℘(n|m) J[|m⟩⟨m|];
O_n = Σ_m ℘(n|m) J[D̂_β|m⟩⟨m|];
O_n = J[|0⟩⟨n|];
O_n = Σ_m 2^{−(m+1)} J[|m⟩⟨n|];
O_{E,O} = J[π̂_{E,O}];
O_E = Σ_{n∈E} J[|0⟩⟨n|],  O_O = Σ_{n∈O} J[|1⟩⟨n|];
O_{E,O} = J[D̂_β π̂_{E,O}];
O_{E,O} = Σ_{n∈E,O} J[|0⟩⟨n|],

together with the corresponding effects Ê_r.
systems. In the experiment performed by the Haroche group in Paris in 1999 [NRO+99], the measured system was the state of an electromagnetic field in a microwave cavity. Apart from small imperfections, the preparation procedure produced a pure state containing no more than a single photon. Thus the state of the cavity field may be written as |ψ⟩ = c₀|0⟩ + c₁|1⟩. The measured variable is the photon number, with result 0 or 1. The apparatus was an atom with three levels: ground state |g⟩, excited state |e⟩ and an auxiliary state |i⟩. The final readout on the apparatus determines whether the atom is in state |g⟩ by a selective ionization process, which we will describe below. This final readout is not ideal, and thus we will need to add an extra classical noise to the description of the measurement.
We begin with a brief description of the interaction between the cavity field and a single two-level atom, in order to specify how the correlation between the system and the apparatus is established. If, through frequency or polarization mismatching, the cavity mode does not couple to the auxiliary level |i⟩, then we can define the atomic lowering operator by σ̂ = |g⟩⟨e|. The field annihilation operator is â (see Section A.4). The relevant parts of the total Hamiltonian are

Ĥ = ℏω_c â†â + ℏω_g|g⟩⟨g| + ℏω_e|e⟩⟨e| + ℏω_i|i⟩⟨i| + (iℏΩ/2)(σ̂ + σ̂†)(â − â†),   (1.178)

where Ω is known as the single-photon Rabi frequency, and is proportional to the dipole moment of the atom and inversely proportional to the square root of the volume of the cavity mode. We work in the interaction frame (see Section A.1.3) with the free Hamiltonian

Ĥ₀ = ℏω_c â†â + ℏω_g|g⟩⟨g| + ℏ(ω_g + ω_c)|e⟩⟨e| + ℏ(ω_g + ω_d)|i⟩⟨i|,   (1.179)

where ω_d is the frequency of a driving field, a classical microwave field (to be discussed later). The interaction Hamiltonian V̂ = Ĥ − Ĥ₀ becomes the time-dependent Hamiltonian V̂_IF(t) in the interaction frame. However, the evolution it generates is well approximated by the time-independent Hamiltonian

V̂_IF = (iℏΩ/2)(âσ̂† − â†σ̂) + ℏΔσ̂†σ̂ + ℏδ|i⟩⟨i|,   (1.180)

where Δ = ω_e − ω_g − ω_c is the atom–cavity detuning and δ = ω_i − ω_g − ω_d is the detuning of the driving field from the i ↔ g transition.
Let us now assume that the atom is resonant with the cavity (Δ = 0), in which case the Hamiltonian (1.180) (apart from the final term) is known as the Jaynes–Cummings Hamiltonian. If this Hamiltonian acts for a time τ on an initial state |1, g⟩, the final state is

exp(−iV̂_IF τ/ℏ)|1, g⟩ = cos(Ωτ/2)|1, g⟩ + sin(Ωτ/2)|0, e⟩,   (1.181)

while the state |0, g⟩ does not evolve at all. Choosing the interaction time τ such that Ωτ = 2π returns the atom to the ground state, but changes the sign of the one-photon component. The net effect is a conditional phase shift:

(c₀|0⟩ + c₁|1⟩) |g⟩ → (c₀|0⟩ − c₁|1⟩) |g⟩.   (1.182)
It is called conditional because the sign of the state is flipped if and only if there is one photon present. Note that we are not using the term here in the context of a measurement occurring.

As it stands, this is not of the form of the binary quantum measurement discussed in Section 1.2.4, since the meter state (the atom) does not change at all. In order to configure this interaction as a measurement, we need to find a way to measure the relative phase shift introduced by the interaction between the field and the atom. This is done using the auxiliary electronic level |i⟩, which does not interact with the cavity mode and so cannot undergo a conditional phase shift. We begin by using a classical microwave pulse R₁ of frequency ω_d to prepare the atom in a superposition of the auxiliary state and the ground state: |g⟩ → (|g⟩ + |i⟩)/√2. For the moment, we assume that this is resonant, so that δ = 0 in Eq. (1.180). After the conditional interaction, C, between the atom and the cavity field, another microwave pulse R₂ of frequency ω_d again mixes the states |g⟩ and |i⟩. It reverses the action of R₁, taking |g⟩ → (|g⟩ − |i⟩)/√2 and |i⟩ → (|g⟩ + |i⟩)/√2.
Exercise 1.33 Show that this transformation is unitary and reverses R1 .
Fig. 1.3 A schematic diagram of the Haroche single-photon measurement [NRO+99]. A single atom traverses three microwave fields R₁, C and R₂, the middle one being described by a single-mode cavity field. It then encounters two ionization detectors, D_e and D_g, which detect whether the atom is in the excited state or ground state, respectively. The driving fields R₁ and R₂ are produced by the same microwave source, which locks their relative phase. Adapted by permission from Macmillan Publishers Ltd: G. Nogues et al., Nature 400, 239–242 (1999), (Fig. 1), copyright Macmillan Magazines Ltd 1999.
Finally, a projective readout of the ground state |g⟩ is made, as shown in Fig. 1.3. The full measurement protocol can now be described:

(c₀|0⟩ + c₁|1⟩)|g⟩ →(R₁) (c₀|0⟩ + c₁|1⟩)(|i⟩ + |g⟩)/√2
                  →(C)  [c₀|0⟩(|i⟩ + |g⟩) + c₁|1⟩(|i⟩ − |g⟩)]/√2
                  →(R₂) c₀|0⟩|g⟩ + c₁|1⟩|i⟩.   (1.183)
An ideal measurement of whether the atom is in the ground state gives a 'yes' ('no') result with probability |c₀|² (|c₁|²), and a measurement of the photon number has been made without absorbing the photon.
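The protocol (1.183) can be verified by direct matrix multiplication. The following sketch assumes the basis orderings (|0⟩, |1⟩) for the field and (|g⟩, |i⟩) for the atom:

```python
import numpy as np

s2 = 1 / np.sqrt(2)
R1 = np.array([[s2, -s2], [s2, s2]])   # |g> -> (|g>+|i>)/sqrt(2)
R2 = np.array([[s2, s2], [-s2, s2]])   # |g> -> (|g>-|i>)/sqrt(2); reverses R1
assert np.allclose(R2 @ R1, np.eye(2))

# conditional phase C, Eq. (1.182): sign flip on |1>|g> only
C = np.diag([1.0, 1.0, -1.0, 1.0])     # basis order |0g>, |0i>, |1g>, |1i>

c0, c1 = 0.6, 0.8
psi_in = np.kron([c0, c1], [1.0, 0.0])           # (c0|0> + c1|1>)|g>
U = np.kron(np.eye(2), R2) @ C @ np.kron(np.eye(2), R1)
psi_out = U @ psi_in

assert np.allclose(psi_out, [c0, 0.0, 0.0, c1])  # Eq. (1.183)
```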
To compare this with the binary measurement discussed in Section 1.2.4, we use the apparatus state encoding |g⟩ → |0⟩_A, |i⟩ → |1⟩_A. The overall interaction (R₂ ∘ C ∘ R₁) between the system and the apparatus is then defined by Eq. (1.62). We can then specify the apparatus operators X̂_A and P̂_A used in Section 1.2.6:

X̂_A = |i⟩⟨i|,   (1.184)
P̂_A = (1/2)(|g⟩ − |i⟩)(⟨g| − ⟨i|).   (1.185)
Likewise the equivalent operators for the system can be defined in the photon-number basis: X̂_S = |1⟩⟨1|, P̂_S = (|0⟩ − |1⟩)(⟨0| − ⟨1|)/2. Provided that the atom is initially restricted to the subspace spanned by {|g⟩, |i⟩}, the action of R₂ ∘ C ∘ R₁ can be represented in terms of these operators as

R₂ ∘ C ∘ R₁ = exp(iπX̂_S ⊗ P̂_A).   (1.186)
Certain aspects of the Paris experiment highlight the kinds of considerations that distinguish an actual measurement from simple theoretical models. To begin, it is necessary to prepare the states of the apparatus (the atoms) appropriately. Rubidium atoms from a thermal beam are first prepared by laser-induced optical pumping into the circular Rydberg states with principal quantum numbers 50 (for level g) or 51 (for level e). The e ↔ g transition is resonant with a cavity field at 51.1 GHz. The auxiliary level, i, corresponds to a principal quantum number of 49, and the i ↔ g transition is resonant at 54.3 GHz.

Next it is necessary to control the duration of the interaction between the system and the apparatus in order to establish the appropriate correlation. To do this, the atoms transiting the cavity field must have a velocity carefully matched to the cavity length. The optical-pumping lasers controlling the circular states are pulsed, generating at a preset time an atomic sample with on average 0.3–0.6 atoms. Together with velocity selection, this determines the atomic position at any time to within 1 mm. The single-photon Rabi frequency at the cavity centre is Ω/(2π) = 47 kHz. The selected atomic velocity is 503 m s⁻¹ and the beam waist inside the cavity is 6 mm, giving an effective interaction time τ such that Ωτ = 2π. Finally, a small external electric field Stark-shifts the atomic frequency out of resonance with the cavity. This gives rise to an adjustable detuning Δ in Eq. (1.180), which allows fine control of the effective interaction.
The experiment is designed to detect the presence or absence of a single photon. Thus it is necessary to prepare the cavity field in such a way as to ensure that such a state is typical. The cavity is cooled to below 1.2 K, at which temperature the average thermal excitation of photon number in the cavity mode is n̄ ≈ 0.15. The thermal state of a cavity field is a mixed state of the form

ρ_c = (1 + n̄)⁻¹ Σ_{n=0}^∞ e^{−nβ}|n⟩⟨n|,   (1.187)

where β = ℏω_c/(k_B T). At these temperatures β > 1, and we can assume that the cavity field is essentially in the vacuum state |0⟩. The small components of higher photon number lead to experimental errors.
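As a sanity check on these numbers (using the quoted temperature and cavity frequency, with standard SI values for the physical constants):

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23   # SI values
T, nu_c = 1.2, 51.1e9                       # temperature (K), cavity frequency (Hz)

beta = hbar * 2 * np.pi * nu_c / (kB * T)  # the exponent scale in Eq. (1.187)
nbar = 1.0 / np.expm1(beta)                # mean thermal photon number

assert beta > 1                            # so the field is nearly vacuum
assert 0.1 < nbar < 0.2                    # ~ 0.15, as quoted in the text
```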
In order to generate an average photon number large enough for one to see a single-photon signal, it is necessary to excite a small field coherently. This is done by injecting a preparatory atom in the excited state, |e⟩, and arranging the interaction time so that the atom-plus-cavity state becomes (|0, e⟩ + |1, g⟩)/√2. The state of this atom is then measured after the interaction. If it is found to be |g⟩, then a single photon has been injected into the cavity field mode. If it is found to be |e⟩, the cavity field mode is still the vacuum. Thus each run consists of randomly preparing either a zero- or a one-photon state, and measuring it. Over many runs the results are accumulated, and binned according to which initial field state was prepared. The statistics over many runs are then used to generate the
Fig. 1.4 The experimental results of the Paris single-photon experiment, showing the probability of measuring the atom in the ground state versus detuning of the cavity field. The dashed line corresponds to an initial field with a single photon, whereas the solid line is for an initial vacuum field state. Reprinted by permission from Macmillan Publishers Ltd: G. Nogues et al., Nature 400, 239–242 (1999), (Fig. 2), copyright Macmillan Magazines Ltd 1999.
conditional probability of finding the atom in the state |g⟩ when there is one photon in the cavity.
Another refinement of the experiment is to use the detuning δ of the fields R₁ and R₂ to vary the quality of the measurement. This is a standard technique in atomic physics known as Ramsey fringe interferometry, or just Ramsey interferometry. It is explained in Box 1.4, where |e⟩ plays the role of |i⟩ in the present discussion. The extra Hamiltonian ℏδ|i⟩⟨i| causes free evolution of the atomic dipole. Its net effect is to introduce an extra phase δT, proportional to the time T between the applications of each of these fields. The probability of finding the atom in state |g⟩ at the end of the measurement is then given by

℘_g = ℘₀η + ℘₁(1 − η),   (1.188)

where ℘₀ and ℘₁ are the probabilities that the cavity contains no or one photon, respectively, and η = cos²(δT/2). If ℘₀ = 1 or 0 at the start of the measurement, then ℘_g is an oscillatory function of the detuning δ, and the phase of the oscillation distinguishes the two cases.
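The two fringe patterns of Eq. (1.188) can be sketched as follows, taking η = cos²(δT/2) and an illustrative (not experimental) value for T:

```python
import numpy as np

T = 1.0e-4                                        # free-evolution time (illustrative)
delta = 2 * np.pi * np.linspace(-3e4, 3e4, 401)   # detuning range (rad/s)
eta = np.cos(delta * T / 2) ** 2                  # visibility factor in Eq. (1.188)

pg_vacuum = 1.0 * eta                  # p0 = 1: field starts in vacuum
pg_photon = 1.0 * (1 - eta)            # p1 = 1: field starts with one photon

assert np.allclose(pg_vacuum + pg_photon, 1.0)    # fringes exactly out of phase
assert np.isclose(pg_vacuum[200], 1.0)            # on resonance the atom exits in |g>
```

In the real experiment the contrast is reduced by the imperfections discussed below, so the measured curves oscillate between intermediate values rather than 0 and 1.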
In Fig. 1.4 we show the experimental results from the Paris experiment. Two cases are
shown: in one case (dashed line) the initial state of the field was prepared in a one-photon
state (the preparatory atom exited in the ground state), whereas in the second case (solid
Box 1.4 Ramsey fringe interferometry
The atom is prepared in the ground state and injected through a classical field R₁ with frequency ω_d that differs from the atomic resonance frequency ω_eg by a small detuning δ. The atomic velocity is chosen so that the atom interacts with the field for a precise time τ. The interaction induces a superposition between the ground and excited states of the form

|g⟩ → α|g⟩ + β|e⟩,   (1.190)

where the coefficients α and β depend on δ, τ and the Rabi frequency for the transition. (The Rabi frequency is roughly the dot product of the classical electric field with the electric dipole moment of the atomic transition, divided by ℏ. It also equals the single-photon Rabi frequency times the square root of the mean number of photons in the field. For a classical field in a mode with a large mode volume (as here), the former is very small and the latter very large, giving a finite product.) If the detuning is small enough, one can arrange to obtain α = β = 1/√2.
The atom then evolves freely for a time T, during which the Hamiltonian in the interaction frame is V̂_IF = ℏδ|e⟩⟨e|. This changes β to βe^{−iδT}. After this the atom interacts with another classical field, R₂, of the same frequency, which undoes the transformation R₁. This means that we have to adjust T and/or the phase of R₂ so that, if δ = 0, all atoms emerge in the ground state. Then the state of the atom after the second field is

cos(δT/2)|g⟩ − i sin(δT/2)|e⟩.   (1.191)

The probability that an atom will emerge in the excited state when δ ≠ 0 is thus

℘_e(δ) = sin²(δT/2).   (1.192)
By varying the frequency ω_d of the driving fields R₁ and R₂, and sampling this probability by repeated measurement, we produce interference fringes with a spacing proportional to T⁻¹. A complicating effect is that, for large detuning, the coefficients α and β are not exactly 1/√2, but also depend on the detuning in both amplitude and phase, and this causes the interference-fringe visibility to decrease.
line) the field was prepared in a zero-photon state (the preparatory atom exited in the excited state). In both cases the probability of finding the apparatus atom in the ground state, |g⟩, is plotted as a function of the detuning of the R₁ and R₂ fields. Note that the two cases are out of phase, as expected.

It is quite apparent from the data that the measurement is far from perfect. The probabilities do not vary from zero to unity, so the contrast or visibility, defined as (℘_max − ℘_min)/(℘_max + ℘_min), is not unity. A primary source of error is the efficiency of the ionization detectors, which is as low as 30%. Also, the interaction that correlates the field and the apparatus is not perfect, and there is a 20% residual probability for the apparatus atom to absorb the photon, rather than induce a conditional phase shift. Other sources of error are imperfections of the π/2 Ramsey pulses, samples containing two atoms in the cavity, the residual thermal field in the cavity and the possibility that the injected photon will escape from the cavity before the detection atom enters.
Exercise 1.34 What is the effect of imperfect ionization detection on the readout? Calculate the mean and variance of the readout variable Y in terms of the mean and variance of the system variable X, for the binary asymmetric measurement defined by the conditional probabilities ℘(y|x) below:

℘(1|1) = η = 1 − ℘(0|1),
℘(1|0) = ϑ = 1 − ℘(0|0).

Here η is the detection efficiency, while ϑ is related to the rate of so-called dark counts.
2
Quantum parameter estimation
ρ_X = exp(−iĜX)ρ₀ exp(iĜX).   (2.1)

Here Ĝ is an Hermitian operator known as the generator, and X is a real parameter. The aim of the receiver is to estimate X. The receiver does this by measuring a quantity X_est, which is the receiver's best estimate for X. We assume that X_est is represented by an Hermitian operator X̂_est.
The simplest way to characterize the quality of the estimate is the mean-square deviation

⟨(X_est − X)²⟩_X = ⟨(ΔX_est)²⟩_X + [b(X)]².   (2.2)

Here this is decomposed into the variance of the estimator (in the transformed state),

⟨(ΔX_est)²⟩_X = Tr[(X̂_est − ⟨X_est⟩_X)² ρ_X],   (2.3)

plus the square of the bias of the estimator X_est,

b(X) = ⟨X_est⟩_X − X,   (2.4)

that is, how different the mean of the estimator, ⟨X_est⟩_X = Tr[X̂_est ρ_X], is from the true value of X.
We now derive an inequality for the mean-square deviation of the estimate. First we note that, from Eq. (2.1),

d⟨X_est⟩_X/dX = −i Tr{[X̂_est, Ĝ]ρ_X}.   (2.5)

Using the general Heisenberg uncertainty principle (A.9),

⟨(ΔX_est)²⟩_X ⟨(ΔG)²⟩_X ≥ (1/4)|Tr{[X̂_est, Ĝ]ρ_X}|²,   (2.6)

we find that

⟨(ΔX_est)²⟩_X ⟨(ΔG)²⟩_X ≥ (1/4)(d⟨X_est⟩_X/dX)².   (2.7)
This inequality then sets a lower bound for the mean-square deviation,

⟨(X_est − X)²⟩_X ≥ [1 + b′(X)]²/[4⟨(ΔG)²⟩_X] + b²(X).   (2.8)
If there is no systematic error in the estimator, then ⟨X_est⟩_X = X and the bias is zero, b(X) = 0. In this case, the lower bound on the mean-square deviation of the estimate is

⟨(ΔX_est)²⟩_X ≥ 1/[4⟨(ΔG)²⟩₀].   (2.9)

Consider now the case in which X̂_est is canonically conjugate to the generator, so that

[X̂_est, Ĝ] = i.   (2.10)
In this case the parameter-estimation uncertainty principle, given in Eq. (2.9), follows
directly from the general Heisenberg uncertainty relation (2.6) on using the commutation
relations (2.10).
The obvious example of canonically conjugate operators is position and momentum. Let the unitary parameter transformation be exp(−iXP̂), where P̂ is the momentum operator. Let Q̂ be the canonically conjugate position operator, defined by the inner product ⟨q|p⟩ = exp(ipq)/√(2π). Then [Q̂, P̂] = i (see Appendix A). Choosing X̂_est = Q̂, with Tr[Q̂ρ₀] = 0, gives

⟨(X_est − X)²⟩ ≥ 1/[4⟨(ΔP)²⟩₀].   (2.11)
In general, X̂_est need not be canonically conjugate to the generator Ĝ. Indeed, in general X̂_est need not be an operator in the Hilbert space of the system at all. This means that the Holevo–Helstrom lower bound on the mean-square deviation in the estimate applies not only to projective measurements on the system. It also applies to generalized measurements described by effects. This is because, as we have seen in Section 1.3.2, a generalized measurement on the system is equivalent to a projective measurement of a joint observable on system and meter, namely the unitarily evolved meter readout observable R̂_A(t + T). In fact, it turns out that in many cases the optimal measurement is just such a generalized measurement. This is one of the most important results to come out of the work by Helstrom and Holevo.
In the above we have talked of the optimal measurement without defining what we mean by optimal. Typically we assume that the receiver knows the fiducial state ρ₀ and the generator Ĝ, but has no information about the parameter X. The aim of the receiver is then to minimize some cost function associated with the error in X_est. The Helstrom–Holevo lower bound relates to a particular cost function, the mean-square error ⟨(X_est − X)²⟩. An alternative cost function is −δ(X_est − X), which when minimized yields the maximum-likelihood estimate. However, the singular nature of this function makes working with it difficult. We will consider other alternatives later in this chapter, and in the next section we treat in detail optimality defined in terms of Fisher information.
⟨(δX_est)²⟩_X ≥ [MF(X)]⁻¹ ≥ [4M⟨(ΔG)²⟩₀]⁻¹.   (2.13)
Here M is the number of copies of the system used to obtain the estimate X_est. The deviation δX_est is not X_est − X, but rather

δX_est = X_est/|d⟨X_est⟩_X/dX| − X.   (2.14)
(2.14)
This is necessary to compensate for any bias in the estimate, and, for an unbiased estimate,
(Xest )2 X reduces to the mean-square error. Equation (2.13) will be derived later in
this section, but first we show how the Fisher information arises from consideration of
distinguishability.
found in the ground state by the final measurement is ℘_g = cos²θ, where θ = δT/2. Here δ is an adjustable detuning and T is the time interval to be measured.

Clearly a measurement on a single atom would not tell us very much about the parameter θ. If we could actually measure the probability ℘_g then we could easily determine θ. However, the best we can do is to measure whether the atom is in the ground state, on a large number M of atoms prepared in the same way. For a finite sample we will then obtain an estimate f_g of ℘_g, equal to the observed frequency of the ground-state outcome. Owing to statistical fluctuations, this estimate will not be exactly the same as the actual probability.
In a sample of size M, the probability of obtaining m_g outcomes in the ground state is given by the binomial distribution

℘^{(M)}(m_g) = [M!/(m_g!(M − m_g)!)] ℘_g(θ)^{m_g} [1 − ℘_g(θ)]^{M−m_g}.   (2.15)

The mean and variance for the fraction f_g = m_g/M are

⟨f_g⟩ = ℘_g(θ),   (2.16)
⟨(Δf_g)²⟩ = ℘_g(θ)[1 − ℘_g(θ)]/M.   (2.17)
It is then easy to see that the error in estimating Θ by estimating the probability ℘_g(Θ) in a finite sample is
\[ \delta\Theta = \left|\frac{{\rm d}\wp_g}{{\rm d}\Theta}\right|^{-1}\Delta f_g = \left|\frac{{\rm d}\wp_g}{{\rm d}\Theta}\right|^{-1}\left[\frac{\wp_g(1-\wp_g)}{M}\right]^{1/2}. \tag{2.18} \]
In order to be able to measure a small shift, δΘ = Θ′ − Θ, in the parameter from some fiducial setting, Θ, the shift must be larger than this error: |Θ′ − Θ| ≳ δΘ.
Since δΘ is the minimum distance in parameter space between two distinguishable distributions, we can characterize the statistical distance between two distributions as the number of distinguishable distributions that can fit between them, along a line joining them in parameter space. This idea was first applied to quantum measurement by Wootters [Woo81]. Because δΘ varies inversely with the square root of the sample size M, we define the statistical distance between two distributions with close parameters Θ and Θ′ as
\[ s = \lim_{M\to\infty}\frac{1}{\sqrt{M}}\,\frac{|\Theta'-\Theta|}{\delta\Theta}. \tag{2.19} \]
Strictly, for any finite difference we should use the integral form
\[ s(\Theta,\Theta') = \lim_{M\to\infty}\frac{1}{\sqrt{M}}\int_{\Theta}^{\Theta'}\frac{{\rm d}\Theta''}{\delta\Theta(\Theta'')}. \tag{2.20} \]
Exercise 2.2 Show that, for this case of Ramsey interferometry, δΘ = 1/(2√M), independently of Θ, so that s(Θ, Θ′) = 2|Θ′ − Θ|.
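The first claim of the exercise can be checked numerically from Eq. (2.18). In this sketch (our own illustration; the grid deliberately avoids the isolated zeros of d℘_g/dΘ, where the error formula is singular):

```python
import numpy as np

# Ramsey interferometry: pg(Theta) = cos^2(Theta), M trials.
# Eq. (2.18): dTheta = |d pg/d Theta|^{-1} [pg (1 - pg)/M]^{1/2}
M = 100
theta = np.linspace(0.1, np.pi / 2 - 0.1, 500)   # avoid zeros of sin(2 Theta)
pg = np.cos(theta) ** 2
slope = np.abs(-np.sin(2 * theta))               # |d pg / d Theta|
delta_theta = np.sqrt(pg * (1 - pg) / M) / slope

# The Theta-dependence cancels: dTheta = 1/(2 sqrt(M)) everywhere.
print(delta_theta.min(), delta_theta.max(), 1 / (2 * np.sqrt(M)))
```

The cancellation is exact: √[℘_g(1 − ℘_g)] = |cos Θ sin Θ| = |sin 2Θ|/2, which is half the slope.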
The result in Eq. (2.20) is a special case of a more general result for a probability distribution for a measurement with K outcomes. Let ℘_k be the probability for the outcome k. It can be shown that the infinitesimal statistical distance ds between two distributions, ℘_k and ℘_k + d℘_k, is best defined by [CT06]
\[ ({\rm d}s)^2 = \sum_{k=1}^{K}\frac{({\rm d}\wp_k)^2}{\wp_k} = \sum_{k=1}^{K}\wp_k\,({\rm d}\ln\wp_k)^2. \tag{2.21} \]
For distributions parameterized by X this gives
\[ \left(\frac{{\rm d}s}{{\rm d}X}\right)^2 = \sum_{k=1}^{K}\wp_k\left(\frac{{\rm d}\ln\wp_k}{{\rm d}X}\right)^2 \equiv F(X), \tag{2.22} \]
where this quantity is known as the Fisher information. The generalization for continuous readout results was already given in Eq. (2.12).
Clearly the Fisher information has the same dimensions as X^{−2}. From the above arguments we see that the reciprocal square root of MF(X) is a measure of the change δX in the parameter X that can be detected reliably by M trials. It can be proven that (δX)² is a lower bound to the mean of the square of the debiased error (2.14) in the estimate X_est of X from the set of M measurement results. That is,
\[ \langle(\Delta X_{\rm est})^2\rangle_X \ge \frac{1}{MF(X)}. \tag{2.23} \]
This, the first half of Eq. (2.13), is known as the Cramér-Rao lower bound.
Exercise 2.3 Show that, for the Ramsey-interferometry example, F(Θ) = 4, independently of Θ. If, for M = 1, one estimates Θ as 0 if the atom is found in the ground state and π/2 if it is found in the excited state, show that
\[ \langle(\Delta\Theta_{\rm est})^2\rangle = \Theta^2\cos^2\Theta + \big(|\csc(2\Theta)| - \Theta\big)^2\sin^2\Theta. \tag{2.24} \]
Verify numerically that the inequality Eq. (2.23) is always satisfied, and is saturated at discrete points, Θ ≈ 0, 1.1656, 1.8366, . . ..
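The numerical verification requested here can be sketched as follows (our own illustration; the saturation point 1.1656 is taken from the list above):

```python
import numpy as np

# Debiased mean-square error of Eq. (2.24) for the M = 1 Ramsey estimate,
# to be compared with the Cramer-Rao bound 1/(M F(Theta)) = 1/4.
theta = np.linspace(1e-3, np.pi / 2 - 1e-3, 100000)
mse = (theta ** 2 * np.cos(theta) ** 2
       + (np.abs(1 / np.sin(2 * theta)) - theta) ** 2 * np.sin(theta) ** 2)

print(mse.min())                          # never drops below 1/4
i = np.argmin(np.abs(theta - 1.1656))     # near a saturation point
print(mse[i])                             # approximately 1/4
```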
Estimators that saturate the Cramér-Rao lower bound at all parameter values are known in the statistical literature as efficient. We will not use that term, because we use it with a very different meaning for quantum measurements. Instead we will call such estimators Cramér-Rao optimal (CR optimal).
Exercise 2.4 Show that, if ℘(θ|X) is a Gaussian of mean X, then X_est = θ is a Cramér-Rao-optimal estimate of X.
By contrast, the third expression in Eq. (2.13) involves only the properties of ρ(X) = exp(iĜX)ρ(0)exp(−iĜX). Thus, to prove the second inequality in Eq. (2.13), we must seek an upper bound on the Fisher information over the set of all possible POMs {Ê(θ): θ}. Recall that the Fisher information is related to the squared statistical distance between two distributions ℘(θ|X) and ℘(θ|X + dX):
\[ ({\rm d}s)^2 = ({\rm d}X)^2\,F(X). \tag{2.26} \]
What we want is a measure of the squared distance between two states ρ_X and ρ_{X+dX} that generate the distributions:
\[ ({\rm d}s_Q)^2 = ({\rm d}X)^2 \max_{\{\hat E(\theta):\,\theta\}} F(X). \tag{2.27} \]
Here we use the notation ds_Q to denote the infinitesimal quantum statistical distance between two states. Clearly (ds_Q)²/(dX)² will be the sought upper bound on F(X).
We now present a heuristic (rather than rigorous) derivation of an explicit expression for (ds_Q)²/(dX)². The classical statistical distance in Eq. (2.21) can be rewritten appealingly as
\[ ({\rm d}s)^2 = 4\sum_{k=1}^{K}({\rm d}a_k)^2, \tag{2.28} \]
where a_k = √℘_k. For a pure state |ψ⟩, the corresponding quantum expression is
\[ ({\rm d}s_Q)^2 = 4\big(\langle{\rm d}\psi|{\rm d}\psi\rangle - |\langle\psi|{\rm d}\psi\rangle|^2\big). \tag{2.29} \]
Exercise 2.6 Show that this expression holds for any projective measurement in the two-dimensional Hilbert space spanned by |1⟩ and |2⟩. (This follows from the result of Exercise 2.2.)
For a unitary transformation generated by Ĝ we have
\[ |{\rm d}\psi\rangle = {\rm i}\hat G|\psi_X\rangle\,{\rm d}X. \tag{2.30} \]
Thus we get
\[ \left(\frac{{\rm d}s_Q}{{\rm d}X}\right)^2 = 4\langle(\Delta\hat G)^2\rangle_0, \tag{2.31} \]
proving the second lower bound in Eq. (2.13) for the pure-state case.
The case of mixed states is considerably more difficult. The explicit form for the quantum statistical distance turns out to be
\[ ({\rm d}s_Q)^2 = {\rm Tr}\big[{\rm d}\rho\;{\cal L}[\rho]({\rm d}\rho)\big]. \tag{2.32} \]
Here L[ρ] is a superoperator taking dρ as its argument. If ρ has the diagonal representation ρ = Σ_j p_j |j⟩⟨j|, then
\[ {\cal L}[\rho]A = {\sum_{j,k}}^{\prime}\; \frac{2}{p_j + p_k}\,A_{jk}\,|j\rangle\langle k|, \tag{2.33} \]
where the prime on the sum means that it excludes the terms for which p_j + p_k = 0. If ρ has all non-zero eigenvalues, then L[ρ] can be defined more elegantly as
\[ {\cal L}[\rho] = {\cal R}^{-1}[\rho], \tag{2.34} \]
where R[ρ] is defined by
\[ {\cal R}[\rho]A = (\rho A + A\rho)/2. \tag{2.35} \]
It is clear that L[ρ] is a superoperator version of the reciprocal of ρ. With this understanding, Eq. (2.32) also looks like a quantum version of the classical statistical distance (2.21).
Exercise 2.7 Show that, for the pure-state case, Eq. (2.32) reduces to Eq. (2.29), by using the basis |1⟩, |2⟩ defined above.
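For readers who want to experiment, the superoperator L[ρ] of Eq. (2.33) is easy to implement from the eigendecomposition of ρ. The sketch below (our own construction, with a randomly chosen Hermitian generator) confirms that Tr[dρ L[ρ]dρ]/(dX)² equals 4⟨(ΔĜ)²⟩ for a pure state and is bounded by it for a mixed state, anticipating Eqs. (2.38) and (2.39):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def ds_q_squared(rho, G):
    # (ds_Q/dX)^2 = Tr[drho L[rho] drho], with drho/dX = i[G, rho]
    p, V = np.linalg.eigh(rho)
    drho = 1j * (G @ rho - rho @ G)
    A = V.conj().T @ drho @ V                 # drho in the eigenbasis of rho
    W = np.add.outer(p, p)                    # p_j + p_k
    mask = W > 1e-12                          # primed sum: drop p_j + p_k = 0
    L = np.where(mask, 2.0 * A / np.where(mask, W, 1.0), 0.0)
    return float(np.real(np.trace(A @ L)))    # the trace is basis-independent

def var_G(rho, G):
    return float(np.real(np.trace(rho @ G @ G) - np.trace(rho @ G) ** 2))

# A random Hermitian generator
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
G = (G + G.conj().T) / 2

# A random pure state ...
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
pure = np.outer(psi, psi.conj())

# ... and a random mixed state rho = V diag(w) V^dagger
V = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]
w = rng.random(d)
w /= w.sum()
mixed = (V * w) @ V.conj().T

print(ds_q_squared(pure, G), 4 * var_G(pure, G))    # equal for pure states
print(ds_q_squared(mixed, G), 4 * var_G(mixed, G))  # bounded for mixed states
```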
Now consider again the case of a unitary transformation as X varies, so that
\[ {\rm d}\rho = {\rm i}[\hat G, \rho]\,{\rm d}X. \tag{2.36} \]
To find (ds_Q)² from Eq. (2.32), we first need to find the operator A = R^{−1}[ρ]dρ. From Eq. (2.35), this must satisfy
\[ (\rho A + A\rho) = 2{\rm i}[\hat G, \rho]\,{\rm d}X. \tag{2.37} \]
If ρ = π (a pure state satisfying π² = π), then A = 2i[Ĝ, π]dX = 2 dρ is a solution of this equation. That gives
\[ \left(\frac{{\rm d}s_Q}{{\rm d}X}\right)^2 = -2\,{\rm Tr}\big[[\hat G, \pi]^2\big] = 4\langle(\Delta\hat G)^2\rangle_0, \tag{2.38} \]
as found above.
If ρ is not pure then it can be shown that
\[ \left(\frac{{\rm d}s_Q}{{\rm d}X}\right)^2 \le 4\langle(\Delta\hat G)^2\rangle_0. \tag{2.39} \]
Putting all of the above results together, we have now three inequalities:
\[ M\langle(\Delta X_{\rm est})^2\rangle_X \;\ge\; \frac{1}{F(X)} \;\ge\; \left(\frac{{\rm d}X}{{\rm d}s_Q}\right)^2 \;\ge\; \frac{1}{4\langle(\Delta\hat G)^2\rangle_0}. \tag{2.40} \]
The first of these is the classical Cramér-Rao inequality. The second we will call the Braunstein-Caves inequality. It applies even if the transformation of ρ(X) as X varies is non-unitary; that is, even if there is no Ĝ that generates the transformation. The final inequality obviously applies only if there is such a generator. In the case of pure states, it can be replaced by an equality.
Omitting the second term (the classical Fisher information) gives what we will call the Helstrom-Holevo inequality. Like the Cramér-Rao inequality, this cannot always be saturated for a given set {ρ_X}_X. The advantage of the Braunstein-Caves inequality is that it can always be saturated, as is clear from the definition of the quantum statistical distance in Eq. (2.27). If there is a unitary transformation generated by Ĝ, omitting both the second and the third term gives the inequality (2.9) for the special case of unbiased estimates with M = 1. As is apparent, the inequality (2.9) was derived much more easily than those in Eq. (2.40), but the advantage of generality and saturability offered by Eq. (2.40) should also now be apparent.
\[ \wp(\theta|X) = \langle\psi_0|{\rm e}^{-{\rm i}\hat GX}\hat E(\theta)\,{\rm e}^{{\rm i}\hat GX}|\psi_0\rangle, \tag{2.42} \]
\[ {\rm e}^{-{\rm i}X\hat G}\hat E(\theta)\,{\rm e}^{{\rm i}X\hat G} = \hat E(\theta - X). \tag{2.43} \]
Such measurements are called covariant by Holevo [Hol82]. In addition we will posit that the optimal POM is a multiple of a projection operator:
\[ \hat E(\theta)\,{\rm d}\theta = \kappa|\theta\rangle\langle\theta|\,{\rm d}\theta \tag{2.44} \]
for κ a real constant. It is important to note that we do not require the states {|θ⟩} to be orthogonal.
Since the POM is independent of a change of phase for the states |θ⟩, we can choose with no loss of generality
\[ {\rm e}^{{\rm i}X\hat G}|\theta\rangle = |\theta + X\rangle. \tag{2.45} \]
Then
\[ \langle\theta|{\rm e}^{{\rm i}X\hat G}|\psi\rangle = \langle\theta - X|\psi\rangle, \tag{2.46} \]
where the wavefunction is defined by ψ(θ) ≡ √κ ⟨θ|ψ⟩.
For the POM to be optimal, it must maximize the Fisher information at F = 4⟨(ΔĜ)²⟩_0. For a covariant measurement, the Fisher information takes the form
\[ F = \int {\rm d}\theta\, \frac{[\wp_0'(\theta)]^2}{\wp_0(\theta)}, \tag{2.49} \]
where the prime here denotes differentiation with respect to the argument. Note that the conditioning on the true value X has been dropped, because for a covariant measurement F is independent of X. Braunstein and Caves have shown [BC94] that F is maximized if and only if the wavefunction of the fiducial state is, up to an overall phase, given by
\[ \psi_0(\theta) = |\psi_0(\theta)|\,{\rm e}^{{\rm i}\theta\langle\hat G\rangle_0}. \tag{2.50} \]
To see this we can calculate the mean and variance of Ĝ in the |θ⟩ representation for the state ψ_0(θ) = |ψ_0(θ)|e^{iφ(θ)}:
\[ \langle\hat G\rangle_0 = \int {\rm d}\theta\, \psi_0^*(\theta)\left(-{\rm i}\frac{\partial}{\partial\theta}\right)\psi_0(\theta) \tag{2.51} \]
\[ = \int {\rm d}\theta\, \wp_0(\theta)\,\phi'(\theta), \tag{2.52} \]
\[ \langle(\Delta\hat G)^2\rangle_0 = \int {\rm d}\theta\, \left|\left(-{\rm i}\frac{\partial}{\partial\theta} - \langle\hat G\rangle_0\right)\psi_0(\theta)\right|^2 \tag{2.53} \]
\[ = \frac{1}{4}\int {\rm d}\theta\, \frac{[\wp_0'(\theta)]^2}{\wp_0(\theta)} + \int {\rm d}\theta\, \wp_0(\theta)\,[\phi'(\theta) - \langle\hat G\rangle_0]^2. \tag{2.54} \]
The first term in Eq. (2.54) is F/4, and the second is non-negative, vanishing exactly when the phase gradient φ′(θ) is constant and equal to ⟨Ĝ⟩_0, as in Eq. (2.50).
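Equations (2.51)-(2.54) are straightforward to verify numerically. The sketch below is our own construction (the Gaussian amplitude and the nonlinear phase are arbitrary choices); it applies Ĝ = −i∂/∂θ by finite differences and compares the two sides of Eq. (2.54):

```python
import numpy as np

# Trial fiducial state psi_0 = sqrt(p0)*exp(i*phi) with a deliberately nonlinear
# phase, so that the second term of Eq. (2.54) is strictly positive.
theta = np.linspace(-12.0, 12.0, 8001)
dth = theta[1] - theta[0]
p0 = np.exp(-theta ** 2) / np.sqrt(np.pi)
phi = 0.3 * theta + 0.1 * np.sin(theta)
psi = np.sqrt(p0) * np.exp(1j * phi)

# G = -i d/dtheta, applied by finite differences.
Gpsi = -1j * np.gradient(psi, dth)
G_mean = float(np.real(np.sum(psi.conj() * Gpsi)) * dth)          # Eq. (2.51)
g_mean_alt = float(np.sum(p0 * np.gradient(phi, dth)) * dth)      # Eq. (2.52)
G_var = float(np.sum(np.abs(Gpsi - G_mean * psi) ** 2) * dth)     # Eq. (2.53)

# Right-hand side of Eq. (2.54): Fisher term plus phase-gradient term.
dp0 = np.gradient(p0, dth)
dphi = np.gradient(phi, dth)
rhs = float(0.25 * np.sum(dp0 ** 2 / p0) * dth
            + np.sum(p0 * (dphi - G_mean) ** 2) * dth)

print(G_mean, g_mean_alt)   # agree
print(G_var, rhs)           # agree
```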
It follows that the distribution for θ is simply displaced by X:
\[ \int {\rm d}\theta\, \wp_0(\theta - X)\,(\theta - X - \langle\theta\rangle_0)^2 = \langle(\Delta\theta)^2\rangle_0. \tag{2.56} \]
Thus there may be a global bias ⟨θ⟩_0 in the mean of θ, but the variance of θ is independent of X. Suppose now we make M measurements and form the unbiased estimator
\[ X_{\rm est} = \frac{1}{M}\sum_{j=1}^{M}(\theta_j - \langle\theta\rangle_0). \tag{2.57} \]
Its mean-square error is then
\[ \langle(\Delta X_{\rm est})^2\rangle_X = \langle(\Delta\theta)^2\rangle_0/M. \tag{2.58} \]
Thus, to be a CR-optimal measurement for any M, the POM must saturate the Cramér-Rao bound for M = 1. It can be shown that this requires that ℘_0(θ) be a Gaussian. (Recall Exercise 2.4.) It is very important to remember, however, that, for a given generator Ĝ, physical restrictions on the form of the wavefunctions may make Gaussian states impossible. Thus there may be no states that achieve the Cramér-Rao lower bound for a BC-optimal measurement. Moreover, if we choose estimators other than the sample mean, the fiducial wavefunction that achieves the lower bound need not be Gaussian. In particular, for M → ∞, a maximum-likelihood estimate of X will be CR optimal for any wavefunction of the form (2.50).
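As an illustration of the Gaussian case (a minimal Monte Carlo sketch of our own; all parameter values are arbitrary): for ℘(θ|X) Gaussian with variance σ², the Fisher information is F = 1/σ², and the sample-mean estimator attains the Cramér-Rao bound at every M:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, X_true, M, trials = 1.3, 0.7, 10, 200000

# F(X) = 1/sigma^2 for a Gaussian, so the Cramer-Rao bound is sigma^2 / M.
samples = rng.normal(X_true, sigma, size=(trials, M))
X_est = samples.mean(axis=1)                  # sample-mean estimator
mse = float(np.mean((X_est - X_true) ** 2))

print(mse, sigma ** 2 / M)                    # agree to sampling error
```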
The relation
\[ \langle(\Delta\theta)^2\rangle\,\langle(\Delta\hat G)^2\rangle \ge 1/4 \tag{2.59} \]
looks like the Heisenberg uncertainty relation of the usual form, since Ĝ = −i∂/∂θ in the θ representation. However, nothing in our derivation assumed that the states |θ⟩ were the eigenstates of an Hermitian operator. Indeed, as we shall see, there are many examples for which the BC-optimal measurement is described by a POM with non-orthogonal elements. This is an important reason for introducing generalized measurements, as we did in Chapter 1. One further technical point should be made. In order to find the BC-optimal measurement, we must carefully consider the states |θ⟩ for which the generator Ĝ is a displacement operator. If Ĝ has a degenerate spectrum then it is not possible to find a BC-optimal measurement in terms of a POM described by a single real number θ. Further details may be found in [BCM96].
(2.63)
where |q is a canonical position state, defined by Eq. (2.62) with = q and f (p) 0.
(See Appendix A.)
In this case the states |θ⟩ are eigenstates of an Hermitian operator, namely
\[ \hat\Theta = \hat Q + f(\hat P), \tag{2.64} \]
(2.65)
(2.66)
where r̃, the Fourier transform of r, is a skew-symmetric function (that is, r̃(−k) = −r̃(k)). Thus, if any f(p) is allowed, the condition on |ψ_0⟩ for achieving BC optimality is just that |⟨p|ψ_0⟩|² be symmetric in p about p_0 ≡ ⟨P̂⟩_0. If we allow only canonical position
(2.67)
The fiducial state to consider is the squeezed vacuum state
\[ |\psi_0\rangle = |r,\phi\rangle = \exp\!\left[\frac{r}{2}\left({\rm e}^{2{\rm i}\phi}(\hat a^\dagger)^2 - {\rm e}^{-2{\rm i}\phi}\hat a^2\right)\right]|0\rangle. \tag{2.68} \]
As discussed in Appendix A, this squeezed state is in fact a zero-amplitude coherent state for rotated and rescaled canonical coordinates, Q̂′ and P̂′, defined by
\[ \hat Q' + {\rm i}\hat P' = (\hat Q\,{\rm e}^{r} + {\rm i}\hat P\,{\rm e}^{-r})\,{\rm e}^{{\rm i}\phi}. \tag{2.69} \]
If we graphically represent a vacuum state as a circle in phase space with the parametric equation
\[ Q^2 + P^2 = \langle 0|(\hat Q^2 + \hat P^2)|0\rangle = 1, \tag{2.70} \]
then the squeezed vacuum state can be represented by an ellipse in phase space with the parametric equation
\[ Q'^2 + P'^2 = \langle\psi_0|(\hat Q'^2 + \hat P'^2)|\psi_0\rangle = 1. \tag{2.71} \]
This ellipse, oriented at angle φ, is shown in Fig. 2.1. These curves can also be thought of as contours for the Wigner or Q function; see Section A.5.
The momentum wavefunction for this fiducial state can be shown to be
\[ \langle p|\psi_0\rangle \propto \exp\!\left(-\frac{p^2}{2\lambda}\right), \tag{2.72} \]
where λ is a complex parameter
(2.73)
The condition for BC optimality is Eq. (2.66). In this case, since ⟨P̂⟩_0 = 0, it reduces to
\[ \langle p|\psi_0\rangle\,{\rm e}^{-{\rm i}f(p)} = \langle p|\psi_0\rangle^*\,{\rm e}^{+{\rm i}f(p)}, \tag{2.74} \]
which is satisfied by
\[ f(p) = \frac{p^2\,{\rm Im}(\lambda)}{2|\lambda|^2}. \tag{2.75} \]
(2.76)
In other words, if we use the squeezed coherent state as a fiducial state, then the optimal measurement is of a linear combination of Q̂ and P̂. In this case the θ representation of the fiducial state is a Gaussian state as expected, with the probability density
\[ \wp_0(\theta) = \big(\pi\,{\rm Re}[\lambda^{-1}]\big)^{-1/2}\exp\!\left(-\frac{\theta^2}{{\rm Re}[\lambda^{-1}]}\right). \tag{2.77} \]
This has a mean of zero (indicating that θ is an unbiased estimator) and a variance of
\[ \langle\psi_0|(\Delta\theta)^2|\psi_0\rangle = \frac{1}{2}{\rm Re}(\lambda^{-1}) = \frac{1}{4\langle\psi_0|(\Delta\hat P)^2|\psi_0\rangle}. \tag{2.78} \]
Since the probability density is Gaussian, we do not need to appeal to the large-M limit to achieve the Cramér-Rao lower bound. The sample mean of θ provides an efficient unbiased estimator of X for all values of M.
The optimal observable Θ̂ can be written as
\[ \hat\Theta = \frac{\hat Q\cos\zeta + \hat P\sin\zeta}{\cos\zeta}, \tag{2.79} \]
where ζ = arctan[Im(λ^{−1})]. This operator can be thought of as a modified position operator arising from the rotation in the phase plane by an angle ζ followed by a rescaling by 1/cos ζ.
The rescaling means that displacement by X produces the same change in Θ̂ as it does in the canonical position operator Q̂. Note that the optimal rotation angle ζ is not the same as φ, the rotation angle that defines the major and minor axes of the squeezed state. Figure 2.1 and its caption give an intuitive explanation for this optimal measurement.
\[ \hat H = \omega\hat N. \tag{2.80} \]
Here N̂ is the number operator (see Section A.4). In a time τ the phase of a local oscillator changes by Φ = ωτ. Thus the unitary operator for a phase shift Φ is
\[ \exp({\rm i}\hat H\tau) = \exp({\rm i}\hat N\Phi). \tag{2.81} \]
Following the above theory, the optimal POM is of the form
\[ \hat E(\theta)\,{\rm d}\theta = |\theta\rangle\langle\theta|\,{\rm d}\theta, \tag{2.82} \]
such that, following Eq. (2.45), |θ⟩ is a state for which N̂ generates displacement:
\[ \exp({\rm i}\Phi\hat N)|\theta\rangle = |\theta + \Phi\rangle. \tag{2.83} \]
The solution is
\[ |\theta\rangle = \sum_{n=0}^{\infty}{\rm e}^{{\rm i}n\theta + {\rm i}f(n)}|n\rangle. \tag{2.84} \]
The canonical choice is f(n) ≡ 0. This will be appropriate if the fiducial state is of the form
\[ |\psi_0\rangle = \sum_n \psi_n\,{\rm e}^{{\rm i}\theta_0 n}|n\rangle. \tag{2.85} \]
This is the case for many commonly produced states, such as the coherent states (see Section A.4).
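For instance, the canonical phase distribution ℘(θ) = |⟨θ|ψ_0⟩|²/(2π) of a coherent state, a state of the form (2.85), can be computed directly from its number-state coefficients. This is our own numerical sketch; the amplitude is an arbitrary choice:

```python
import numpy as np

alpha = 2.0 * np.exp(0.5j)   # coherent amplitude |alpha| = 2, mean phase 0.5 (arbitrary)
nmax = 60                    # Fock-space truncation; the tail population is negligible

# Number-state coefficients c_n = e^{-|alpha|^2/2} alpha^n / sqrt(n!)
c = np.zeros(nmax + 1, dtype=complex)
c[0] = np.exp(-abs(alpha) ** 2 / 2)
for n in range(1, nmax + 1):
    c[n] = c[n - 1] * alpha / np.sqrt(n)

theta = np.linspace(-np.pi, np.pi, 4001)
dth = theta[1] - theta[0]
amp = c @ np.exp(-1j * np.outer(np.arange(nmax + 1), theta))   # <theta|psi_0>
p = np.abs(amp) ** 2 / (2 * np.pi)                             # phase distribution

norm = float(np.sum(p[:-1]) * dth)
mean_phase = float(np.angle(np.sum(p[:-1] * np.exp(1j * theta[:-1])) * dth))
print(norm, mean_phase)      # normalization ~ 1, mean phase ~ arg(alpha) = 0.5
```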
Since Ê(θ) is periodic with period 2π, we have to restrict the range of results θ, for example to the interval −π < θ ≤ π. Normalizing the canonical phase POM then gives
\[ \hat E(\theta)\,{\rm d}\theta = \frac{1}{2\pi}|\theta\rangle\langle\theta|\,{\rm d}\theta \tag{2.86} \]
with
\[ |\theta\rangle = \sum_{n=0}^{\infty}{\rm e}^{{\rm i}n\theta}|n\rangle. \tag{2.87} \]
These are the Susskind-Glogower phase states [SG64], which are not orthogonal:
\[ \langle\theta|\theta'\rangle = \pi\,\delta(\theta - \theta') + \frac{1}{2} + \frac{\rm i}{2}\cot\!\left(\frac{\theta - \theta'}{2}\right). \tag{2.88} \]
They are overcomplete and are not the eigenstates of any Hermitian operator.
As mooted earlier, this example illustrates the important feature of quantum parameter estimation, namely that it does not restrict us to measuring system observables, but rather allows general POMs. The phase states are in fact eigenstates of a non-unitary operator
\[ \widehat{{\rm e}^{{\rm i}\phi}} \equiv (\hat N + 1)^{-1/2}\hat a = \hat a\,\hat N^{-1/2} = \sum_{n=1}^{\infty}|n-1\rangle\langle n|, \tag{2.89} \]
such that
\[ \widehat{{\rm e}^{{\rm i}\phi}}\,|\theta\rangle = {\rm e}^{{\rm i}\theta}|\theta\rangle. \tag{2.90} \]
(2.91)
(2.92)
This can be satisfied only for a limited class of states because n is discrete and bounded below by zero. Specifically, choosing θ_0 = 0, it is satisfied by states of the form
\[ |\psi_0\rangle = \sum_{n=0}^{2\bar\mu}\psi_n|n\rangle, \qquad \psi_n = \psi_{2\bar\mu - n}, \tag{2.93} \]
where μ̄ (integer or half-integer) is the mean photon number. States for which this is satisfied achieve BC optimality:
\[ F(\Phi) = 4\langle(\Delta\hat N)^2\rangle. \tag{2.94} \]
However, because ψ_n has finite support (that is, it is zero outside a finite range of ns), ℘_0(θ) = |⟨θ|ψ_0⟩|²/(2π) cannot be a Gaussian. Thus, even assuming that θ is restricted to
Fig. 2.2 A heuristic phase-space representation of a coherent state (dashed) with amplitude α = 6 and a phase-squeezed state (solid) with amplitude α = 6 and squeezing parameters r = 2 and φ = π/2. The contours are defined by parametric equations like Eq. (2.70) and Eq. (2.71).
(2.95)
\[ \langle(\Delta\Phi_{\rm est})^2\rangle \;\ge\; \frac{1}{MF(\Phi)} \;>\; \frac{1}{4M\bar\mu}, \tag{2.96} \]
where the inequalities can be approximately satisfied for large μ̄. The μ̄^{−1} scaling is known as the standard quantum limit. Here 'standard' arises simply because coherent states are the easiest suitable states to produce.
One could beat the standard quantum limit with another fiducial state, such as a phase-squeezed state of a simple harmonic oscillator. We have already met a class of squeezed state, defined in Eq. (2.68). The phase-squeezed state is this state displaced in phase space in the direction orthogonal to the direction of squeezing (see Fig. 2.2).
The ultimate quantum limit arises from choosing a BC-optimal state with the largest number variance for a fixed mean number μ̄. Clearly this is a state of the form
\[ \sqrt{2}\,|\psi_0\rangle = |0\rangle + |2\bar\mu\rangle, \tag{2.97} \]
which has a number variance of μ̄². Thus, for a fixed mean photon number, the ultimate limit is
\[ F(\Phi) = 4\bar\mu^2. \tag{2.98} \]
The phase distribution for the state (2.97) is
\[ \wp(\theta|\Phi) = \frac{1}{\pi}\cos^2[\bar\mu(\theta - \Phi)]. \tag{2.99} \]
Although this satisfies Eq. (2.98), for μ̄ ≫ 1 it is clear that it is useless for finding an estimate for Φ ∈ [−π, π), because Eq. (2.99) has a periodicity of π/μ̄. The explanation is that the Fisher information quantifies how well small changes in Φ can be detected, not how well an unknown Φ can be estimated.
Exercise 2.11 Show Eq. (2.99), and verify Eq. (2.98) by calculating the Fisher information
directly from Eq. (2.99).
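A numerical version of this exercise (our own sketch): the integrand [℘′]²/℘ from Eq. (2.49) simplifies analytically to 4μ̄² sin²[μ̄(θ − Φ)]/π, which sidesteps the 0/0 at the zeros of ℘:

```python
import numpy as np

mu_bar = 6                # mean photon number (2*mu_bar must be an integer)
Phi = 0.4                 # arbitrary true phase
theta = np.linspace(-np.pi, np.pi, 200001)
dth = theta[1] - theta[0]

# p(theta|Phi) = cos^2(mu_bar*(theta - Phi))/pi. Its Fisher-information
# integrand [p']^2/p reduces to 4*mu_bar^2*sin^2(mu_bar*(theta - Phi))/pi.
integrand = 4 * mu_bar ** 2 * np.sin(mu_bar * (theta - Phi)) ** 2 / np.pi
F = float(np.sum(integrand[:-1]) * dth)   # integrate over one full period

print(F, 4 * mu_bar ** 2)                 # matches Eq. (2.98)
```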
rank-1 projectors (Eq. (2.44)), such that Ĝ generates displacements in the effect basis (Eq. (2.45)). However, the states that minimize the cost may be very different from the states that maximize the Fisher information. (To obtain a finite optimal state in both cases it may be necessary to apply some constraint, such as a fixed mean energy, as we considered above.) In this section we investigate this difference in the context of phase-difference estimation.
Fig. 2.3 The Mach-Zehnder interferometer. The unknown phase to be estimated is φ. Both beam-splitters (BS) are 50:50. The final measurement is described by a POM Ê(θ), whose outcome θ is used to obtain a phase estimate. (The value shown for θ was chosen arbitrarily.) Figure 1 adapted with permission from D. W. Berry et al., Phys. Rev. A 63, 053804, (2001). Copyrighted by the American Physical Society.
(2.100)
\[ \hat J_x = (\hat a^\dagger\hat b + \hat a\hat b^\dagger)/2, \tag{2.101} \]
\[ \hat J_y = (\hat a^\dagger\hat b - \hat a\hat b^\dagger)/(2{\rm i}), \tag{2.102} \]
\[ \hat J_z = (\hat a^\dagger\hat a - \hat b^\dagger\hat b)/2, \tag{2.103} \]
where
\[ \hat j = (\hat a^\dagger\hat a + \hat b^\dagger\hat b)/2 \tag{2.104} \]
has integer and half-integer eigenvalues. This is known as the Schwinger representation of angular momentum, because the operators obey the usual angular-momentum operator algebra. A set of operators is said to form an operator algebra if all the commutators are
(2.105)
(2.106)
where the ∓ represents two choices for a phase convention. For convenience we will take the first beam-splitter to be described by B̂₊ and the second by B̂₋. Thus, in the absence of a phase shift in one of the arms, the nett effect of the MZI is nothing: B̂₋B̂₊ = 1̂ and the beams a and b come out in the same state as that in which they entered. The choice of B̂∓ is a convention, rather than a physically determinable fact, because in optics the distances in the interferometer are not usually measured to wavelength scale except by using interferometry. Thus an experimenter would set up an interferometer with the unknown phase φ set to zero, and then adjust the arms until the desired output (no change) is achieved.
The effect of the unknown phase shift φ in the lower arm of the interferometer is described by the unitary operator Û(φ) = exp(iφâ†â). The operator â (rather than b̂) appears here because the input beam a is identified with the transmitted (i.e. straight-through) beam. Because ĵ is a constant with value j, we can multiply this unitary operator by exp(−iφĵ) with no physical effect, and rewrite it as Û(φ) = exp(iφĴz). If we also include a known phase shift Φ in the other arm of the MZI, as shown in Fig. 2.3 (this will be motivated later), then we have between the beam-splitters
\[ \hat U(\varphi - \Phi) = \exp[{\rm i}(\varphi - \Phi)\hat J_z]. \tag{2.107} \]
The total unitary operator for the MZI is then
\[ \hat I(\varphi - \Phi) = \hat B_-\,\hat U(\varphi - \Phi)\,\hat B_+. \tag{2.108} \]
Exercise 2.13 Show this. First show the following theorem for arbitrary operators R̂ and Ŝ:
\[ {\rm e}^{\hat R}\hat S{\rm e}^{-\hat R} = \hat S + [\hat R, \hat S] + \frac{1}{2!}[\hat R, [\hat R, \hat S]] + \cdots. \tag{2.109} \]
Then use the commutation relations for the Ĵs to show that B̂₋ĴzB̂₊ = Ĵy. Use this to show that B̂₋f(Ĵz)B̂₊ = f(Ĵy) for an arbitrary function f.
The MZI unitary operator Î(φ − Φ) transforms the photon-number difference operator from the input 2Ĵz to the output
\[ (2\hat J_z)_{\rm out} = \cos(\varphi - \Phi)\,2\hat J_z + \sin(\varphi - \Phi)\,2\hat J_x. \tag{2.110} \]
Here the subscript 'out' refers to an output operator (that is, a Heisenberg-picture operator for a time after the pulse has traversed the MZI). An output operator is related to the corresponding input operator (that is, the Heisenberg-picture operator for a time before the pulse has met the MZI) by
\[ \hat O_{\rm out} = \hat I(\varphi - \Phi)^\dagger\,\hat O\,\hat I(\varphi - \Phi). \tag{2.111} \]
Exercise 2.14 Show Eq. (2.110) using similar techniques to those in Exercise 2.13 above.
We can use this expression to derive the standard quantum limit (SQL) for interferometry. As defined in Section 2.3.3 above, the SQL is simply the limit that can be obtained using an easily prepared state and a simple measurement scheme. The easily prepared state is a state with all photons in one input, say the a mode.¹ That is, the input state is a Ĵz eigenstate with eigenvalue j. If φ is approximately known already, we can choose Φ ≈ φ + π/2. Then the SQL is achieved simply by measuring the output photon-number difference operator
\[ (2\hat J_z)_{\rm out} = \sin(\varphi + \pi/2 - \Phi)\,2\hat J_z - \cos(\varphi + \pi/2 - \Phi)\,2\hat J_x \tag{2.112} \]
\[ \approx (\varphi + \pi/2 - \Phi)\,2\hat J_z - 2\hat J_x \tag{2.113} \]
\[ = (\varphi + \pi/2 - \Phi)\,2j - 2\hat J_x. \tag{2.114} \]
This operator can be measured simply by counting the numbers of photons in the two output modes and subtracting one number from the other. We can use this to obtain an estimate via
\[ \varphi_{\rm est} = (2J_z)_{\rm out}/(2j) - (\pi/2 - \Phi), \tag{2.115} \]
where (2J_z)_out is the result of the measurement. It is easy to verify that for a Ĵz eigenstate ⟨Ĵx⟩ = 0, so that the mean of the estimate is approximately φ, as desired. From Eq. (2.114) the variance is
\[ \langle(\Delta\varphi_{\rm est})^2\rangle \approx \langle\hat J_x^2\rangle/j^2. \tag{2.116} \]
For the state with Ĵz eigenvalue j, we have ⟨Ĵz²⟩ = j², while by symmetry ⟨Ĵx²⟩ = ⟨Ĵy²⟩. Since the sum of these three squared operators is ĵ(ĵ + 1) = j(j + 1), it follows that ⟨Ĵx²⟩ = j/2. Thus we get
\[ \langle(\Delta\varphi_{\rm est})^2\rangle \approx 1/(2j). \tag{2.117} \]
¹ Actually it is not easy experimentally to prepare a state with a definite number of photons in one mode. However, it is easy to prepare a state with an indefinite number of photons in one mode, and then to measure the photon number in each output beam (as discussed below). Since the total number of photons is preserved by the MZI, the experimental results are exactly the same as if a photon-number state, containing the measured number of photons, had been prepared.
That is, provided that the unknown phase is approximately known already, the SQL for the variance in the estimate is equal to the reciprocal of the photon number.
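This SQL is easy to reproduce in simulation. With all photons in one port and Φ near φ + π/2, each photon is independently detected in one output with probability cos²[(φ − Φ)/2], so the number difference is binomial. The following sketch (our own, with arbitrary parameter values) forms the estimate (2.115) and checks its variance:

```python
import numpy as np

rng = np.random.default_rng(1)
j2 = 1000                           # 2j photons in the a mode
phi = 0.3                           # true phase (arbitrary)
Phi = phi + np.pi / 2               # auxiliary phase set near phi + pi/2
trials = 50000

# Each photon exits port 0 with probability cos^2((phi - Phi)/2) = 1/2 here.
p0 = np.cos((phi - Phi) / 2) ** 2
n0 = rng.binomial(j2, p0, size=trials)
two_Jz_out = 2 * n0 - j2            # photon-number difference

phi_est = two_Jz_out / j2 - (np.pi / 2 - Phi)   # Eq. (2.115)
print(phi_est.mean(), phi)                      # unbiased
print(phi_est.var(), 1 / j2)                    # SQL: variance ~ 1/(2j)
```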
In fact, this SQL can be obtained without the restriction that Φ ≈ φ + π/2, provided that one uses a more sophisticated technique for estimating the phase from the data. This can be understood from the fact that, with all 2j photons entering one port, the action of the MZI is equivalent to that of the Ramsey interferometer (Section 2.2.1) repeated 2j times.
Exercise 2.15 Convince yourself of this fact. Note that the parameter Θ in the Ramsey-interferometry example is analogous to (φ − Φ)/2 in the MZI example.
As shown in Exercise 2.2, the Fisher information implies that the minimum detectable phase shift is independent of the true phase. Indeed, with M = 2j repetitions we get (2δΘ)² = 1/(2j), which is exactly the same as the SQL found above for the MZI.
Note, however, that using the Fisher information to define the SQL has problems, as discussed previously. In the current situation, it is apparent from Eq. (2.112) that the same measurement statistics will result if φ + π/2 − Φ is replaced by π − (φ + π/2 − Φ).
Exercise 2.16 Convince yourself of this. Remember that Ĵx is pure noise.
That is, the results make it impossible to distinguish φ from 2Φ − φ. Thus, it is still necessary to have prior knowledge, restricting φ to half of its range, say [0, π).
More importantly, if one tries to go beyond the SQL by using states entangled across both input ports (as will be considered in Section 2.4.3) then the equivalence between the MZI and Ramsey interferometry breaks down. In such cases, the simple measurement scheme of counting photons in the output ports will not enable an estimate of φ with accuracy independent of φ. Rather, one finds that one does need to be able to set Φ ≈ φ + π/2 in order to obtain a good estimate of φ. To get around the restriction (of having to know φ before one tries to estimate it), it is necessary to consider measurement schemes that go beyond simply counting photons in the output ports. It is to this topic that we now turn.
Consider states of the form
\[ |\theta\rangle = \sum_{\mu=-j}^{j}{\rm e}^{{\rm i}\mu\theta}|j,\mu\rangle_y, \tag{2.118} \]
with θ an angle variable. We could have included an additional exponential term e^{if(μ)} for an arbitrary function f, analogously to Eq. (2.62). By choosing f ≡ 0 we are defining canonical phase states.
(2.119)
The POM for the corresponding measurement is
\[ \hat E(\theta)\,{\rm d}\theta = \frac{1}{2\pi}\sum_{\mu,\nu=-j}^{j}{\rm e}^{{\rm i}(\mu-\nu)\theta}\,|j,\mu\rangle_y\;{}_y\langle j,\nu|\,{\rm d}\theta. \tag{2.120} \]
j, |0 = y j, 2j |0 .
(2.121)
We also want θ to be an unbiased estimate of φ. For cyclic variables, the appropriate sense of unbiasedness is that
\[ \arg\langle{\rm e}^{{\rm i}\theta}\rangle = \arg\int \langle\psi_0|\hat I(\varphi)^\dagger\,\hat E(\theta)\,\hat I(\varphi)|\psi_0\rangle\,{\rm e}^{{\rm i}\theta}\,{\rm d}\theta = \varphi. \tag{2.122} \]
Exercise 2.17 Show that this will be the case, if we make the coefficients _y⟨j, μ|ψ_0⟩ real and positive.
Since we are going to optimize over the input states, we can impose these restrictions without loss of generality. Similarly, there is no need to consider the auxiliary phase shift Φ.
The fiducial state in the |j, μ⟩_y basis is not easily physically interpretable. We would prefer to have it in the |j, μ⟩_z basis, which is equivalent to the photon-number basis for the two input modes:
\[ |j,\mu\rangle_z = |n_a := j+\mu\rangle \otimes |n_b := j-\mu\rangle. \tag{2.123} \]
It can be shown [SM95] that the two angular-momentum bases are related by
\[ {}_y\langle j,\mu|j,\nu\rangle_z = {\rm e}^{{\rm i}(\pi/2)(\nu-\mu)}\,I^j_{\mu\nu}(\pi/2), \tag{2.124} \]
where I^j_{μν}(π/2) are the interferometer matrix elements in the |j, μ⟩_z basis, given by
\[ I^j_{\mu\nu}(\pi/2) = 2^{-\mu}\left[\frac{(j-\mu)!\,(j+\mu)!}{(j-\nu)!\,(j+\nu)!}\right]^{1/2} P^{(\mu-\nu,\,\mu+\nu)}_{j-\mu}(0), \tag{2.125} \]
where the P_n^{(α,β)}(x) are the Jacobi polynomials, and the other matrix elements are obtained using the symmetry relations
\[ I^j_{\mu\nu}(\beta) = (-1)^{\mu-\nu}\,I^j_{\nu\mu}(\beta) = I^j_{-\nu,-\mu}(\beta). \tag{2.126} \]
The sharpness of the phase distribution can be quantified by
\[ S\,{\rm e}^{{\rm i}\bar\varphi} = \int {\rm d}\theta\,\wp(\theta)\,{\rm e}^{{\rm i}\theta}, \tag{2.127} \]
where the mean phase φ̄ is here defined by the requirement that S is real and non-negative. The Holevo variance is then
\[ {\rm HV} = S^{-2} - 1. \tag{2.128} \]
If the Holevo variance is small then it can be shown that
\[ {\rm HV} \approx \int 4\sin^2\!\left(\frac{\theta - \bar\varphi}{2}\right)\wp(\theta)\,{\rm d}\theta. \tag{2.129} \]
For the canonical measurement on a state with real, positive coefficients c_μ = _y⟨j, μ|ψ_0⟩, the sharpness is
\[ S = \sum_{\mu=-j}^{j-1} c_\mu c_{\mu+1}. \tag{2.130} \]
We wish to maximize this subject to the constraint Σ_μ|c_μ|² = 1. From linear algebra the solution can be shown to be [BWB01]
\[ S_{\rm max} = \cos\!\left(\frac{\pi}{2j+2}\right) \tag{2.131} \]
Fig. 2.4 The coefficients _z⟨j, μ|ψ_opt⟩ for the state optimized for minimum phase variance under ideal measurements. All coefficients for a photon number of 2j = 40 are shown as the continuous line, and those near μ = 0 for a photon number of 2j = 1200 as crosses. Figure 2 adapted with permission from D. W. Berry et al., Phys. Rev. A 63, 053804, (2001). Copyrighted by the American Physical Society.
for
\[ c_\mu = \frac{1}{\sqrt{j+1}}\,\sin\!\left(\frac{(\mu + j + 1)\pi}{2j+2}\right). \tag{2.132} \]
This gives
\[ {\rm HV} = \tan^2\!\left(\frac{\pi}{2j+2}\right) = \frac{\pi^2}{(2j)^2} + O(j^{-3}). \tag{2.133} \]
This is known as the Heisenberg limit and is indeed quadratically improved over the SQL. Note that the coefficients (2.132) are symmetric about the mean, so these states are also BC optimal. However, they are very different from the states that maximize the Fisher information, which, following the argument in Section 2.3.3, would have only two non-zero coefficients, c_{±j} = 1/√2.
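The coefficients (2.132) can be checked with a small linear-algebra sketch (our own): S in Eq. (2.130) is a quadratic form whose maximum over unit vectors is the largest eigenvalue of a tridiagonal matrix with 1/2 on the off-diagonals, and the resulting Holevo variance reproduces Eq. (2.133):

```python
import numpy as np

j = 20                                     # 2j = 40 photons
N = 2 * j + 1
mu = np.arange(-j, j + 1)

# Coefficients of Eq. (2.132); they are normalized.
c = np.sin((mu + j + 1) * np.pi / (2 * j + 2)) / np.sqrt(j + 1)

S = float(np.sum(c[:-1] * c[1:]))          # Eq. (2.130)

# Maximizing S subject to |c| = 1 is an eigenvalue problem for the matrix
# with 1/2 on the two off-diagonals; its top eigenvalue is cos(pi/(2j+2)).
T = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / 2
S_max = float(np.linalg.eigvalsh(T)[-1])

HV = S ** (-2) - 1
print(S, S_max, np.cos(np.pi / (2 * j + 2)))
print(HV, np.tan(np.pi / (2 * j + 2)) ** 2)
```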
Using Eq. (2.124), the state in terms of the eigenstates of Ĵz is
\[ |\psi_{\rm opt}\rangle = \frac{1}{\sqrt{j+1}}\sum_{\mu,\nu=-j}^{j}\sin\!\left(\frac{(\mu+j+1)\pi}{2j+2}\right){\rm e}^{{\rm i}(\pi/2)(\nu-\mu)}\,I^j_{\mu\nu}(\pi/2)\,|j,\nu\rangle_z. \tag{2.134} \]
An example of this state for 40 photons is plotted in Fig. 2.4. This state contains contributions from all the Ĵz eigenstates, but the only significant contributions are from 9 or 10 states near μ = 0. The distribution near the centre is fairly independent of photon number. To demonstrate this, the distribution near the centre for 1200 photons is also shown in Fig. 2.4. In Ref. [YMK86] a practical scheme for generating a combination of two states near μ = 0 was proposed. Since the optimum states described here have significant contributions
Fig. 2.5 Variances in the phase estimate versus input photon number N = 2j. The lines are exact results for canonical measurements on optimized states |ψ_opt⟩ (continuous line) and on states with all photons incident on one input port |j, j⟩_z (dashed line). The crosses are the numerical results for the adaptive phase-measurement scheme on |ψ_opt⟩. The circles are numerical results for a non-adaptive phase-measurement scheme on |ψ_opt⟩. Figure 3 adapted with permission from D. W. Berry et al., Phys. Rev. A 63, 053804, (2001). Copyrighted by the American Physical Society.
Fig. 2.6 The adaptive Mach-Zehnder interferometer, allowing for feedback to control the phase Φ. Figure 1 adapted with permission from D. W. Berry et al., Phys. Rev. A 63, 053804, (2001). Copyrighted by the American Physical Society.
it might not be possible to perform the optimal measurement with available experimental techniques. For this reason, it is often necessary to consider measurements constrained by some practical consideration. It is easiest to understand this idea in a specific context, so in this section we again consider the case of interferometric phase measurements.
As we explained in Section 2.4.1, the standard way to do quantum-limited interferometry is simply to count the total number of photons exiting at each output port. For a 2j-photon input state, the total number of output photons is fixed also, so all of the information from the measurement is contained in the operator (2Ĵz)_out.
(2.135)
The individual detections are described by the output annihilation operators ĉ_0 = â_out and ĉ_1 = b̂_out, which, up to phase conventions, are linear combinations of the input operators of the form
\[ \hat c_u(\varphi, \Phi) = \hat b\,\sin\!\left(\frac{\varphi - \Phi}{2}\right) + \hat a\,\cos\!\left(\frac{\varphi - \Phi}{2}\right). \tag{2.136} \]
(2.137)
Exercise 2.19 Show this, again using the technique of Exercise 2.13.
As noted above, for an arbitrary input state, measuring (Ĵz)_out gives a good estimate of φ only if Φ ≈ φ + π/2. This of course requires prior knowledge of φ, which one does not always have. However, it is possible to perform a measurement, still constrained to be realized by photon counting, whose accuracy is close to that of the canonical measurement and independent of φ. In this respect it is like the canonical measurement, but its accuracy will typically be worse than that of the canonical measurement. In order to realize the measurement we are referring to, it is necessary to make the auxiliary phase Φ time-varying. That is, it will be adjusted over the course of a single measurement. For example, it could be changed after each detection. This requires breaking down the measurement into individual detections, rather than counting only the total number of detections at one detector minus the total at the other. The measurement operators which describe individual detections are in fact just proportional to the output annihilation operators defined above, as we will now show.
Let us denote the result u from the mth detection as u_m (which is 0 or 1 according to whether the photon is detected in mode c_0 or c_1, respectively) and the measurement record up to and including the mth detection as the binary string r_m ≡ u_m . . . u_2u_1. The state of the two-mode field after m detections will be a function of the measurement record and we denote it as |ψ(r_m)⟩. Denoting the null string by r_0, the state before any detections is |ψ⟩ = |ψ(r_0)⟩. Since we are considering demolition detection, the state after the (m − 1)th detection will be a two-mode state containing exactly 2j + 1 − m photons.
Define measurement operators corresponding to the two outcomes resulting from the mth photodetection:
\[ \hat M^{(m)}_{u_m} = \frac{\hat c_{u_m}}{\sqrt{2j + 1 - m}}. \tag{2.138} \]
The corresponding effects
\[ \hat E^{(m)}_{u} = \frac{\hat c_u^\dagger \hat c_u}{2j + 1 - m} \tag{2.139} \]
satisfy
\[ \hat E^{(m)}_1 + \hat E^{(m)}_0 = \frac{\hat a^\dagger\hat a + \hat b^\dagger\hat b}{2j + 1 - m}. \tag{2.140} \]
(2.141)
Now, if Φ is fixed, the M̂_u^{(m)} are independent of m (apart from a constant). Moreover, M̂_1^{(m)} and M̂_0^{(m)} commute, because â_out and b̂_out commute for Φ fixed. Thus we obtain
\[ {\rm Pr}[R_{2j} = u_{2j}u_{2j-1}\ldots u_2u_1] = \frac{1}{(2j)!}\,\langle\psi|(\hat a^\dagger_{\rm out})^{n_a}(\hat b^\dagger_{\rm out})^{n_b}\,(\hat b_{\rm out})^{n_b}(\hat a_{\rm out})^{n_a}|\psi\rangle, \tag{2.142} \]
where
\[ n_a = 2j - n_b = 2j - \sum_{m=1}^{2j} u_m. \tag{2.143} \]
(2.145)
where r = (q, p). Depending upon what sort of information one wishes to obtain, it may be advantageous to choose a different second measurement depending on the result of the first measurement. That is, the measurement {Ô_q: q} will depend upon the result p of the first. This is the idea of an adaptive measurement.
By making a measurement adaptive, the greater measurement may more closely approach the ideal measurement one would like to make. As long as the greater measurement remains incomplete, one may continue to add to it by making more adaptive measurements. Obviously it only makes sense to consider adaptive measurements in the context of measurements that are constrained in some way. For unconstrained measurements, one would simply make the ideal measurement one wishes to make.
It is worth emphasizing again that when we say adaptive measurements we mean measurements on a single system. Another concept of adaptive measurement is to make a (perhaps complete) measurement on the system, and use the result to determine what sort of measurement to make on a second identical copy of the system, and so on. This could be incorporated into our definition of adaptive measurements by considering the system to consist of the original system plus the set of all copies.
The earliest example of using adaptive measurements to make a better constrained
measurement is due to Dolinar [Dol73] (see also Ref. [Hel76], p. 163). The Dolinar receiver
was proposed in the context of trying to discriminate between two non-orthogonal (coherent)
states by photodetection, and has recently been realized experimentally [CMG07]. Adaptive
measurements have also been found to be useful in estimating the phase (relative to a phase
reference called a local oscillator) of a single-mode field, with the measurement again
constrained to be realized by photodetection [Wis95]. An experimental demonstration of
this will be discussed in the following section. Meanwhile we will illustrate adaptive
detection by a similar application: estimating the phase difference in an interferometer as
introduced in Ref. [BW00] and studied in more detail in Ref. [BWB01].
Unconstrained interferometric measurements were considered in Section 2.4.2, and constrained interferometric measurements in Section 2.5.1. Here we consider again constrained
measurements, where all one can do is detect photons in the output ports, but we allow the
measurement to be adaptive, by making the auxiliary phase Φ depend upon the counts so far. Using the notation of Section 2.5.1, the phase Φ_m, before the detection of the mth photon, depends upon the record r_{m−1} = u_{m−1} · · · u_1 of detections (where u_k = 0 or 1 denotes a detection in detector 0 or 1, respectively). The question is, how should Φ_m depend upon r_{m−1}?
We will assume that the two-mode 2j-photon input state |ψ⟩ is known by the experimenter; only the phase φ is unknown. The state after m detections will be a function of the measurement record r_m and φ, and we denote it as |ψ̃(r_m, φ)⟩. It is determined by the initial condition |ψ̃(r_0, φ)⟩ = |ψ⟩ and the recurrence relation
$$|\tilde\psi(u_m r_{m-1}, \varphi)\rangle = \hat{M}^{(m)}_{u_m}(\varphi, \Phi_m)\,|\tilde\psi(r_{m-1}, \varphi)\rangle. \tag{2.146}$$
These states are unnormalized, and the norm of the state represents the probability for the record r_m, given φ:
$$\wp(r_m|\varphi) = \langle\tilde\psi(r_m,\varphi)|\tilde\psi(r_m,\varphi)\rangle. \tag{2.147}$$
(2.147)
Thus the probability of obtaining the result u_m at the mth measurement, given the previous results r_{m−1}, is
$$\wp(u_m|\varphi, r_{m-1}) = \frac{\langle\tilde\psi(u_m r_{m-1},\varphi)|\tilde\psi(u_m r_{m-1},\varphi)\rangle}{\langle\tilde\psi(r_{m-1},\varphi)|\tilde\psi(r_{m-1},\varphi)\rangle}. \tag{2.148}$$
Similarly, the posterior distribution for φ, given the record r_m, is
$$\wp(\varphi|r_m) = \langle\tilde\psi(r_m,\varphi)|\tilde\psi(r_m,\varphi)\rangle / N(r_m), \tag{2.149}$$
where N(r_m) is a normalization factor. To obtain this we have used Bayes' theorem, assuming a flat prior distribution for φ (that is, an initially unknown phase). A Bayesian approach to interferometry was realized experimentally in Ref. [HMP+96], but only with non-adaptive measurements.
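The structure of this Bayesian update is easy to sketch numerically. The snippet below is a toy illustration, not the book's exact two-mode recursion: it replaces the state-norm likelihood by the single-photon interferometer fringe ℘(u = 1|φ, Φ) = [1 − cos(φ − Φ)]/2 and updates a flat prior on a grid of φ values; the grid size, random seed and shot number are all illustrative assumptions.

```python
import numpy as np

# Toy Bayesian phase estimation in the spirit of Eqs. (2.148)-(2.149):
# multiply the prior by the likelihood of each detection, then renormalize.
GRID = np.linspace(0.0, 2*np.pi, 512, endpoint=False)

def update(posterior, u, Phi):
    """One Bayesian step on a grid of phi values."""
    p1 = 0.5*(1.0 - np.cos(GRID - Phi))        # likelihood of outcome u = 1
    post = posterior*(p1 if u == 1 else 1.0 - p1)
    return post/post.sum()

rng = np.random.default_rng(1)
true_phi = 1.3
posterior = np.ones_like(GRID)/GRID.size       # flat prior (unknown phase)
for m in range(200):
    Phi = rng.uniform(0.0, 2*np.pi)            # non-adaptive auxiliary phase
    u = int(rng.random() < 0.5*(1.0 - np.cos(true_phi - Phi)))
    posterior = update(posterior, u, Phi)

# Circular mean of the posterior as the estimate, cf. Eq. (2.154).
estimate = np.angle(np.sum(posterior*np.exp(1j*GRID))) % (2*np.pi)
print(estimate)
```

After a few hundred simulated detections the posterior concentrates near the true phase, as expected for a consistent Bayesian scheme.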
With this background, we can now specify the adaptive algorithm for Φ_m. The sharpness of the distribution after the mth detection is given by
$$S(u_m r_{m-1}) = \left|\int_0^{2\pi} \wp(\varphi|u_m r_{m-1})\,e^{i\varphi}\,{\rm d}\varphi\right|. \tag{2.150}$$
A reasonable (not necessarily optimal) choice for the feedback phase before the mth detection, Φ_m, is the one that will maximize the sharpness after the mth detection. Since we do not know u_m beforehand, we weight the sharpnesses for the two alternative results by their probabilities of occurring on the basis of the previous measurement record. Therefore the expression we wish to maximize is
$$M(\Phi_m|r_{m-1}) = \sum_{u_m=0,1} \wp(u_m|r_{m-1})\,S(u_m r_{m-1}). \tag{2.151}$$
Using Eqs. (2.148), (2.149) and (2.150), and ignoring the constant N(r_{m−1}), the maximand can be rewritten as
$$M(\Phi_m|r_{m-1}) \propto \sum_{u_m=0,1} \left|\int_0^{2\pi} \langle\tilde\psi(u_m r_{m-1},\varphi)|\tilde\psi(u_m r_{m-1},\varphi)\rangle\, e^{i\varphi}\,{\rm d}\varphi\right|. \tag{2.152}$$
The controlled phase Φ_m appears implicitly in Eq. (2.152) through the recurrence relation (2.146), since the measurement operator M̂^{(m)}_{u_m} in Eq. (2.138) is defined in terms of ĉ_{u_m}(φ, Φ_m) in Eq. (2.137). The maximizing solution Φ_m can be found analytically [BWB01], but we will not exhibit it here.
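In lieu of the analytic solution, the maximization can be done by brute force. The sketch below uses the same toy single-photon fringe likelihood as before (an illustrative stand-in for the exact two-mode recursion): for each candidate feedback phase it sums the sharpness of the two unnormalized updated posteriors, which automatically includes the outcome weights of Eq. (2.151), and picks the best candidate.

```python
import numpy as np

GRID = np.linspace(0.0, 2*np.pi, 256, endpoint=False)

def sharpness(unnormalized_posterior):
    # |integral of e^{i phi} times the (possibly unnormalized) posterior|,
    # cf. Eqs. (2.150) and (2.152).
    return abs(np.sum(unnormalized_posterior*np.exp(1j*GRID)))

def expected_sharpness(posterior, Phi):
    # Sum over the two outcomes of the sharpness of the unnormalized updated
    # posterior; the probability weights of Eq. (2.151) are included implicitly.
    p1 = 0.5*(1.0 - np.cos(GRID - Phi))      # toy fringe likelihood of u = 1
    return sharpness(posterior*(1.0 - p1)) + sharpness(posterior*p1)

def best_feedback_phase(posterior, n_candidates=64):
    candidates = np.linspace(0.0, 2*np.pi, n_candidates, endpoint=False)
    return max(candidates, key=lambda Phi: expected_sharpness(posterior, Phi))

# Example: a posterior peaked near phi = 1.0 (von Mises shape, illustrative).
post = np.exp(5.0*np.cos(GRID - 1.0))
post /= post.sum()
print(best_feedback_phase(post))
```

For this fringe model the greedy choice puts the auxiliary phase roughly in quadrature with the posterior peak, which is where the fringe is steepest and hence most informative.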
The final part of the adaptive scheme is choosing the phase estimate φ̂ from the complete data set r_{2j}. For cyclic variables, the analogue to minimizing the mean-square error is to maximize
$$\langle\cos(\hat\varphi - \varphi)\rangle. \tag{2.153}$$
To achieve this, φ̂ is chosen to be the appropriate mean of the posterior distribution ℘(φ|r_{2j}), which from Eq. (2.149) is
$$\hat\varphi = \arg\int_0^{2\pi} \langle\tilde\psi(r_{2j},\varphi)|\tilde\psi(r_{2j},\varphi)\rangle\, e^{i\varphi}\,{\rm d}\varphi. \tag{2.154}$$
This completes the formal description of the algorithm. Its effectiveness can be determined numerically, by generating the measurement results randomly with probabilities determined as above. From Eq. (2.127), the results can be generated taking φ = 0, with the final estimate φ̂ then determined from Eq. (2.154). An ensemble {φ̂_ℓ}_{ℓ=1}^{M} of M such final estimates allows the Holevo phase variance to be approximated by
$$(\Delta\varphi_{\rm HV})^2 \approx \left| M^{-1}\sum_{\ell=1}^{M} e^{i\hat\varphi_\ell} \right|^{-2} - 1. \tag{2.155}$$
It is also possible to determine the phase variance exactly by systematically going through all the possible measurement records and averaging over Φ_1 (the auxiliary phase before the first detection). However, this method is feasible only for photon numbers up to about 30.
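The ensemble estimate of the Holevo variance in Eq. (2.155) is straightforward to compute. A hedged sketch, with a wrapped-Gaussian sample standing in for real phase estimates:

```python
import numpy as np

# Holevo phase variance from an ensemble of final estimates, Eq. (2.155):
# |M^{-1} sum_l exp(i phihat_l)|^{-2} - 1. For a narrow distribution this
# approaches the ordinary variance.
def holevo_variance(estimates):
    s = np.abs(np.mean(np.exp(1j*np.asarray(estimates))))
    return s**(-2) - 1.0

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.1, size=100_000)  # illustrative narrow spread
print(holevo_variance(samples))               # close to 0.1**2 = 0.01
```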
The results of using this adaptive phase-measurement scheme on the optimal input states determined above are shown in Fig. 2.5. The phase variance is very close to the phase variance for ideal measurements, with scaling very close to j^{−2}. The phase variances do differ relatively more from the ideal values for larger photon numbers, however, indicating a scaling slightly worse than j^{−2}. For comparison, we also show the variance from the non-adaptive phase measurement defined by Eq. (2.144). As is apparent, this has a variance scaling as j^{−1}. Evidently, an adaptive measurement has an enormous advantage over a non-adaptive measurement, at least for the optimal input state.
We can sum up the results of this section as follows. Constrained non-adaptive measurements are often far inferior to constrained adaptive measurements, which are often
almost as good as unconstrained measurements. That is, a measurement constrained by
some requirement of experimental feasibility typically reaches only the standard quantum
limit of parameter estimation. This may be much worse than the Heisenberg limit, which
can be achieved by the optimal unconstrained measurement. However, if the experiment
is made just a little more complex, by allowing adaptive measurements, then most of the
difference can be made up. Note, however, that achieving the Heisenberg limit, whether by
adaptive or unconstrained measurements, typically requires preparation of an optimal (i.e.
non-standard) input state.
$$(\Delta\varphi_{\rm can})^2 \simeq 1/(4\bar n), \tag{2.156}$$
the canonical limit for phase measurements on a coherent state of mean photon number n̄.
As in the interferometric case, if the phase to be estimated was known approximately
before the measurement, then a simple scheme would allow the phase to be estimated with
an uncertainty close to the canonical limit. This is the technique of homodyne detection,
so called because the local oscillator frequency is the same as that of the signal. But,
in a communication context, the phase would be completely unknown. Since canonical
measurements are not feasible, the usual alternative is heterodyne detection. This involves
a local oscillator, which is detuned (i.e. at a slightly different frequency from the system) so
that it cycles over all possible relative phases with the system. That is, it is analogous to
the non-adaptive interferometric phase measurement introduced in Section 2.5.1. Again, as in the interferometric case, this technique introduces noise scaling as 1/n̄. Specifically, the heterodyne limit to phase measurements on a coherent state is twice the canonical limit [WK97]:
$$(\Delta\varphi_{\rm het})^2 \simeq 1/(2\bar n). \tag{2.157}$$
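As a quick arithmetic check of these two limits, assuming the asymptotic forms quoted above (valid for large mean photon number n̄):

```python
# Phase-variance limits for a coherent state of mean photon number nbar,
# cf. Eqs. (2.156)-(2.157): heterodyne detection doubles the canonical variance.
def canonical_limit(nbar):
    return 1.0/(4.0*nbar)

def heterodyne_limit(nbar):
    return 1.0/(2.0*nbar)   # twice the canonical variance

for nbar in (10, 100, 1000):
    print(nbar, canonical_limit(nbar), heterodyne_limit(nbar))
```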
The aim of the experiments by Armen et al. was to realize an adaptive measurement
that can beat the standard limit of heterodyne detection. As in the interferometric case, this
involves real-time feedback to control an auxiliary phase Φ, here that of the local oscillator.
Since each optical pulse has some temporal extent, the measurement signal generated by
the leading edge of a given pulse can be used to form a preliminary estimate of its phase.
Fig. 2.7 Apparatus used to perform both adaptive homodyne and heterodyne measurements (see the
text) in the experiment of Armen et al. Solid lines denote optical paths, and dashed lines denote
electrical paths. PZT indicates a piezoelectric transducer. Figure 2(a) adapted with permission from
M. A. Armen et al., Phys. Rev. Lett., 89, 133602, (2002). Copyrighted by the American Physical
Society.
This can then be used to adjust the local oscillator phase in a sensible way before the
next part of the pulse is detected, and so on. Detailed theoretical analyses of such adaptive
dyne schemes [WK97, WK98] show that they are very close to canonical measurements.
Specifically, for coherent state inputs the difference is negligible even for mean photon
numbers of order 10.
Accurately assessing the performance of a single-shot measurement requires many repetitions of the measurement under controlled conditions. Figure 2.7 shows a schematic
diagram of the experimental apparatus [AAS+02]. Light from a single-mode (continuous-wave) laser enters the Mach–Zehnder interferometer at beam-splitter 1 (BS 1), thereby
creating two beams with well-defined relative phase. The local oscillator (LO) is generated
using an acousto-optic modulator (AOM) driven by a radio-frequency (RF) synthesizer
(RF 1 in Fig. 2.7). The signal whose phase is to be measured is a weak sideband to the
carrier (local oscillator). That is, it is created from the local oscillator by an electro-optic
modulator (EOM) driven by an RF synthesizer (RF 2) that is phase-locked to RF 1. A pair
of photodetectors is used to collect the light emerging from the two output ports of the final
50 : 50 beam-splitter (BS 2). Balanced detection is used: the difference of their photocurrents provides the basic signal used for either heterodyne or adaptive phase estimation. The
measurements were performed on optical pulses of duration 50 µs.
In this experimental configuration, the adaptive measurement was performed by feedback
control of the phase of RF 2, which sets the relative phase between the signal and the LO.
The real-time electronic signal processing required in order to implement the feedback
algorithm was performed by a field-programmable gate array (FPGA) that can execute
complex computations with high bandwidth and short delays. The feedback and phase-estimation procedure corresponded to the Mark II scheme of Ref. [WK97], in which
the photocurrent is integrated with time-dependent gain to determine the instantaneous
feedback signal. When performing heterodyne measurements, RF 2 was simply detuned
from RF 1 by 1.8 MHz. For both types of measurement, both the photocurrent, I (t), and
the feedback signal, (t), were stored on a computer for post-processing. This is required
because the final phase estimate in the Mark II scheme of Ref. [WK97] is not simply
the estimate used in the feedback loop, but rather depends upon the full history of the
photocurrent and feedback signal. (This estimate is also the optimal one for heterodyne
detection.)
The data plotted in Fig. 2.8(a) demonstrate the superiority of an adaptive homodyne
measurement procedure over the standard heterodyne measurement procedure. Also plotted
is the theoretical prediction for the variance of ideal heterodyne measurement (2.157),
both with (thin solid line) and without (dotted line) correction for a small amount of
excess electronic noise in the balanced photocurrent. The excellent agreement between the
heterodyne data and theory indicates that there is no excess phase noise in the coherent
signal states. In the range of 10–300 photons per pulse, most of the adaptive data lie below
the absolute theoretical limit for heterodyne measurement (dotted line), and all of them
lie below the curve that has been corrected for excess electronic noise (which also has a
detrimental effect on the adaptive data).
For signals with large mean photon number, the adaptive estimation scheme used in the
experiment was inferior to heterodyne detection, because of technical noise in the feedback
loop. At the other end of the scale (very low photon numbers), the intrinsic phase uncertainty
of coherent states becomes large and the relative differences among the expected variances
for adaptive, heterodyne and ideal estimation become small. Accordingly, Armen et al.
were unable to beat the heterodyne limit for the mean-square error in the phase estimates
for mean photon numbers less than about 8.
However, Armen et al. were able to show that the estimator distribution for adaptive
homodyne detection remains narrower than that for heterodyne detection even for pulses
with mean photon number down to n̄ ≈ 0.8. This is shown in Fig. 2.8(b), which plots the adaptive and heterodyne phase-estimator distributions for n̄ ≈ 2.5. Note that the distributions are plotted on a logarithmic scale. The adaptive phase distribution has a narrower
peak than the heterodyne distribution, but exhibits rather high tails. These features agree qualitatively with the numerical and analytical predictions of Ref. [WK98]. The high tails can be partly explained by the fact that the feedback loop occasionally locks on to a phase that is wrong by π.
2.7 Quantum state discrimination
So far in this chapter we have considered the parameter to be estimated as having a continuous spectrum. However, it is quite natural, especially in the context of communication, to
Fig. 2.8 Experimental results from the adaptive and heterodyne measurements. (a) Adaptive (circles)
and heterodyne (crosses) phase-estimate variance versus mean photon number per pulse. The dash-dotted line is a second-order curve through the adaptive data, to guide the eye. The thin lines are the
theoretical curves for heterodyne detection with (solid) and without (dotted) corrections for detector
electronic noise. The thick solid line denotes the fundamental quantum uncertainty limit, given the
overall photodetection efficiency. (b) Probability distributions for the error in the phase estimate for
adaptive (circles) and heterodyne (crosses) measurements, for pulses with mean photon number of
about 2.5. Figure 1 adapted with permission from M. A. Armen et al., Phys. Rev. Lett. 89, 133602,
(2002). Copyrighted by the American Physical Society.
consider the case in which the parameter can take values in a finite discrete set. In this case
the best estimate should be one of the values in this set, and the problem is really that of
deciding which one. This is known as a quantum decision or quantum state-discrimination
problem.
and thus which symbol, was transmitted in each case. This is an interesting problem for the case of non-orthogonal states, ⟨ψ₁|ψ₀⟩ ≡ γ ≠ 0. We will suppose that the prior probabilities for the source are ℘₀ and ℘₁ for the states |ψ₀⟩ and |ψ₁⟩, respectively. The measurements are described by some POM {Ê_a: a}. If the states were orthogonal, they could be discriminated without error, but otherwise there is some finite probability that an error will be made. An error is when the receiver assigns the value 1 when the state transmitted was in fact 0, or conversely. We want to find the optimal POM, which will minimize the effect of errors in a suitable sense to be discussed below. Pioneering work on this question was done by Helstrom [Hel76]. We will follow the presentation of Fuchs [Fuc96].
For a given POM, the above problem also arises in classical decision problems. We need consider only the two probability distributions ℘(a|s) for s = 0 and s = 1. Classically, such probability distributions arise from noisy transmission and noisy measurement. In order to distinguish s = 0 and s = 1, the receiver must try to discriminate between the two distributions ℘(a|s) on the basis of a single measurement. If the distributions overlap then this will result in errors, and one should minimize the probability of making an error.²
Another approach is to relax the requirement that the decision is conclusive; that is, to allow
for the possibility of three decisions, yes, no and inconclusive. It may then be possible to
make a decision without error, at the expense of a finite probability of not being able to
make a decision at all. We will return to this approach in Section 2.7.3.
In each trial of the measurement there are n possible results {a}, but there are only two possible decision outcomes, 0 and 1. A decision function must then take one of the n results of the measurement and give a binary number; δ: {1, . . ., n} → {0, 1}. The probability that this decision is wrong is then
$$\wp_e(\delta) = \wp_0\,\wp(\delta := 1|s := 0) + \wp_1\,\wp(\delta := 0|s := 1). \tag{2.158}$$
The optimal strategy (for minimizing the error probability) is a Bayesian decision function, defined as follows. The posterior conditional probability distributions for the two states, given a particular outcome a, are
$$\wp(s|a) = \frac{\wp(a|s)\,\wp_s}{\wp(a)}, \tag{2.159}$$
where ℘(a) = ℘₀℘(a|s := 0) + ℘₁℘(a|s := 1) is the total probability for outcome a in the measurement. Then the optimal decision function is the one that picks the more probable state:
$$\delta_{\rm Bayes}(a) = \arg\max_{s}\, \wp(s|a). \tag{2.160}$$
² A more sophisticated strategy is to minimize some cost associated with making the wrong decision. The simplest cost function
is one that is the same for any wrong decision. That is, we care as much about wrongly guessing s = 1 as we do about wrongly
guessing s = 0. This leads back simply to minimizing the probability of error. However, there are many situations for which
other cost functions may be more appropriate, such as in weather prediction, where s = 1 indicates a cyclone and s = 0 indicates
none.
With this decision function, the error probability of Eq. (2.158) can be written as
$$\wp_e(\delta) = \sum_{a=1}^{n} \wp(a)\,\wp(s \neq \delta(a)|a), \tag{2.161}$$
which the Bayesian decision function minimizes term by term:
$$\wp_e(\delta_{\rm Bayes}) = \sum_{a=1}^{n} \wp(a)\,\min_s\, \wp(s|a). \tag{2.162}$$
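The Bayes decision rule and its error probability can be computed directly from the likelihoods and priors. The sketch below (hypothetical distributions, binary hypotheses only) mirrors the construction above:

```python
import numpy as np

# Minimum-error (Bayes) decision between hypotheses s = 0, 1 from a single
# sample of a, given likelihoods p(a|s) and priors.
def bayes_decision(p_a_given_0, p_a_given_1, prior0, prior1):
    joint = np.vstack([prior0*np.asarray(p_a_given_0),
                       prior1*np.asarray(p_a_given_1)])
    decision = joint.argmax(axis=0)     # pick the larger posterior for each a
    # Error probability: for each a, the weight of the hypothesis NOT chosen.
    error = joint.min(axis=0).sum()
    return decision, error

# Two overlapping distributions on five outcomes, equal priors (illustrative).
p0 = [0.4, 0.3, 0.2, 0.1, 0.0]
p1 = [0.0, 0.1, 0.2, 0.3, 0.4]
dec, perr = bayes_decision(p0, p1, 0.5, 0.5)
print(dec, perr)
```

Note that for a binary decision the minimum of the two joint weights at each outcome is exactly the per-outcome error, so no explicit loop over decision functions is needed.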
In this case we wish to minimize the error over all POMs. Because we sum over all the
results a when making our decision, we really need consider only a binary POM with
outcomes {0, 1} corresponding to the decision introduced above. Then the probability of
error becomes
$$\wp_e = \wp_0\,{\rm Tr}[\rho_0\hat{E}_1] + \wp_1\,{\rm Tr}[\rho_1\hat{E}_0]. \tag{2.164}$$
Writing this as ℘_e = ℘₀ + Tr[Γ̂Ê₀], where Γ̂ ≡ ℘₁ρ₁ − ℘₀ρ₀ has the spectral decomposition Γ̂ = Σ_j λ_j|λ_j⟩⟨λ_j|, it is not difficult to see that Tr[Γ̂Ê₀] will be minimized if we choose
$$\hat{E}_0^{\rm opt} = \sum_{j:\lambda_j<0} |\lambda_j\rangle\langle\lambda_j|. \tag{2.167}$$
This gives the minimum error probability
$$\wp_e^{\rm min} = \wp_0 + \sum_{j:\lambda_j<0} \lambda_j. \tag{2.168}$$
The optimal measurement {Ê₀^opt, Ê₁^opt}, with Ê₁^opt = 1̂ − Ê₀^opt, can be performed by making a measurement of the operator Γ̂, and sorting the results into the outcomes s = 1, s = 0 or either, according to whether they correspond to positive, negative or zero eigenvalues, respectively. This is exactly as given in Eq. (2.163), where the result a plays the role of the eigenvalue label j. This shows that the Helstrom lower bound can be achieved using a projective measurement.
We now restrict the discussion to pure states, ρ_s = |ψ_s⟩⟨ψ_s|, in which case Γ̂ = ℘₁|ψ₁⟩⟨ψ₁| − ℘₀|ψ₀⟩⟨ψ₀|. The eigenvalues are given by
$$\lambda_\pm = \frac{\wp_1 - \wp_0}{2} \pm \frac{1}{2}\sqrt{1 - 4\wp_0\wp_1|\gamma|^2}. \tag{2.169}$$
Exercise 2.24 Show this.
Hint: Only two basis states are needed in order to express Γ̂ as a matrix.
Thus we find the well-known Helstrom lower bound for the error probability for discriminating two pure states,
$$\wp_e^{\rm min} = \frac{1}{2}\Big[1 - \sqrt{1 - 4\wp_0\wp_1|\gamma|^2}\Big]. \tag{2.170}$$
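The Helstrom bound is easy to check numerically: diagonalize the operator ℘₁ρ₁ − ℘₀ρ₀, sum its negative eigenvalues, and compare with the closed form. The states, angle and priors below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the minimum-error probability for discriminating two
# pure qubit states, against the Helstrom closed form of Eq. (2.170).
def helstrom_error(psi0, psi1, p0, p1):
    rho0 = np.outer(psi0, psi0.conj())
    rho1 = np.outer(psi1, psi1.conj())
    gamma_op = p1*rho1 - p0*rho0
    evals = np.linalg.eigvalsh(gamma_op)
    return p0 + evals[evals < 0].sum()   # prior0 plus the negative eigenvalues

theta = 0.3
psi0 = np.array([np.cos(theta),  np.sin(theta)])
psi1 = np.array([np.cos(theta), -np.sin(theta)])
p0 = p1 = 0.5
overlap = abs(psi1.conj() @ psi0)
closed_form = 0.5*(1 - np.sqrt(1 - 4*p0*p1*overlap**2))
print(helstrom_error(psi0, psi1, p0, p1), closed_form)
```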
2.7.2 Experimental demonstration of the Helstrom bound
Barnett and Riis [BR97] performed an experiment that realizes the Helstrom lower bound
when trying to discriminate between non-orthogonal polarization states of a single photon.
The polarization state of a single photon is described in a two-dimensional Hilbert space
with basis states corresponding to horizontal (H) and vertical (V) polarization. Barnett and
Riis set up the experiment to prepare either of the two states,
$$|\psi_0\rangle = \cos\theta\,|H\rangle + \sin\theta\,|V\rangle, \tag{2.171}$$
$$|\psi_1\rangle = \cos\theta\,|H\rangle - \sin\theta\,|V\rangle, \tag{2.172}$$
which have overlap
$$\gamma = \langle\psi_1|\psi_0\rangle = \cos(2\theta). \tag{2.173}$$
For equal prior probabilities, ℘₀ = ℘₁ = 1/2, the Helstrom bound (2.170) then evaluates to
$$\wp_e^{\rm min} = \frac{1}{2}\,[1 - \sin(2\theta)]. \tag{2.175}$$
In order to perform this experiment as described above we would need a reliable source
of single-photon states, suitably polarized. However, deterministic single-photon sources
do not yet exist, though they are in active development. Instead Barnett and Riis used an
attenuated coherent state from a pulsed source. A coherent-state pulse has a non-determinate
photon number, with a Poissonian distribution (see Section A.4). In the experiment light
from a mode-locked laser produced a sequence of pulses, which were heavily attenuated
(with a neutral-density filter) so that on average each pulse contained about 0.1 photons. For
such a low-intensity field, only one in 200 pulses will have more than one photon and most
will have none. The laser was operated at a wavelength of 790 nm and had a pulse-repetition
rate of 80.3 MHz. The output was linearly polarized in the horizontal plane. Each pulse
was then passed through a Glan–Thompson polarizer set at θ or −θ to produce either of the two prescribed input states.
The polarization measurement was accomplished by passing the pulse through a polarizing beam-splitter, set at an angle π/4 to the horizontal. This device transmits light polarized
in this direction while reflecting light polarized in the orthogonal direction. If the pulse
was transmitted, the measurement was said to give an outcome of 1, whereas if it was
reflected, the outcome was 0. A right output is when the outcome a = 0 or 1 agreed with
the prepared state |0 , or |1 , respectively, and a wrong output occurs when it did not.
The pulses were directed to photodiodes and the photocurrent integrated, this being simply
proportional to the probability of detecting a single photon.
In the experiment the probability of error was determined by repeating the experiment for many photons; that is, simply by running it continually. Call the integrated output from the wrong output I_W, and that from the right output I_R, in arbitrary units. If the Glan–Thompson polarizer is set to θ then the error probability is given by the quantity
$$\wp_e^0 = \frac{I_W}{I_R + I_W} = \left[2 + \frac{I_R - I_W}{I_W}\right]^{-1}. \tag{2.176}$$
If this polarizer is rotated from θ to −θ, the corresponding error probability ℘_e^1 is determined similarly. The mean of these two error probabilities is then taken as an experimental determination of the error probability
$$\wp_e = \frac{1}{2}\big(\wp_e^0 + \wp_e^1\big). \tag{2.177}$$
Barnett and Riis determined this quantity as a function of θ over the range 0 to π/4.
The results are shown in Fig. 2.9. Good agreement with the Helstrom lower bound was
found.
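The estimates of Eqs. (2.176) and (2.177) amount to simple ratios of integrated photocurrents. With made-up intensity values for illustration:

```python
# Error probability inferred from integrated 'right' and 'wrong' detector
# outputs, cf. Eqs. (2.176)-(2.177). The intensities are hypothetical numbers.
def error_prob(i_right, i_wrong):
    return i_wrong/(i_right + i_wrong)

def mean_error(ir0, iw0, ir1, iw1):
    # Average of the error probabilities for the two polarizer settings.
    return 0.5*(error_prob(ir0, iw0) + error_prob(ir1, iw1))

print(mean_error(9.0, 1.0, 8.0, 2.0))
```

The second form of Eq. (2.176), [2 + (I_R − I_W)/I_W]^{-1}, is algebraically identical to the ratio I_W/(I_R + I_W), as dividing numerator and denominator by I_W shows.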
Fig. 2.9 The experimental results of Barnett and Riis. The measured error probability is plotted as a
function of the half-angle between the two linear polarization states. The solid curve is the Helstrom
bound. Figure 2 adapted with permission from S. M. Barnett and E. Riis, Experimental demonstration
of polarization discrimination at the Helstrom bound, Journal of Modern Optics 44, 1061, (1997),
Taylor & Francis Ltd, http://www.informaworld.com, reprinted by permission of the publisher.
$$|\psi_\pm\rangle = \cos\theta\,|1\rangle \pm \sin\theta\,|0\rangle, \tag{2.178}$$
where |0⟩ and |1⟩ constitute an orthonormal basis, and without loss of generality we can take 0 ≤ θ ≤ π/4. The first step in the protocol is to couple the system to an ancilla
two-level system in an appropriate way, so that in the full four-dimensional tensor-product
space we can have at least two mutually exclusive outcomes (and hence at most two
inconclusive outcomes). Let the initial state of the ancilla be |0. The coupling is the
exchange coupling and performs a rotation in the two-dimensional subspace of the tensor-product space spanned by {|0⟩|1⟩, |1⟩|0⟩}. The states |0⟩|0⟩ and |1⟩|1⟩ are invariant. On writing the unitary operator for the exchange as Û, and parameterizing it by a suitable rotation angle, its action on the two input states can be arranged to be
$$\hat U|\psi_+\rangle|0\rangle = (1-\lambda)^{1/2}\,(|1\rangle + |0\rangle)|0\rangle + (2\lambda-1)^{1/2}\,|0\rangle|1\rangle, \tag{2.179}$$
$$\hat U|\psi_-\rangle|0\rangle = (1-\lambda)^{1/2}\,(|1\rangle - |0\rangle)|0\rangle + (2\lambda-1)^{1/2}\,|0\rangle|1\rangle, \tag{2.180}$$
where λ = cos²θ.
Exercise 2.26 Verify this.
Since the amplitudes in the first term are orthogonal for the two different input states, they may be discriminated by a projective readout. If we measure σ̂_x on the system and σ̂_z on the ancilla, where σ̂_z = |1⟩⟨1| − |0⟩⟨0| and σ̂_x = |1⟩⟨0| + |0⟩⟨1|, the results will be (1, −1) only for the state |ψ₊⟩ and (−1, −1) only for the state |ψ₋⟩. The other two results could arise from either state and indicate an inconclusive result. The probability of an inconclusive result is easily seen to be
$$\wp_{\rm i} = |\langle\psi_-|\psi_+\rangle| = \cos(2\theta). \tag{2.181}$$
This is the optimal result, the IDP bound. Of course, the unitary interaction between
ancilla and system followed by a projective measurement is equivalent to a generalized
measurement on the system alone, as explained in Section 1.2.3.
Exercise 2.27 Determine the effects Ê₋, Ê₊ and Ê_i (operators in the two-dimensional system Hilbert space) corresponding to the three measurement outcomes.
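Such a three-outcome generalized measurement can also be written down directly as a POVM on the system alone. The construction below, with the scaling 1/(1 + |γ|), is a standard sketch for two states of overlap γ = cos 2θ and equal priors; the basis ordering (|0⟩, |1⟩) and the value θ = 0.4 are illustrative assumptions.

```python
import numpy as np

# Unambiguous (IDP) discrimination POVM for two pure states with overlap
# gamma = cos(2 theta): each conclusive effect is proportional to the
# projector onto the state orthogonal to the OTHER input state.
theta = 0.4
c, s = np.cos(theta), np.sin(theta)
psi_p = np.array([ s, c])        # cos(t)|1> + sin(t)|0>, basis order (|0>,|1>)
psi_m = np.array([-s, c])        # cos(t)|1> - sin(t)|0>
gamma = abs(psi_m @ psi_p)       # overlap = cos(2 theta)

perp_m = np.array([c,  s])       # orthogonal to psi_m
perp_p = np.array([c, -s])       # orthogonal to psi_p
E_plus  = np.outer(perp_m, perp_m)/(1 + gamma)   # never fires for psi_m
E_minus = np.outer(perp_p, perp_p)/(1 + gamma)   # never fires for psi_p
E_inc   = np.eye(2) - E_plus - E_minus           # inconclusive effect

p_inc = psi_p @ E_inc @ psi_p    # inconclusive probability for either input
print(p_inc, np.cos(2*theta))
```

For this scaling the inconclusive probability equals the overlap, which is the IDP bound quoted in the text.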
$$|1\rangle = |H\rangle_b, \tag{2.182}$$
$$|0\rangle = |V\rangle_a, \tag{2.183}$$
Fig. 2.10 Schematic of the optical realization of IDP state discrimination: polarizing beam-splitters (PBS 1–PBS 4), a half-wave plate (HWP), modes a, b and c, and vacuum input ports. The polarizing beam-splitters transmit vertically polarized light and reflect horizontally polarized light.

The total state of the system after PBS 1 is
$$|\Psi(1)\rangle = \big(\cos\theta\,|0\rangle_a|H\rangle_b \mp \sin\theta\,|V\rangle_a|0\rangle_b\big)|0\rangle_c, \tag{2.184}$$
(2.185)
where we assume that the transmitted photon does not change its polarization. Thus, just
after PBS 2 we can write the total state as
$$|\Psi(2)\rangle = (1-\lambda)^{1/2}\,\big[\,|0\rangle_a|H\rangle_b \mp |V\rangle_a|0\rangle_b\,\big]|0\rangle_c + (2\lambda-1)^{1/2}\,|0\rangle_a|0\rangle_b|H\rangle_c, \tag{2.186}$$
Fig. 2.11 Experimental results for an IDP state-discrimination measurement for the states cos θ|H⟩_b ∓ sin θ|V⟩_a. The probability ℘_i of an inconclusive result is plotted as a function of θ.
Figure 3 adapted with permission from R. B. M. Clarke et al., Phys. Rev. A 63, 040305 (R), (2001).
Copyrighted by the American Physical Society.
orthogonal states that occur in the first term. This may be done by using a polarizing beam-splitter to put both polarization components back into the same momentum mode. Thus, after PBS 3 the total state is
$$|\Psi(3)\rangle = (1-\lambda)^{1/2}\,\big[\,|H\rangle_a \mp |V\rangle_a\,\big]|0\rangle_b|0\rangle_c + (2\lambda-1)^{1/2}\,|0\rangle_a|0\rangle_b|H\rangle_c. \tag{2.187}$$
Hermitian matrix has D² independent elements, but the condition Tr[ρ] = 1 removes one of them.) This situation, in which one is effectively trying to identify a completely unknown ρ from measurements on multiple copies, is known as quantum state estimation or quantum tomography.
A good review of this topic for D-dimensional systems is Ref. [DPS03]. Note that,
in order to measure D² − 1 parameters using projective measurements, it is necessary
to consider D + 1 different projective measurements, that is, measuring D + 1 different
observables. Such a set of observables is called a quorum [Fan57]. That a quorum size of
D + 1 observables is necessary is obvious from the fact that measuring one observable
repeatedly can give only D − 1 parameters: the D probabilities (one for each outcome)
minus one because they must sum to unity. That D + 1 is a sufficient size for a (suitably
chosen) quorum was first proven by Fano [Fan57].
The term quantum tomography was coined in quantum optics for estimating the state of an
electromagnetic field mode. Here the quorum consists of a collection of quadrature operators X̂_θ = cos θ Q̂ + sin θ P̂, for different values of θ. This is analogous to the reconstruction of
a two-dimensional tissue density distribution from measurements of density profiles along
various directions in the plane in medical tomography. Of course, the quantum harmonic
oscillator has an infinite Hilbert-space dimension D, so strictly an infinite quorum (infinitely many θs) must be used. In practice, it is possible to reconstruct an arbitrary ρ with a finite
quorum, and a finite number of measurements for each observable, if certain assumptions
or approximations are made. We refer the reader to the review [PR04] for a discussion of
these techniques, including maximum-likelihood estimation.
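For the smallest case, D = 2, a quorum of D + 1 = 3 observables can be taken to be the three Pauli operators; given exact expectation values, the reconstruction is immediate. This is a noise-free illustration only (real tomography must estimate the means from finite samples):

```python
import numpy as np

# Qubit state estimation from a quorum of D + 1 = 3 observables:
# rho = (I + <X> X + <Y> Y + <Z> Z)/2 for a two-level system.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct(mean_x, mean_y, mean_z):
    return 0.5*(I2 + mean_x*X + mean_y*Y + mean_z*Z)

# A test state, its Pauli expectation values, and the reconstruction.
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
means = [np.trace(rho @ P).real for P in (X, Y, Z)]
rho_est = reconstruct(*means)
print(np.allclose(rho_est, rho))
```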
2.8.2 Other work
A general treatment of the resources (state preparation and measurement) required to attain the Heisenberg limit in parameter estimation is given by Giovannetti, Lloyd and Maccone
[GLM06]. This is done using the language of quantum information processing, which is
treated by us in Chapter 7. See the discussion of recent theory and experiment [HBB+ 07]
in optical phase estimation in Section 7.10. The theory in this work is closely related to the
adaptive algorithm of Section 2.5.2.
Understanding the quantum limits to parameter estimation has also thrown light on the controversial issue of the time–energy uncertainty relation. It has long been recognized that the time–energy uncertainty relation is of a different character from the position–momentum uncertainty relation, since there is no time operator in quantum mechanics; see for example Ref. [AB61]. However, if Ĥ is taken as the generator Ĝ of the unitary, and t as the parameter X to be estimated, then the Holevo upper bound (2.9) gives a precise meaning to the time–energy uncertainty relation.
Developing physical systems to estimate time translations as accurately as possible is, of
course, the business of the time-standards laboratories. Modern time standards are based
on the oscillations of atomic dipoles, and their quantum description is very similar to that
given in Section 2.2.1. However, in practice, clocks are limited in their performance by their
instability, which can be thought of as the relative variation in the time interval between ticks of the clock. In terms of the general theory of parameter estimation, it is as if the generator Ĝ itself varied randomly by a tiny amount from shot to shot. For a model based on a two-level atomic transition, the instability is the ratio of the variation δω in the frequency of the transition to the mean frequency ω. Currently, the best clocks have an instability δω/ω at or below 10⁻¹⁷ [HOW+05].
The topic of discriminating non-orthogonal quantum states has been reviewed by Chefles
[Che00]; see also Ref. [PR04]. More recently, Jacobs [Jac07] has considered this problem
in the context of continuous adaptive measurements similar to that discussed in Section 2.6
above. Jacobs shows that, by using an adaptive technique, one can increase the rate at
which the information regarding the initial preparation is obtained. However, in the long-time limit, such an adaptive measurement actually reduces the total amount of information
obtained, compared with the non-adaptive measurement that reproduces (in the long-time
limit) the optimal projective measurement discussed in Section 2.7.1. That is, the adaptive
measurement fails to attain the Helstrom lower bound. This is an instructive example of the
fact that locally (in time) optimizing the rate of increase in some desired quantity (such as
the information about the initial state preparation) does not necessarily lead to the globally
optimal scheme. There is thus no reason to expect the adaptive measurement schemes
discussed in Sections 2.5 and 2.6 to be optimal.
3
Open quantum systems
3.1 Introduction
As discussed in Chapter 1, to understand the general evolution, conditioned and unconditioned, of a quantum system, it is necessary to consider coupling it to a second quantum
system. In the case in which the second system is much larger than the first, it is often
referred to as a bath, reservoir or environment, and the first system is called an open system.
The study of open quantum systems is important to quantum measurement for two reasons.
First, all real systems are open to some extent, and the larger a system is, the more
important its coupling to its environment will be. For a macroscopic system, such coupling
leads to very rapid decoherence. Roughly, this term means the irreversible loss of quantum
coherence, that is the conversion of a quantum superposition into a classical mixture. This
process is central to understanding the emergence of classical behaviour and ameliorating,
if not solving, the so-called quantum measurement problem.
The second reason why open quantum systems are important is in the context of generalized quantum measurement theory as introduced in Chapter 1. Recall from there that,
by coupling a quantum system to an apparatus (a second quantum system) and then
measuring the apparatus, a generalized measurement on the system is realized. For an open
quantum system, the coupling to the environment is typically continuous (present at all
times). In some cases it is possible to monitor (i.e. continuously measure) the environment
so as to realize a continuous generalized measurement on the system.
In this chapter we are concerned with introducing open quantum systems, and with
discussing the first point, decoherence. We introduced the decoherence of a macroscopic
apparatus in Section 1.2.3, in the context of the von Neumann chain and Heisenberg's
cut. To reiterate that discussion, direct projective measurements on a quantum system
do not adequately describe realistic measurements. Rather, one must consider making
measurements on an apparatus that has been coupled to the system. But how does one make
a direct observation on the apparatus? Should one introduce yet another system to model
the readout of the meter coupled to the actual system of study, and so on with meters upon
meters ad infinitum? This is the von Neumann chain [vN32]. To obtain a finite theory, the
experimental result must be considered to have been recorded definitely at some point:
Heisenberg's cut [Hei30].
The quantum measurement problem is that there is no physical basis for inserting a cut
at any particular point. However, there is a physical basis for determining the point in the
chain after which the cut may be placed without affecting any theoretical predictions. This
point is the point at which, for all practical purposes, the meter can be treated as a classical,
rather than a quantum, object. That such a point exists is due to decoherence brought about
by the environment of the apparatus.
Consider, for example, the single-photon measurement discussed in Section 1.5. The
system of study was the electromagnetic field of a single-mode microwave cavity. The
meter was an atomic system, suitably prepared. This meter clearly still behaves as a
quantum system; however, as other experiments by the same group have shown [RBH01],
the atomic meter is in turn measured by ionization detectors. These detectors are, of course,
rather complicated physical systems involving electrical fields, solid-state components and
sophisticated electronics. Should we include these as quantum systems in our description?
No, for two reasons.
First, it is too hard. Quantum systems with many degrees of freedom are generally
intractable. This is due to the exponential increase in the dimension of the Hilbert space
with the number of components for multi-partite systems, as discussed in Section A.2.
Except for cases in which the Hamiltonian has an exceptionally simple structure, numerical
solutions are necessary for the quantum many-body problem.
Exercise 3.1 For the special case of a Hamiltonian that is invariant under particle permutations, show that the dimension of the subspace of the total Hilbert space needed to describe the dynamics (for permutation-invariant initial states, the totally symmetric subspace) increases only linearly in the
number of particles.
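The contrast in Exercise 3.1 can be made concrete for spin-half particles: the full Hilbert-space dimension is 2^N, while the totally symmetric (Dicke) subspace has dimension N + 1. A minimal sketch (the function names are ours, for illustration only):

```python
def full_dim(n: int) -> int:
    """Dimension of the full Hilbert space of n spin-half particles."""
    return 2 ** n

def symmetric_dim(n: int) -> int:
    """Dimension of the totally symmetric (Dicke) subspace: n + 1."""
    return n + 1

for n in (2, 10, 100):
    print(n, full_dim(n), symmetric_dim(n))   # e.g. 10 1024 11
```

Already at 100 particles the full space has dimension ~10³⁰, while the symmetric subspace has dimension 101.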
However, even on today's supercomputers, numerical solutions are intractable for 100 particles or more. Detectors typically have far more particles than this, and, more importantly,
they typically interact strongly with other systems in their environment.
Second, it is unnecessary. Detectors are not arbitrary many-body systems. They are
designed for a particular purpose: to be a detector. This means that, despite its being
coupled to a large environment, there are certain properties of the detector that, if initially
well defined, remain well defined over time. These classical-like properties are those that
are robust in the face of decoherence, as we will discuss in Section 3.7. Moreover, in
an ideal detector, one of these properties is precisely the one which becomes correlated
with the quantum system and apparatus, and so constitutes the measurement result. As we
will discuss in Section 4.8, sometimes it may be necessary to treat the detector dynamics
in greater detail in order to understand precisely what information the experimenter has
obtained about the system of study from the measurement result. However, in this case it is
still unnecessary to treat the detector as a quantum system; a classical model is sufficient.
The remainder of this chapter is organized as follows. In Section 3.2 we introduce the
simplest approach to modelling the evolution of open quantum systems: the master equation
derived in the Born–Markov approximations. In Section 3.3 we apply this to the simplest
(and historically first) example: radiative damping of a two-level atom. In the same section
we also describe damping of an optical cavity; this treatment is very similar, insofar as both
involve a rotating-wave approximation. In Section 3.4 we consider systems in which the
99
rotating-wave approximation cannot be made: the spin–boson model and Brownian motion.
In all of these examples so far, the reservoir consists of harmonic oscillators, modes of a
bosonic field (such as the electromagnetic field). In Section 3.5 we treat a rather different
sort of reservoir, consisting of a fermionic (electron) field, coupled to a single-electron
system.
In Section 3.6 we turn to more formal results: the mathematical conditions that a Markovian theory of open quantum systems should satisfy. Armed with these examples and this
theory, we tackle the issue of decoherence and its relation to the quantum measurement
problem in Section 3.7, using the example of Brownian motion. Section 3.8 develops this
idea in the direction of continuous measurement (which will be considered in later chapters), using the examples of the spin–boson model, and the damped and driven atom. The
ground-breaking decoherence experiment from the group of Haroche is analysed in Section 3.9 using the previously introduced damped-cavity model. In Section 3.10 we discuss
two more open systems of considerable experimental interest: a quantum electromechanical
oscillator and a superconducting qubit. Finally (apart from the further reading), we present
in Section 3.11 a Heisenberg-picture description of the dynamics of open quantum systems,
and relate it to the descriptions in earlier sections.
3.2 The Born–Markov master equation
In this section we derive a general expression for the evolution of an open quantum system
in the Born and Markov approximations. This will then be applied to particular cases in
subsequent sections. The essential idea is that the system couples weakly to a very large
environment. The weakness of the coupling ensures that the environment is not much
affected by the system: this is the Born approximation. The largeness of the environment
(strictly, the closeness of its energy levels) ensures that from one moment to the next the
system effectively interacts with a different part of the environment: this is the Markov
approximation.
Although the environment is relatively unaffected by the system, the system is profoundly affected by the environment. Specifically, it typically becomes entangled with the
environment. For this reason, it cannot be described by a pure state, even if it is initially in
a pure state. Rather, as shown in Section A.2.2, it must be described by a mixed state ρ.
The aim of the Born–Markov approximation is to derive a differential equation for ρ. That
is, rather than having to use a quantum state for the system and environment, we can find
the approximate evolution of the system by solving an equation for the system state alone.
For historical reasons, this is called a master equation.
The dynamics of the state ρ_tot for the system plus environment is given in the Schrödinger picture by

ρ̇_tot(t) = −i[H_S + H_E + V, ρ_tot(t)].   (3.1)

Here H_S is the Hamiltonian for the system (that is, it acts as the identity on the environment Hilbert space), H_E is that for the environment, and V includes the coupling between the two. Following the formalism in Section A.1.3, it is convenient to move into an interaction frame with respect to H_0 = H_S + H_E, in which the coupling becomes

V_IF(t) = e^{iH_0 t} V e^{−iH_0 t}.   (3.2)

In this frame, the Schrödinger-picture equation (3.1) is

ρ̇_tot;IF(t) = −i[V_IF(t), ρ_tot;IF(t)],   (3.3)

where

ρ_tot;IF(t) = e^{iH_0 t} ρ_tot(t) e^{−iH_0 t}.   (3.4)
The equations below are all in the interaction frame, but for ease of notation we drop the
IF subscripts. That is, V will now denote V_IF(t), etc.
Since the interaction is assumed to be weak, the differential equation (3.3) may be solved as a perturbative expansion. Integrating Eq. (3.3) gives the implicit solution

ρ_tot(t) = ρ_tot(0) − i ∫_0^t dt1 [V(t1), ρ_tot(t1)].   (3.5)

Substituting this back into Eq. (3.3) yields

ρ̇_tot(t) = −i[V(t), ρ_tot(0)] − ∫_0^t dt1 [V(t), [V(t1), ρ_tot(t1)]].   (3.6)

Since we are interested here only in the evolution of the system, we trace over the environment to get an equation for ρ = Tr_E[ρ_tot]:

ρ̇(t) = −i Tr_E[V(t), ρ_tot(0)] − ∫_0^t dt1 Tr_E[V(t), [V(t1), ρ_tot(t1)]].   (3.7)

This is still an exact equation but is also still implicit because of the presence of ρ_tot(t1) inside the integral. However, it can be made explicit by making some approximations, as we will see. It might be asked why we carry the expansion to second order in V, rather than use the first-order equation (3.3), or some higher-order equation. The answer is simply that second order is the lowest order which generally gives a non-vanishing contribution to the final master equation.
We now assume that at t = 0 there are no correlations between the system and its environment:

ρ_tot(0) = ρ(0) ⊗ ρ_E(0).   (3.8)
This assumption may be physically unreasonable for some interactions between the system
and its environment [HR85]. However, for weakly interacting systems it is a reasonable
approximation. We also split V (which, it must be remembered, denotes the Hamiltonian
in the interaction frame) into two parts:
V(t) = V_S(t) + V_SE(t),   (3.9)

where V_S(t) acts nontrivially only on the system Hilbert space, and where

Tr[V_SE(t)ρ_tot(0)] = 0.

Exercise 3.2 Show that this can be done, irrespective of the initial system state ρ(0), by making a judicious choice of H_0.
We now make a very important assumption, namely that the system only weakly affects the bath, so that in the last term of Eq. (3.7) it is permissible to replace ρ_tot(t1) by ρ(t1) ⊗ ρ_E(0). This is known as the Born approximation, or the weak-coupling approximation. Under this assumption, the evolution becomes

ρ̇(t) = −i[V_S(t), ρ(t)] − ∫_0^t dt1 Tr_E[V_SE(t), [V_SE(t1), ρ(t1) ⊗ ρ_E(0)]].   (3.10)
Note that this assumption is not saying that ρ_tot(t1) is well approximated by ρ(t1) ⊗ ρ_E(0)
for all purposes, and indeed this is not the case; the coupling between the system and the
environment in general entangles them. This is why the system becomes mixed, and why
measuring the environment can reveal information about the system, as will be considered
in later chapters, but this factorization assumption is a good one for the purposes of deriving
the evolution of the system alone.
The equation (3.10) is an integro-differential equation for the system state matrix ρ. Because it is nonlocal in time (it contains a convolution), it is still rather difficult to solve. We seek instead a local-in-time differential equation, sometimes called a time-convolutionless master equation; that is, an equation in which the rate of change of ρ(t) depends only upon ρ(t) and t. This can be justified if the integrand in Eq. (3.10) is small except in the region t1 ≈ t. Since the modulus of ρ(t1) does not depend upon t1, this property must arise from the physics of the bath. As we will show in the next section, it typically arises when the system couples roughly equally to many energy levels of the bath (eigenstates of H_E) that are close together in energy. Under this approximation it is permissible to replace ρ(t1) in the integrand by ρ(t), yielding

ρ̇(t) = −i[V_S(t), ρ(t)] − ∫_0^t dt1 Tr_E[V_SE(t), [V_SE(t1), ρ(t) ⊗ ρ_E(0)]].   (3.11)

For the same physical reasons, the upper limit of the time integral may be extended to infinity (the Markov approximation), giving the Born–Markov master equation

ρ̇(t) = −i[V_S(t), ρ(t)] − ∫_0^∞ dτ Tr_E[V_SE(t), [V_SE(t − τ), ρ(t) ⊗ ρ_E(0)]].   (3.12)

We will see in examples below how, for physically reasonable properties of the bath, this gives a master equation with time-independent coefficients, as required. In particular, we require H_E to have a continuum spectrum in the relevant energy range, and we require ρ_E(0) to commute with H_E. In practice, the latter condition is often relaxed in order to yield an equation in which V_S(t) may be time-dependent, but the second term in Eq. (3.12) is still required to be time-independent.
3.3 The radiative-damping master equation
In this section we repeat the derivation of the Born–Markov master equation for a specific
case: radiative damping of quantum optical systems (a two-level atom and a cavity mode).
This provides more insight into the Born and Markov approximations made above.
3.3.1 Spontaneous emission
Historically, the irreversible dynamics of spontaneous emission were introduced by Bohr
[Boh13] and, more quantitatively, by Einstein [Ein17], before quantum theory had been
developed fully. It was Wigner and Weisskopf [WW30] who showed in 1930 how the
radiative decay of an atom from the excited to the ground state could be explained within
quantum theory. This was possible only after Dirac's quantization of the electromagnetic
field, since it is the infinite (or at least arbitrarily large) number of electromagnetic field
modes which forms the environment or bath into which the atom radiates. The theory of
spontaneous emission is described in numerous recent texts [GZ04, Mil93], so our treatment
will just highlight key features.
As discussed in Section A.4, the free Hamiltonian for a mode of the electromagnetic
field is that of a harmonic oscillator. The total Hamiltonian for the bath is thus

H_E = Σ_k ω_k b_k† b_k,   (3.13)

where the integer k codes all of the information specifying the mode: its frequency, direction, transverse structure and polarization. The mode structure incorporates the effect of bulk materials with a linear refractive index (such as mirrors) and the like, so this is all described by the Hamiltonian H_E. The annihilation and creation operators for each mode are independent and they obey the bosonic commutation relations

[b_k, b_l†] = δ_kl.   (3.14)
We will assume that only two energy levels of the atom are relevant to the problem, so the free Hamiltonian for the atom is

H_a = (ω_a/2)σ_z.   (3.15)

Here ω_a is the energy (or frequency) difference between the ground |g⟩ and excited |e⟩ states, and σ_z = |e⟩⟨e| − |g⟩⟨g| is the inversion operator for the atom. (See Box 3.1.)
The coupling of the electromagnetic field to an atom can be described by the so-called dipole-coupling Hamiltonian

V = Σ_k (g_k* b_k + g_k b_k†)(σ_− + σ_+).   (3.16)
Box 3.1 The Bloch sphere
For a two-level system, with basis states |1⟩ and |0⟩, one can define the Pauli operators

σ_x = |1⟩⟨0| + |0⟩⟨1|,   (3.17)
σ_y = −i|1⟩⟨0| + i|0⟩⟨1|,   (3.18)
σ_z = |1⟩⟨1| − |0⟩⟨0|.   (3.19)

These satisfy the product relation

σ_j σ_k = δ_jk 1 + i Σ_l ε_jkl σ_l.   (3.20)

Here the subscripts stand for x, y or z, while 1 is the 2 × 2 unit matrix, i is the unit imaginary and ε_jkl is the completely antisymmetric tensor (that is, transposing any two subscripts changes its sign) satisfying ε_xyz = 1. From this, commutation relations like [σ_x, σ_y] = 2iσ_z and anticommutation relations like σ_x σ_y + σ_y σ_x = 0 are easily derived.
The state matrix for a two-level system can be written using these operators as

ρ(t) = (1/2)[1 + x(t)σ_x + y(t)σ_y + z(t)σ_z],   (3.21)
where x, y, z are the averages of the Pauli operators; that is, x = Tr[ρσ_x], et cetera. Recall that Tr[ρ²] ≤ 1, with equality for and only for pure states. This translates to

x² + y² + z² ≤ 1,   (3.22)

again with equality if and only if the system is pure. Thus, the system state can be represented by a 3-vector inside (on) the unit sphere for a mixed (pure) state. The vector is called the Bloch vector and the sphere the Bloch sphere.
For a two-level atom, it is conventional to identify |1⟩ and |0⟩ with the excited |e⟩ and ground |g⟩ states respectively. Then z is called the atomic inversion, because it is positive if and only if the atom is inverted, that is, has a higher probability of being in the excited state than in the ground state. The other components, y and x, are called the atomic coherences, or components of the atomic dipole.
Another two-level system is a spin-half particle. Here spin-half means that the maximum angular momentum contained in the intrinsic spin of the particle is ħ/2. The operator for the spin angular momentum (a 3-vector) is (ħ/2)(σ_x, σ_y, σ_z). That is, in this case the Bloch vector (x, y, z) has a meaning in ordinary three-dimensional space, as the mean spin angular momentum, divided by ħ/2.
Nowadays it is common to study a two-level quantum system without any particular
physical representation in mind. In this context, it is appropriate to use the term qubit,
meaning a quantum bit.
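The Bloch decomposition of Eq. (3.21) is easy to check numerically. The sketch below (the helper names are ours, not the book's) extracts (x, y, z) from a state matrix and verifies the purity condition (3.22) for a pure and a maximally mixed state:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(rho):
    """(x, y, z) = (Tr[rho sx], Tr[rho sy], Tr[rho sz]), as in Eq. (3.21)."""
    return tuple(float(np.real(np.trace(rho @ s))) for s in (sx, sy, sz))

def bloch_length_sq(rho):
    """x^2 + y^2 + z^2; at most 1, with equality iff rho is pure (Eq. (3.22))."""
    x, y, z = bloch_vector(rho)
    return x*x + y*y + z*z

plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|, a pure state
mixed = 0.5 * np.eye(2, dtype=complex)                   # maximally mixed state
print(bloch_vector(plus), bloch_length_sq(plus))         # (1.0, 0.0, 0.0) 1.0
print(bloch_length_sq(mixed))                            # 0.0
```

The pure state sits on the surface of the Bloch sphere; the maximally mixed state sits at its centre.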
Here σ_+ = (σ_−)† = |e⟩⟨g| is the raising operator for the atom. The coefficient g_k (which can be assumed real without loss of generality) is proportional to the dipole matrix element for the transition (which we will assume is non-zero) and depends on the structure of mode k. In particular, it varies as V_k^{−1/2}, where V_k is the physical volume of mode k.
It turns out that the rate of radiative decay γ for an atom in free space is of order 10⁸ s⁻¹ or smaller. This is much smaller than the typical frequency ω_a for an optical transition, which is of order 10¹⁵ s⁻¹ or greater. Since γ is due to the interaction Hamiltonian V, it seems reasonable to treat V as being small compared with H_0 = H_a + H_E. Thus we are justified in following the method of Section 3.2. We begin by calculating V in the interaction frame:

V_IF(t) = Σ_k g_k (b_k e^{−iω_k t} + b_k† e^{iω_k t})(σ_− e^{−iω_a t} + σ_+ e^{iω_a t}).   (3.23)
Exercise 3.3 Show this, using the same technique as in Exercise 1.30.
The first approximation we make is to remove the terms in V_IF(t) that rotate (in the complex plane) at frequency ω_a + ω_k for all k, yielding

V_IF(t) = Σ_k g_k (b_k† σ_− e^{i(ω_k − ω_a)t} + b_k σ_+ e^{−i(ω_k − ω_a)t}).   (3.24)
(Terms like those removed are, however, important for a proper calculation of the Lamb frequency shift Δω_a below, but that is beyond the scope of this treatment.) Following the method of Section 3.2, with the bath initially in the vacuum state, the only non-vanishing reservoir correlation function that appears is

Γ(τ) = Σ_k g_k² e^{i(ω_a − ω_k)τ}.   (3.26)
Exercise 3.5 Show this, using the properties of the vacuum state and the field operators.
Next, we wish to make the Markov approximation. This can be justified by considering the reservoir correlation function (3.26). For an atom in free space, there is an infinite number of modes, each of which is infinite in volume, so the modulus squared of the coupling coefficients is infinitesimal. Thus we can justify replacing the sum in Eq. (3.26) by an integral,

Γ(τ) = ∫_0^∞ dω ρ(ω) g(ω)² e^{i(ω_a − ω)τ}.   (3.27)

Here ρ(ω) is the density of field modes as a function of frequency. This is infinite, but the product ρ(ω)g(ω)² is finite. Moreover, ρ(ω)g(ω)² is a smoothly varying function of frequency for ω in the vicinity of ω_a. This means that the reservoir correlation function, Γ(τ), is sharply peaked at τ = 0.

Exercise 3.6 Convince yourself of this by considering a toy model in which ρ(ω)g(ω)² is independent of ω in the range (0, 2ω_a) and zero elsewhere.
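The toy model of Exercise 3.6 can be evaluated in closed form: with ρ(ω)g(ω)² = C on (0, 2ω_a), Eq. (3.27) gives Γ(τ) = 2C sin(ω_a τ)/τ, sharply peaked at τ = 0 with width of order 1/ω_a. A quick numerical look (C and ω_a are illustrative values of ours):

```python
import numpy as np

wa = 10.0   # atomic transition frequency (arbitrary units, illustrative)
C = 1.0     # constant value of rho(w)*g(w)^2 on (0, 2*wa), illustrative

def gamma_corr(tau):
    """Gamma(tau) = int_0^{2 wa} C exp(i (wa - w) tau) dw = 2 C sin(wa tau)/tau.
    np.sinc(x) = sin(pi x)/(pi x), which handles tau = 0 exactly."""
    return 2.0 * C * wa * np.sinc(wa * np.asarray(tau) / np.pi)

taus = np.linspace(-5.0, 5.0, 2001)
g = gamma_corr(taus)
print(float(gamma_corr(0.0)))           # 20.0  (the peak, 2*C*wa, at tau = 0)
print(abs(float(gamma_corr(1.0))))      # ~1.1, far below the peak
```

The correlation function is thus concentrated within |τ| ≲ 1/ω_a, which is what justifies the Markov replacement in the text.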
Thus we can apply the Markov approximation to obtain the master equation

ρ̇ = −i(Δω_a/2)[σ_z, ρ] + γD[σ_−]ρ.   (3.28)

Here the superoperator D, acting on an arbitrary operator A, is defined by

D[A]ρ ≡ AρA† − (1/2)(A†Aρ + ρA†A).   (3.29)

The real parameters Δω_a (the frequency shift) and γ (the radiative decay rate) are defined as

Δω_a − iγ/2 = −i ∫_0^∞ Γ(τ)dτ.   (3.30)
Exercise 3.7 Derive Eq. (3.28).
In practice the frequency shift (called the Lamb shift) due to the atom coupling to the electromagnetic vacuum is small, but can be calculated properly only by using renormalization
theory and relativistic quantum mechanics.
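The superoperator D of Eq. (3.29) is simple to implement, and one can check directly that it is trace-preserving, as any master-equation term must be. A minimal sketch for the two-level atom with A = σ_−:

```python
import numpy as np

def D(A, rho):
    """D[A]rho = A rho A^dag - (1/2)(A^dag A rho + rho A^dag A), Eq. (3.29)."""
    AdA = A.conj().T @ A
    return A @ rho @ A.conj().T - 0.5 * (AdA @ rho + rho @ AdA)

# Two-level atom in the basis (|e>, |g>), so sigma_- = |g><e|.
sm = np.array([[0, 0], [1, 0]], dtype=complex)
rho_e = np.diag([1.0, 0.0]).astype(complex)     # atom initially excited

drho = D(sm, rho_e)
print(np.trace(drho).real)                      # 0.0: D is trace-preserving
print(drho[0, 0].real, drho[1, 1].real)         # -1.0 1.0: population flows e -> g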
The solution of Eq. (3.28) at any time t > 0 depends only on the initial state at time
t = 0; there is no memory effect. The evolution is non-unitary because of the D term, which
represents radiative decay. This can be seen by considering the Bloch representation of the
atomic state, as discussed in Box 3.1.
Exercise 3.8 Familiarize yourself with the Bloch sphere by finding the points on it corresponding to the eigenstates of the Pauli matrices, and the point corresponding to the
maximally mixed state.
For example, the equation of motion for the inversion can be calculated by evaluating ż = Tr[σ_z ρ̇] and re-expressing the right-hand side in terms of x, y and z. In this case we find simply

ż = −γ(z + 1).   (3.31)

That is, the inversion decays exponentially at rate γ towards its ground-state value z = −1. Adding a Hamiltonian term to describe coherent driving of the atom by a classical field of frequency ω_0 (and making another rotating-wave approximation), the master equation in the frame rotating at ω_0 becomes

ρ̇ = −i[(Ω/2)σ_x + (δ/2)σ_z, ρ] + γD[σ_−]ρ.

Here δ = ω_a + Δω_a − ω_0 is the effective detuning of the atom, while Ω, the Rabi frequency, is proportional to the amplitude of the driving field and the atomic dipole moment. Here the phase of the driving field acts as a reference to define the in-phase (x) and in-quadrature (y) parts of the atomic dipole relative to the imposed force. The master equation for a resonantly driven, damped atom is known as the resonance fluorescence master equation.
Exercise 3.10 (a) Show that the Bloch equations for resonance fluorescence are

ẋ = δy − (γ/2)x,   (3.32)
ẏ = −δx + Ωz − (γ/2)y,   (3.33)
ż = −Ωy − γ(z + 1),   (3.34)

and that the stationary solution is

(x_ss, y_ss, z_ss) = −(4δΩ, 2γΩ, γ² + 4δ²)/(γ² + 2Ω² + 4δ²).   (3.35)

(b) Compare φ = arctan(y_ss/x_ss) and A = (x_ss² + y_ss²)^{1/2} with the phase and amplitude of the long-time response of a classical, lightly damped, harmonic oscillator to an applied periodic force with magnitude proportional to Ω and detuning δ. In what regime does the two-level atom behave like the harmonic oscillator?
Hint: First, define interaction-frame phase and amplitude variables for the classical oscillator; that is, variables that would be constant in the absence of driving and damping.
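The Bloch equations of Exercise 3.10 can be checked numerically. The sketch below integrates them with a plain Euler method (the sign convention is stated in the comments; the parameter values are illustrative choices of ours) and compares the long-time solution with the stationary values:

```python
import numpy as np

# Bloch equations for resonance fluorescence, in the convention
#   dx/dt = delta*y - (gamma/2)*x
#   dy/dt = -delta*x + Omega*z - (gamma/2)*y
#   dz/dt = -Omega*y - gamma*(z + 1)
gamma_, Omega, delta = 1.0, 2.0, 0.5

def rhs(v):
    x, y, z = v
    return np.array([delta*y - 0.5*gamma_*x,
                     -delta*x + Omega*z - 0.5*gamma_*y,
                     -Omega*y - gamma_*(z + 1.0)])

v = np.array([0.0, 0.0, -1.0])     # start in the ground state
dt, steps = 1e-3, 40000            # plain Euler up to t = 40 >> 1/gamma
for _ in range(steps):
    v = v + dt * rhs(v)

denom = gamma_**2 + 2.0*Omega**2 + 4.0*delta**2
v_ss = -np.array([4.0*delta*Omega, 2.0*gamma_*Omega,
                  gamma_**2 + 4.0*delta**2]) / denom
print(np.max(np.abs(v - v_ss)))    # -> ~0: the Euler solution reaches the steady state
```

Because the equations are linear, the Euler map has exactly the same fixed point as the continuous dynamics, so the long-time numerical solution lands on the stationary Bloch vector to within rounding error.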
The second generalization is that the field need not be in a vacuum state, but rather (for example) may be in a thermal state (i.e. with a Planck distribution of photon numbers [GZ04]). This gives rise to stimulated emission and absorption of photons. In that case, the total master equation in the Markov approximation becomes

ρ̇ = −i[(Ω/2)σ_x + (δ/2)σ_z, ρ] + γ(n̄ + 1)D[σ_−]ρ + γn̄D[σ_+]ρ,   (3.36)

where n̄ = {exp[ħω_a/(k_B T)] − 1}⁻¹ is the thermal mean photon number evaluated at the atomic frequency (we have here restored ħ). This describes the (spontaneous and stimulated) emission of photons at a rate proportional to (n̄ + 1), and (stimulated) absorption of photons at a rate proportional to n̄.
A similar treatment applies to the damping of the field in an optical cavity. The idea is to treat the fields outside the cavity, as if they were modes, and to treat the amplitude decay as radiative damping due to coupling to the (pseudo-)modes that are localized outside the cavity [GZ04]. This is a good approximation, provided that the coupling is weak; that is, that the transmission at the mirrors is small.
The simplest case to consider is a single mode (of frequency ω_c) of a one-dimensional cavity with one slightly lossy mirror and one perfect mirror. We use a for the annihilation operator for the cavity mode of interest and b_k for those of the bath as before. The total Hamiltonian for system plus environment, in the RWA, is [WM94a]

H = ω_c a†a + Σ_k ω_k b_k†b_k + Σ_k g_k(a b_k† + a†b_k).   (3.37)

The first term represents the free energy of the cavity mode of interest, the second is for the free energy of the many-mode field outside the cavity, and the last term represents the dominant terms in the coupling of the two for optical frequencies.
For weak coupling the Born–Markov approximations are justified just as for spontaneous emission. Following the same procedure leads to a very similar master equation for the cavity field, in the interaction frame:

ρ̇ = κ(n̄ + 1)D[a]ρ + κn̄D[a†]ρ.   (3.38)

Here n̄ is the mean thermal photon number of the external field evaluated at the cavity frequency ω_c. We have ignored any environment-induced frequency shift, since this simply redefines the cavity resonance ω_c.
The first irreversible term in Eq. (3.38) represents emission of photons from the cavity. The second irreversible term represents an incoherent excitation of the cavity due to thermal photons in the external field.
Exercise 3.11 Show that the rate of change of the average photon number in the cavity is given by

d⟨a†a⟩/dt = −κ⟨a†a⟩ + κn̄.   (3.39)

Note that here (and often from here on) we are relaxing our convention on angle brackets established in Section 1.2.1. That is, we may indicate the average of a property for a quantum system by angle brackets around the corresponding operator.
From Eq. (3.39) it is apparent that κ is the decay rate for the energy in the cavity. Assuming that ρ(ω)g(ω)² is slowly varying with frequency, we can evaluate this decay rate to be

κ = 2πρ(ω_c)g(ω_c)².   (3.40)
In more physical terms, if the mirror transmits a proportion T ≪ 1 of the energy in the cavity on each reflection, and the round-trip time for light in the cavity is τ, then κ = T/τ.
As in the atomic case, we can include other dynamical processes by simply adding an appropriate Hamiltonian term to the interaction-frame master equation (3.38), as long as the added Hamiltonian is (in some sense) small compared with H_0. In particular, we can include a coherent driving term, to represent the excitation of the cavity mode by an external laser of frequency ω_c, by adding the following driving Hamiltonian [WM94a]:

H_drive = iεa† − iε*a.   (3.41)
Exercise 3.13 Show that, in the zero-temperature limit, the stationary state for the driven, damped cavity is a coherent state |α⟩ with α = 2ε/κ.
Hint: Make the substitution a = 2ε/κ + a_0, and show that the solution of the master equation is the vacuum state for a_0.
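Exercise 3.13 can also be checked by brute force: evolve the zero-temperature master equation ρ̇ = −i[H_drive, ρ] + κD[a]ρ in a truncated Fock basis and verify that ⟨a⟩ approaches 2ε/κ. The truncation dimension, parameter values and plain Euler integrator below are illustrative choices of ours (a sketch, not an efficient solver):

```python
import numpy as np

N = 20                                    # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), 1).astype(complex)   # a|n> = sqrt(n)|n-1>
ad = a.conj().T
eps, kappa = 0.5, 1.0                     # drive strength and decay rate
H = 1j * eps * (ad - a)                   # H_drive of Eq. (3.41) with real eps

def rhs(rho):
    """rho' = -i[H, rho] + kappa * D[a]rho."""
    diss = a @ rho @ ad - 0.5 * (ad @ a @ rho + rho @ ad @ a)
    return -1j * (H @ rho - rho @ H) + kappa * diss

rho = np.zeros((N, N), dtype=complex)
rho[0, 0] = 1.0                           # start in the vacuum
dt, steps = 1e-3, 20000                   # Euler integration to t = 20 >> 1/kappa
for _ in range(steps):
    rho = rho + dt * rhs(rho)

alpha = np.trace(a @ rho)
print(abs(alpha - 2 * eps / kappa))       # -> ~0: steady state has <a> = 2*eps/kappa
```

With ε = 0.5 and κ = 1 the predicted steady-state amplitude is α = 1, well inside the N = 20 truncation, and the evolution preserves the trace of ρ as it must.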
3.4 Irreversibility without the rotating-wave approximation
The simplest system in which to study damping without the rotating-wave approximation is the so-called spin–boson model, in which a two-level system is coupled to a bath of harmonic oscillators via the Hamiltonian

H = (Ω/2)σ_x + Σ_k [p_k²/(2m_k) + m_k ν_k² q_k²/2] + (σ_z/2) Σ_k g_k q_k,   (3.42)
where q_k are the coordinates of each of the environmental oscillators. This could describe a spin-half particle (see Box 3.1), in the interaction frame with respect to a Hamiltonian proportional to σ_z. Such a Hamiltonian would describe a static magnetic field in the z (longitudinal) direction. Then the first term would describe resonant driving (as in the two-level atom case) by an RF magnetic field in the x–y (transverse) plane, and the last term would describe fluctuations in the longitudinal field. However, there are many other
physical realizations of this model. Following the method of Section 3.2, the Born approximation (with the replacement of ρ(t1) by ρ(t), as explained there) yields the master equation

ρ̇(t) = −(1/4) ∫_0^t dt1 { ν(t1)[σ_z(t), [σ_z(t − t1), ρ(t)]] − iη(t1)[σ_z(t), {σ_z(t − t1), ρ(t)}] },   (3.43)

where {A, B} = AB + BA is known as an anticommutator, and the kernels are given by

ν(t1) = (1/2) Σ_k g_k² ⟨{q_k(t), q_k(t − t1)}⟩ = ∫_0^∞ dω J(ω)cos(ωt1)[1 + 2n̄(ω)],   (3.44)

η(t1) = (i/2) Σ_k g_k² ⟨[q_k(t), q_k(t − t1)]⟩ = ∫_0^∞ dω J(ω)sin(ωt1).   (3.45)

Here the spectral density function of the bath is defined as

J(ω) = Σ_k [g_k²/(2m_k ν_k)] δ(ω − ν_k),   (3.46)

and n̄(ω) is the mean occupation number of the environmental oscillator at frequency ω. In the interaction frame with respect to H_0 = (Ω/2)σ_x, the system operator appearing above is

σ_z(t) = cos(Ωt)σ_z + sin(Ωt)σ_y.   (3.47)
Exercise 3.14 Show this by finding and solving the Heisenberg equations of motion for σ_y and σ_z, for the Hamiltonian H_0.
Substituting this into Eq. (3.43), and then moving out of the interaction frame, yields the Schrödinger-picture master equation

ρ̇ = −i(H_nh ρ − ρH_nh†) − (1/4)(Γ*(t)σ_z ρσ_y + Γ(t)σ_y ρσ_z) − (D(t)/4)[σ_z, [σ_z, ρ]],   (3.48)

where

H_nh = (1/2)[Ω + Γ(t)/2]σ_x   (3.49)

is a non-Hermitian operator (the Hermitian part of which can be regarded as the Hamiltonian), while

Γ(t) = ∫_0^t dt1 (ν(t1) − iη(t1))sin(Ωt1),   (3.50)

D(t) = ∫_0^t dt1 ν(t1)cos(Ωt1).   (3.51)

The environment thus shifts the free Hamiltonian for the system (via Re[Γ]) and introduces irreversible terms (via Im[Γ] and D). Note that if Ω = 0 only the final term in Eq. (3.48) survives.
To proceed further we need an explicit form of the spectral density function. The simplest case is known as Ohmic dissipation, in which the variation with frequency is linear at low frequencies. We take

J(ω) = (2αω/π) Λ²/(Λ² + ω²),   (3.52)

where Λ is a cut-off frequency, as required in order to account for the physically necessary fall-off of the coupling at sufficiently high frequencies, and α is a dimensionless parameter characterizing the strength of the coupling between the spin and the environment. After splitting Γ(t) into real and imaginary parts as Γ(t) = f(t) − iγ(t), we can easily do the integral to find the decay term γ(t). It is given by

γ(t) = γ_∞ [1 − (cos(Ωt) + (Λ/Ω)sin(Ωt))e^{−Λt}].   (3.53)
This begins at zero and approaches (at a rate determined by the high-frequency cut-off) the constant value γ_∞ = αΩΛ²/(Ω² + Λ²). The other terms depend on the temperature of the environment and are not easy to evaluate analytically. The diffusion constant can be shown to approach the asymptotic value

D_∞ = α (Λ²Ω/(Λ² + Ω²)) coth(Ω/(2k_B T)).   (3.54)
The function f(t) also approaches (algebraically, not exponentially) a limiting value, which at high temperatures is typically much smaller than D_∞. In the limit that Ω → 0, we find

D_∞ → 2αk_B T,   (3.55)

and, as mentioned previously, γ(t) is zero in this limit. In this case the master equation takes the following simple form in the long-time limit:

ρ̇ = −(αk_B T/2)[σ_z, [σ_z, ρ]].   (3.56)

This describes dephasing of the spin in the x–y plane at rate D_∞/2.
An analogous treatment applies to the quantum Brownian motion of a particle of mass M in a harmonic potential of frequency ν, coupled via its position q to a bath of harmonic oscillators. The same procedure yields a master equation of the form

ρ̇ = −i[p²/(2M) + M Ω̃²(t)q²/2, ρ] − iγ(t)[q, {p, ρ}] − D(t)[q, [q, ρ]] − f(t)[q, [p, ρ]].   (3.58)

Here Ω̃(t) is a shifted frequency and γ(t) is a momentum-damping rate. D(t) gives rise to diffusion in momentum and f(t) to so-called anomalous diffusion.
If we again assume the Ohmic spectral density function (3.52) then we can evaluate these time-dependent coefficients. The coefficients all start at zero, and tend asymptotically to constants, with the same properties as in the spin–boson case. The shifted frequency Ω̃ tends asymptotically to (ν² − 2αΛ)^{1/2}, which is unphysical for α too large. In the high-temperature limit, k_B T ≫ Λ ≫ ν, one finds

D_∞ = Mα (Λ²ν/(Λ² + ν²)) coth(ν/(2k_B T)) ≈ 2αk_B T M,   (3.59)
Exercise 3.16 Show that the momentum diffusion at rate D_∞ adds kinetic energy to the particle at rate 2αk_B T. Show that for ν = 0 the steady-state kinetic energy of the particle is k_B T/2, as expected from thermodynamics.
3.5 Fermionic reservoirs
So far we have considered only bosonic baths. The annihilation and creation operators for fermions, by contrast, obey the anticommutation relations

{a_k, a_l†} = δ_kl,   (3.61)

{a_k, a_l} = 0.   (3.62)
The study of such systems is the concern of the rapidly developing field of mesoscopic
electronics [Dat95, Imr97]. Unfortunately, perturbative master equations might not be
appropriate in many situations when charged fermions are involved, since such systems are
strongly interacting. However, there are some experiments for which a perturbative master
equation is a good approximation. We now consider one of these special cases to illustrate
some of the essential differences between bosonic and fermionic environments.
The concept of a mesoscopic electronic system emerged in the 1980s as experiments on
small, almost defect-free, conductors and semiconductors revealed unexpected departures
from classical current–voltage characteristics at low temperatures. The earliest of these
results indicated quantized conductance. The classical description of conductance makes
reference to random scattering of carriers due to inelastic collisions. However, in mesoscopic
electronic systems, the mean free path for inelastic scattering may be longer than the length
of the device. Such systems are dominated by ballistic behaviour in which conduction
is due to the transport of single carriers, propagating in empty electron states above a
filled Fermi sea, with only elastic scattering from confining potentials and interactions
with magnetic fields. As Landauer [Lan88, Lan92] and Büttiker [But88] first made clear, conductance in such devices is determined not by inelastic scattering, but by the quantum-mechanical transmission probability, T, across device inhomogeneities. If a single ballistic channel supports a single transverse Fermi mode (which comprises two modes when spin is included), the transmission probability is T ≈ 1. The resulting conductance of that channel is the reciprocal of the quantum of resistance. This is given by the Landauer–Büttiker theory as [Dat95]
R_Q = h/(2e²) ≈ 12.9 kΩ.   (3.63)
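The value in Eq. (3.63) is a one-line arithmetic check using the exact SI values of the Planck constant and the elementary charge:

```python
h = 6.62607015e-34      # Planck constant, J s (exact SI 2019 value)
e = 1.602176634e-19     # elementary charge, C (exact SI 2019 value)

R_Q = h / (2 * e**2)    # factor of 2 counts the two spin channels per mode
print(R_Q)              # -> about 12906.4 (ohms), i.e. roughly 12.9 kilo-ohms
```

This is half the von Klitzing constant h/e² ≈ 25.8 kΩ, the factor of two arising from spin degeneracy.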
Fig. 3.1 A schematic representation of a quantum dot in the conduction band. Position runs from left
to right and energy runs vertically. The quasibound state in the dot is labelled c. The grey regions
labelled L and R represent metallic electronic states filled up to the local Fermi level. The difference
in the Fermi levels between left and right is determined by the source–drain bias voltage as eV_SD.
We assume that the reservoirs remain in the normal (Ohmic) conducting state. The total
system is not in thermal equilibrium due to the bias voltage VSD across the dot. However,
the two reservoirs are held very close to thermal equilibrium at temperature T , but at
different chemical potentials through contact to an external circuit via an Ohmic contact.
We refer to the fermionic reservoir with the higher chemical potential as the source (also
called the emitter) and the one with the lower chemical potential as the drain (also called
the collector). The difference in chemical potentials is given by μ_S − μ_D = eV_SD. In this
circumstance, charge may flow through the dot, and an external current will flow. The
necessity to define a chemical potential is the first major difference between fermionic
systems and the bosonic environments of quantum optics.
A perturbative master-equation approach to this problem is valid only if the resistance of
the tunnel junction, R, is large compared with the quantum of resistance RQ . The physical
meaning of this condition is as follows. If for simplicity we denote the bias voltage of the
junction as V, then the average current through the junction is V/R, so the tunnelling rate is Γ = V/(eR). Thus the typical time between tunnelling events is Γ⁻¹ = eR/V. Now, if the lifetime of the quasibound state is τ, then, by virtue of the time–energy uncertainty relation discussed in Section 3.3.1, there is an uncertainty in the energy level of order ħ/τ. If the external potential is to control the tunnelling then this energy uncertainty must remain less than eV. Thus the lifetime must be at least of order ħ/(eV). If we demand that the lifetime be much less than the time between tunnelling events, so that the events do not overlap in time, we thus require ħ/(eV) ≪ eR/V. This gives the above relation between R and the quantum of resistance.
The total Hamiltonian of a system composed of the two Fermi reservoirs, connected by two tunnel barriers to a single Fermi bound state, is (with ħ = 1)

H_QD+leads = Σ_k ε_k^S a_k†a_k + ε_c c†c + Σ_p ε_p^D b_p†b_p
  + Σ_k (T_k^S c†a_k + T_k^S* a_k†c) + Σ_p (T_p^D b_p†c + T_p^D* c†b_p).   (3.64)
Here a_k (a_k†), c (c†) and b_p (b_p†) are the fermion annihilation (creation) operators of electrons in the source (S) reservoir, in the central quantum dot and in the drain (D) reservoir, respectively. Because of the fermion anticommutation relations, the dot is described by just two states.
Exercise 3.17 Show from Eqs. (3.61) and (3.62) that the eigenvalues of the fermion number operator a_l†a_l are 0 and 1, and that, if the corresponding eigenstates are |0⟩ and |1⟩, then a_l†|0⟩ = |1⟩.
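Exercise 3.17 can be verified with 2 × 2 matrices, since a single fermionic mode acts on a two-dimensional Fock space. A minimal sketch (the matrix representation below is a standard choice, not taken from the text):

```python
import numpy as np

# Single fermionic mode in the basis (|0>, |1>): a|1> = |0>, a|0> = 0.
a = np.array([[0, 1], [0, 0]], dtype=complex)
ad = a.conj().T

print(a @ ad + ad @ a)              # identity: {a, a^dag} = 1   (Eq. (3.61))
print(a @ a)                        # zero matrix: a^2 = 0        (Eq. (3.62))
print(np.linalg.eigvalsh(ad @ a))   # [0. 1.]: number-operator eigenvalues
print(ad @ np.array([1.0, 0.0]))    # a^dag|0> = |1>
```

The anticommutation relations force the number operator to be a projector, which is why the dot supports only the two states |0⟩ and |1⟩.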
The first three terms in Eq. (3.64) comprise H_0. The energy of the bound state without bias is ε_0, which under bias becomes ε_c = ε_0 − ηeV, where η is a structure-dependent coefficient. The single-particle energies in the source and drain are, respectively, ε_k^S = k²/(2m) and ε_p^D = p²/(2m) − eV. The energy reference is at the bottom of the conduction
band of the source reservoir. Here, and below, we are assuming spin-polarized electrons so
that we do not have to sum over the spin degree of freedom.
The fourth and fifth terms in the Hamiltonian describe the coupling between the quasibound
electrons in the dot and the electrons in the reservoirs. The tunnelling coefficients T_k^S
and T_p^D depend upon the profile of the potential barrier between the dot and the reservoirs,
and upon the bias voltage. We will assume that at all times the two reservoirs remain in
their equilibrium states despite the tunnelling of electrons. This is a defining characteristic
of a reservoir, and comes from assuming that the dynamics of the reservoirs are much faster
than those of the quasibound quantum state in the dot.
In the interaction frame the Hamiltonian may be written as

V(t) = Σ_{j=1}^{2} [c Γ_j†(t) e^{iω_c t} + c† Γ_j(t) e^{−iω_c t}],   (3.65)

where the reservoir operators are

Γ_1(t) = Σ_k T_k^S a_k e^{−iω_k^S t},   Γ_2(t) = Σ_p T_p^D b_p e^{−iω_p^D t}.   (3.66)
We now obtain an equation of motion for the state matrix of the bound state in the dot
by following the standard method in Section 3.2. The only non-zero reservoir correlation
functions we need to compute are

I_jN(t) = ∫_0^t dt_1 ⟨Γ_j†(t)Γ_j(t_1)⟩ e^{−iω_c(t−t_1)},   (3.67)

I_jA(t) = ∫_0^t dt_1 ⟨Γ_j(t)Γ_j†(t_1)⟩ e^{iω_c(t−t_1)}.   (3.68)

Here N and A stand for normal (annihilation operators after creation operators) and
antinormal (vice versa) ordering of operators; see Section A.5. In order to illustrate the
important differences between the fermionic case and the bosonic case discussed previously,
we will now explicitly evaluate the first of these correlation functions, I_1N(t).
Using the definition of the reservoir operators and the assumed thermal Fermi distribution
of the electrons in the source, we find

I_1N(t) = Σ_k n_k^S |T_k^S|² ∫_0^t dt_1 exp[i(ω_k^S − ω_c)(t − t_1)].   (3.69)
Since the reservoir is a large system, we can introduce a density of states ρ(ω) as usual and
replace the sum over k by an integral to obtain

I_1N(t) = ∫_0^∞ dω ρ(ω) n^S(ω) |T^S(ω)|² ∫_0^t dτ e^{i(ω−ω_c)τ},   (3.70)
where we have also changed the variable of time integration. The dominant term in the
frequency integration will come from frequencies near ω_c because the time integration is
significant at that point. For fermionic reservoirs, the expression for the thermal occupation
number is [Dat95]
n^S(ω) = [1 + e^{(ω−μ)/k_B T}]^{−1},   (3.71)

where μ is the Fermi energy (recall that ℏ = 1). We assume that the bias is such that the
quasibound state of the dot is below the Fermi level in the source. This implies that near
ω = ω_c, and at low temperatures, the average occupation of the reservoir state is very close
to unity [Dat95].
Now we make the Markov approximation to derive an autonomous master equation as
in Section 3.2. On extending the limits of integration from t to ∞ in Eq. (3.70) as
explained before, I_1N may be approximated by the constant
I_1N(t) ≈ πρ(ω_c)|T^S(ω_c)|² ≡ Γ_L/2.   (3.72)
This defines the effective rate Γ_L of injection of electrons from the source (the left reservoir
in Fig. 3.1) into the quasibound state of the dot. This rate will have a complicated dependence
on the bias voltage through both ω_c and the coupling coefficients |T^S(ω)|, which can be
determined by a self-consistent band calculation. We do not address this issue; we simply
seek the noise properties as a function of the rate constants.
By evaluating all the other correlation functions under similar assumptions, we find that
the quantum master equation for the state matrix representing the dot state in the interaction
frame is given by
dρ/dt = (Γ_L/2)(2c†ρc − cc†ρ − ρcc†) + (Γ_R/2)(2cρc† − c†cρ − ρc†c),   (3.73)

where Γ_L and Γ_R are constants determining the rate of injection of electrons from the source
into the dot and from the dot into the drain, respectively.
From this master equation it is easy to derive the following equation for the mean
occupation of the dot, ⟨n⟩ = Tr[c†c ρ]:

d⟨n⟩/dt = Γ_L(1 − ⟨n⟩) − Γ_R⟨n⟩.   (3.74)
Exercise 3.18 Show this, and show that the steady-state occupancy of the dot is n_ss =
Γ_L/(Γ_L + Γ_R).
The effect of Fermi statistics is evident in Eq. (3.74). If there is an electron on the dot,
⟨n⟩ = 1, and the occupation of the dot can decrease only by emission of an electron into
the drain at rate Γ_R.
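The approach to this steady state is easy to check numerically; a minimal sketch (the rates Γ_L = 2 and Γ_R = 3, in arbitrary units, are illustrative choices):

```python
# Euler integration of d<n>/dt = G_L(1 - <n>) - G_R <n>, the mean-occupation
# equation implied by the master equation above; <n> should relax to
# n_ss = G_L/(G_L + G_R). Rates are in arbitrary units.
G_L, G_R = 2.0, 3.0
n, dt = 0.0, 1e-4
for _ in range(200_000):            # integrate to t = 20 >> 1/(G_L + G_R)
    n += dt * (G_L * (1 - n) - G_R * n)

n_ss = G_L / (G_L + G_R)
print(n, n_ss)                      # both ~0.4
```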
It is at this point that we need to make contact with measurable quantities. In the case
of electron transport, the measurable quantities reduce to current I (t) and voltage V (t).
The measurement results are a time series of currents and voltages, which exhibit both systematic and stochastic components. Thus I (t) and V (t) are classical conditional stochastic
processes, driven by the underlying quantum dynamics of the quasibound state on the dot.
The reservoirs in the Ohmic contacts play a key role in defining the measured quantities
and ensuring that they are ultimately classical stochastic processes. Transport through the
dot results in charge fluctuations in either the left or the right channel. These fluctuations
decay extremely rapidly, ensuring that the channels remain in thermal equilibrium with the
respective Ohmic contacts. For this to be possible, charge must be able to flow into and out
of the reservoirs from an external circuit.
If a single electron tunnels out of the dot into the drain between time t and t + dt,
its energy is momentarily above the Fermi energy. This electron scatters very strongly
from the electrons in the drain and propagates into the right Ohmic contact, where it is
perfectly absorbed. The nett effect is a small current pulse in the external circuit of total
charge e_L = eC_R/(C_L + C_R). Here C_{L/R} is the capacitance between the dot and the L/R
reservoir, and we have ignored any parasitic capacitance between source and drain. This
is completely analogous to perfect photodetection: a photon emitted from a cavity will be
detected with certainty by a detector that is a perfect absorber. Likewise, when an electron in
the left channel tunnels onto the dot, there is a rapid relaxation of this unfilled state back to
thermal equilibrium as an electron is emitted from the left Ohmic contact into the depleted
state of the source. This again results in a current pulse carrying charge e_R = e − e_L in the
circuit connected to the Ohmic contacts.
The energy gained when one electron is emitted from the left reservoir is, by definition,
the chemical potential of that reservoir, μ_L, while the energy lost when one electron is
absorbed into the right reservoir is μ_R. The nett energy transferred between reservoirs is
μ_L − μ_R. This energy is supplied by the external voltage, V, and thus μ_L − μ_R = eV. On
average, in the steady state, the same current flows in the source and drain:

J_ss ≡ e_L Γ_L(1 − n_ss) + e_R Γ_R n_ss   (3.75)
     = eΓ_L(1 − n_ss) = eΓ_R n_ss   (3.76)
     = e Γ_LΓ_R/(Γ_L + Γ_R).   (3.77)

Equivalently, the steady-state current is the electron charge divided by the sum of the mean
times for injection and emission:

J_ss = e(Γ_L⁻¹ + Γ_R⁻¹)⁻¹.   (3.78)
Typical values for the tunnelling rates achievable in these devices are indicated by results
from an experiment by Yacoby et al. [YHMS95] in which single-electron transmission
through a quantum dot was measured. The quantum dot was defined by surface gates on
a GaAs/AlGaAs two-dimensional electron gas. The quantum dot was 0.4 μm wide and
0.5 μm long and had an electron temperature of 100 mK. They measured a tunnelling rate
of order 0.3 GHz.
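For rates of this order, the steady-state current J_ss = e(1/Γ_L + 1/Γ_R)⁻¹ is in the tens of picoamps. A sketch (the equal-rate choice Γ_L = Γ_R = 0.3 GHz is an illustrative assumption, not a figure from the experiment):

```python
# Steady-state current J_ss = e (1/G_L + 1/G_R)^(-1) for tunnelling rates
# of the order measured by Yacoby et al. The equal-rate choice is illustrative.
e = 1.6022e-19           # C
G_L = G_R = 0.3e9        # s^-1
J_ss = e / (1 / G_L + 1 / G_R)
print(f"J_ss = {J_ss * 1e12:.1f} pA")   # ~24 pA
```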
The family {N_t : t ≥ 0} forms a semigroup rather than a group because there is not necessarily any
inverse. That is, N_t is not necessarily defined for t < 0.
These conditions formally capture the idea of Markovian dynamics of a quantum
system. (Note that there is no implication that all open-system dynamics must be
Markovian.) From these conditions it can be shown that there exists a superoperator L
such that
dρ(t)/dt = Lρ(t),   (3.79)
where L is called the generator of the map N_t. That is,

ρ(t) = N_t ρ(0) = e^{Lt}ρ(0).   (3.80)

It can be shown that the most general form of the generator is

Lρ = −i[H, ρ] + Σ_{k=1}^{K} D[L_k]ρ,   (3.81)

for H Hermitian and {L_k} arbitrary operators. Here D is the superoperator defined earlier
in Eq. (3.29). For mathematical rigour [Lin76], it is also required that Σ_{k=1}^{K} L_k†L_k be a
bounded operator, but that is often not satisfied by the operators we use, so this requirement
is usually ignored. This form is known as the Lindblad form, and the operators {L_k}
are called Lindblad operators. The superoperator L is sometimes called the Liouvillian
superoperator, by analogy with the operator which generates the evolution of a classical
probability distribution on phase space, and the term Lindbladian is also used.
Each term in the sum in Eq. (3.81) can be regarded as an irreversible channel. It is
important to note, however, that the decomposition of the generator into the Lindblad form
is not unique. We can reduce the ambiguity by requiring that the operators 1, L_1, L_2, ..., L_K
be linearly independent. We are still left with the possibility of redefining the Lindblad
operators by an arbitrary K × K unitary matrix T_kl:

L_k → Σ_{l=1}^{K} T_kl L_l.   (3.82)

We can also add arbitrary constants χ_k to the Lindblad operators, provided that the
Hamiltonian is changed correspondingly:

L_k → L_k + χ_k,   H → H − (i/2) Σ_{k=1}^{K} (χ_k* L_k − χ_k L_k†).   (3.83)
Exercise 3.20 Verify the invariance of the master equation under (3.82) and (3.83).
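The invariance under the unitary mixing (3.82) is easily confirmed numerically; the sketch below checks it for K = 2 on a qubit (all operators here are randomly generated, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def lindbladian(H, Ls, rho):
    # L rho = -i[H, rho] + sum_k D[L_k] rho
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        out += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return out

def rand_c(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

H = rand_c(2); H = H + H.conj().T            # Hermitian Hamiltonian
Ls = [rand_c(2), rand_c(2)]                  # two Lindblad operators
rho = rand_c(2); rho = rho @ rho.conj().T; rho /= np.trace(rho)  # random state

# Unitary 2x2 matrix T mixing the Lindblad operators as in Eq. (3.82)
T, _ = np.linalg.qr(rand_c(2))
Ls2 = [T[k, 0] * Ls[0] + T[k, 1] * Ls[1] for k in range(2)]

print(np.allclose(lindbladian(H, Ls, rho), lindbladian(H, Ls2, rho)))  # True
```

The agreement follows from the unitarity of T: the cross terms in Σ_k D[L_k'] recombine via Σ_k T_kl T_km* = δ_lm.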
In the case of a single irreversible channel, it is relatively simple to evaluate the completely
positive map Nt = exp(Lt) formally as
N_t = Σ_{m=0}^{∞} N_t^{(m)},   (3.84)

where

N_t^{(m)} = ∫_0^t dt_m ∫_0^{t_m} dt_{m−1} ··· ∫_0^{t_2} dt_1 S(t − t_m)X S(t_m − t_{m−1})X ··· X S(t_1),   (3.85)

with the jump and smooth-evolution superoperators defined by

X = J[L],   (3.86)
S(t) = exp[(L − X)t].   (3.87)
the example of Brownian motion (Section 3.4.2) showed, this is not always the case. It
turns out that the time-dependent Brownian-motion master equation (3.58) does preserve
positivity. It is only when making the approximations leading to the time-independent, but
non-Lindblad, equation (3.60) that one loses positivity. Care must be taken in using master
equations such as this, which are not of the Lindblad form, because there are necessarily
initial states yielding time-evolved states that are non-positive (i.e. are not quantum states at
all). Thus autonomous non-Lindblad master equations must be regarded as approximations,
but, on the other hand, the fact that one has derived a Lindblad-form master equation does
not mean that one has an exact solution. The approximations leading to the high-temperature
spin-boson master equation (3.56) may be no more valid than those leading to the high-temperature
Brownian-motion master equation (3.60), for example. Whether or not a given
open system is well approximated by Markovian dynamics can be determined only by a
detailed study of the physics.
|Ψ⟩ = Σ_{x=0}^{1} s_x |x⟩|y := x⟩,   (3.88)
where |x⟩ and |y⟩ denote the system and apparatus in the measurement basis. A measurement
of the apparatus in this basis will yield Y = x with probability |s_x|², that is, with exactly
the probability that a direct projective measurement of a physical quantity of the form
C = Σ_x c(x)|x⟩_S⟨x| on the system would have given. On the other hand, as discussed in
Section 1.2.6, one could make a measurement of the apparatus in some other basis. For
example, measurement in a complementary basis |p⟩_A yields no information about the
system preparation at all.
In general one could read out the apparatus in the arbitrary orthonormal basis

|0'⟩ = α|0⟩ + β|1⟩,   (3.89)
|1'⟩ = β*|0⟩ − α*|1⟩,   (3.90)

where |α|² + |β|² = 1. The state after the interaction between the system and the apparatus
can now equally well be written as

|Ψ⟩ = d_0 |θ_0⟩_S|0'⟩_A + d_1 |θ_1⟩_S|1'⟩_A,   (3.91)

where |θ_0⟩_S and |θ_1⟩_S are normalized system states.
|y⟩_A|z⟩_E → |y⟩_A|z := y⟩_E,   (3.92)

where here |z⟩ denotes an environment state. This is identical in form to the original
system-apparatus interaction. However, the crucial point is that now the total state is
|Ψ⟩ = Σ_{x=0}^{1} s_x |x⟩|y := x⟩|z := x⟩.   (3.93)
If we consider using a different basis {|0'⟩, |1'⟩} for the apparatus, we find that it is not
possible to write the total state in the form of Eq. (3.93). That is,

|Ψ⟩ ≠ Σ_{x=0}^{1} d_x |θ_x⟩_S |x'⟩_A |ε_x⟩_E   (3.94)

for any coefficients d_x and states for the system and environment.
Exercise 3.23 Show that this is true except for the special case in which |s0 | = |s1 |.
Note that einselection does not solve the quantum measurement problem in that it does
not explain how just one of the elements of the superposition in Eq. (3.93) appears to become
real, with probability |sx |2 , while the others disappear. The solutions to that problem are
outside the scope of this book. What the approach of Zurek and co-workers achieves is to
explain why, for macroscopic objects like pointers, some states are preferred over others in
that they are (relatively) unaffected by decoherence. Moreover, they have argued plausibly
that these states have classical-like properties, such as being localized in phase space. These
states are not necessarily orthogonal states, as in the example above, but they are practically
orthogonal if they correspond to distinct measurement outcomes [ZHP93].
ρ̇ = −(Γ/λ_T²)[x̂, [x̂, ρ]].   (3.95)

Here we have used Γ for γ, and λ_T is the thermal de Broglie wavelength, (2Mk_B T)^{−1/2}. It
is called this because the thermal equilibrium state matrix for a free particle, in the position
basis
ρ(x, x') = ⟨x|ρ|x'⟩,   (3.96)

has the form ρ(x, x') ∝ exp[−(x − x')²/(4λ_T²)]. That is, the characteristic coherence length
of the quantum waves representing the particle (first introduced by de Broglie) is λ_T. In
this position basis the above master equation is easy to solve:

ρ(x, x'; t) = exp[−Γt(x − x')²/λ_T²] ρ(x, x'; 0).   (3.97)
Exercise 3.24 Show this. Note that this does not give the thermal equilibrium distribution
in the long-time limit because the dissipation and free-evolution terms have been omitted.
Let the initial state for the pointer be a superposition of two states, macroscopically different in position, corresponding to two different pointer readings. Let 2s be the separation
of the states, and σ their width. For s ≫ σ, the initial state matrix can be well approximated
by

ρ(x, x'; 0) = (1/2)[ψ_+(x) + ψ_−(x)][ψ_+(x') + ψ_−(x')],   (3.98)

where

ψ_±(x) = (2πσ²)^{−1/4} exp[−(x ∓ s)²/(4σ²)].   (3.99)
That is, ρ(x, x'; 0) is a sum of four equally weighted bivariate Gaussians, centred in (x, x')-space
at (s, s), (−s, −s), (s, −s) and (−s, s). But the effect of the decoherence (3.95)
on these four peaks is markedly different. The off-diagonal ones will decay rapidly, on a
time-scale

τ_dec = Γ⁻¹ (λ_T/(2s))².   (3.100)
For s ≫ λ_T, as will be the case in practice, this decoherence time is much smaller than the
dissipation time,

τ_diss = Γ⁻¹.   (3.101)
The latter will also correspond to the time-scale on which the on-diagonal peaks in (x, x')
change shape under Eq. (3.97), provided that σ ≳ λ_T. This seems a reasonable assumption,
since one would wish to prepare a well-localized apparatus (small σ), but if σ ≪ λ_T then
it would have a kinetic energy much greater than the thermal energy k_B T and so would
dissipate energy at rate Γ anyway.
The above analysis shows that, under reasonable approximations, the coherences (the
off-diagonal terms) in the state matrix decay much more rapidly than the on-diagonal terms
change. Thus the superposition is transformed on a time-scale t, such that τ_dec ≪ t ≪ τ_diss,
into a mixture of pointer states:

ρ(x, x'; t) ≈ (1/2)[ψ_−(x)ψ_−(x') + ψ_+(x)ψ_+(x')].   (3.102)
Moreover, for macroscopic systems this time-scale is very short. For example, if s = 1 mm,
T = 300 K, M = 1 g and Γ = 0.01 s⁻¹, one finds (upon restoring ℏ where necessary)
τ_dec ∼ 10⁻³⁷ s, an extraordinarily short time. On such short time-scales, it could well be
argued that the Brownian-motion master equation is not valid, and that a different treatment
should be used (see for example Ref. [SHB03]). Nevertheless, this result can be taken as
indicative of the fact that there is an enormous separation of time-scales between that on
which the pointer is reduced to a mixture of classical states and the time-scale on which
those classical states evolve.
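This estimate is reproduced by the following sketch, using τ_dec = Γ⁻¹(λ_T/2s)² with ℏ restored in the thermal de Broglie wavelength λ_T = ℏ(2Mk_BT)^{−1/2}:

```python
import math

# Order-of-magnitude estimate of the pointer decoherence time for the
# parameters quoted above (SI units throughout).
hbar, kB = 1.0546e-34, 1.3807e-23
s, T, M, Gamma = 1e-3, 300.0, 1e-3, 0.01    # m, K, kg, s^-1

lam_T = hbar / math.sqrt(2 * M * kB * T)    # thermal de Broglie wavelength
tau_dec = (1 / Gamma) * (lam_T / (2 * s))**2
print(f"lambda_T = {lam_T:.2e} m, tau_dec = {tau_dec:.1e} s")  # ~1e-37 s or below
```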
3.8 Preferred ensembles
In the preceding section we argued that the interaction of a macroscopic apparatus with
its environment preserves classical states and destroys superpositions of them. From the
simple model of apparatus-environment entanglement in Eq. (3.92), and from the solution
to the (cut-down) Brownian-motion master equation (3.97), it is seen that the state matrix
becomes diagonal in this pointer basis. Moreover, from Eq. (3.92), the environment carries
the information about which pointer state the system is in. Any additional evolution of
the apparatus (such as that necessary for it to measure the system of interest) could cause
transitions between pointer states, but again this information would also be carried in the
environment so that at all times an observer could know where the apparatus is pointing,
so to speak.
It would be tempting to conclude from the above examples that all one need do to find
out the pointer basis for a given apparatus is to find the basis which diagonalizes its state
once it has reached equilibrium with its environment. However, this is not the case, for two
reasons. The first reason is that the states forming the diagonal basis are not necessarily
states that are relatively unaffected by the decoherence process. Rather, as mentioned above,
the latter states will in general be non-orthogonal. In that case the preferred representation
of the equilibrium state matrix

ρ_ss = Σ_k ℘_k π_k   (3.103)
should have a characteristic decay time that is as long as possible (and, for macroscopic
systems, one hopes that this is much longer than that of a randomly chosen ensemble).
In the remainder of this section we are concerned with elucidating when an ignorance
interpretation of an ensemble representing ss is possible. As well as being important in
understanding the role of decoherence, it is also relevant to quantum control, as will be
discussed in Chapter 6. Let us restrict the discussion to Lindbladians having a unique
stationary state defined by
Lρ_ss = 0.   (3.105)
(3.105)
Also, let us consider only stationary ensembles for ss . Clearly, once the system has reached
steady state such a stationary ensemble will represent the system for all times t. Then, as
claimed above, it can be proven that for some ensembles (and, in particular, often for
the orthogonal ensemble) there is no way for an experimenter continually to measure the
environment so as to find out which state the system is in. We say that such ensembles are
not physically realizable (PR). However, there are other stationary ensembles that are PR.
ρ = Tr_E[|Ψ⟩⟨Ψ|].   (3.106)

This purification always exists, as discussed in Section A.2.2. Then, for any ensemble
{(π_k, ℘_k)}_k that represents ρ, it is possible to measure the environment such that the system
state is collapsed into one of the pure states π_k with probability ℘_k. This is sometimes
known as the Schrödinger-HJW theorem.
Quantum steering gives rigorous meaning to the ignorance interpretation of any particular
ensemble. It says that there will be a way to perform a measurement on the environment,
without disturbing the system state on average, to obtain exactly the information as to which
state the system is really in. Of course, the fact that one can do this for any ensemble means
that no ensemble can be fundamentally preferred over any other one, as a representation of
at some particular time t. To say that an ensemble is PR, however, requires justifying the
ignorance interpretation at all times (after the system has reached steady state). We now
establish the conditions for an ensemble to be PR.
Schrödinger introduced this as an evocative term for the Einstein-Podolsky-Rosen effect [EPR35] involving entangled states.
For a completely general formulation of steering in quantum information terms, see Refs. [WJD07, JWD07].
If w_jk(τ) exists then it is the probability that the measurement at time t + τ yields the state
π_j.
Equation (3.107) is a necessary but not sufficient criterion for the ensemble {(π_j, ℘_j) : j}
to be PR. We also require that the weights be stationary. That is, for all j and all τ,

℘_j = Σ_k ℘_k w_jk(τ).   (3.108)

Multiplying both sides of Eq. (3.107) by ℘_k, and summing over k, then using Eq. (3.108)
and Eq. (3.103) gives e^{Lτ}ρ_ss = ρ_ss, as required from the definition of ρ_ss.
One can analyse these conditions further to obtain simple criteria that can be applied
in many cases of interest [WV01]. In particular, we will return to them in Chapter 6.
For the moment, it is sufficient to prove that there are some ensembles that are PR and
some that are not. This is what was called in Ref. [WV01] the preferred-ensemble fact
(the preferred ensembles are those that are physically realizable). Moreover, for some
systems the orthogonal ensemble is PR and for others it is not. The models we consider are
chosen for their simplicity (they are two-level systems), and are not realistic models for the
decoherence of a macroscopic apparatus.
3.8.3 Examples
First we consider an example in which the orthogonal ensemble is PR: the high-temperature
spin-boson model. In suitably scaled time units, the Lindbladian in Eq. (3.56) is L = D[σ̂_z].
In this example, there is no unique stationary state, but all stationary states are of the form

ρ_ss = ℘_−|z := −1⟩⟨z := −1| + ℘_+|z := 1⟩⟨z := 1|.   (3.109)
Next consider a damped, driven two-level atom, with master equation

ρ̇ = −i(Ω/2)[σ̂_x, ρ] + γD[σ̂_−]ρ.   (3.110)

The solution for the Bloch vector is

x(t) = u e^{−γt/2},   (3.111)
y(t) = c_+ e^{λ_+ t} + c_− e^{λ_− t} + y_ss,   (3.112)
z(t) = c_+ [(γ − 4iΩ')/(4Ω)] e^{λ_+ t} + c_− [(γ + 4iΩ')/(4Ω)] e^{λ_− t} + z_ss,   (3.113)

with

λ_± = −3γ/4 ± iΩ',   (3.114)
c_± = (v − y_ss)/2 ± [γ(v − y_ss) − 4Ω(w − z_ss)]/(8iΩ'),   (3.115)
where u, v and w are used to represent the initial conditions of x, y and z. A modified Rabi
frequency has been introduced,
Ω' = √(Ω² − (γ/4)²),   (3.116)

which is real for Ω > γ/4 and imaginary for Ω < γ/4. The steady-state solutions are
x_ss = 0, y_ss = 2γΩ/(γ² + 2Ω²) and z_ss = −γ²/(γ² + 2Ω²), as shown in Exercise 3.10.
Exercise 3.26 Derive the above solution, using standard techniques for linear differential
equations.
In the Bloch representation, the diagonal states of ρ_ss are found by extending the stationary
Bloch vector forwards and backwards to where it intersects the surface of the Bloch sphere.
That is, the two pure diagonal states have the Bloch vectors

(u, v, w)ᵀ = ±(4Ω² + γ²)^{−1/2} (0, 2Ω, −γ)ᵀ.   (3.117)
Fig. 3.2 Dynamics of two states on the Bloch sphere according to the master equation (3.110), for
Ω = 10γ, with points every 0.1γ⁻¹. The initial states are those that diagonalize the stationary state
matrix, which are close to y = ±1. The stationary state is the dot close to the centre of the Bloch
sphere.
One finds that, for t > 0,

(x(t), y(t), z(t))ᵀ ≠ w_+(t)(u, v, w)ᵀ_+ + w_−(t)(u, v, w)ᵀ_−   (3.118)

for any weights w_+(t), w_−(t), where (u, v, w)ᵀ_± denote the two Bloch vectors of Eq. (3.117).
That is, the diagonal states evolve into states that are not mixtures of the original diagonal
states, so it is not possible for an observer to know at all times that the system is in a
diagonal state. The orthogonal ensemble is not PR. This is illustrated in Fig. 3.2.
There are, however, non-orthogonal ensembles that are PR for this system. Moreover,
there is a PR ensemble with just two members, like the orthogonal ensemble. This
is the ensemble {(π_+, 1/2), (π_−, 1/2)}, where this time the two states have the Bloch
vectors

(u, v, w)ᵀ = (±√(1 − y_ss² − z_ss²), y_ss, z_ss)ᵀ.   (3.119)
Each of these states evolves under the master equation as

(x(t), y(t), z(t))ᵀ = (u e^{−γt/2}, y_ss, z_ss)ᵀ.   (3.120)
Obviously this can be written as a positively weighted sum of the two initial Bloch vectors,
and averaging over the two initial states will give a sum that remains equal to the stationary
Bloch vector. That is, the two conditions (3.107) and (3.108) are satisfied, and this ensemble
is PR.
Exercise 3.27 Prove the above by explicitly constructing the necessary weights w_+(t) and
w_−(t).
These results are most easily appreciated in the Ω ≫ γ limit. Then the stationary solution
is an almost maximally mixed state, displaced slightly from the centre of the Bloch sphere
along the y axis. The diagonal states then are close to σ̂_y eigenstates, while the states in the
PR ensemble are close to σ̂_x eigenstates. In this limit the master-equation evolution (3.110)
is dominated by the Hamiltonian term, which causes the Bloch vector to rotate around the
x axis. Thus, the σ̂_y eigenstates are rapidly rotated away from their original positions, so
this ensemble is neither robust nor PR, but the σ̂_x eigenstates are not rotated at all, and simply
decay at rate γ/2 towards the steady state, along the line joining them. Thus this ensemble
is PR. Moreover, it can be shown [WB00] that this is the most robust ensemble according
to the fidelity measure Eq. (3.104), with a characteristic decay time (half-life) of 2 ln 2/γ.
These features are shown in Fig. 3.3.
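The contrast between the two ensembles can be checked by direct integration. The sketch below assumes the Bloch equations ẋ = −(γ/2)x, ẏ = −(γ/2)y − Ωz, ż = Ωy − γ(z + 1) (the Bloch form of Eq. (3.110), consistent with the steady-state values quoted above) and verifies that a member of the PR ensemble of Eq. (3.119) keeps y(t) = y_ss and z(t) = z_ss, while a diagonal-ensemble member is rotated away:

```python
import numpy as np

# Bloch equations assumed for Eq. (3.110):
#   dx/dt = -(g/2) x,  dy/dt = -(g/2) y - W z,  dz/dt = W y - g (z + 1),
# with W = Omega = 10, g = gamma = 1 (cf. Figs. 3.2 and 3.3).
W, g = 10.0, 1.0
yss = 2 * g * W / (g**2 + 2 * W**2)
zss = -g**2 / (g**2 + 2 * W**2)

def evolve(r, t, dt=1e-4):
    r = np.array(r, dtype=float)
    for _ in range(int(t / dt)):       # simple Euler integration
        x, y, z = r
        r = r + dt * np.array([-g / 2 * x, -g / 2 * y - W * z, W * y - g * (z + 1)])
    return r

# Member of the PR ensemble (3.119): on the sphere, y and z as in steady state.
r_pr = evolve([np.sqrt(1 - yss**2 - zss**2), yss, zss], t=1.0)
print(abs(r_pr[1] - yss) < 1e-9, abs(r_pr[2] - zss) < 1e-9)  # stays on the line

# A diagonal-ensemble member (close to a sigma_y eigenstate) leaves its line.
r0 = np.array([0.0, 2 * W, -g]) / np.sqrt(4 * W**2 + g**2)
r_diag = evolve(r0, t=0.1)
print(abs(r_diag[1] - r0[1]))          # y has rotated far from its initial value
```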
The existence of a PR ensemble in this second case (where the simple picture of a
diagonal pointer basis fails) is not happenstance. For any master equation there are in fact
infinitely many PR ensembles. Some of these will be robust, and thus could be considered
pointer bases, and some will not. A full understanding of how PR ensembles arise will
be reached in Chapter 4, where we consider the conditional dynamics of a continuously
observed open system.
ρ̇ = γD[a]ρ.   (3.121)
Fig. 3.3 Dynamics of two states on the Bloch sphere according to the master equation (3.110), for
Ω = 10γ, with points every 0.1γ⁻¹. The initial states are the two non-orthogonal states defined
in Eq. (3.119), which are close to x = ±1. The stationary state is the dot close to the centre of the
Bloch sphere.
If we use the solution given in Eq. (3.85) we find that the general solution can be written
as a Kraus sum,
ρ(t) = Σ_{m=0}^{∞} M̂_m(t) ρ(0) M̂_m†(t),   (3.122)

where

M̂_m(t) = [(1 − e^{−γt})^m/m!]^{1/2} e^{−γt a†a/2} a^m.   (3.123)
We can regard this as an expansion of the state matrix in terms of the number of photons
lost from the cavity in time t.
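The Kraus decomposition can be checked numerically in a truncated Fock space; a sketch (the truncation dimension and parameter values are arbitrary choices):

```python
import numpy as np
from math import factorial, sqrt, exp

# Check of the photon-loss Kraus operators M_m(t) of Eqs. (3.122)-(3.123):
# they resolve the identity, and they map a coherent state |alpha> to
# |alpha exp(-gamma t/2)> (up to negligible truncation error).
N, gamma, t = 25, 1.0, 0.7
lam = 1 - exp(-gamma * t)

a = np.diag(np.sqrt(np.arange(1, N)), 1)               # annihilation operator
damp = np.diag(np.exp(-gamma * t * np.arange(N) / 2))  # exp(-gamma t a^dag a / 2)

def M(m):
    return sqrt(lam**m / factorial(m)) * damp @ np.linalg.matrix_power(a, m)

def coherent(alpha):
    v = np.array([alpha**n / sqrt(factorial(n)) for n in range(N)])
    return v * exp(-abs(alpha)**2 / 2)

# Trace preservation: sum_m M_m^dag M_m = 1
S = sum(M(m).conj().T @ M(m) for m in range(N))
print(np.allclose(S, np.eye(N)))                       # True

# A coherent state stays coherent, with decayed amplitude
alpha = 1.2
rho0 = np.outer(coherent(alpha), coherent(alpha).conj())
rho_t = sum(M(m) @ rho0 @ M(m).conj().T for m in range(N))
beta = coherent(alpha * exp(-gamma * t / 2))
print(abs(beta.conj() @ rho_t @ beta - 1) < 1e-6)      # fidelity ~ 1
```

The second check anticipates the hint to the exercise below: coherent states are mapped to coherent states by the photon-loss channel.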
Exercise 3.28 Prove Eq. (3.122) by simplifying Eq. (3.85), using the property that
[a†a, a†] = a†.
Equation (3.122) can be solved most easily in the number-state basis. However, it is
rather difficult to prepare a simple harmonic oscillator in a number eigenstate. We encounter
simple harmonic oscillators regularly in classical physics: springs, pendula, cantilevers and
so on. What type of state describes the kinds of motional states in which such oscillators are
Hint: Consider the effect of M̂_m(t) on a coherent state, using the fact that a|α⟩ = α|α⟩, and
also using the number-state expansion for |α⟩.
Suppose we somehow managed to prepare a cavity field in a superposition of two coherent
states,

|ψ(0)⟩ = N(|α⟩ + |β⟩).   (3.124)

Under the master equation (3.121) the coherence between the two components decays with
the factor

C(α, β, t) = exp{−(1/2)[|α(t)|² + |β(t)|² − 2β*(t)α(t)](e^{γt} − 1)},   (3.125)

where the coherent amplitudes decay as

α(t) = αe^{−γt/2},   β(t) = βe^{−γt/2}.   (3.126)

For short times, γt ≪ 1, this gives

|C(α, β, t)| ≈ exp(−|α − β|²γt/2).   (3.127)
Fig. 3.4 A schematic diagram of the experiment performed by the Haroche group to investigate the
decoherence of oscillator coherent states. The atom is prepared in an appropriate Rydberg state.
The cavities R1 and R2 each apply a π/2 pulse. The interaction with the cavity field in C produces
superpositions of coherent states. The final ionization detectors determine the atomic state of the
atom. Figure 2 adapted with permission from M. Brune et al., Phys. Rev. Lett. 77, 4887, (1996).
Copyrighted by the American Physical Society.
We thus see that at the very beginning the coherence does not simply decay at the same rate
as the amplitudes, but rather at a decay rate that depends quadratically on the difference
between the amplitudes of the initial superposed states. This is qualitatively the same
as was seen for Brownian motion in Section 3.7.2. For macroscopically different states
(|α − β| ≫ 1) the decoherence is very rapid. Once the coherence between the two states
has become very small we can regard the state as a statistical mixture of the two coherent
states with exponentially decaying coherent amplitudes. The quantum character of the
initial superposition is rapidly lost and for all practical purposes we may as well regard the
initial state as a classical statistical mixture of the two pointer states. For this reason it is
very hard to prepare an oscillator in a Schrödinger-cat state. However, the decoherence we
have described has been observed experimentally for |α − β| ∼ 1.
thus interact very strongly with the cavity field even though it is well detuned from the
cavity resonance. The effect of the detuned interaction is to change the phase of the field
in the cavity. However, the sign of the phase shift is opposite for each of the atomic states.
Using second-order perturbation theory, an effective Hamiltonian for this interaction can
be derived:

Ĥ_C = χ a†a σ̂_z,   (3.128)

where σ̂_z = |e⟩⟨e| − |g⟩⟨g|, and χ = |Ω|²/(2δ), where Ω is the single-photon Rabi frequency
and δ = ω_a − ω_c is the atom-cavity detuning. Thus decreasing the detuning
increases χ (which is desirable), but the detuning cannot be decreased too much or the
description in terms of this effective interaction Hamiltonian becomes invalid.
Assume to begin that the cavity fields R1 and R2 in Fig. 3.4 above are resonant with
the atomic transition. Say the cavity C is initially prepared in a weakly coherent state |α⟩
(in the experiment |α| = 3.1) and the atom in the state (|g⟩ + |e⟩)/√2, using a π/2 pulse
in cavity R1. Then in time τ the atom-cavity system will evolve under the Hamiltonian
(3.128) to

|Ψ(τ)⟩ = (1/√2)(|g⟩|αe^{iΦ}⟩ + |e⟩|αe^{−iΦ}⟩),   (3.129)

where Φ = χτ.

Exercise 3.31 Verify this.
The state in Eq. (3.129) is an entangled state between a two-level system and an oscillator.
Tracing over the atom yields a field state that is an equal mixture of two coherent states
separated in phase by 2Φ.
To obtain a state that correlates the atomic energy levels with coherent superpositions of
coherent states, the atom is subjected to another π/2 pulse in cavity R2. This creates the
final state

|Ψ⟩_out = (1/2)[|g⟩(|αe^{iΦ}⟩ + |αe^{−iΦ}⟩) + |e⟩(|αe^{iΦ}⟩ − |αe^{−iΦ}⟩)].   (3.130)
If one now determines that the atom is in the state |g⟩ at the final ionization detectors, the
conditional state of the field is

|ψ_g⟩_out = N_+(|αe^{iΦ}⟩ + |αe^{−iΦ}⟩),   (3.131)

where N_+ is a normalization constant. Likewise, if the atom is detected in the excited state,

|ψ_e⟩_out = N_−(|αe^{iΦ}⟩ − |αe^{−iΦ}⟩).   (3.132)
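The probabilities of the two outcomes, and the normalization constants N_±, follow from the coherent-state overlap ⟨β|α⟩ = exp(−|α|²/2 − |β|²/2 + β*α). A sketch, with |α| = 3.1 as quoted above and an illustrative conditional phase Φ = 0.2 rad (not a value from the experiment):

```python
import numpy as np

# Outcome probabilities implied by the conditional states above.
alpha, Phi = 3.1, 0.2

# k = <alpha e^{-i Phi} | alpha e^{+i Phi}> from the coherent-state overlap
k = np.exp(-abs(alpha)**2 * (1 - np.exp(2j * Phi)))

p_g = (1 + k.real) / 2      # probability of detecting |g>
p_e = (1 - k.real) / 2      # probability of detecting |e>
print(p_g, p_e, p_g + p_e)  # probabilities sum to 1

# Normalization constants of Eqs. (3.131) and (3.132)
N_plus = 1 / np.sqrt(2 * (1 + k.real))
N_minus = 1 / np.sqrt(2 * (1 - k.real))
```

As a consistency check, for Φ → 0 the two coherent components coincide, k → 1, and p_g → 1: the atom always exits in |g⟩ when no conditional phase shift is applied.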
Fig. 3.5 A plot of the two-atom correlation versus the delay time between successive atoms for two
different values of the conditional phase shift. Figure 5(b) adapted with permission from M. Brune et
al., Phys. Rev. Lett. 77, 4887, (1996). Copyrighted by the American Physical Society.
decohered field state. It is impossible to measure directly the state of a microwave cavity
field at the quantum level because of the low energy of microwave photons compared
with optical photons. Instead the Haroche team used a second atom as a probe for the
field state. They then measured the state of the second atom, obtaining the conditional
probabilities p(e|g) and p(e|e) (where the conditioning label refers to the result of the first
atom measurement). Since the respective conditional field states after the first atom are
different, these probabilities should be different. The extent of the difference is given by

η = p(e|e) − p(e|g).   (3.133)
From the result (3.125), after a time τ the two conditional states will have decohered to

ρ_out^{g/e}(τ) ∝ |α'e^{iΦ}⟩⟨α'e^{iΦ}| + |α'e^{−iΦ}⟩⟨α'e^{−iΦ}|
                ± [C(τ)|α'e^{iΦ}⟩⟨α'e^{−iΦ}| + C*(τ)|α'e^{−iΦ}⟩⟨α'e^{iΦ}|],   (3.134)

where α' < α due to the decay in the coherent amplitude and |C(τ)| < 1 due to the decay
in the coherences as before. In the limit C(τ) → 0, these two states are indistinguishable
and so η → 0. Thus, by repeating a sequence of double-atom experiments, the relevant
conditional probabilities may be sampled and a value of η as a function of the delay time τ
can be determined. (In the experiment, an extra averaging was performed to determine
η, involving detuning the cavities R1 and R2 from atomic resonance ω_0 by a varying
amount ν.)
In Fig. 3.5 we reproduce the results of the experimental determination of η for two
different values of the conditional phase shift, Φ, as a function of the delay time in units
H = ε(a + a†) + χ a†a σ̂_z,   (3.135)

where ε is the strength of the resonant driving force and χ is the strength of the coupling
between the oscillator and the two-level system. The irreversible dynamics of the apparatus
is modelled using the weak-damping, zero-temperature master equation of Eq. (3.121),
giving the master equation
ρ̇ = −iε[a + a†, ρ] − iχ[a†a σ̂_z, ρ] + γD[a]ρ.   (3.136)
There are numerous physical problems that could be described by this model. It could
represent a two-level electric dipole system interacting with an electromagnetic cavity field
that is far detuned, as can occur in cavity QED (see the preceding Section 3.9.2) and
circuit QED (see the following Section 3.10.2). Another realization comes from the rapidly
developing field of quantum electromechanical systems, as we now discuss.
Current progress in the fabrication of nano-electromechanical systems (NEMSs) will
soon yield mechanical oscillators with resonance frequencies close to 1 GHz, and quality
factors Q above 10⁵ [SR05]. (The quality factor is defined as the ratio of the resonance
frequency ω0 to the damping rate γ.) At that scale, a NEMS oscillator becomes a quantum
electromechanical system (QEMS). One way to define the quantum limit is for the thermal
excitation energy to be less than the energy gap between adjacent oscillator energy eigenstates: ħω0 > kB T. This inequality would be satisfied by a factor of two or so with a device
having resonance frequency ω0 = 2π × 1 GHz and temperature of T0 = 20 mK.
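This claim is easy to verify numerically. The following sketch (not from the text; it uses only standard CODATA constants and the parameter values quoted above) evaluates the ratio of the level spacing to the thermal energy:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K

omega0 = 2 * math.pi * 1e9   # resonance frequency: 1 GHz (angular)
T0 = 20e-3                   # temperature: 20 mK

# Quantum limit: level spacing hbar*omega0 must exceed thermal energy kB*T0.
ratio = hbar * omega0 / (kB * T0)
print(ratio)   # about 2.4, i.e. the inequality holds by a factor of two or so
```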
In this realization, the two-level system or qubit could be a solid-state double-well
structure with a single electron tunnelling between the wells (quantum dots). We will
model this as an approximate two-state system. It is possible to couple the quantum-electromechanical oscillator to the charge state of the double dot via an external voltage
gate. A possible device is shown in Fig. 3.6. The two wells are at different distances from the
voltage gate and this distance is modulated as the oscillator moves. The electrostatic energy
Fig. 3.6 A possible scheme for coupling a single-electron double-dot system to a nano-mechanical
resonator. The double dot is idealized as a double-well potential for a single electron.
of the system depends on which well is occupied by the electron and on the square of the
oscillator displacement. This leads to a shift in the frequency of the oscillator that depends
on the location of the electron [CR98]. Currently such nano-mechanical electrometers are
strongly dominated by thermal fluctuations and the irreversible dynamics are not well
described by the decay term in Eq. (3.136). However, if quality factors and resonance
frequencies continue to increase, these devices should enter a domain of operation where
this description is acceptable.
At any time the state of the system plus apparatus may be written as
ρ(t) = ρ_00 ⊗ |0⟩⟨0| + ρ_11 ⊗ |1⟩⟨1| + ρ_10 ⊗ |1⟩⟨0| + ρ_10† ⊗ |0⟩⟨1|,  (3.137)

where ρ_ij is an operator that acts only in the oscillator Hilbert space. If we substitute this
into Eq. (3.136), we find the following equations:

ρ̇_00 = −iε[a + a†, ρ_00] + iχ[a†a, ρ_00] + γD[a]ρ_00,  (3.138)

ρ̇_11 = −iε[a + a†, ρ_11] − iχ[a†a, ρ_11] + γD[a]ρ_11,  (3.139)

ρ̇_10 = −iε[a + a†, ρ_10] − iχ{a†a, ρ_10} + γD[a]ρ_10,  (3.140)

where {A, B} = AB + BA as usual. On solving these equations for the initial condition of
an arbitrary qubit state c_0|0⟩ + c_1|1⟩ and the oscillator in the ground state, we find that the
combined state of the system plus apparatus is

ρ(t) = |c_0|²|α_−(t)⟩⟨α_−(t)| ⊗ |0⟩⟨0| + |c_1|²|α_+(t)⟩⟨α_+(t)| ⊗ |1⟩⟨1|
  + [c_0*c_1 C(t)|α_+(t)⟩⟨α_−(t)| ⊗ |1⟩⟨0| + H.c.],  (3.141)
where

α_±(t) = −iε(γ/2 ± iχ)⁻¹[1 − e^{−(γ/2 ± iχ)t}].  (3.142)
The coherence factor C(t) has a complicated time dependence, but tends to zero as t → ∞.
Thus the two orthogonal states of the measured qubit become classically correlated with
different coherent states of the apparatus. The latter are the pointer basis states of the
apparatus, and may be approximately orthogonal. Even if they are not orthogonal, it can be
seen that the qubit state becomes diagonal in the eigenbasis of σ_z.
For short times C(t) decays as an exponential of a quadratic function of time. Such a
quadratic dependence is typical for coherence decay in a measurement model that relies
upon an initial build up of correlations between the measured system and the pointer
degree of freedom. For long times (γt ≫ 1), the coherence decays exponentially in time:
C(t) ∼ e^{−Γt}. The rate of decoherence is

Γ = 2γχ²ε²/(γ²/4 + χ²)².  (3.143)
This qubit decoherence rate can be understood as follows. The long-time solution of
Eq. (3.141) is
ρ_∞ = |c_0|²|α_−⟩⟨α_−| ⊗ |0⟩⟨0| + |c_1|²|α_+⟩⟨α_+| ⊗ |1⟩⟨1|,  (3.144)

with α_± = −iε(γ/2 ± iχ)⁻¹.
Exercise 3.32 Verify by direct substitution that this is a steady-state solution of the master
equation (3.136).
The square separation S = |α_+ − α_−|² between the two possible oscillator amplitudes in
the steady state is given by

S = 4χ²ε²/(γ²/4 + χ²)².  (3.145)
Thus the long-time decoherence rate is Γ = Sγ/2. This is essentially the rate at which
information about which oscillator state is occupied (and hence which qubit state is occupied) is leaking into the oscillator's environment through the damping at rate γ. If S ≫ 1,
then the decoherence rate is much faster than the rate at which the oscillator is damped.
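These relations are simple enough to check numerically. The sketch below (with arbitrary illustrative values for ε, χ and γ, not taken from any experiment) verifies that the separation formula of Eq. (3.145) follows from the steady-state amplitudes α_± = −iε(γ/2 ± iχ)⁻¹, and that the rate of Eq. (3.143) equals Sγ/2:

```python
eps, chi, gamma = 1.0, 0.5, 0.1   # driving, coupling and damping (illustrative)

# Steady-state oscillator amplitudes correlated with the two qubit states
alpha_p = -1j * eps / (gamma / 2 + 1j * chi)
alpha_m = -1j * eps / (gamma / 2 - 1j * chi)

S = abs(alpha_p - alpha_m) ** 2                                     # separation
S_formula = 4 * chi**2 * eps**2 / (gamma**2 / 4 + chi**2) ** 2      # Eq. (3.145)
Gamma = 2 * gamma * chi**2 * eps**2 / (gamma**2 / 4 + chi**2) ** 2  # Eq. (3.143)

assert abs(S - S_formula) < 1e-12
assert abs(Gamma - S * gamma / 2) < 1e-12   # long-time rate Gamma = S*gamma/2
```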
Fig. 3.7 A Cooper-pair box system. A superconducting metallic island is connected to a Cooper pair
reservoir by a split tunnel junction, threaded by a magnetic flux Φ_x. A DC bias gate with voltage Vg
can make it energetically favourable for one or more Cooper-pairs to tunnel onto the island.
by a tunnel barrier from a reservoir of Cooper pairs. A Cooper pair (CP) is a pair of electrons
bound together due to complex interactions with the lattice of the superconducting material
[Coo56, BCS57]. Although electrons are fermions, a pair of electrons acts like a boson,
and so can be described similarly to photons, using number states |N⟩ for N ∈ ℕ.
A schematic representation of a CPB is shown in Fig. 3.7. The box consists of a small
superconducting metallic island with oxide barrier tunnel junctions insulating it from the
Cooper-pair reservoir. As the voltage Vg on the bias gate is changed, one or more Cooper
pairs may tunnel onto the island. The tunnelling rate is determined by the Josephson energy
EJ of the junction. This can be changed by adjusting the magnetic flux Φ_x threading the
loop: a so-called split-junction CPB.
In the experiment of Schuster et al. [SWB+ 05] the CPB was placed inside a superconducting co-planar microwave LC-resonator. The resonator supports a quantized mode of
the electromagnetic field, while the CPB acts like an atomic system. Thus the term circuit
QED (as opposed to the cavity QED of Section 3.9.2) is used for these systems. The
coupling between the CPB and the microwave field is given by
H = ħω_r a†a − (E_J/2) Σ_N (|N⟩⟨N + 1| + |N + 1⟩⟨N|) + 4E_C Σ_N (N − n̂_g)²|N⟩⟨N|.  (3.146)
In the first term, ω_r = 1/√(LC) is the frequency and a the annihilation operator for the
microwave resonator field (note that in this section we are not setting ħ = 1). The second
term is the Josephson tunnelling term, with Josephson frequency E_J/ħ. The third term is
the coupling between the field and the CPB, in which E_C = e²/(2C_Σ) and n̂_g = C_g V̂_g/(2e).
Here C_Σ is the capacitance between the island and the rest of the circuit, C_g is the capacitance
between the CPB island and the bias gate for the island, and V̂_g is the operator for the total
voltage applied to the island by the bias gate. This voltage can be split as V̂_g = V_g(0) + v̂,
where Vg(0) is a DC field and v is the microwave field in the cavity, which is quantized. It is
related to the cavity annihilation operator by
v̂ = √(ħω_r/(2C)) (a + a†).  (3.147)
We can thus write
n̂_g = n_g(0) + δn̂_g,  (3.148)

where

δn̂_g = (C_g/2e)v̂.  (3.149)

Restricting the CPB to the two charge states N = 0, 1, the system is characterized by the
frequencies

ω_r = 1/√(LC),  (3.150)

ω_a = E_J/ħ,  (3.151)

and, in the interaction frame, by the Hamiltonian

V = ħΔ a†a + ħg(a†σ_− + aσ_+),  (3.152)
with Δ = ω_r − ω_a the detuning between the circuit frequency and the CPB tunnelling
frequency. It is assumed to be small compared with ω_a. We can, however, still consider
a detuning that is large compared with g. Treating the second term in Eq. (3.152) as a
perturbation on the first term, it is possible to show using second-order perturbation theory
that Eq. (3.152) may be approximated by the effective Hamiltonian

V_eff = ħΔ a†a + ħχ a†a σ_z,  (3.153)

where χ = g²/Δ. Moving frames again to the cavity resonance, and including a resonant
microwave driving field ε, gives a Hamiltonian with the same form as Eq. (3.135).
Schuster et al. [SWB+05] recently implemented this system experimentally and measured the measurement-induced qubit dephasing rate given in Eq. (3.143). In their experiment, Δ/(2π) = 100 MHz, g/(2π) = 5.8 MHz and the cavity decay rate was γ/(2π) =
0.8 MHz. Schoelkopf's team used a second probe microwave field tuned to the CPB resonance to induce coherence in the qubit basis. The measurement-induced decoherence time
then appears as a broadening of the spectrum representing the response of the qubit to
the probe. This spectrum is related to the norm squared of the Fourier transform of the
coherence function in time. The results are found to be in good agreement with the theory
presented here for small ε. Although the decay of coherence is exponential for long times
with rate (3.143), for short times the decoherence is quadratic in time. This is manifested
experimentally in the line shape of the probe absorption spectrum: the line shape deviates
from the usual Lorentzian shape (corresponding to exponential decay) in its wings.
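From these quoted numbers the dispersive coupling strength χ = g²/Δ of Eq. (3.153) can be evaluated directly (a quick sketch; the inputs are the experimental values quoted above, the rest is arithmetic):

```python
import math

g = 2 * math.pi * 5.8e6       # CPB-resonator coupling, g/(2 pi) = 5.8 MHz
Delta = 2 * math.pi * 100e6   # detuning, Delta/(2 pi) = 100 MHz

chi = g ** 2 / Delta          # dispersive shift chi = g^2 / Delta, Eq. (3.153)
chi_MHz = chi / (2 * math.pi * 1e6)
print(chi_MHz)                # about 0.34 MHz, comparable to the 0.8 MHz cavity linewidth
```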
V_IF(t) = i√γ[b†(z := −t)c − c†b(z := −t)],  (3.154)

where

b(z) ≡ γ^{−1/2} Σ_k g_k b_k e^{+iΔ_k z},  (3.155)

with Δ_k the detuning of the bath mode k from the system, γ the dissipation rate, and b(z)
defined simply to make V_IF(t) simple in form. The second reason is that in some quantum-optical situations, such as the damping of a cavity mode at a single mirror, it is possible
to consider the electromagnetic field modes which constitute the bath as being functions
of one spatial direction only. On defining the speed of light in vacuo to be unity, and the
origin as the location of the mirror, we have that, at time t = 0, b(z) relates to the bath at
position |z| away from the mirror. For z < 0 it represents a property of the incoming field,
and for z > 0 it represents a property of the outgoing field. That is, b(z := −t) represents
the field that will interact (for t > 0) or has interacted (for t < 0) with the system at time
t. An explanation of this may be found in many textbooks, such as that of Gardiner and
Zoller [GZ04].
Now, in the limit of a continuum bath as considered in Section 3.3.1, we find that
[b(z), b†(z′)] = γ⁻¹Γ(z − z′).
(3.156)
In order to derive a Markovian master equation, it was necessary to assume that Γ(τ) was
sharply peaked at τ = 0. Ignoring the Lamb shift in Eq. (3.30), and taking the Markovian
limit, we obtain

[b(z), b†(z′)] = δ(z − z′).
(3.157)
Physically, this result cannot be exact because the bath modes all have positive frequency.
Also, one must be careful using this result because of the singularity of the δ-function.
Nevertheless, it is the result that must be used to obtain a strict correspondence with a
Markovian master equation.
Before moving to the Heisenberg picture, it is useful to define the unitary operator for
an infinitesimal evolution generated by Eq. (3.154):

U(t + dt, t) = exp[√γ(c dB†_{z:=−t} − c† dB_{z:=−t})].  (3.158)

Here we have defined a new infinitesimal operator,

dB_z ≡ b(z)dt.  (3.159)

This obeys

[dB_z, dB†_z] = dt,  (3.160)
where we have used the heuristic equation δ(0)dt = 1. This can be understood by thinking
of dt as the smallest unit into which time can be divided. Then the discrete approximation
to a δ-function is a function which is zero everywhere except for an interval of size dt
around zero, where it equals δ(0) = 1/dt (so that its area is unity). Because dB_z is of order
√dt, it is necessary to expand U(t + dt, t) to second rather than first order in its argument.
That is,
U(t + dt, t) = 1 + √γ(c dB†_{z:=−t} − c† dB_{z:=−t}) − (γ/2)c†c dt − (γ/2){c†, c}dB†_{z:=−t}dB_{z:=−t}
  + (γ/2)c²(dB†_{z:=−t})² + (γ/2)(c†)²(dB_{z:=−t})².  (3.161)
The Heisenberg equation of motion generated by Eq. (3.154) is

ds(t)/dt = √γ[c†b(z := −t) − b†(z := −t)c, s(t)],  (3.162)
(3.163)
(3.164)
(3.165)
(3.166)
Here we are (just for the moment) using the subscript HP to denote that Eq. (3.166) is
obtained by replacing the operators appearing in U (t + dt, t) by their Heisenberg-picture
versions at time t. If we were to expand the exponential in U_HP(t + dt, t) to first order in its
argument, we would simply reproduce Eq. (3.162). As motivated above, this will not work,
and instead we must use the second-order expansion as in Eq. (3.161). First we define
dB_in(t) ≡ dB_{z:=−t}(t),  (3.167)
and b_in(t) similarly. These are known as input field operators. Note that as usual the
t-argument on the right-hand side indicates that here dB_{z:=−t} is in the Heisenberg picture.
Because of the bath commutation relation (3.157), this operator is unaffected by any
evolution prior to time t, since dB_in(t) commutes with dB†_in(t′) for t′ ≠ t. Thus
we could equally well have defined

dB_in(t) ≡ dB_{z:=−t}(t′),  t′ ≤ t.  (3.168)
In particular, if t′ = t0, the initial time for the problem, then dB_in(t) is the same as the
Schrödinger-picture operator dB_{z:=−t} appearing in U(t + dt, t) of Eq. (3.158).
If the bath is initially in the vacuum state, this leads to a significant simplification, as
we will now explain. Ultimately we are interested in calculating the average of system (or
bath) operators. In the Heisenberg picture, such an average is given by
⟨s(t)⟩ = Tr[s(t)(ρ_S ⊗ |0⟩⟨0|)] = Tr_S[⟨0|s(t)|0⟩ρ_S],  (3.169)

where |0⟩ is the vacuum bath state and ρ_S is the initial system state. Since dB_in(t)|0⟩ = 0 for
all t, any expression involving dB_in(t) and dB†_in(t) that is in normal order (see Section A.5)
will contribute nothing to the average. Thus it is permissible to drop all normally ordered
terms in Eq. (3.161) that are of second order in dB_in(t). That is to say, we can drop all
such terms, keeping only

dB_in(t)dB†_in(t) = dt,  (3.170)

dB†_in(t)dB_in(t) = dB_in(t)dB_in(t) = dB†_in(t)dB†_in(t) = 0.  (3.171)
One thus obtains from Eq. (3.165) the following Heisenberg equation of motion in the
interaction frame:

ds = i[H, s]dt + γ(c†sc − sc†c/2 − c†cs/2)dt − √γ[dB†_in(t)c − c†dB_in(t), s].  (3.172)
Here we have dropped the time arguments from all operators except the input bath operators.
We have also included a system Hamiltonian H , as could arise from having a non-zero
VS, or a Lamb-shift term, as discussed in Section 3.3.1. Remember that we are still in
the interaction frame: H here is not the same as the H = H_0 + V for the system plus
environment with which we started the calculation.
We will refer to Eq. (3.172) as a quantum Langevin equation (QLE) for s . The operator
s may be a system operator or it may be a bath operator. Because bin (t) is the bath operator
before it interacts with the system, it is independent of the system operator s (t). Hence for
system operators one can derive

d⟨s⟩/dt = ⟨γ(c†sc − c†cs/2 − sc†c/2) + i[H, s]⟩.  (3.173)
Although the noise terms in (3.172) do not contribute to Eq. (3.173), they are necessary in
order for Eq. (3.172) to be a valid Heisenberg equation of motion. If they are omitted then
the operator algebra of the system will not be preserved.
Exercise 3.34 Show this. For specificity, consider the case c = a, where a is an annihilation operator, and show that, unless these terms are included, [a(t), a†(t)] will not remain
equal to unity.
The master equation. Note that Eq. (3.173) is Markovian, depending only on the average
of system operators at the same time. Therefore, it should be derivable from a Markovian
evolution equation for the system in the Schrodinger picture. That is to say, there should
exist a master equation for the system state matrix such that
⟨s(t)⟩ = Tr[s ρ(t)].  (3.174)
Here, the placement of the time argument indicates the picture (Heisenberg or Schrödinger).
By inspection of Eq. (3.173), the corresponding master equation is

ρ̇ = γD[c]ρ − i[H, ρ].  (3.175)
of the equation dB_in(t)dB†_in(t) = dt, with all other second-order products ignorable and all
first-order terms being zero on average, we have in the general case

dB†_in(t)dB_in(t) = N dt,  (3.176)

dB_in(t)dB_in(t) = M dt,  (3.177)

dB†_in(t)dB†_in(t) = M* dt,  (3.178)

⟨dB_in(t)⟩ = β dt,  (3.179)

while Eq. (3.160) still holds. The parameter N is positive, while M and β are complex,
with M constrained by

|M|² ≤ N(N + 1).  (3.180)
This type of input field is sometimes called a white-noise field, because the bath correlations
are δ-correlated in time. That is, they are flat (like the spectrum of white light) in frequency
space. A thermal bath is well approximated by a white-noise bath with M = 0 and N =
{exp[ħω0/(kB T)] − 1}⁻¹, where ω0 is the frequency of the system's free oscillation. Only
a pure squeezed (or vacuum) bath attains the equality in Eq. (3.180).
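The thermal occupation N entering these correlations is easily evaluated; the following sketch (parameter values purely illustrative, not from the text) does so for a 1-GHz oscillator at 20 mK, for which the bath is close to, but not exactly, a vacuum:

```python
import math

def n_thermal(omega0, T):
    """Planck excitation number N = 1/(exp(hbar*omega0/(kB*T)) - 1)."""
    hbar, kB = 1.054571817e-34, 1.380649e-23
    return 1.0 / math.expm1(hbar * omega0 / (kB * T))

N = n_thermal(2 * math.pi * 1e9, 20e-3)
print(N)   # about 0.1 thermal quanta

# The squeezing bound of Eq. (3.180), with equality for a pure squeezed bath:
M_max = math.sqrt(N * (N + 1))
assert M_max ** 2 <= N * (N + 1) + 1e-15
```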
Using these rules in expanding the unitary operator in Eq. (3.165) gives the following
general QLE for a white-noise bath:
ds = i dt[H, s] + (γ/2)[(N + 1)(2c†sc − sc†c − c†cs) + N(2csc† − scc† − cc†s)
  + M[c†, [c†, s]] + M*[c, [c, s]]]dt − √γ[dB†_in c − c†dB_in, s].  (3.181)

Here we have dropped time arguments but are still (obviously) working in the Heisenberg
picture.
Here we have dropped time arguments but are still (obviously) working in the Heisenberg
picture.
Exercise 3.35 Derive Eq. (3.181).
The corresponding master equation is

ρ̇ = γ(N + 1)D[c]ρ + γN D[c†]ρ + γ(M/2)[c†, [c†, ρ]] + γ(M*/2)[c, [c, ρ]]
  − i[H + i√γ(β*c − βc†), ρ].  (3.182)
Note that the effect of the non-zero mean field (3.179) is simply to add a driving term to the
existing system Hamiltonian H . Although not obviously of the Lindblad form, Eq. (3.182)
can be written in that form, with three irreversible terms, as long as Eq. (3.180) holds.
Exercise 3.36 Show this.
Hint: Define N̄ such that |M|² = N̄(N̄ + 1) and consider three Lindblad operators
proportional to c, c† and [c(N̄ + M + 1) − c†(N̄ + M)].
some of the conceptual issues surrounding decoherence and the quantum measurement
problem, see the recent review by Schlosshauer [Sch04]. For an extensive investigation
of physically realizable ensembles and robustness for various open quantum systems
see Refs. [WV02a, WV02b, ABJW05]. Finally, we note that an improved version of the
Schrödinger-cat decoherence experiment of Section 3.9.2 has been performed, also by the
Haroche group. The new results [DDS+08] allow reconstruction of the whole quantum
state (specifically, its Wigner function; see Section A.5), showing the rapid vanishing of
its nonclassical features under damping.
4
Quantum trajectories
4.1 Introduction
A very general concept of a quantum trajectory would be the path taken by the state of a
quantum system over time. This state could be conditioned upon measurement results, as
we considered in Chapter 1. This is the sort of quantum trajectory we are most interested
in, and it is generally stochastic in nature. In ordinary use, the word trajectory usually
implies a path that is also continuous in time. This idea is not always applicable to quantum systems, but we can maintain its essence by defining a quantum trajectory as the
path taken by the conditional state of a quantum system for which the unconditioned
system state evolves continuously. As explained in Chapter 1, the unconditioned state
is that obtained by averaging over the random measurement results which condition the
system.
With this motivation, we begin in Section 4.2 by deriving the simplest sort of quantum
trajectory, which involves jumps (that is, discontinuous conditioned evolution). In the
process we will reproduce Lindblad's general form for continuous Markovian quantum
evolution as presented in Section 3.6. In Section 4.3 we relate these quantum jumps to
photon-counting measurements on the bath for the model introduced in Section 3.11, and
also derive correlation functions for these measurement records. In Section 4.4 we consider
the addition of a coherent field (the local oscillator) to the output before detection. In
the limit of a strong local oscillator this is called homodyne detection, and is described
by a continuous (diffusive) quantum trajectory. In Section 4.5 we generalize this theory
to describe heterodyne detection and even more general diffusive quantum trajectories. In
Section 4.6 we illustrate the detection schemes discussed by examining the conditioned
evolution of a simple system: a damped, driven two-level atom. In Section 4.7 we show
that there is a complementary description of continuous measurement in the Heisenberg
picture, and that this can also be used to derive correlation functions and other statistics
of the measurement results. In Section 4.8 we show how quantum trajectory theory can
be generalized to deal with imperfect detection, incorporating inefficiency, thermal and
squeezed bath noise, dark noise and finite detector bandwidth. In Section 4.9 we turn from
optical examples to mesoscopic electronics, including a discussion of imperfect detection.
We conclude with further reading in Section 4.10.
|ψ(t + dt)⟩ = (1 − iH dt)|ψ(t)⟩,  (4.1)

and hence to continuous evolution. We now seek to generalize this unitary evolution
by incorporating measurements. To consider the unconditioned state, averaged over the
possible measurement results, we have to represent the system by a state matrix rather than
a state vector. Then continuous evolution of ρ implies

lim_{τ→0} [ρ(t + τ) − ρ(t)]/τ = ρ̇(t) = finite.  (4.3)
In order to obtain a differential equation for ρ(t), we require the measurement time T
to be infinitesimal. In this limit, we say that we are monitoring the system. Then, from
Eq. (1.86), the state matrix at time t + dt, averaging over all possible results, is

ρ(t + dt) = Σ_r J[M_r(dt)]ρ(t).  (4.4)
If ρ(t + dt) is to be infinitesimally different from ρ(t), then a first reasonable guess at how
to generalize Eq. (4.1) would be to consider just one r, say r = 0, and set

M_0(dt) = 1 − (R/2 + iH)dt,  (4.5)

where R and H are Hermitian operators. However, we find that this single measurement
operator does not satisfy the completeness condition (1.78), since, to order dt,

M†_0(dt)M_0(dt) = 1 − R dt ≠ 1.  (4.6)
The above result reflects the fact that a measurement with only one possible result is not
really a measurement at all, and hence the measurement operator (4.5) must be a unitary
operator, as it is with R = 0. If R ≠ 0 then we require at least one other possible result to
enable Σ_r M†_r(dt)M_r(dt) = 1. The simplest suggestion is to consider two results, 0 and 1.
We let M_0(dt) be as above, and define

M_1(dt) = √dt c,  (4.7)
where c is an arbitrary operator obeying

c†c = R,  (4.8)
(4.9)
Then the state matrix after time dt, averaged over both possible results, is

ρ(t + dt) = [1 − (c†c/2 + iH)dt]ρ(t)[1 − (c†c/2 − iH)dt] + dt cρ(t)c†,  (4.10)

which gives the master equation

ρ̇ = −i[H, ρ(t)] + D[c]ρ(t).  (4.11)
Allowing for more than one irreversible term, we obtain exactly the master equation derived
by Lindblad by more formal means [Lin76] (see Section 3.6).
Measurements for which the probability of a detection in the interval [t, t + dt) is

℘_1(dt) = Tr[M_1(dt)ρ(t)M†_1(dt)] = dt Tr[cρ(t)c†]  (4.12)

are made routinely in experimental quantum optics. If c = √γ a then Eq. (4.11) is the
damped-cavity master equation derived in Section 3.3.2, and this theory describes the
system evolution in terms of photodetections of the cavity output. Note that we are ignoring
the time delay between emission from the system and detection by the detector. Loosely,
we can think of the conditioned state here as being the state the system was in at the time
of emission. When it comes to considering feedback control of the system we will see that
any time delay, whether between the emission and detection or between the detection and
feedback action, must be taken into account.
Let us denote the number of photodetections up to time t by N(t), and say for simplicity
that the system state at time t is a pure state |ψ(t)⟩. Then the stochastic increment dN(t)
obeys

dN(t)² = dN(t),  (4.13)

E[dN(t)] = dt⟨ψ(t)|c†c|ψ(t)⟩,  (4.14)

where a classical expectation value is denoted by E and the quantum expectation value
by angle brackets. The first equation here simply says that dN is either zero or one, as it
must be since it is the increment in N in an infinitesimal time. The second equation gives
the mean of dN , which is identical with the probability of detecting a photon. This is an
example of a point process (see Section B.6).
From the measurement operators (4.5) and (4.7) we see that, when dN (t) = 1, the state
vector changes to

|ψ_1(t + dt)⟩ = M_1(dt)|ψ(t)⟩/√℘_1(dt) = c|ψ(t)⟩/√⟨c†c⟩(t),  (4.15)
where the denominator gives the normalization. If there is no detection, dN (t) = 0 and
|ψ_0(t + dt)⟩ = M_0(dt)|ψ(t)⟩/√℘_0(dt) = {1 − dt[iH + c†c/2 − ⟨c†c⟩(t)/2]}|ψ(t)⟩,  (4.16)
where the denominator has been expanded to first order in dt to yield the nonlinear term.
This stochastic evolution can be written explicitly as a nonlinear stochastic Schrödinger
equation (SSE):

d|ψ(t)⟩ = [dN(t)(c/√⟨c†c⟩(t) − 1) + (1 − dN(t))dt(⟨c†c⟩(t)/2 − c†c/2 − iH)]|ψ(t)⟩.  (4.17)
It is called a Schrödinger equation only because it preserves the purity of the state, like
Eq. (4.1). We will call a solution to this equation a quantum trajectory for the system.
We can simplify the stochastic Schrödinger equation by using the rule (see Section B.6)
dN (t)dt = o(dt).
(4.18)
This notation means that the order of dN (t)dt is smaller than that of dt and so the former
is negligible compared with the latter. Then Eq. (4.17) becomes
d|ψ(t)⟩ = [dN(t)(c/√⟨c†c⟩(t) − 1) + dt(⟨c†c⟩(t)/2 − c†c/2 − iH)]|ψ(t)⟩.  (4.19)
Exercise 4.1 Verify that the only difference between the two equations is that the state
vector after a jump is infinitesimally different. Since the total number of jumps in any finite
time is finite, the difference between the two equations is negligible.
From Eq. (4.19) it is simple to reconstruct the master equation using the rules (4.13) and
(4.14). First define a projector
π(t) = |ψ(t)⟩⟨ψ(t)|,  (4.20)
so that, using Eqs. (4.13) and (4.19),

dπ(t) = (d|ψ(t)⟩)⟨ψ(t)| + |ψ(t)⟩(d⟨ψ(t)|) + (d|ψ(t)⟩)(d⟨ψ(t)|)  (4.21)
 = dN(t)G[c]π(t) − dt H[iH + c†c/2]π(t),  (4.22)

where the nonlinear superoperators G and H are defined by

G[r]ρ ≡ rρr†/Tr[rρr†] − ρ,  (4.23)

H[r]ρ ≡ rρ + ρr† − Tr[rρ + ρr†]ρ.  (4.24)
Now define

ρ(t) = E[π(t)],  (4.25)

that is, the state matrix is the expected value or ensemble average of the projector. From
Eq. (B.54), the rule (4.14) generalizes to

E[dN(t)g(π(t))] = dt E[Tr[π(t)c†c]g(π(t))],  (4.26)

for any function g. Using this yields finally

dρ = −i dt[H, ρ] + dt D[c]ρ,  (4.27)

as required.
Exercise 4.2 Verify Eqs. (4.22) and (4.27) following the above steps.
The generalization of the master equation to include many irreversible terms,

ρ̇ = −i[H, ρ] + Σ_μ D[c_μ]ρ,  (4.28)

has the corresponding SSE

d|ψ(t)⟩ = {Σ_μ dN_μ(t)[c_μ/√⟨c†_μc_μ⟩(t) − 1] + dt[Σ_μ(⟨c†_μc_μ⟩(t) − c†_μc_μ)/2 − iH]}|ψ(t)⟩,  (4.29)

with

E[dN_μ(t)] = ⟨ψ(t)|c†_μc_μ|ψ(t)⟩dt,  (4.30)

dN_μ(t)dN_ν(t) = δ_μν dN_μ(t).  (4.31)
Equations of this form have been used extensively since the mid 1990s in order to obtain
numerical solutions of master equations [DZR92, MCD93]. The solution ρ(t) is approximated by the ensemble average E[|ψ(t)⟩⟨ψ(t)|] over a finite number M ≫ 1 of numerical
realizations of the stochastic evolution (4.29).
The advantage of doing this rather than solving the master equation (4.28) is that, if the
system requires a Hilbert space of dimension N in order to be represented accurately, then
in general storing the state matrix ρ requires of order N² real numbers, whereas storing
the state vector |ψ⟩ requires only of order N. For large N, the time taken to compute the
evolution of the state matrix via the master equation scales as N⁴, whereas the time taken
to compute the ensemble of state vectors via the quantum trajectory scales as N²M, or
just N² if parallel processors are available. Even though one requires M ≫ 1, reasonable
results may be obtainable with M ≪ N². For extremely large N it may be impossible even
to store the state matrix on most computers. In this case the quantum trajectory method
may still be useful, if one wishes to calculate only certain system averages, rather than the
entire state matrix, via
E[⟨ψ(t)|A|ψ(t)⟩] = Tr[ρ(t)A].  (4.32)
(4.32)
One area where this technique has been applied to good effect is the quantized motion of
atoms undergoing spontaneous emission [DZR92].
The simplest method of solution for Eq. (4.29) is to replace all differentials d by small
but finite differences δ. That is, in a small interval of time δt, a random number R(t) chosen
uniformly from the unit interval is generated. If

R(t) < ℘_jump = Σ_μ ⟨ψ(t)|c†_μc_μ|ψ(t)⟩δt,  (4.33)

then a jump happens. One of the possible jumps (μ) is chosen randomly using another (or
the same) random number, with the weights ⟨ψ(t)|c†_μc_μ|ψ(t)⟩δt/℘_jump. The appropriate
δN_μ is then set to 1, all others set to zero, and the increment (4.29) calculated.
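As a concrete illustration, the following minimal sketch implements this first-order scheme for a damped cavity with a single jump operator c = a and H = 0 (all parameter choices here, such as the initial Fock state and the number of trajectories, are our own, for illustration only); the ensemble average of ⟨a†a⟩ should track the master-equation prediction n0 e^(−t):

```python
import numpy as np

rng = np.random.default_rng(42)
dim, n0, dt, steps = 6, 3, 1e-3, 1500
a = np.diag(np.sqrt(np.arange(1, dim)), 1)    # annihilation operator
ada = a.conj().T @ a                          # photon-number operator

def trajectory():
    """One quantum trajectory with c = a, following Eqs. (4.29) and (4.33)."""
    psi = np.zeros(dim, complex)
    psi[n0] = 1.0                             # start in Fock state |n0>
    ns = np.empty(steps)
    for k in range(steps):
        n = np.real(psi.conj() @ (ada @ psi))
        ns[k] = n
        if rng.random() < n * dt:             # jump with probability <c†c> dt
            psi = a @ psi
        else:                                 # no-jump (non-unitary) evolution
            psi = psi - 0.5 * dt * (ada @ psi)
        psi /= np.linalg.norm(psi)            # renormalize after either branch
    return ns

nbar = np.mean([trajectory() for _ in range(200)], axis=0)
t = np.arange(steps) * dt
print(np.max(np.abs(nbar - n0 * np.exp(-t))))   # small sampling error
```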
In practice, this is not the most efficient method for simulation. Instead, the following
method is generally used. Say the system starts at time 0. A random number R is generated
as above. Then the unnormalized evolution
(d/dt)|ψ̄(t)⟩ = −[Σ_μ c†_μc_μ/2 + iH]|ψ̄(t)⟩  (4.34)

is solved for a time T such that ⟨ψ̄(T)|ψ̄(T)⟩ = R. This time T will have to be found
iteratively. However, since Eq. (4.34) is an ordinary linear differential equation, it can be
solved efficiently using standard numerical techniques. The decay in the state-vector norm
⟨ψ̄(t)|ψ̄(t)⟩ is because Eq. (4.34) keeps track only of the no-jump evolution, derived from
the repeated action of M 0 (dt). That is, the norm is equal to the probability of this series of
results occurring (see Section 1.4). Thus this method generates T , the time at which the first
jump occurs, with the correct statistics. Which jump (i.e. which μ) occurs at this time can
be determined by the technique described above. The relevant collapse operator c_μ is then
(4.36)
Clearly dB†|0⟩ is a bath state containing one photon. But it is not a normalized one-photon
state; rather, it has a norm of

⟨0|dB dB†|0⟩ = dt.  (4.37)
(4.38)
where ℘_1(dt) is defined in Eq. (4.12). Moreover, from Eq. (4.36) it is apparent that the system
state conditioned on the bath containing a photon is exactly as given in Eq. (4.15). The
probability of finding no photons in the bath is ℘_0(dt) = 1 − ℘_1(dt), and the conditioned
system state is again as given previously in Eq. (4.16).
In the above, we have not specified whether the measurement on the bath is projective
(which would leave the number of photons unchanged) or non-projective. In reality, photon
detection, at least at optical frequencies, is done by absorption, so the field state at the
end of the measurement is the vacuum state. However, it should be emphasized that this
is in no way essential to the theory. The field state at the beginning of the next interval
[t + dt, t + 2 dt) is a vacuum state, but not because we have assumed the photons to have
been absorbed. Rather, it is a vacuum state for a new field operator, which pertains to the
part of the field which has moved in to interact with the system while the previous part
(which has now become the emitted field) moves out to be detected (see Section 3.11).
4.3 Photodetection
The photocurrent from such detection is

I(t) = dN(t)/dt.  (4.39)

Note that dN(t), since its mean depends on the quantum state at time t, is conditioned on
the record dN(s) for s < t. That is, I(t) is what is known as a self-exciting point process
[Haw71]. We write the quantum state at time t (which in general may be mixed) as ρ_I(t).
Here the subscript I emphasizes that it is conditioned on the photocurrent. To consider
mixed states, it is necessary to reformulate the stochastic Schrödinger equation (4.19) as
a stochastic master equation (SME). This simply means replacing the projector π(t) in
Eq. (4.22) by ρ_I(t) to get

dρ_I(t) = dN(t)G[c]ρ_I(t) − dt H[iH + c†c/2]ρ_I(t),  (4.40)

where the jump probability is

E[dN(t)] = dt Tr[c†c ρ_I(t)].  (4.41)
One of the most important statistics of the photocurrent is its two-time correlation function

F⁽²⁾(t, t + τ) = E[I(t)I(t + τ)]  (4.42)
 = E[dN(t)dN(t + τ)]/(dt)².  (4.43)

We will now show how this can be evaluated using Eq. (4.40). We take the state at time t
to be a given ρ(t).
First consider τ finite. Now dN(t) is either zero or one. If it is zero, then the function is
automatically zero. Hence,

F⁽²⁾(t, t + τ)(dt)² = Pr[dN(t) = 1] × E[dN(t + τ)|dN(t) = 1],  (4.44)

where E[A|B] means the expectation value of the variable A given the event B. This is
equal to

{dt Tr[cρ(t)c†]} × {dt Tr[c†c E[ρ_I(t + τ)|dN(t) = 1]]}.  (4.46)
E[ρ_I(t + τ)|dN(t) = 1] = exp(Lτ)cρ(t)c†/Tr[cρ(t)c†].  (4.47)

Here the superoperator e^{Lτ} acts on the product of all operators to its right. Thus, the final
expression for finite τ is

F⁽²⁾(t, t + τ) = Tr[c†c e^{Lτ}cρ(t)c†].  (4.48)
For c an annihilation operator for a cavity mode, this is equal to Glauber's second-order
coherence function, G⁽²⁾(t, t + τ) [Gla63].
If τ = 0, then the expression (4.43) diverges, because dN(t)² = dN(t). Naively,

F⁽²⁾(t, t) = Tr[c†cρ(t)]/dt.  (4.49)
However, since we are effectively discretizing time in bins of size dt, the expression 1/dt
at τ = 0 is properly interpreted as the Dirac δ-function δ(τ) (see the discussion following
Eq. (3.160)). Since this is infinitely larger than any finite term at τ = 0, we can write the
total expression as

F⁽²⁾(t, t + τ) = Tr[c†c e^{Lτ}cρ(t)c†] + Tr[c†cρ(t)]δ(τ).  (4.50)
Often we are interested in the stationary or steady-state statistics of the current, in which
case the time argument t disappears and ρ(t) is replaced by ρ_ss, the (assumed unique)
stationary solution of the master equation:

Lρ_ss = 0.
(4.51)
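To make Eqs. (4.48)–(4.51) concrete, the following sketch computes the stationary correlation function for a resonantly driven two-level atom (c = σ₋, H = (Ω/2)(σ₋ + σ₊), damping rate set to unity) by vectorizing the Liouvillian; the parameter values and implementation details are illustrative assumptions, not from the text. It exhibits antibunching, F⁽²⁾ at zero delay vanishes because a single atom cannot emit two photons at once, and factorization of F⁽²⁾ at large delays:

```python
import numpy as np
from scipy.linalg import expm

Om = 2.0                                  # Rabi frequency, in units of gamma = 1
sm = np.array([[0, 1], [0, 0]], complex)  # c = sigma_-  (basis |g>, |e>)
H = 0.5 * Om * (sm + sm.conj().T)
I2 = np.eye(2)
cdc = sm.conj().T @ sm

# Liouvillian on the row-major vectorization: vec(A rho B) = (A kron B^T) vec(rho)
L = (-1j * (np.kron(H, I2) - np.kron(I2, H.T))
     + np.kron(sm, sm.conj())
     - 0.5 * (np.kron(cdc, I2) + np.kron(I2, cdc.T)))

# Stationary state, Eq. (4.51): the null vector of L, normalized to unit trace
vals, vecs = np.linalg.eig(L)
rho_ss = vecs[:, np.argmin(np.abs(vals))].reshape(2, 2)
rho_ss /= np.trace(rho_ss)

def F2(tau):
    """Tr[c†c exp(L tau) c rho_ss c†], the first term of Eq. (4.50)."""
    seed = (sm @ rho_ss @ sm.conj().T).reshape(-1)
    return np.real(np.trace(cdc @ (expm(L * tau) @ seed).reshape(2, 2)))

n_ss = np.real(np.trace(cdc @ rho_ss))    # steady-state excitation probability
print(F2(0.0), F2(20.0), n_ss ** 2)       # antibunching; factorization at large tau
```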
(4.52)
(4.53)
(4.54)
M_0(dt) = 1 − [iH + c†c/2 + βc† + |β|²/2]dt.  (4.55)

In this case the SSE for the conditioned state evaluates to

d|ψ_I(t)⟩ = {dN(t)[(c + β)/√⟨(c† + β*)(c + β)⟩_I(t) − 1]
  + dt[⟨(c† + β*)(c + β)⟩_I(t)/2 − c†c/2 − βc† − |β|²/2 − iH]}|ψ_I(t)⟩.  (4.56)
The ensemble-average evolution is the master equation

ρ̇ = D[c]ρ − i[H + i(β*c − βc†), ρ],  (4.57)

which is as expected from Eq. (3.182) with N = M = 0. To unravel this master equation for purposes of numerical calculation, one could choose the SSE as for the vacuum
input (4.19), merely changing the Hamiltonian as indicated in the master equation (4.57).
However, this would be a mistake if the trajectories were meant to represent the actual
conditional evolution of the system, which is given by Eq. (4.56).
Consider again the master equation

dρ = −i dt[H, ρ] + dt D[c]ρ.  (4.58)

This is invariant under replacing c by c + γ, provided that we also replace

H → H − (i/2)(γ*c − γc†),  (4.59)
Fig. 4.1 A scheme for simple homodyne detection. A low-reflectivity beam-splitter (LRBS) transmits
almost all of the system output, and adds only a small amount of the local oscillator through reflection.
Nevertheless, the local oscillator is so strong that this reflected field dominates the intensity at the
single photoreceiver. This is a detector that does not resolve single photons but rather produces a
photocurrent proportional to Jhom (t) plus a constant.
where γ is an arbitrary complex number (in particular, it is not related to the system
damping rate, for which we sometimes use γ). Under this transformation, the measurement
operators transform to

M_1(dt) = √dt(c + γ),  (4.60)

M_0(dt) = 1 − dt[iH + (γ*c − γc†)/2 + (c† + γ*)(c + γ)/2].  (4.61)
This shows that the unravelling of the deterministic master-equation evolution into a set of
stochastic quantum trajectories is not unique.
Physically, the above transformation can be achieved by homodyne detection. In the
simplest configuration (see Fig. 4.1), the output field of the cavity is sent through a beamsplitter of transmittance . The transformation of a field operator b entering one port of a
beam-splitter can be taken to be
(4.62)
b b + 1 o,
where o is the operator for the field incident on the other port of the beam-splitter, which is
reflected into the path of the transmitted beam. This other field transforms on transmission
159
that satisfies [ (t), (t )] = (t t ) and can be assumed to act on the vacuum state. For
very close to unity, as is desired here, the transformation (4.62) reduces to
b b + ,
(4.63)
which is called a displacement of the field (see Section A.4). A perfect measurement of the
photon number of the displaced field leads to the above measurement operators.
Exercise 4.4 Convince yourself of this.
Let the coherent field be real, so that the homodyne detection leads to a measurement
of the x quadrature of the system dipole. This can be seen from the rate of photodetections
at the detector:
I (t)].
E[dN(t)/dt] = Tr[( 2 + x + c c)
(4.64)
+
c iH |I (t).
2
2
2
(4.67)
This shows how the master equation (4.58) can be unravelled in a completely different manner from the usual quantum trajectory (4.19). Note the minor difference from the coherently
driven SSE (4.56), which makes the latter simulate a different master equation (4.57).
4.4.2 The continuum limit
The ideal limit of homodyne detection is when the local oscillator amplitude goes to infinity.
In this limit, the rate of photodetections goes to infinity, but the effect of each detection on
the system goes to zero, because the field being detected is almost entirely due to the local
2
Be aware of the following possibility for confusion: for a two-level atom we typically have c = , in which case x = x but
y = y !
160
Quantum trajectories
= Tr 2 + x + c c I (t) + O 3/2 t
I (t) + O 1/2 t.
(4.68)
= 2 + x
The error in (due to the change in the system over the interval) is larger than the
The variance in N will be dominated by the Poissonian number
contribution from c c.
statistics of the coherent local oscillator (see Section A.4.2). Because the number of counts
is very large, these Poissonian statistics will be approximately Gaussian. Specifically, it
can be shown [WM93b] that the statistics of N are consistent with those of a Gaussian
random variable of mean (4.68) and variance
(4.69)
2 = 2 + O 3/2 t.
The error in 2 is necessarily as large as expressed here in order for the statistics of N to
be consistent with Gaussian statistics. Thus, N can be written as
I (t)/ ] + W,
N = 2 t[1 + x
(4.70)
where the accuracy in both terms is only as great as the highest order expression in 1/2 .
Here W is a Wiener increment satisfying E[W ] = 0 and E[(W )2 ] = t (see Appendix
B).
Now, insofar as the system is concerned, the time t is still very small. Expanding
Eq. (4.66) in powers of 1 gives
3
I (t)G[c]
x
I (t)H[c]
H[c]
c c
I (t)
+
O
+
I (t) = N (t)
2
+ t H iH c 12 c c I (t),
(4.71)
where G and H are as defined previously in Eqs. (4.23) and (4.24).
Exercise 4.5 Show this.
Although Eq. (4.66) requires that dN (t) be a point process, it is possible simply to substitute
the expression obtained above for N as a Gaussian random variable into Eq. (4.71). This
is because each jump is infinitesimal, so the effect of many jumps is approximately equal to
161
the effect of one jump scaled by the number of jumps. This can be justified by considering an
expression for the system state given precisely N detections, and then taking the large-N
limit [WM93b]. The simple procedure adopted here gives the correct answer more rapidly.
Keeping only the lowest-order terms in 1/2 and letting t dt yields the SME
J (t) + dW (t)H[c]
J (t),
dJ (t) = i[H , J (t)]dt + dt D[c]
(4.72)
where the subscript J is explained below, under Eq. (4.75). Here dW (t) is an infinitesimal
Wiener increment satisfying
dW (t)2 = dt,
E[dW (t)] = 0.
(4.73)
(4.74)
That is, the jump evolution of Eq. (4.66) has been replaced by diffusive evolution.
Exercise 4.6 Derive Eq. (4.72).
By its derivation, Eq. (4.72) is an Ito stochastic differential equation, which we indicate
by our use of the explicit increment (rather than the time derivative). It is trivial to see
that the ensemble-average evolution reproduces the non-selective master equation by using
Eq. (4.74) to eliminate the noise term. Readers unfamiliar with stochastic calculus, or
unfamiliar with our conventions regarding the Ito and Stratonovich versions, are referred
to Appendix B.
Just as leads to continuous evolution for the state, so does it change the pointprocess photocount into a continuous photocurrent with white noise. Removing the constant
local oscillator contribution gives
N (t) 2 t
J (t) + (t),
= x
t
(4.75)
where (t) = dW (t)/dt. This is why the subscript I has been replaced by J in Eq. (4.72).
Finally, it is worth noting that these equations can all be derived from balanced homodyne
detection, in which the beam-splitter transmittance is one half, rather than close to unity
(see Fig. 4.2). In that case, one photodetector is used for each output beam, and the
signal photocurrent is the difference between the two currents. This configuration has
the advantage of needing smaller local oscillator powers to achieve the same ratio of
system amplitude to local oscillator amplitude, because all of the local oscillator beam is
detected. Also, if the local oscillator has classical intensity fluctuations then these cancel
out when the photocurrent difference is taken; with simple homodyne detection, these
fluctuations are indistinguishable from (and may even swamp) the signal. Thus, in practice,
balanced homodyne detection has many advantages over simple homodyne detection. But,
in theory, the ideal limit is the same for both, which is why we have considered only
simple homodyne detection. An analysis for balanced homodyne detection can be found in
Ref. [WM93b].
162
Quantum trajectories
J hom(t)
SYSTEM
OUTPUT
c
50 :50 BS
STRONG LOCAL
OSCILLATOR
Fig. 4.2 A scheme for balanced homodyne detection. A 50 : 50 beam-splitter equally mixes the system
output field and the local oscillator in both output ports. The local oscillator here can be weaker, but
still dominates the intensity at the two photoreceivers. The difference between the two photocurrents
is proportional to Jhom (t).
(4.76)
If one ignores the normalization of the state vector, then one gets the simpler equation
d| J (t) = dt iH 12 c c + Jhom (t)c | J (t).
(4.77)
This SSE (which is the form Carmichael originally derived [Car93]) very elegantly shows
how the state is conditioned on the measured photocurrent. We have used a bar rather
than a tilde to denote the unnormalized state because its norm alone does not tell us
the probability for a measurement result, unlike in the case of the unnormalized states
introduced in Section 1.4. Nevertheless, the linearity of this equation suggests that it should
be possible to derive it simply using quantum measurement theory, rather than in the
complicated way we derived it above. This is indeed the case, as we will show below. In
fact, a derivation for quantum diffusion equations like Eq. (4.76) was first given by Belavkin
[Bel88] along the same lines as below, but more rigorously.
Consider the infinitesimally entangled bathsystem state introduced in Eq. (4.36):
(4.78)
163
giving
Now because dB|0
= 0, it is possible to replace dB in Eq. (4.78) by dB + dB,
|(t)|0.
B + dB]
|(t + dt) = 1 iH dt 12 c c dt + c[d
(4.79)
This is useful since we wish to consider measuring the x quadrature of the bath after it has
interacted with the system. This measurement is modelled by projecting the field onto the
eigenstates |J , where
[b + b ]|J = J |J .
(4.80)
(4.81)
(4.82)
this integral is not unique. That is, we can remove the factor ost (J ) in the state vector
by changing the integration measure for the measurement result:
(4.84)
(t + dt) = d(J )| J (t + dt) J (t + dt)|,
where
dt |(t)
| J (t + dt) = 1 iH dt 12 c c dt + cJ
(4.85)
and d(J ) = ost (J )dJ . That is, we have a linear differential equation for a non-normalized
state that nevertheless averages to the correct , if J (t) is chosen not according to its actual
distribution, but according to its ostensible distribution, ost (J ). This equation (4.85) is
known as a linear quantum trajectory.
164
Quantum trajectories
Now Eq. (4.85) is the same as Eq. (4.77), where here we see that the homodyne pho All
tocurrent Jhom as we have defined it in Eq. (4.75) is simply a measurement of b + b.
that remains is to show that the above theory correctly predicts the statistics for J . The true
probability distribution for J is
(J )dJ = J (t + dt)| J (t + dt)dJ = J (t + dt)| J (t + dt)d(J ).
(4.86)
That is, the actual probability for the result J equals the ostensible probability multiplied
by the state-matrix norm of | J (t + dt). From Eq. (4.85), this evaluates to
dt c c[dt
(J dt)2 ]}dJ.
(J )dJ = ost (J ){1 + J x
(4.87)
We can clarify the orders of the terms here by defining a new variable S = J dt which is
of order unity. Then
dt + O(dt)]dS,
(4.88)
(S)dS = (2)1/2 exp(S 2 /2)[1 + Sx
or, to the same order in dt,
dt)2 /2]dS.
(S)dS = (2)1/2 exp[(S x
(4.89)
(4.90)
dJ = dt LJ + J dt cJ + J c ,
(4.91)
where we have used (J dt)2 = dt, which is true in the statistical sense under the ostensible
distribution see Section B.2 (the same holds for the actual distribution to leading order).
From this form it is easy to see that the ensemble-average evolution (with J chosen according
to its ostensible distribution) is the master equation = L, because the ostensible mean
of J is zero. The actual distribution of J is again the ostensible distribution multiplied by
the norm of J .
It is not a peculiarity of homodyne detection that we can reformulate a nonlinear equation
for a normalized state in which dW (t) = J (t)dt J (t)dt is white noise as a linear equation
for a non-normalized state in which J (t) has some other (ostensible) statistics. Rather, it
is a completely general aspect of quantum or classical measurement theory. It is useful
primarily in those cases in which the ostensible distribution for the measurement result J (t)
can be chosen so as to yield a particularly simple linear equation. That was the case above,
where we chose J (t) to have the ostensible statistics of white noise. Another convenient
choice might be for the ostensible statistics of J (t) to be Gaussian with a variance of 1/dt
(as in white noise) but a mean of . In that case Eq. (4.91) becomes
(4.92)
dJ = dt LJ + (J )dt cJ + J c J .
165
Exercise 4.10 Show that this does give the correct average , and the correct actual
distribution for J (t).
From this, we see that the nonlinear quantum trajectory (4.72) is just a special case in which
is chosen to be equal to the actual mean of J (t), since then
J (t) = J (t) J (t)J = J (t) Tr[(c + c )J (t)] = dW (t)/dt.
(4.93)
(4.94)
where x = c + c , as usual, and (t) is assumed given (it could be ss ). The autocorrelation
function is defined as
(1)
(t, t + ) = E[Jhom (t + )Jhom (t)].
Fhom
(4.95)
We use a superscript (1), rather than (2), because this function is related to Glaubers
first-order coherence function [Gla63], as will be shown in Section 4.5.1.
From Eq. (4.75) and the fact that (t + ) is independent of the system at the past times
t, this expression can be split into three terms,
(1)
J (t + ) (t)] + E[ (t + ) (t)] + E[x
J (t + )]x(t),
(t, t + ) = E[x
(4.96)
Fhom
where the factorization of the third term is due to the fact that (t) is given. The second
term here is equal to ( ). The first term is non-zero because the conditioned state of the
system at time t + depends on the noise in the photocurrent at time t. That noise enters
by the conditioning equation (4.72), so
(4.97)
The subsequent stochastic evolution of the system will be independent of the noise (t) =
dW (t)/dt and hence may be averaged, giving
L E[{1 + dW (t)H[c]}
J (t)dW (t)/dt] .
J (t + ) (t)] = Tr xe
(4.98)
E[x
Using the Ito rules for dW (t) and expanding the superoperator H yields
L
L
J (t + ) (t)] = Tr xe
c(t)
+ (t)c Tr xe
(t) Tr[x(t)].
E[x
(4.99)
The second term here cancels out the third term in Eq. (4.96), to give the final expression
L
(1)
(t, t + ) = Tr xe
c(t)
+ (t)c + ( ).
(4.100)
Fhom
Experimentally, it is more common to represent the information in the correlation function
by its Fourier transform. At steady state, this is known as the spectrum of the homodyne
166
Quantum trajectories
photocurrent,
S() = lim
= 1+
(1)
d Fhom
(t, t + )ei
(4.101)
L
ss + ss c .
c
d ei Tr xe
(4.102)
The unit contribution is known as the local oscillator shot noise or vacuum noise because
it is present even when there is no light from the system.
(4.103)
Consider a time t, small on a characteristic time-scale of the system, but large compared
with 1 so that there are many cycles due to the detuning. One might think that averaging
the rotating exponentials over this time would eliminate the terms in which they appear.
However, this is not the case because these terms are stochastic, and, since the noise is
white by assumption, it will vary even faster than the rotation at frequency . Define two
new Gaussian random variables
t+t
2 cos(s)dW (s),
(4.104)
Wx (t) =
t
t+t
Wy (t) =
t
2 sin(s)dW (s).
(4.105)
167
(4.106)
where q and q stand for x or y, and is the Heaviside function, which is zero when its
argument is negative and one when its argument is positive.
On the systems time-scale t is infinitesimal. Thus the Wq (t) can be replaced by
infinitesimal Wiener increments dWq (t) obeying
q (t)q (t ) = q,q (t t ),
(4.107)
where q (t) = dWq (t)/dt. Taking the average over many detuning cycles therefore transforms Eq. (4.103) into
J (t)dt
dJ (t) = i[H , J (t)]dt + D[c]
+ dWy (t)H[ic]
J (t).
+ 1/2 dWx (t)H[c]
(4.108)
Exercise 4.11 Verify Eq. (4.106) and hence convince yourself of Eqs. (4.107) and (4.108).
Hint: Show that the right-hand side of Eq. (4.106) is zero when |t t | > t, and that
integrating over t or t yields (t)2 .
This is equivalent to homodyne detection of the two quadratures simultaneously, each
with efficiency 1/2. (Non-unit efficiency will be discussed in Section 4.8.1). On defining a
normalized complex Wiener process by
dZ = (dWx + i dWy )/ 2,
(4.109)
which satisfies dZ dZ = dt but dZ 2 = 0, we can write Eq. (4.108) more elegantly as
J (t)dt + H[dZ (t)c]
J (t).
dJ (t) = i[H , J (t)]dt + D[c]
(4.110)
In order to record the measurements of the two quadratures, it is necessary to find the
Fourier components of the photocurrent. These are defined by
t+t
1
Jx (t) = (t)
2 cos(s)Jhom (s)ds,
(4.111)
t
Jy (t) = (t)
t+t
2 sin(s)Jhom (s)ds.
(4.112)
2x (t),
J (t) + 2y (t).
Jy (t) = y
J (t) +
Jx (t) = x
(4.113)
(4.114)
(Recall that x and y are the quadratures defined in Eq. (4.65).) Again, these are proportional
to the homodyne photocurrents that are expected for an efficiency of 1/2 (see Section 4.8.1).
168
Quantum trajectories
1
[J (t)
2 x
+ iJy (t)]
J (t) + dZ(t)/dt,
= c
(4.116)
(4.117)
where dZ is as defined in Eq. (4.109). In terms of this current, one can derive an unnormalized SSE analogous to Eq. (4.77):
(4.118)
d| J (t) = dt iH 12 c c + Jhet (t) c | J (t).
Equation (4.118), with the expression (4.117) in place of Jhet , was introduced by Gisin
and Percival in 1992 [GP92b] as quantum state diffusion. However, they considered it to
describe the objective evolution of a single open quantum system, rather than the conditional
evolution under a particular detection scheme as we are interpreting it.
Using the same techniques as in Section 4.4.4, it is simple to show that the average
complex heterodyne photocurrent is
+ ( ).
E Jhet (t + ) Jhet (t) = Tr[c eL c(t)]
(4.119)
(4.120)
Ignoring the second (-function) term in this autocorrelation function, the remainder is
simply Glaubers first-order coherence function G(1) (t, t + ) [Gla63]. In steady state this
is related to the so-called power spectrum of the system by
1
ss ].
P () =
d ei Tr[c eL c
(4.121)
2
This can be interpreted as the photon flux in the system output per unit frequency (a
dimensionless quantity).
ss ] and that this is consistent with the
Exercise 4.12 Show that d P () = Tr[c c
above interpretation.
In practice it is this second interpretation that is usually used to measure P (). That
is, the power spectrum is usually measured by using a spectrometer to determine the
output intensity as a function of frequency, rather than by autocorrelating the heterodyne
photocurrent.
169
In this section, we give a complete classification of all such unravellings, for the general
Lindblad master equation
1
0
= L i[H , ] + ck ck 12 ck ck , .
(4.122)
Here, and in related sections, we are using the Einstein summation convention because
this simplifies many of the formulae. That is, there is an implicit sum for repeated indices,
which for k is from 1 to K. Using this convention, the most general SME was shown in
Ref. [WD01] to be
(4.123)
dJ = dt LJ + (ck ck )J dZk + H.c. .
Here the dZk are complex Wiener increments satisfying
dZj (t)dZk (t) = dt j k ,
(4.124)
(4.125)
The j k = kj are arbitrary complex numbers subject only to the condition that the crosscorrelations for Z are consistent with the self-correlations.
This is
!
" the case iff (if and only if)
the 2K 2K correlation matrix of the vector Re[dZ], Im[dZ] is positive semi-definite.3
That is,
dt I + Re[]
Im[]
0.
(4.126)
Im[]
I Re[]
2
Here the real part of a matrix A is defined as Re[A] = (A + A )/2, and similarly Im[A] =
i(A A )/2. Equation (4.126) is satisfied in turn iff the spectral norm of is bounded
from above by unity. That is,
2 max ( ) 1,
(4.127)
where max (A) denotes the maximum of the real parts of the eigenvalues of A. In the present
context, the eigenvalues of A are real, of course, since is Hermitian.
Exercise 4.13 Show that Eq. (4.126) is satisfied iff Eq. (4.127) is satisfied.
Hint: Consider the real symmetric matrix
Re[] Im[]
X=
.
Im[] Re[]
(4.128)
Show that the eigenvalues of X are symmetrically placed around the origin. Thus we will
have I + X 0 iff X 1. This in turn will be the case iff X2n converges as n .
Show that
Re[An ] Im[An ]
2n
X =
,
(4.129)
Im[An ] Re[An ]
where A = , and hence show the desired result.
3
170
Quantum trajectories
(4.130)
where (t) is an arbitrary real function of time. This is known as a gauge transformation. It has no effect on any physical properties of the system. However, it can
radically change the appearance of a stochastic Schrodinger equation, since (t) may
be stochastic. Consider for example the simple SSE
2 dt| + (x x)dZ
|,
(4.131)
d| = iH 12 (x x)
where dZ dZ = dt and dZ 2 = dt. Let the global phase obey the equation
d = f dZ + f dZ,
(4.132)
where f (t) is an arbitrary smooth function of time that may even be a function of |
itself. Then
| + d| = 1 + i d 12 d d ei (t) (| + d|) .
(4.133)
The resultant equation for | is
d| = iH Re f 2 + |f |2 dt|
+ if + if dt|
x x
12 (x x)
+ if ) dZ + if dZ |,
+ (x x
(4.134)
for
which appears quite different from Eq. (4.131) (think of the case f = ix,
example). By contrast, the SME is invariant under global phase changes:
+ H[dZ x].
d = i dt[H , ] + dt D[x]
(4.135)
The above formulae apply for efficient detection (see Section 4.8 for a discussion of
inefficient detection and Section 6.5.2 for the required generalization). Thus we could write
the unravelling as a SSE rather than the SME (4.123). However, there are good reasons
to prefer the SME form, even for efficient detection. First, it is more general in that it can
describe the purification of an initially mixed state. Second, it is easier to see the relation
between the quantum trajectories and the master equation which the system still obeys on
average. Third, it is invariant under gauge transformations (see Box 4.1).
As expected from Section 4.4.3, the SME (4.123) can be derived directly from quantum
measurement theory. We describe the measurement result in the infinitesimal time interval
[t, t + dt) by a vector of complex numbers J(t) = {Jk (t)}K
k=1 . As functions of time, these are
continuous but not differentiable, and, following the examples of homodyne and heterodyne
Jk dt = dt kj cj + ck + dZk .
171
(4.136)
We can prove this relation between the noise in the quantum trajectory and the noise in
the measurement record by using the measurement operators
M J = 1 iH dt 12 ck ck dt + Jk ck dt.
d J M JM J = 1
! "
if we choose d J to be the measure yielding the ostensible moments
! "
d J (Jk dt) = 0,
! "
d J (Jj dt)(Jk dt) = j k dt,
! "
d J (Jj dt)(Jk dt) = j k dt.
(4.137)
(4.138)
(4.139)
(4.140)
(4.141)
With this assignment of measurement operators MJ and measure d J we can easily
show that the expected value of the result J is
! "
(4.142)
E[Jk ] = d J Tr M J M J Jk = kj cj + ck .
This is consistent with the previous definition in Eq. (4.136). Furthermore, as in Section 4.4.3, we can show that the second moments of J dt are (to leading order in dt)
independent of the system state and can be calculated using d. In other words, they are
as defined above. This completes the proof that Eq. (4.136)
identical to the statistics of dZ
gives the correct probability for the result J.
The next step is to derive the conditioned state of the system after the measurement. This
is given by
M J M J
.
+ dJ =
Tr M M
J
(4.143)
J
dJ = 12 ck ck , J dt + Jk dt ck J cl Jl dt + (Jk dt Jk 1) cj cj J dt
"
!
1 Jl cl dt Jl cl dt .
(4.144)
172
Quantum trajectories
detection theory of Section 4.3.1 can be applied, with c = . The state vector of the
atom conditioned on the photodetector count obeys the following SSE:
I (t)
!
"
dt
(4.146)
[ I (t)] + iH |I (t),
2
where H = 12 ( x + z ) and the photocount increment dN (t) satisfies E[dN (t)] =
I (t)dt.
173
Fig. 4.3 A scheme for direct detection of an atom. The atom is placed at the focus of a parabolic
mirror so that all the fluorescence emitted by the atom is detected by the photodetector. Figure 1
adapted with permission from J. Gambetta et al., Phys. Rev. A 64, 042105, (2001). Copyrighted by
the American Physical Society.
With the conditioned subscript understood, one can write the conditioned state in terms
of the Euler angles (, ) parameterizing the surface of the Bloch sphere. Since we are
assuming a pure state (see Box. 3.1),
|(t) = cg |g + ce |e,
(4.147)
(4.148)
(4.149)
Exercise 4.17 Show this, using the usual relation between (, , r := 1) and (x, y, z), with
|0 = |g and |1 = |e.
A typical stochastic trajectory is shown in Fig. 4.4. From an ensemble of these one could
obtain the stationary distribution ss (, ) for the states on the Bloch sphere under direct
detection.
In practice, it is easier to find the steady-state solution by returning to the SSE (4.146)
and ignoring normalization terms. Consider the evolution of the system following a photodetection at time t = 0 so that |(0) = |g. Assuming that no further photodetections
take place, and omitting the normalization terms in Eq. (4.146), the state evolves via
!
"
d
(4.150)
|0 (t) = + iH | 0 (t).
dt
2
Here, the state vector has a state-matrix norm equal to the probability of it remaining in
this no-jump state, as discussed in Section 4.2.3.
On writing the unnormalized conditioned state vector as
| 0 (t) = cg (t)|g + ce (t)|e,
(4.151)
/2 + i
sin(t),
2
sin(t),
2
(4.152)
(4.153)
174
Quantum trajectories
1
0.5
cos
10
10
( )
Fig. 4.4 A typical trajectory for the conditioned state of an atom under direct detection in terms of
the Bloch angles and cos . The driving and detuning are = 3 and = 0.5, in units of the decay
rate .
where
1/2
2 = ( i /2)2 + 2
(4.154)
is a complex number that reduces to the detuned Rabi frequency as 0. One can still
use the definitions (4.148) and (4.149) with cj replaced by cj , since they are insensitive to
normalization. The significance of the normalization is that
S(t) = 0 (t)| 0 (t) = |cg (t)|2 + |ce (t)|2
(4.155)
ss (, ) is confined to the curve parameterized by (t), (t) , with each point weighted
by the survival probability S(t). (For = 0 this curve wraps around on itself, so that each
point obtains multiple contributions to its weight.)
175
2
u
1 yss2 zss
.
v =
(4.156)
yss
w
zss
It turns out that this can be generalized for = 0, but it is a lot more complicated, so in
this section we retain = 0. For large , these points on the Bloch sphere approach the
antipodal pair at x = 1.
Exercise 4.19 Show this, and show that in the same limit the direct detection ensemble
becomes equally spread over the x = 0 great circle.
Thus, these two PR ensembles are as different as they possibly can be.
In Section 3.8.3 we did not identify the measurement scheme that realizes this ensemble.
Since the elements of the ensemble are discrete, the unravelling must involve jumps. Since
it is not the direct detection unravelling of the preceding section, it must involve a local
oscillator, as introduced in Section 4.4.1. Since here we are using for the atomic decay
rate, in this section we use for the local oscillator amplitude. The no-jump and jump
measurement operators are then
||2
M0 (dt) = 1 i x + + +
dt,
(4.157)
2
2
2
M 1 (dt) = dt ( + ).
(4.158)
Direct detection is recovered by setting = 0.
If the atom radiates into a beam as considered previously, the above measurement can
be achieved by mixing it with a resonant local oscillator at a beam-splitter, as shown in
Fig. 4.5. The transmittance of the beam-splitter must be close to unity. The phase of is of
course defined relative to the field driving the atom, parameterized by .
176
Quantum trajectories
c
LRBS
WEAK LOCAL
OSCILLATOR
EOM
SIGNAL
PROCESSOR
Fig. 4.5 A scheme for adaptive detection. The fluorescence emitted by the atom is coherently mixed
with a weak local oscillator (LO) via a low-reflectivity beam-splitter (LRBS). The electro-optic
modulator (EOM) reverses the amplitude of the LO every time the photodetector fires. Figure 5
adapted with permission from J. Gambetta et al., Phys. Rev. A 64, 042105, (2001). Copyrighted by
the American Physical Society.
Since our aim is for the atom to remain in one of two fixed pure states, except when it
jumps, we must examine the fixed points (i.e. eigenstates) of the operator M 0 (dt). It turns
out that it has two fixed states, such that, if Re[] = 0, one is stable and one unstable. For
Re[] > 0, the stable fixed state is
)
2
i
2i
+
|g + |e.
(4.159)
|s =
4
2
Here the tilde denotes an unnormalized state. The corresponding eigenvalue is
)
1 + 2||2
2
i
2 2i
s =
.
4
2
4
(4.160)
The unstable state and eigenvalue are found by replacing the square root by its negative.
It is unstable in the sense that its eigenvalue is more negative, indicating that its norm will
decay faster than that of the stable eigenstate. Thus, an initial superposition of these two
(linearly independent) states will, when normalized, evolve towards the stable eigenstate.
Let us say = + , with Re[+ ] > 0, and assume the system is in the appropriate stable
state |s+ . When a jump occurs the new state of the system is proportional to
M 1+ |s+ ( + + )|s+ .
(4.161)
The new state will obviously be different from |s+ and so will not remain fixed. This
is in contrast to what we are seeking, namely a system that will remain fixed between
jumps. However, let us imagine that, immediately following the detection, the value of the
local oscillator amplitude is changed to some new value, . This is an example of an
adaptive measurement scheme as discussed in Section 2.5, in that the parameters defining
the measurement depend upon the past measurement record. We want this new to be
177
chosen such that the state ( + + )|s+ is a stable fixed point of the new M 0 (dt). The
conditions for this to be so will be examined below. If they are satisfied then the state will
remain fixed until another jump occurs. This time the new state will be proportional to
( + )( + + )|s+ = [ + + ( + + ) ]|s+ .
(4.162)
If we want jumps between just two states then we require this to be proportional to |s+ .
Clearly this will be so if and only if
= + .
(4.163)
Writing + = , we now return to the condition that ( + + )|s+ be the stable fixed
state of M 0 (dt). From Eq. (4.159), and using Eq. (4.163), this gives the relation
)
)
2
2
2
+ 2i
= 2 2i
.
(4.164)
4
4
(4.165)
which, remarkably, are independent of the ratio /. The stable and unstable fixed states
for this choice are
i
|s =
|g
|e,
2
2
2 +
22 + 2
(4.166)
1
1
|u = |g |e,
2
2
(4.167)
i
,
8
2
5
i
.
u =
8
2
s =
(4.168)
(4.169)
Exercise 4.20 Show that the stable eigenstates correspond to the two states defined by
the Bloch vectors in Eq. (4.156).
We have thus constructed the measurement scheme that realizes the two-state PR ensemble for the two-level atom. Ignoring problems of collection and detector efficiency, it may
seem that this adaptive measurement scheme would not be much harder to implement
experimentally than homodyne detection; it requires simply an amplitude inversion of the
local oscillator after each detection. In fact, this is very challenging, since the feedback
delay must be very small compared with the characteristic time-scale of the system. For
a typical atom the decay time ( 1 108 s) is shorter than currently feasible times for
electronic feedback. Any experimental realization would have to use an atom with a very
178
Quantum trajectories
long lifetime, or some other equivalent effective two-level system with radiative transitions
in the optical frequency range.
Another difference from homodyne detection is that the adaptive detection has a very
small local oscillator intensity at the detector: it corresponds to half the photon flux of the
atoms fluorescence if the atom were saturated. In either stable fixed state, the actual photon
flux entering the detector in this scheme is
s |( + )( + )|s =
,
4
(4.170)
which is again independent of / . This rate is also, of course, the rate for the system to
jump to the other stable fixed state, so that the two are equally occupied in steady state.
There are many similarities between the stochastic evolution under this adaptive unravelling and the conditioned evolution of the atom under spectral detection, as investigated
in Ref. [WT99]. Spectral detection uses optical filters to resolve the different frequencies
of the photons emitted by the atom. As a consequence it is not possible to formulate a
trajectory for the quantum state of the atom alone. For details, see Ref. [WT99]. In the case
of a strongly driven atom ( ), photons are emitted with frequencies approximately
equal to the atomic resonance frequency, a , and to the sideband frequencies a .
(This is the characteristic Mollow power spectrum of resonance fluorescence [Mol69].) In
the interaction frame these frequencies are 0 and , and can be seen in the imaginary parts
of (respectively) the eigenvalues /2 and appearing in the solution to the resonance
fluorescence master equation in Section 3.8.3. In this high-driving limit, the conditioned
atomic state can be made approximately pure, and it jumps between states close to the
x eigenstates, just as in the adaptive detection discussed above. In this case the total
detection rate is approximately /2, as expected for a strongly driven (saturated) atom. Of
these detections, half are in the peak of the power spectrum near resonance (which do not
give rise to state-changing jumps), while half are detections in the sidebands (which do).
Thus the rate of state-changing jumps is approximately /4, just as in the case of adaptive
detection.
d| J (t) = + iH dt
2
1
179
cos
0
0
10
10
10
10
(a)
2
0
0
cos
0
0
(b)
2
0
0
t ) )
Fig. 4.6 A segment of a trajectory of duration 10 1 of an atomic state on the Bloch sphere under
homodyne detection. The phase of the local oscillator relative to the driving field is 0 in (a) and
/2 in (b). The driving and detuning are = 3 and = 0.
Δ = 0, so the true distributions on the Bloch sphere are symmetric under reflection in the
yz plane. The effect of the choice of measurement is dramatic and readily understandable.
The homodyne photocurrent from Eq. (4.75) is

J_hom(t) = √γ⟨σ̂_x cos Φ − σ̂_y sin Φ⟩_c(t) + ξ(t).    (4.172)

When the local oscillator is in phase (Φ = 0), the deterministic part of the photocurrent
is proportional to ⟨σ̂_x⟩_c(t). Under this measurement, the atom tends towards states with
well-defined σ̂_x. The eigenstates of σ̂_x are stationary states of the driving Hamiltonian, so this
leads to trajectories that stay near these eigenstates for a relatively long time. This is seen
in Fig. 4.6(a), where φ tends to stay around 0 or π. In contrast, measuring the Φ = π/2
quadrature tries to force the system into an eigenstate of σ̂_y. However, such an eigenstate
will be rapidly spun around the Bloch sphere by the driving Hamiltonian. This effect is
clearly seen in Fig. 4.6(b), where the trajectory is confined to the φ = ±π/2 great circle
(like that for direct detection).
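This contrast between the two unravellings is easy to reproduce numerically. The following is a minimal sketch (not from the text): an Euler–Maruyama integration of the homodyne SME for a resonantly driven atom, with assumed parameters γ = 1, Ω = 3 and measured operator e^{−iΦ}ĉ. For Φ = 0 the conditioned ⟨σ̂_x⟩ localizes near ±1; for Φ = π/2 it stays on the ⟨σ̂_x⟩ = 0 great circle.

```python
import numpy as np

# Euler-Maruyama integration of the homodyne SME for a driven two-level atom:
# d rho = -i[H,rho]dt + D[c]rho dt + dW H[exp(-i Phi) c]rho.
# Assumed parameters (matching Fig. 4.6): gamma = 1, Omega = 3, Delta = 0.
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_- = |g><e|
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 1.5 * sx                                     # (Omega/2) sigma_x with Omega = 3
c = sm                                           # sqrt(gamma) sigma_- with gamma = 1

def lindblad(rho):
    return (-1j * (H @ rho - rho @ H)
            + c @ rho @ c.conj().T
            - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c))

def trajectory(Phi, T=10.0, dt=1e-4, seed=0):
    """Return the record of <sigma_x> along one conditioned trajectory."""
    rng = np.random.default_rng(seed)
    rho = np.array([[0, 0], [0, 1]], dtype=complex)   # ground state
    m = np.exp(-1j * Phi) * c                         # measured operator
    xs = np.empty(int(T / dt))
    for i in range(xs.size):
        dW = rng.normal(0.0, np.sqrt(dt))
        meas = m @ rho + rho @ m.conj().T
        meas = meas - np.trace(meas).real * rho       # H[m] superoperator
        rho = rho + dt * lindblad(rho) + dW * meas
        rho = 0.5 * (rho + rho.conj().T)              # keep Hermitian
        rho = rho / np.trace(rho).real                # renormalize
        xs[i] = np.trace(sx @ rho).real
    return xs
```

With Φ = 0 the trajectory sticks near the σ̂_x eigenstates, as in Fig. 4.6(a); with Φ = π/2 the conditioned ⟨σ̂_x⟩ remains zero, as in Fig. 4.6(b).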
The above explanation for the nature of the quantum trajectories is also useful for
understanding the noise spectra of the quadrature photocurrents in Eq. (4.172). The power
spectrum (see Section 4.5.1) of resonance fluorescence of a strongly driven two-level atom
has three peaks, as discussed in the preceding section. The central one is peaked at the
atomic frequency, and the two sidebands (each of half the area) are displaced by the Rabi
frequency ±Ω [Mol69], as shown in Fig. 4.7. It turns out that the spectrum of the in-phase
homodyne photocurrent (see Section 4.4.4) gives the central peak, while the quadrature
photocurrent gives the two sidebands [CWZ84]. This is readily explained qualitatively
from the evolution of the atomic state under homodyne measurements. When ⟨σ̂_x⟩ is being
measured, it varies slowly, remaining near one eigenvalue on a time-scale like γ⁻¹. This
gives rise to an exponentially decaying autocorrelation function for the photocurrent (4.172),
or a Lorentzian with width scaling as γ in the frequency domain. When ⟨σ̂_y⟩ is measured,
it undergoes rapid sinusoidal variation at frequency Ω, with noise added at a rate γ. This
explains the side peaks.

Quantum trajectories

Fig. 4.7 A power spectrum for resonance fluorescence with γ = 1, Δ = 0 and Ω = 10. Note that this
Rabi frequency is larger than that in Fig. 4.6, in order to show clearly the Mollow triplet.
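This triplet structure can be checked directly from the master equation, using the quantum regression theorem discussed later in this chapter. The sketch below is an illustration, not from the text: it assumes the Fig. 4.7 parameters (γ = 1, Ω = 10, Δ = 0), a column-stacking vectorization of the Liouvillian, and evaluates the incoherent spectrum from the resolvent at a few probe frequencies.

```python
import numpy as np

# Incoherent part of the resonance-fluorescence spectrum via the Liouvillian
# resolvent: S(w) = Re Tr[s+ (i w - L)^(-1) v], v = s- rss - <s->ss rss.
# Assumed parameters (as in Fig. 4.7): gamma = 1, Omega = 10, Delta = 0.
I2 = np.eye(2)
sm = np.array([[0, 0], [1, 0]], dtype=complex)     # sigma_-
sp = sm.conj().T
H = 5.0 * (sm + sp)                                # (Omega/2) sigma_x, Omega = 10

def vec(r): return r.flatten(order="F")            # column-stacking convention
def unvec(v): return v.reshape(2, 2, order="F")

n = sp @ sm
L = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))     # -i[H, .]
     + np.kron(sm.conj(), sm)                      # sigma_- . sigma_+
     - 0.5 * np.kron(I2, n) - 0.5 * np.kron(n.T, I2))

evals, evecs = np.linalg.eig(L)
rss = unvec(evecs[:, np.argmin(np.abs(evals))])    # steady state (null vector)
rss = rss / np.trace(rss)

def S(w):                                          # valid for w != 0
    v = vec(sm @ rss - np.trace(sm @ rss) * rss)   # coherent part subtracted
    return np.real(np.trace(sp @ unvec(
        np.linalg.solve(1j * w * np.eye(4) - L, v))))
```

Probing near the carrier, at the sideband and in the inter-peak dip reproduces the Mollow ordering: central peak above sidebands, sidebands above the dip.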
|ψ̄_J(t + dt)⟩ = {1 − [iĤ + ½ĉ†ĉ]dt + J*(t)dt ĉ}|ψ̄_J(t)⟩
rotation around the x axis. The complex photocurrent as defined in Eq. (4.117) is
J_het(t) = √γ⟨σ̂₋⟩_c(t) + ζ(t),    (4.174)

where ζ(t) = dZ(t)/dt. The spectrum of this photocurrent (the Fourier transform of the
two-time correlation function (4.206)) gives the complete Mollow triplet, since ⟨σ̂_y⟩ is rotated
at frequency Ω with noise, while the dynamics of ⟨σ̂_x⟩ is only noise.
(4.176)
That is, the singularity of the bath commutation relations leads to a finite change in the bath
field operator b̂_in in an infinitesimal time.
Because of this finite change, it is necessary to distinguish between the bath operator b̂_in
interacting with the system at the beginning of the time interval [t, t + dt) and that at the
end, which we will denote b̂_out(t). Ignoring infinitesimal terms, Eq. (4.176) implies that

b̂_out = b̂_in + ĉ.    (4.177)
This is sometimes called the input-output relation [GC85]. To those unfamiliar with the
Heisenberg picture, it may appear odd that a system operator appears in the expression
for a bath operator, but this is just what one would expect with classical equations for
dynamical variables. As explained in Section 1.3.2, Heisenberg equations such as this are
the necessary counterpart to entanglement in the Schrödinger picture. If the system is an
Fig. 4.8 A schematic diagram showing the relation between the input and output fields for a one-sided
cavity with one relevant mode described by the annihilation operator ĉ. The distance from the
cavity, z, appears as a time difference in the free field operators because we are using units such that
the speed of light is unity. The commutation relations say that (for z > 0) b̂_in(t + z) and b̂_out(t − z)
commute with any system operator at time t. This is readily understood from the figure, since these
field operators apply to parts of the field that also exist (at a point in space removed from the system)
at time t.
optical cavity, then this operator represents the field immediately after it has bounced off
the cavity mirror. This is shown in Fig. 4.8.
Just as b̂_in(t) commutes with an arbitrary system operator ŝ(t′) at an earlier time t′ < t, it
can similarly be shown that b̂_out(t) commutes with the system operators at a later time t′ > t.
As a consequence of this, the output field obeys the same commutation relations as the
input field,
[b̂_out(t), b̂†_out(t′)] = δ(t − t′).    (4.178)

The operator for the output photon flux is thus

Î_out(t) = b̂†_out(t)b̂_out(t).    (4.179)
It is easy to show that this is statistically identical to the classical photocurrent I(t) used in
Section 4.3.2. To see this, consider the Heisenberg operator dN̂_out(t) = b̂†_out(t)b̂_out(t)dt for
the output field. This is found from Eq. (4.177) to be

dN̂_out(t) = [ĉ†(t) + ν̂†(t)][ĉ(t) + ν̂(t)]dt.    (4.180)

Here we are using ν̂(t) for b̂_in(t) to emphasize that the input field is in the vacuum state
and so satisfies ⟨ν̂(t)ν̂†(t′)⟩ = δ(t − t′), with all other second-order moments vanishing. It
is then easy to show that

⟨dN̂_out(t)⟩ = ⟨ĉ†(t)ĉ(t)⟩dt,    (4.181)
[dN̂_out(t)]² = dN̂_out(t).    (4.182)

These moments are identical to those of the photocount increment, Eqs. (4.41) and (4.13),
with a change from Schrödinger to Heisenberg picture.
The identity between the statistics of the output photon-flux operator Î_out(t) and the
photocurrent I(t) does not stop at the single-time moments (4.181) and (4.182). As discussed
in Section 4.3.2, the most commonly calculated higher-order moment is the autocorrelation
function. In the Heisenberg picture this is defined as

F⁽²⁾(t, t + τ) = ⟨Î_out(t + τ)Î_out(t)⟩.    (4.183)

It evaluates to

F⁽²⁾(t, t + τ) = ⟨ĉ†(t)ĉ†(t + τ)ĉ(t + τ)ĉ(t)⟩ + ⟨ĉ†(t)ĉ(t)⟩δ(τ).    (4.184)
Exercise 4.21 Show this using the commutation relations (4.178) to put the field operators
in normal order. (Doing this eliminates the input field operators, because they act directly
on the vacuum, giving a null result.)
The autocorrelation function can be rewritten in the Schrödinger picture as follows. First,
recall that the average of a system operator at time t is

⟨ŝ(t)⟩ = Tr_S[ρ(t)ŝ] = Tr_S[Tr_B[W(t)]ŝ] = Tr[W(t)ŝ],    (4.185)
where ρ(t) is the system (S) state matrix and W(t) is the state matrix for the system plus
bath (B). Now W(t) = Û(t)W(0)Û†(t), where in the Markovian approximation the unitary
evolution is such that a two-time average can be evaluated as

⟨ŝ(t + τ)x̂(t)⟩ = Tr[ŝ e^{Lτ}[x̂ρ(t)]],    (4.191)

where ρ(t) is the solution of the master equation with the given initial conditions. This is
sometimes known as the quantum regression theorem.
Exercise 4.22 Generalize the above result to two-time correlation functions of the form
⟨x̂(0)ŝ(t)ẑ(0)⟩.
Applying the result of this exercise to the above autocorrelation function (4.184) gives

F⁽²⁾(t, t + τ) = Tr[ĉ†ĉ e^{Lτ}[ĉρ(t)ĉ†]] + Tr[ĉ†ĉρ(t)]δ(τ).    (4.192)

This is identical to Eq. (4.50) obtained using the conditional evolution in Section 4.3.2.
In fact, any statistical comparison between the two will agree because Iout (t) and I (t) are
merely different representations of the same physical quantity. Conceptually, the two representations are quite different. In the Heisenberg-operator derivation, the shot-noise term
(the delta function) in the autocorrelation function arises from the commutation relations
of the electromagnetic field. By contrast, the quantum trajectory model produces shot noise
because photodetections are discrete events, which is a far more intuitive explanation. As
we will see, some results may be more obvious using one method, others more obvious
with the other, so it is good to be familiar with both.
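The smooth part of Eq. (4.192) is easy to probe numerically. The following sketch (an illustration, not from the text; γ = 1, Ω = 2 assumed) evaluates G⁽²⁾(τ) = Tr[ĉ†ĉ e^{Lτ}(ĉρ_ss ĉ†)] by Euler integration of the master equation, and confirms the antibunching G⁽²⁾(0) = 0 expected for a two-level atom, for which each detection projects the atom to its ground state.

```python
import numpy as np

# Smooth part of the photocurrent autocorrelation, as in Eq. (4.192):
# G2(tau) = Tr[c^dag c exp(L tau)(c rho_ss c^dag)], for a driven two-level
# atom with c = sigma_- (gamma = 1) and H = (Omega/2) sigma_x, Omega = 2.
sm = np.array([[0, 0], [1, 0]], dtype=complex)
sp = sm.conj().T
H = 1.0 * (sm + sp)

def lindblad(rho):
    return (-1j * (H @ rho - rho @ H)
            + sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))

def evolve(rho, tau, steps=4000):
    h = tau / steps
    for _ in range(steps):
        rho = rho + h * lindblad(rho)          # Euler step of the master equation
    return rho

# steady state by long-time evolution from the ground state
rho_ss = evolve(np.diag([0, 1]).astype(complex), 30.0, steps=30000)
rho_ss = rho_ss / np.trace(rho_ss).real

def G2(tau):
    r = sm @ rho_ss @ sp                       # unnormalized post-jump state
    if tau > 0:
        r = evolve(r, tau)
    return np.real(np.trace(sp @ sm @ r))
```

G⁽²⁾(0) vanishes exactly (the post-jump state is the ground state), while G⁽²⁾(τ) recovers as the atom is re-excited.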
b̂_out(t) = β + ĉ(t) + ν̂(t).    (4.193)

On dropping the time arguments, the photon-flux operator for this field is

Î_out = β² + β(ĉ + ĉ† + ν̂ + ν̂†) + (ĉ† + ν̂†)(ĉ + ν̂).    (4.194)

In the limit that β → ∞, the last term can be ignored for the homodyne photocurrent
operator,

Ĵ^hom_out(t) ≡ lim_{β→∞} [Î_out(t) − β²]/β = x̂(t) + ξ̂(t).    (4.195)

Here, x̂ is the quadrature operator defined in Eq. (4.65) and ξ̂(t) is the vacuum input operator

ξ̂(t) = ν̂(t) + ν̂†(t),    (4.196)

which has statistics identical to those of the normalized Gaussian white noise for which the
same symbol is used.
Exercise 4.23 Convince yourself of this.
The operator nature of ξ̂(t) is evident only from its commutation relations with its conjugate
variable υ̂(t) = −iν̂(t) + iν̂†(t):

[ξ̂(t), υ̂(t′)] = 2iδ(t − t′).    (4.197)
The output quadrature operator (4.195) evidently has the same single-time statistics as
the homodyne photocurrent (4.75), with a mean equal to the mean of x̂ and a
white-noise term. The two-time correlation function of the operator Ĵ^hom_out(t) is defined by

F⁽¹⁾_hom(t, t + τ) = ⟨Ĵ^hom_out(t + τ)Ĵ^hom_out(t)⟩.    (4.198)
Using the commutation relations for the output field (4.178), this can be written as

F⁽¹⁾_hom(t, t + τ) = ⟨: x̂(t + τ)x̂(t) :⟩ + δ(τ),    (4.199)

where the annihilation of the vacuum has been used as before. Here the colons denote time
and normal ordering of the operators ĉ and ĉ†. The meaning of this can be seen in the
Schrödinger picture:

F⁽¹⁾_hom(t, t + τ) = Tr[x̂ e^{Lτ}[ĉρ(t) + ρ(t)ĉ†]] + δ(τ),    (4.200)

where L is as before and ρ(t) is the state of the system at time t, which is assumed known.
Again, this is in exact agreement with that calculated from the quantum trajectory
method in Section 4.4.4. The operator quantity Ĵ^hom_out(t) has the same statistics as those of
the classical photocurrent J_hom(t). Again the different conceptual basis is reflected in the
origin of the delta function in the autocorrelation function: operator commutation relations
in the Heisenberg picture versus local oscillator shot noise from the quantum trajectories.
For heterodyne detection the output field is first split at a 50:50 beam-splitter, giving

b̂′_out = √½(µ̂ + ĉ + ν̂).    (4.201)

Here we have introduced an ancilla vacuum annihilation operator µ̂ that enters at the free
port of the beam-splitter.
Exercise 4.24 Show that homodyne detection of the two beam-splitter outputs, one
measuring the x quadrature and the other the y quadrature, yields (in the strong local
oscillator limit) the photocurrent operators

2Ĵ^x_out = x̂ + [ν̂ + ν̂† + µ̂ + µ̂†],    (4.202)
2Ĵ^y_out = ŷ + i[ν̂ − ν̂† − µ̂ + µ̂†],    (4.203)

where ŷ is defined in Eq. (4.65). Defining the complex heterodyne photocurrent as in Eq.
(4.116) gives

Ĵ^het_out = Ĵ^x_out + iĴ^y_out = ĉ + ν̂† + µ̂.    (4.205)

Exercise 4.25 Convince yourself that ζ̂(t) = ν̂†(t) + µ̂(t) has the same statistics as the
complex Gaussian noise ζ(t) defined in Section 4.6.4.
Thus, the operator (4.205) has the same statistics as the photocurrent (4.117). The autocorrelation function

F⁽¹⁾_het(t, t + τ) = ⟨Ĵ^het†_out(t + τ)Ĵ^het_out(t)⟩    (4.206)

evaluates to

F⁽¹⁾_het(t, t + τ) = ⟨ĉ†(t + τ)ĉ(t)⟩ + δ(τ).    (4.207)
dρ = dt( ĉ_k ρ ĉ†_k − ½{ĉ†_k ĉ_k, ρ} − i[Ĥ, ρ] ),    (4.208)

where we are using the Einstein summation convention as before and dB̂_k;in = b̂_k;in dt, where
the b̂_k;in are independent vacuum field operators. The output field operators are

b̂_k;out = b̂_k;in + ĉ_k.    (4.209)
Recall that for a completely general dyne unravelling the measurement result was a vector
of complex currents J_k(t) given by (4.136), where the noise correlations are defined by a
complex symmetric matrix Υ. In the Heisenberg picture the operators for these currents are

Ĵ_k = b̂_k;out + Υ_kj b̂†_j;out + T_kj â†_j.    (4.210)

Here the â_k are ancillary annihilation operators, which are also assumed to act on a vacuum
state, and obey the usual continuum-field commutation relations,

[â_j(t), â†_k(t′)] = δ_jk δ(t − t′).    (4.211)
These ancillary operators ensure that all of the components Ĵ_k commute with one another.
This is necessary since this vector operator represents an observable quantity. Assuming T
(a capital τ) to be a symmetric matrix like Υ, we find

[Ĵ_j, Ĵ_k]dt = Υ_jk − Υ_kj = 0,    (4.212)

[Ĵ_j, Ĵ†_k]dt    (4.213)
= δ_jk − Υ_jl Υ*_lk − T_jl T*_lk.    (4.214)

The right-hand side of this equation is always positive (see Section 4.5.2), so it is always
possible to find a suitable T.
From these definitions one can show that

Ĵ_k = ĉ_k + Υ_kj ĉ†_j + Ĵ^vac_k,    (4.215)
Ĵ^vac_k = b̂_k;in + Υ_kj b̂†_j;in + T_kj â†_j,    (4.216)

where Ĵ^vac_k has a zero mean. Thus the mean of Ĵ_k is the same as that of the classical current in
Section 4.5.2. Also, one finds that

(Ĵ_j dt)(Ĵ_k dt)† = δ_jk dt,    (4.217)
(Ĵ_j dt)(Ĵ_k dt) = Υ_jk dt.    (4.218)
powerful framework for establishing the relations we seek. In this picture, the dynamics of
the system is given by the quantum Langevin equation

dâ(t) = −½â(t)dt − ν̂(t)dt,    (4.219)

which has the solution

â(t) = â(0)e^{−t/2} − ∫₀ᵗ e^{(s−t)/2} ν̂(s)ds,    (4.220)

with the output field

b̂_out(t) = ν̂(t) + â(t).    (4.221)
Photon-number distribution. We begin with photon counting. The most obvious difference
between a projective measurement of photon number and an external counting of escaped
photons from a freely decaying cavity is that the final state of the cavity mode is the
appropriate photon-number eigenstate in the first case and the vacuum in the second. The
latter result comes about because the counting time must be infinite to allow all photons to
escape. Although extra-cavity detection is not equivalent to projective detection, it should
give the same statistics in the infinite time limit.
The output photon flux is

Î(t) = [â†(t) + ν̂†(t)][â(t) + ν̂(t)],    (4.222)

and the operator for the total count is

N̂ = ∫₀^∞ Î(t)dt.    (4.223)

On using integration by parts (we show this explicitly for the simpler cases of homodyne
detection below), it is possible to evaluate this as

N̂ = â†(0)â(0) + n̂.    (4.225)
(4.225)
Here n contains only bath operators, and annihilates on the vacuum. Hence it commutes
with the first term and contributes nothing to the expectation value of any function of N ,
provided that the bath is in a vacuum state. This confirms that the integral of the photocurrent
does indeed measure the operator a a for the initial cavity state.
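This operator statement has a transparent trajectory counterpart: in the direct-detection (jump) unravelling of a decaying cavity prepared in a Fock state |n⟩, the no-jump evolution leaves |n⟩ unchanged and each detection removes one photon, so every sufficiently long record contains exactly n counts. A sketch (not from the text; κ = 1, n = 3 and a ten-photon truncation are assumptions):

```python
import numpy as np

# Direct-detection (jump) unravelling of a freely decaying cavity. For an
# initial Fock state |n0> the integrated photocurrent equals n0 on every
# (sufficiently long) trajectory, as N = a^dag(0) a(0) requires.
dim, kappa, n0 = 10, 1.0, 3
a = np.diag(np.sqrt(np.arange(1, dim)), 1).astype(complex)   # annihilation op
nvec = np.arange(dim)                                        # a^dag a eigenvalues

def count_photons(seed, T=20.0, dt=1e-3):
    rng = np.random.default_rng(seed)
    psi = np.zeros(dim, dtype=complex)
    psi[n0] = 1.0                                    # initial Fock state |n0>
    counts = 0
    for _ in range(int(T / dt)):
        nbar = np.real(psi.conj() @ (nvec * psi))    # <a^dag a>
        if rng.random() < kappa * dt * nbar:         # E[dN] = kappa <n> dt
            psi = a @ psi                            # jump: one photon detected
            psi = psi / np.linalg.norm(psi)
            counts += 1
        else:
            psi = np.exp(-0.5 * kappa * dt * nvec) * psi   # no-jump evolution
            psi = psi / np.linalg.norm(psi)
    return counts
```

With κT = 20 the probability of any photon remaining undetected is of order n e^{−κT}, utterly negligible, so each run returns exactly n₀ counts.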
Quadrature distribution. Now consider homodyne detection. Unlike direct detection, one
should not simply integrate the photocurrent from zero to infinity because even when all
light has escaped the cavity the homodyne measurement continues to give a non-zero
current (vacuum noise). Thus, for long times the additional current is merely adding noise
to the record. This can be circumvented by properly mode-matching the local oscillator to
the system. That is to say, by matching the decay rate, as well as the frequency, of the local
oscillator amplitude to that of the signal. Equivalently, the current could be electronically
multiplied by the appropriate factor and then integrated.
For the cavity decay in Eq. (4.220) the appropriately scaled homodyne current operator is

Ĵ^hom_out(t) = e^{−t/2}[x̂(t) + ξ̂(t)],    (4.226)

which evaluates to

Ĵ^hom_out(t) = e^{−t}x̂(0) − e^{−t/2}∫₀ᵗ e^{(s−t)/2} ξ̂(s)ds + e^{−t/2}ξ̂(t),    (4.227)

and so

X̂ ≡ ∫₀^∞ Ĵ^hom_out(t)dt    (4.228)
= x̂(0) + ∫₀^∞ e^{−t/2}ξ̂(t)dt − ∫₀^∞ dt e^{−t}∫₀ᵗ e^{s/2}ξ̂(s)ds.    (4.229)

Using integration by parts on the last term, it is easy to show that it cancels out the
penultimate term, so the operator of the integrated photocurrent is simply

X̂ = x̂(0).    (4.230)
(4.230)
Thus it is possible to measure a quadrature of the field by homodyne detection. Unlike the
case of direct detection, this derivation requires no assumptions about the statistics of the
bath field.
Husimi distribution. Heterodyne detection is different from direct and homodyne detection
in that it does not measure an Hermitian system operator. That is because it measures both
quadratures simultaneously. However, it does measure a normal operator (see Box 1.1). The
(normal) operator for the heterodyne photocurrent (4.205) is, with the appropriate photocurrent
scaling factor,

Ĵ^het_out(t) = e^{−t/2}[â(t) + ζ̂(t)],    (4.231)

so that

Â ≡ ∫₀^∞ Ĵ^het_out(t)dt = â(0) + ê,    (4.232)

where ê = ∫₀^∞ e^{−t/2}ζ̂(t)dt.
(Note that normal ordering, defined in Section A.5, has nothing to do with normal operators.)
For a vacuum-state bath, the expectation value of any expression normally ordered in ê will
have zero contribution from all terms involving ê. Now normal ordering with respect to ê
is antinormal ordering with respect to â. Thus the statistics of Â are the antinormally
ordered statistics of â. As shown in Section A.5, these statistics are those found from the
so-called Husimi or Q function,

Q(α)d²α = ⟨α|ρ|α⟩ d²α/π.    (4.233)
ρ̇ = −i[Ĥ, ρ] + (1 − η)D[ĉ]ρ + D[√η ĉ]ρ,    (4.234)

and unravel only the last term.
For direct detection, Eq. (4.40) is replaced by

dρ_I(t) = {dN(t)G[√η ĉ] + dt(1 − η)D[ĉ] − dt H[iĤ + ½ηĉ†ĉ]}ρ_I(t),    (4.235)

where now

E[dN(t)] = η Tr[ĉρ_I(t)ĉ†]dt.    (4.236)
In this case a SSE does not exist because the conditioned state will not remain pure even if
it begins pure. Note that in the limit η → 0 one obtains the unconditioned master equation.
For homodyne detection, the homodyne photocurrent is obtained simply by replacing ĉ
by √η ĉ to give

J_hom(t) = √η⟨x̂⟩_J(t) + ξ(t).    (4.237)

Note that here the shot noise (the final term) remains normalized so as to give a unit
spectrum; other choices of normalization are also used. The SME (4.72) is modified to

dρ_J = −i[Ĥ, ρ_J(t)]dt + D[ĉ]ρ_J(t)dt + √η dW(t)H[ĉ]ρ_J(t).    (4.238)

Again, there is no SSE. The generalization of the heterodyne SME (4.110) is left as an
exercise for the reader.
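The absence of a SSE for η < 1 can be illustrated numerically: integrating the SME (4.238) from a pure state, the conditioned state stays (numerically) pure for η = 1 but acquires a markedly reduced purity for small η. A sketch with assumed parameters γ = 1, Ω = 3 (not from the text):

```python
import numpy as np

# Euler-Maruyama integration of the inefficient-detection homodyne SME:
# d rho = -i[H,rho]dt + D[c]rho dt + sqrt(eta) dW H[c]rho.
sm = np.array([[0, 0], [1, 0]], dtype=complex)
sp = sm.conj().T
H = 1.5 * (sm + sp)                        # (Omega/2) sigma_x, Omega = 3, gamma = 1

def purity_after(eta, T=4.0, dt=1e-4, seed=42):
    rng = np.random.default_rng(seed)
    rho = np.diag([0, 1]).astype(complex)  # pure initial (ground) state
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))
        drift = (-1j * (H @ rho - rho @ H)
                 + sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))
        meas = sm @ rho + rho @ sp
        meas = meas - np.trace(meas).real * rho       # H[c] superoperator
        rho = rho + dt * drift + np.sqrt(eta) * dW * meas
        rho = 0.5 * (rho + rho.conj().T)
        rho = rho / np.trace(rho).real
    return float(np.real(np.trace(rho @ rho)))
```

For η = 1 the purity remains near unity (up to discretization error), so a SSE description is possible; for η ≪ 1 the unobserved portion of the output mixes the conditioned state.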
Rather than doing a completely general derivation, we consider first the case of a white-noise
bath in a pure, but non-vacuum, state. That is, as in Section 3.11.2,

dB̂†(t)dB̂(t) = N dt,    (4.239)
dB̂(t)dB̂(t) = M dt,    (4.240)
[dB̂(t), dB̂†(t)] = dt,    (4.241)
dB̂(t)dB̂†(t) = (N + 1)dt.    (4.242)

In this case the pure state of the bath, which we denote |M⟩, obeys

[(N + M + 1)b̂(t) − (N + M*)b̂†(t)]|M⟩ = 0.    (4.243)

Exercise 4.28 From Eqs. (4.241)–(4.243), derive Eqs. (4.239) and (4.240).
Now replacing the vacuum bath state |0⟩ by |M⟩ in Eq. (4.36) and expanding the unitary
operator Û(dt) to second order yields

|Ψ(t + dt)⟩ = {1 − ½dt[(N + 1)ĉ†ĉ + N ĉĉ† − M*ĉ² − M(ĉ†)²]
+ ĉ dB̂†(t) − ĉ† dB̂(t)}|ψ(t)⟩|M⟩.    (4.244)
Consider homodyne detection on the output. Any multiple of the operator in Eq. (4.243)
can be added to Eq. (4.244) without affecting it. Thus it is possible to replace dB̂† by

dB̂† + (1/L)[(N + M + 1)dB̂ − (N + M*)dB̂†] = [(N + M + 1)/L][dB̂† + dB̂],    (4.245)

and dB̂ by

dB̂ − (1/L)[(N + M + 1)dB̂ − (N + M*)dB̂†] = [(N + M*)/L][dB̂† + dB̂],    (4.246)

where

L = 2N + M + M* + 1.    (4.247)

This yields

|Ψ(t + dt)⟩ = {1 − ½dt[(N + 1)ĉ†ĉ + N ĉĉ† − M*ĉ² − M(ĉ†)²]
+ [ĉ(N + M + 1)/L − ĉ†(N + M*)/L][dB̂†(t) + dB̂(t)]}|ψ(t)⟩|M⟩.    (4.248)
Projecting onto eigenstates |J⟩ of the output quadrature b̂ + b̂† then gives the unnormalized
conditioned state

|ψ̄_J(t + dt)⟩ = {1 − ½dt[(N + 1)ĉ†ĉ + N ĉĉ† − M*ĉ² − M(ĉ†)²]
+ J dt[ĉ(N + M + 1)/L − ĉ†(N + M*)/L]}|ψ(t)⟩ √(℘^M_ost(J)),    (4.249)

where the norm of this state gives the probability of obtaining the result J, and the
ostensible probability distribution for J is

℘^M_ost(J) = |⟨J|M⟩|² = √(dt/(2πL)) exp(−½J² dt/L).    (4.250)

Note that from this ostensible distribution the variance of J is L/dt, which, depending on
the modulus and argument of M, may be larger or smaller than its vacuum value of 1/dt.
Now, by following the method in Section 4.4.3, one obtains the following SSE for the
unnormalized state vector:

d|ψ̄_J(t)⟩ = {−(dt/2)[(N + 1)ĉ†ĉ + N ĉĉ† − M*ĉ² − M(ĉ†)²]
+ J(t)dt[ĉ(N + M + 1)/L − ĉ†(N + M*)/L]}|ψ̄_J(t)⟩,    (4.251)

where the actual statistics of the homodyne photocurrent J are given by

J_hom(t) = ⟨x̂⟩_J(t) + √L ξ(t),    (4.252)

where ξ(t) = dW(t)/dt is white noise as usual. Note that the high-frequency spectrum of
the photocurrent is no longer unity, but L.

Turning Eq. (4.251) into an equation for ρ̄_J = |ψ̄_J⟩⟨ψ̄_J| and then normalizing it yields

dρ_J(t) = dt Lρ_J(t) + dW(t) H[(1/√L){(N + M + 1)ĉ − (N + M*)ĉ†}]ρ_J(t),    (4.253)
where the unconditional evolution is

Lρ = (N + 1)D[ĉ]ρ + N D[ĉ†]ρ + (M*/2)[ĉ, [ĉ, ρ]] + (M/2)[ĉ†, [ĉ†, ρ]] − i[Ĥ, ρ].    (4.254)

Note that the conditioning in Eq. (4.253) is not simply that which would arise from
unravelling the term

N(D[ĉ] + D[ĉ†])ρ    (4.255)

in the non-selective evolution. Rather, the conditioning term depends upon N and
involves both ĉ and ĉ†. In quantum optics, N is typically negligible, but for
quantum-electromechanical systems (see Section 3.10.1), N is not negligible. Thus, for continuous
measurement of such devices by electro-mechanical means, it may be necessary to apply a
SME of form similar to Eq. (4.253).
From this conditioning equation and the expressions for the photocurrent, it is easy to
find the two-time correlation function for the output field using the method of Section 4.4.4.
The result is

F⁽¹⁾_hom(t, t + τ) = E[J_hom(t + τ)J_hom(t)]    (4.256)
= Tr[(ĉ + ĉ†)e^{Lτ}{(N + M + 1)ĉρ(t) − (N + M*)ĉ†ρ(t)
+ (N + M* + 1)ρ(t)ĉ† − (N + M)ρ(t)ĉ}] + Lδ(τ).    (4.257)

Note that, unlike in the case of a vacuum input, there is no simple relationship between this
formula and the Glauber coherence functions.
This correlation function could be derived from the Heisenberg field operators. The
relevant expression is

F⁽¹⁾_hom(t, t + τ) = ⟨Ĵ^hom_out(t + τ)Ĵ^hom_out(t)⟩.    (4.258)

We will not attempt to prove that this evaluates to Eq. (4.257), because it is considerably
more difficult than with a vacuum input b̂_in = ν̂. The reason for this is that it is impossible
to choose an operator ordering such that the contributions due to the bath input vanish. The
necessary method would have to be more akin to that used in obtaining Eq. (4.257), where
the stochastic equation analogous to the SME is the quantum Langevin equation (3.181).
J(t)dt = [J₀(t)dt + √N dW₁]/√(1 + N),    (4.260)

where dW₁ is an independent Wiener increment and N is the dark-noise power relative to
the shot noise. Note that we have included the normalization factor so that the ostensible
distribution for J(t) is also that of normalized Gaussian white noise.
The problem is to determine the quantum trajectory for the system state ρ_J conditioned
on Eq. (4.260), rather than that conditioned on the ideal current J₀. To proceed, we rewrite
J₀ as

J₀(t)dt = [J(t)dt + √N dW(t)]/√(1 + N),    (4.261)

where

dW = (√N J₀ dt − dW₁)/√(1 + N)    (4.262)

is a new Wiener increment. Substituting Eq. (4.261) into the linear SME conditioned on
J₀ gives a conditioning term proportional to

[J(t)dt + √N dW(t)][ĉρ̄ + H.c.].    (4.263)

Averaging over the unobserved increment dW(t) then yields

dρ_J = Lρ_J dt + √η̃ dW(t)H[ĉ]ρ_J,    (4.264)

where

η̃ ≡ 1/(1 + N),    (4.265)

and dW(t) is the Gaussian white noise which appears in the actual photocurrent:

J(t) = √η̃⟨ĉ + ĉ†⟩ + dW(t)/dt.    (4.266)
In comparison with Section 4.8.1 we see that the addition of Gaussian white noise to the
photocurrent before the experimenter has access to it is exactly equivalent to an inefficient
homodyne detector.
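The algebra behind this equivalence is elementary and easy to machine-check. The sketch below (an illustration, not from the text; the numerical values are arbitrary) verifies that substituting Eq. (4.262) into the right-hand side of Eq. (4.261) recovers J₀ dt identically:

```python
import numpy as np

# Check the rewriting (4.260)-(4.262): for any values of J0 dt, dW1 and N > 0,
# [J dt + sqrt(N) dW]/sqrt(1+N), with dW = (sqrt(N) J0 dt - dW1)/sqrt(1+N)
# and J dt = [J0 dt + sqrt(N) dW1]/sqrt(1+N), equals J0 dt exactly.
rng = np.random.default_rng(0)
for _ in range(100):
    N = rng.uniform(0.01, 10.0)
    J0dt, dW1 = rng.normal(size=2)                          # arbitrary test values
    Jdt = (J0dt + np.sqrt(N) * dW1) / np.sqrt(1 + N)        # Eq. (4.260)
    dW = (np.sqrt(N) * J0dt - dW1) / np.sqrt(1 + N)         # Eq. (4.262)
    lhs = (Jdt + np.sqrt(N) * dW) / np.sqrt(1 + N)          # rhs of Eq. (4.261)
    assert abs(lhs - J0dt) < 1e-12
```

The identity is linear, so checking it at arbitrary numerical values suffices.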
For direct detection, dark noise is not the same as an inefficiency. Modelling it is actually
more akin to the methods of the following subsection, explicitly involving the detector. The
interested reader is referred to Refs. [WW03a, WW03b].
Fig. 4.9 A schematic diagram of the model for simple homodyne detection by a realistic photoreceiver.
The realistic photoreceiver is modelled by a hypothetical ideal photoreceiver, the output J(t) of which
is passed through a low-pass filter to give Q(t). White noise √N dW/dt is added to this to yield the
observable output V(t) of the realistic photoreceiver.
Thus, the quantum state conditioned on Q(t) is the same as that conditioned on J(t).
However, a nontrivial result is obtained if we say that the observed output is

V(t) = Q(t) + √N dW₁(t)/dt.    (4.269)
Here the J superscript represents dependence upon the unobservable microscopic current
J . This is the variable which is averaged over to find the expectation value. From this
equation we can obtain our state of knowledge of the system, conditioned on V , as
ρ_V = ∫dq ρ_V(q).    (4.271)
Similarly, our state of knowledge of the detector is

℘_V(q) = Tr[ρ_V(q)].    (4.272)

Note, however, that ρ(q) contains more information than do ρ and ℘(q) combined, because
of correlations between the quantum system and the classical detector.
The ideal stochastic master equation. As in the previous cases of imperfect detection, it is
convenient to use the linear form of the SME as in Section 4.4.3. For a detector that is ideal
apart from an efficiency η, this is

dρ̄_J = Lρ̄_J dt + √η[J(t)dt](ĉρ̄_J + ρ̄_J ĉ†),    (4.273)

where the ostensible distribution for the current J(t) is that of Gaussian white noise.
The stochastic FPE for the detector. The detector state is given by ℘(q), the probability
distribution for Q. From Eq. (4.267), Q obeys the SDE

dQ = −B[Q − J(t)]dt,    (4.274)

where J as given above satisfies (J dt)² = dt. From Section B.5, the probability distribution
℘(q) obeys the stochastic Fokker–Planck equation (FPE)

d℘^J(q) = {B(∂/∂q)[q − J(t)]dt + ½B² dt (∂²/∂q²)}℘^J(q).    (4.275)
This assumes knowledge of J, as shown explicitly by the superscript. Note that this equation
has the solution

℘^J(q) = δ(q − Q^J),    (4.276)

where Q^J is the solution of Eq. (4.274). It is only later, when we average over J, that we
will obtain an equation with diffusion leading to a non-singular distribution. If we were to
do this at this stage we would derive a FPE of the usual (deterministic) type, without the
J-dependent term in Eq. (4.275).
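Equation (4.274) is an ordinary low-pass filter, and its ostensible statistics are easy to check by simulation: under the ostensible distribution J(t)dt = dW is pure white noise, so Q is an Ornstein–Uhlenbeck process with stationary variance B/2. A sketch (not from the text; B = 1 assumed):

```python
import numpy as np
from scipy.signal import lfilter

# Euler discretization of dQ = -B(Q - J)dt with J dt = dW (ostensible
# statistics): Q[n] = (1 - B dt) Q[n-1] + B dW[n], run as a linear filter.
# The stationary variance of Q should be B/2.
B, dt, T = 1.0, 1e-3, 5000.0
rng = np.random.default_rng(3)
dW = rng.normal(0.0, np.sqrt(dt), int(T / dt))
Q = lfilter([B], [1.0, -(1.0 - B * dt)], dW)   # runs the recursion above
Q = Q[int(10.0 / dt):]                         # discard the initial transient
var_Q = float(Q.var())
```

The sample variance agrees with B/2 up to Monte Carlo and discretization error, confirming the filter interpretation of the detector variable Q.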
The Zakai equation for the detector. Now we consider how ℘(q) changes when the observer
obtains the information in V. That is, we determine the conditioned state by Bayesian
inference:

℘_V(q) ≡ ℘(q|V) = ℘(V|q)℘(q)/℘(V).    (4.277)
From Eq. (4.269), in any infinitesimal time interval, the noise in V is infinitely greater
than the signal. Specifically, the root-mean-square noise in an interval of duration dt is
√(N/dt), while the signal is Q. Thus, the amount of information about Q contained in V(t)
is infinitesimal, and hence the change from ℘(q) to ℘_V(q) is infinitesimal, of order √dt.
This means that it is possible to derive a stochastic differential equation for ℘_V(q). This
is called a Kushner–Stratonovich equation (KSE). Such a KSE is exactly analogous to the
stochastic master equation describing the update of an observer's knowledge of a quantum
system.
Just as in the quantum case one can derive a linear version of the SME for an unnormalized
state ρ̄, one can derive a linear version of the KSE for an unnormalized probability
distribution ℘̄_V(q). This involves choosing an ostensible distribution ℘_ost(V), such as
℘_ost(V) = ℘(V|q := 0). Then

℘̄_V(q) = ℘(V|q)℘(q)/℘_ost(V)    (4.278)

has the interpretation that ℘_ost(V)∫dq ℘̄_V(q) is the actual probability distribution for V.
Note the analogy with the quantum case in Section 4.4.3, where a trace rather than an
integral is used (see also Table 1.1). The linear version of the KSE is called the Zakai
equation and it will be convenient to use it in our derivation.
From Eq. (4.269), the distribution of V given that Q = q is

℘(V|q) = [dt/(2πN)]^{1/2} exp[−(V − q)²dt/(2N)].    (4.279)

Choosing the ostensible distribution

℘_ost(V) = [dt/(2πN)]^{1/2} exp[−V²dt/(2N)],    (4.280)

the factor ℘(V|q)/℘_ost(V) in Eq. (4.278) is

exp[(2Vq − q²)dt/(2N)].    (4.281)

Now the width of ℘(q) will not depend upon dt (as we will see), so we can assume that ℘(q)
has support that is finite. Hence the q in Eq. (4.281) can be assumed finite. By contrast, V is
of order 1/√dt, as explained above. Thus, the leading term in the exponent of Eq. (4.281)
is Vq dt/N, and this is of order √dt. Expanding the exponent to leading order gives

℘̄_V(q) = [1 + qV(t)dt/N]℘(q).    (4.282)
The joint stochastic equation. We now combine the three stochastic equations we have
derived above (the SME for the system, the stochastic FPE for the detector and the Zakai
equation for the detector) to obtain a joint stochastic equation. We define

ρ̄^J_V(q) = ρ̄_J ℘̄^J_V(q).    (4.284)
Combining them gives

dρ̄^J_V(q) = [1 + qV(t)dt/N][ρ̄_J + dρ̄_J][℘^J(q) + d℘^J(q)] − ρ̄_J℘^J(q),    (4.285)

where we have used Eq. (4.282), and d℘^J(q; t) is given by Eq. (4.275) and dρ̄_J(t) by
Eq. (4.273). By expanding these out we find

dρ̄^J_V(q) = dt{B(∂/∂q)[q − J(t)] + ½B²(∂²/∂q²) + L}ρ̄^J_V(q)
+ √η[J(t)dt]{1 − [J(t)dt]B(∂/∂q)}[ĉρ̄^J_V(q) + ρ̄^J_V(q)ĉ†]
+ (1/√N)[V(t)dt/√N]q ρ̄^J_V(q).    (4.286)
Averaging over unobserved processes. By construction, the joint stochastic equation in
Eq. (4.286) will preserve the factorization of ρ̄^J(q) in the definition Eq. (4.284). This is
because this equation assumes that J, the output of the ideal (apart from its inefficiency)
detector, is known. In practice, the experimenter knows only V, the output of the realistic
detector. Therefore we should average over J. Since we are using a linear SME, this means
using the ostensible distribution for J dt, in which it has a mean of zero and a variance of
dt. Thus we obtain
dρ̄_V(q) = dt{B(∂/∂q)q + ½B²(∂²/∂q²) + L}ρ̄_V(q)
− dt B√η(∂/∂q)[ĉρ̄_V(q) + ρ̄_V(q)ĉ†]    (4.287)
+ (1/√N)[V(t)dt/√N]q ρ̄_V(q).    (4.288)
Then the infinitesimally evolved unnormalized state determines the actual distribution for
V according to

℘(V) = ℘_ost(V) ∫dq Tr[ρ(q) + dρ̄_V(q)].    (4.289)

Using the same arguments as in Section 4.4.3, we see that the actual statistics for V are

V dt = ⟨Q⟩dt + √N dW(t),    (4.290)

where

⟨Q⟩ = ∫dq Tr[ρ(q)] q.    (4.291)
Note that dW(t) is not the same as dW₁(t) in Eq. (4.269). Specifically,

√N dW = √N dW₁ + (Q − ⟨Q⟩)dt.    (4.292)
The realistic observer cannot find dW₁ from dW, since that observer does not know Q.
The final step is to obtain an equation for the normalized state, using the actual distribution
for V. If we again assume that the state is normalized at the start of the interval, we find

dρ_V(q) = [ρ(q) + dρ̄_V(q)] / { ∫dq Tr[ρ(q) + dρ̄_V(q)] } − ρ(q).    (4.293)

Now taking the trace and integral of Eq. (4.287) gives zero for every term except for the
last, so the denominator evaluates to

1 + (1/√N)[V(t)dt/√N]⟨Q⟩.    (4.294)
By expanding the reciprocal of this to second order and using Eq. (4.290) to replace
[V(t)dt/√N]² by dt, we find

dρ_V(q) = dt{B(∂/∂q)q + ½B²(∂²/∂q²) + L}ρ_V(q)
− dt B√η(∂/∂q)[ĉρ_V(q) + ρ_V(q)ĉ†]
+ [dW(t)/√N][q − ⟨Q⟩]ρ_V(q).    (4.295)

Averaging this equation over the observed noise dW(t) gives

dρ(q) = dt{B(∂/∂q)q + ½B²(∂²/∂q²) + L}ρ(q)
− dt B√η(∂/∂q)[ĉρ(q) + ρ(q)ĉ†].    (4.296)
The coupling term here will still generate correlations between the system and the detector.
Note, however, that this coupling does not cause any back-action on the system, only
forward-action on the detector. This can be verified by showing that the unconditioned
system state ρ = ∫dq ρ(q) obeys

ρ̇ = Lρ = −i[Ĥ, ρ] + D[ĉ]ρ.    (4.297)

Exercise 4.32 Verify this, using the fact that ρ(q), and hence ℘(q), can be assumed to go
smoothly to zero as q → ±∞.
The superoperator Kushner–Stratonovich equation is clearly considerably more complicated
than the stochastic master equations used to describe other imperfections in detection. In
general it is possible only to simulate it numerically. (One technique is given in
Ref. [WW03b], where it is applied to the homodyne detection of a two-level atom.) Nevertheless,
it is possible to study some of its properties analytically [WW03a, WW03b].
From these, some general features of the quantum trajectories this equation generates can
be identified. First, in the limit B → ∞, the detector simply adds dark noise, and the
appropriate SME (4.265) with effective efficiency η/(1 + N) can be rederived. Second, for B
finite and N ≪ 1, the detector has an effective bandwidth of

B_eff = B/√N,    (4.298)

which is much greater than B. That is, the detector is insensitive to changes in the system
on a time-scale less than √N/B. In the limit N → 0 the effective bandwidth becomes
infinite and the noise becomes zero, so the detector is perfect (apart from η). That is, the
quantum trajectories reduce to those of the SME (4.238), as expected from the arguments
at the beginning of this subsection.
Fig. 4.10 (a) A quantum dot with a single-electron quasibound state is connected to two Fermi
reservoirs, a source (S) and a drain (D), by tunnel junctions. Tunnelling through the quantum dot
modulates the tunnelling current through a quantum point contact (QPC). (b) An experimental realization
using potentials (defined by surface gates) in a GaAs/AlGaAs two-dimensional electron-gas system.
Part (b) is reprinted by permission from Macmillan Publishers Ltd, Nature Physics, E. V. Sukhorukov
et al., 3, 243, Fig. 1(a), copyright 2007.
this way the modulated current through the QPC can be used continuously to monitor the
occupation of the dot. We will follow the treatment given in Ref. [GM01].
The irreversible dynamics of tunnelling through a single quasibound state on a quantum
dot from source to drain is treated in Section 3.5. We assume here that the interaction
of the dot with the QPC is weak, and hence does not significantly change the master
equation derived there. Here we need a model to describe the tunnelling current through
the QPC and its interaction with the quantum dot. We use the following Hamiltonian
(with ℏ = 1):
Ĥ = Ĥ_QD+leads + Ĥ_QPC + Ĥ_coup.    (4.299)

Here Ĥ_QD+leads is as in Eq. (3.64), and describes the tunnelling of electrons from the source
to the dot and from the dot to the drain. This leads to the master equation (3.73), in which
the tunnelling rate from source to dot is γ_L and that from dot to drain is γ_R. The new
Hamiltonian terms in this chapter are

Ĥ_QPC = Σ_k (ω_kL â†_Lk â_Lk + ω_kR â†_Rk â_Rk) + Σ_{k,q} (T_kq â†_Lk â_Rq + T*_qk â†_Rq â_Lk),    (4.300)

Ĥ_coup = ĉ†ĉ Σ_{k,q} (χ_kq â†_Lk â_Rq + χ*_qk â†_Rq â_Lk).    (4.301)
Here, as in Section 3.5, ĉ is the electron annihilation operator for the quantum dot. The
Hamiltonian for the QPC detector is represented by Ĥ_QPC, in which â_Lk, â_Rk and ω_kL, ω_kR
are, respectively, the electron (fermionic) annihilation operators and energies for the left
and right reservoir modes of the QPC at wave number k. Also, there is tunnelling between
these modes with amplitudes T_kq. Finally, Eq. (4.301) describes the interaction between the
detector and the dot: when the dot contains an electron, the effective tunnelling amplitudes
of the QPC detector change from T_kq to T_kq + χ_kq.
In the interaction frame and the Markovian approximation, the (unconditional) zero-temperature master equation of the reduced state matrix for the quantum-dot system is [Gur97, GMWS01]

ρ̇ = γ_L D[ĉ†]ρ + γ_R D[ĉ]ρ + D[𝒯 + 𝒳n̂]ρ,  (4.302)

where

n̂ = ĉ†ĉ.  (4.303)

This yields a closed equation for the mean occupation of the dot:

d⟨n̂⟩/dt = γ_L(1 − ⟨n̂⟩) − γ_R⟨n̂⟩.  (4.304)
This is because the Hamiltonian describing the interaction between the dot and the QPC commutes with the number operator n̂: the measurement is a QND measurement of n̂. However, if we ask for the conditional mean occupation of the dot given an observed current through the QPC, we do find a (stochastic) dependence on this current, as we will see later.
Exercise 4.33 Show that the stationary solution to Eq. (4.302) is
ρ_ss = [γ_L/(γ_L + γ_R)]|1⟩⟨1| + [γ_R/(γ_L + γ_R)]|0⟩⟨0|,  (4.305)
and that this is consistent with the stationary solution of Eq. (4.304).
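The stationary solution can also be checked numerically. The sketch below (with illustrative, hypothetical values for the rates γ_L, γ_R and the real amplitudes 𝒯, 𝒳) builds the Liouvillian of Eq. (4.302) as a matrix acting on column-vectorized 2 × 2 state matrices, and confirms that its null eigenvector reproduces Eq. (4.305):

```python
import numpy as np

# Dot Hilbert space: basis {|0>, |1>} (empty, occupied).
c = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation: |1> -> |0>
n = c.conj().T @ c                               # number operator
I2 = np.eye(2, dtype=complex)

def D_super(A):
    """Matrix of D[A]rho = A rho A' - (A'A rho + rho A'A)/2 acting on
    column-vectorized rho (vec(X rho Y) = kron(Y.T, X) vec(rho))."""
    AdA = A.conj().T @ A
    return (np.kron(A.conj(), A)
            - 0.5 * np.kron(I2, AdA)
            - 0.5 * np.kron(AdA.T, I2))

gL, gR = 1.3, 0.7      # illustrative tunnelling rates gamma_L, gamma_R
T, X = 2.0, 0.3        # illustrative QPC amplitudes (taken real)

# Liouvillian of Eq. (4.302): gL D[c'] + gR D[c] + D[T + X n]
L = gL * D_super(c.conj().T) + gR * D_super(c) + D_super(T * I2 + X * n)

# Stationary state: null eigenvector of L, reshaped and normalized.
w, V = np.linalg.eig(L)
rho_ss = V[:, np.argmin(np.abs(w))].reshape(2, 2, order='F')
rho_ss /= np.trace(rho_ss)

print(rho_ss[1, 1].real, gL / (gL + gR))   # occupation vs Eq. (4.305)
```

The QPC term, being diagonal in n̂, dephases coherences but leaves the populations fixed by the ratio of the tunnelling rates alone.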
Currents and correlations. It is important to distinguish the two classical stochastic currents
through this system: the current I (t) through the QPC and the current J (t) through the
quantum dot. Equation (4.302) describes the evolution of the reduced state matrix of
the quantum dot when these classical stochastic processes are averaged over. To study the
stochastic evolution of the quantum-dot state, conditioned on a particular measurement
realization, we need the conditional master equation. We first define the relevant point
processes that are the source of the classically observed stochastic currents.
For the tunnelling onto and off the dot, we define two point processes:
[dM_c^b(t)]² = dM_c^b(t),  (4.306)
E[dM_c^L(t)] = γ_L(1 − ⟨n̂⟩_c(t))dt = γ_L Tr[J[ĉ†]ρ_c(t)]dt,  (4.307)
E[dM_c^R(t)] = γ_R⟨n̂⟩_c(t)dt = γ_R Tr[J[ĉ]ρ_c(t)]dt.  (4.308)
Here b takes the symbolic value L or R and we use the subscript c to emphasize that the
quantities are conditioned upon previous observations (detection records) of the occurrences
of electrons tunnelling through the quantum dot and also tunnelling through the QPC barrier
(see below). The current through the dot is given by the classical stochastic process
J(t)dt = e_L dM_c^L(t) + e_R dM_c^R(t),  (4.309)

where e_L and e_R are as defined in Section 3.5 and sum to e.
Next, we define the point process for tunnelling through the QPC:
[dN_c(t)]² = dN_c(t),  (4.310)
E[dN_c(t)] = Tr[J[𝒯 + 𝒳n̂]ρ_c(t)]dt = [D + (D′ − D)⟨n̂⟩_c(t)]dt,  (4.311)
I(t) = e dN_c(t)/dt,  (4.312)

where D = |𝒯|² and D′ = |𝒯 + 𝒳|². The expected current is thus eD when the dot is empty and eD′ when the dot is occupied.
Exercise 4.34 Show that the steady-state currents through the quantum dot and QPC are, respectively,

J_ss = e γ_L γ_R/(γ_L + γ_R),  (4.313)
I_ss = eD γ_R/(γ_L + γ_R) + eD′ γ_L/(γ_L + γ_R).  (4.314)
(Note that the expression for Jss agrees with Eq. (3.77) from Section 3.5.)
The SME describing the quantum-dot state conditioned on the above three point processes is easily derived using techniques similar to those described in Section 4.3. The result is

dρ_c = dM_c^L(t)G[√γ_L ĉ†]ρ_c + dM_c^R(t)G[√γ_R ĉ]ρ_c + dN_c(t)G[𝒯 + 𝒳n̂]ρ_c
 − dt ½H[γ_L ĉĉ† + γ_R ĉ†ĉ + (D′ − D)n̂]ρ_c.  (4.315)
Equation (4.315) assumes that we can monitor the current through the quantum dot sufficiently quickly and accurately to distinguish the current pulses associated with the processes dM_c^b(t). In this case, the dot occupation n_c(t) will jump between the values 0 and 1. It makes the transition 0 → 1 when an electron tunnels onto the dot, which occurs at rate γ_L. It makes the transition 1 → 0 when an electron leaves the quantum dot, which occurs at rate γ_R.
The QPC current is then effectively

I(t) = e[D + (D′ − D)n_c(t)].  (4.316)

The two values are eD and eD′, depending on the value of n_c(t), and transitions between them are governed by the transition rates γ_L and γ_R. A process such as this is called a random telegraph process.
Using known results for a random telegraph process [Gar85], we can calculate the stationary two-time correlation function for the QPC current,

R(t − s) ≡ E[I(t), I(s)]_ss = e²(D′ − D)² [γ_L γ_R/(γ_L + γ_R)²] e^{−(γ_L+γ_R)|t−s|}.  (4.317)

Here E[A, B] ≡ E[AB] − E[A]E[B], and Eq. (4.317) is sometimes called the reduced correlation function because of the subtraction of the products of the means.
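The exponential form of Eq. (4.317) can be checked without simulation, directly from the generator of the two-state Markov chain. The sketch below (rates and current levels are illustrative assumptions) computes E[I(t+τ), I(t)] in the stationary state from the matrix exponential of the generator and compares it with the formula:

```python
import numpy as np

gL, gR = 1.0, 2.5          # illustrative rates: empty->occupied, occupied->empty
e, D, Dp = 1.0, 4.0, 9.0   # charge and the two current levels e*D, e*D'

# Generator W[i, j] = rate from state j to state i; columns sum to zero.
# States ordered (empty, occupied).
W = np.array([[-gL,  gR],
              [ gL, -gR]])
p_ss = np.array([gR, gL]) / (gL + gR)    # stationary occupancies
I_vals = np.array([e * D, e * Dp])       # current in each state

def expm_via_eig(M):
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

def R_num(tau):
    """Reduced correlation E[I(t+tau), I(t)] in the stationary state."""
    P = expm_via_eig(W * abs(tau))       # transition matrix exp(W|tau|)
    EII = sum(I_vals[i] * P[i, j] * I_vals[j] * p_ss[j]
              for i in range(2) for j in range(2))
    return EII - (I_vals @ p_ss) ** 2

def R_formula(tau):                      # Eq. (4.317)
    return (e**2 * (Dp - D)**2 * gL * gR / (gL + gR)**2
            * np.exp(-(gL + gR) * abs(tau)))

print(R_num(0.7), R_formula(0.7))
```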
Exercise 4.36 Derive Eq. (4.317) from the master equation (4.302) by identifying I(t) with e[D + (D′ − D)n(t)].
The spectrum of the current is defined, as usual, by

S(ω) = ∫_{−∞}^{∞} dτ e^{iωτ} R(τ).  (4.318)

Exercise 4.37 Show that, for the situation considered above, the QPC spectrum is the Lorentzian

S_QPC(ω) = e²(D′ − D)² [2γ_Lγ_R(γ_L + γ_R)] / {(γ_L + γ_R)²[(γ_L + γ_R)² + ω²]}.  (4.319)
Conditional dynamics. We now focus on the conditional dynamics of the quantum dot as
the QPC current, I (t), is monitored. That is, we average over the dot-tunnelling events
described by dM_c^b(t). Physically, this is reasonable because it would be very difficult to
discern the charge pulses (less than one electron) associated with these jumps. It would be
similarly difficult to discern the individual jumps dN(t) which define the QPC current I (t)
according to Eq. (4.312). In the above, we avoided that issue by considering the limit in
which the rate of tunnelling events through the QPC was so high that I (t) could be treated
as a random telegraph process, with randomness coming from the quantum-dot dynamics
but no randomness associated with the QPC tunnelling itself. It is apparent from the results
of the experiment (see Fig. 4.11) that this is a good, but not perfect, approximation. That
Fig. 4.11 A typical experimental record of the QPC tunnel current as it switches between two values
depending on the occupation of the nearby quantum dot. Adapted by permission from Macmillan
Publishers Ltd, Nature Physics, E. V. Sukhorukov et al., 3, 243, Fig. 1(c), copyright 2007.
is to say, it is evident that there is some noise on the current in addition to its switching
between the values eD and eD . This noise is due in part to the stochastic processes of
electrons tunnelling through the QPC and in part to excess noise added in the amplification
process. It is obvious that the individual jumps through the QPC are not resolved; rather,
the noise appears to be that of a diffusion process.
We saw in Section 4.8.3 that, if the noise in an ideal current is a Wiener process and
the excess noise is also a Wiener process, then the effect of the contaminating noise
is simply to reduce the efficiency of the detection to some η < 1. Since this is the simplest
model of imperfect detection, we will apply it in the present case. This requires finding a
diffusive approximation to the quantum jump stochastic master equation for describing the
conditional QPC current dynamics. As will be seen, this requires |D′ − D|/D ≪ 1. From Fig. 4.11 we see that in the experiment this ratio was approximately 0.03, so this approximation seems reasonable.
On averaging over the jump processes dM_c^b(t), introducing an efficiency η as described above and assuming for simplicity that 𝒯 and 𝒳 are real and positive (so that the relative phase θ = 0), Eq. (4.315) reduces to

dρ_I = dN(t)G[√η(𝒯 + 𝒳n̂)]ρ_I − (η/2)dt H[(D′ − D)n̂]ρ_I
 + dt{γ_L D[ĉ†] + γ_R D[ĉ] + (1 − η)D[𝒯 + 𝒳n̂]}ρ_I,  (4.320)

where we have used a new subscript to denote conditioning on I(t) = e dN/dt, where

E[dN(t)] = η Tr[J[𝒯 + 𝒳n̂]ρ_I]dt = η[D + (D′ − D)⟨n̂⟩_I(t)]dt.  (4.321)
This describes quantum jumps, in that every time there is a tunnelling event (dN = 1) the
conditional state changes by a finite amount.
To derive a diffusive limit, we make the following identifications:

𝒳n̂ ↔ ĉ;  𝒯 ↔ γ.  (4.322)

Then, apart from the additional irreversible terms, Eq. (4.320) is identical to the conditional master equation for homodyne detection (4.66). Thus, if we assume that 𝒯 ≫ 𝒳 (or, equivalently, |D′ − D|/D ≪ 1), we can follow the procedure of Section 4.4.2. We thus
Fig. 4.12 A QPC is used to monitor the occupation of a quantum dot (1) coherently coupled by
tunnelling to another quantum dot (2).
obtain

dρ_c(t) = dt{γ_L D[ĉ†] + γ_R D[ĉ] + 𝒳²D[n̂]}ρ_c(t) + √η 𝒳 dW(t)H[n̂]ρ_c(t).  (4.323)

The corresponding equation for the conditional mean occupation is

d⟨n̂⟩_c(t) = [γ_L(1 − ⟨n̂⟩_c) − γ_R⟨n̂⟩_c]dt + 2√η 𝒳⟨n̂⟩_c(1 − ⟨n̂⟩_c)dW(t).  (4.325)
Note that the noise turns off when the dot is either occupied or empty (⟨n̂⟩_c = 1 or 0). This is necessary mathematically in order to prevent the occupation becoming less than 0 or greater than 1. Physically, if ⟨n̂⟩_c = 1 (0), we are sure that there is (is not) an electron on the dot, so monitoring the dot cannot give us any more information about the state. Thus, there is no updating of the conditional mean occupation.
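This behaviour is easy to see in a minimal Euler-Maruyama sketch of a conditional-mean equation of this form, d⟨n⟩ = [γ_L(1 − ⟨n⟩) − γ_R⟨n⟩]dt + k⟨n⟩(1 − ⟨n⟩)dW, where k collects the measurement prefactor (2√η 𝒳 in the notation above); the rates and k below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
gL, gR = 1.0, 1.0       # illustrative tunnelling rates
k = 1.0                 # illustrative measurement-noise prefactor
dt, steps = 1e-3, 200_000

n = 0.5
traj = np.empty(steps)
for i in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    # Drift relaxes n toward gL/(gL+gR); noise vanishes at n = 0 and n = 1.
    n += (gL * (1 - n) - gR * n) * dt + k * n * (1 - n) * dW
    traj[i] = n

print(traj.mean(), traj.min(), traj.max())
```

Because the noise coefficient vanishes quadratically at the boundaries, the trajectory stays (to within the Euler step error) inside the unit interval, while its time average approaches the unconditional value γ_L/(γ_L + γ_R).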
We now consider the case, shown in Fig. 4.12, in which the monitored dot (dot 1) is coherently coupled by tunnelling to a second dot (dot 2). The coupling of the QPC detector to dot 1 is described, as in Eq. (4.301), by

Ĥ_coup = Σ_{k,q} ĉ₁†ĉ₁(χ_kq â†_Lk â_Rq + χ*_qk â†_Rq â_Lk),  (4.327)

where ĉ_j is the annihilation operator for an electron on dot j.
Averaging over the jump process gives the unconditional master equation

ρ̇ = Γ D[n̂₁]ρ − i[V̂, ρ].  (4.330)

Here Γ = |𝒳|² as before, while n̂_j = ĉ_j†ĉ_j and the effective Hamiltonian is

V̂ = (ε/2)ẑ + (Ω/2)x̂,  (4.331)

where

x̂ = ĉ₁†ĉ₂ + ĉ₂†ĉ₁,  (4.332)
ŷ = i(ĉ₁†ĉ₂ − ĉ₂†ĉ₁),  (4.333)
ẑ = ĉ₂†ĉ₂ − ĉ₁†ĉ₁,  (4.334)

so that ⟨z(t)⟩ = 1 and ⟨z(t)⟩ = −1 indicate that the electron is localized in dot 2 and dot 1, respectively.
Exercise 4.38 Verify that the operators x̂, ŷ and ẑ above act as Pauli operators in the one-electron subspace.
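In the one-electron subspace, with basis ordered (dot 1, dot 2), the check takes a few lines of linear algebra; the matrix representations below follow directly from ĉ₁†ĉ₂ = |1⟩⟨2|:

```python
import numpy as np

# One-electron subspace, basis ordered (dot 1, dot 2).
c1dag_c2 = np.array([[0, 1], [0, 0]], dtype=complex)   # |1><2|
c2dag_c1 = c1dag_c2.conj().T                            # |2><1|

x = c1dag_c2 + c2dag_c1
y = 1j * (c1dag_c2 - c2dag_c1)
z = np.diag([-1.0, 1.0]).astype(complex)   # c2'c2 - c1'c1

def comm(a, b):
    return a @ b - b @ a

# Pauli algebra: each operator squares to the identity,
# and the commutators close cyclically: [x, y] = 2iz, etc.
print(np.allclose(x @ x, np.eye(2)),
      np.allclose(comm(x, y), 2j * z),
      np.allclose(comm(y, z), 2j * x),
      np.allclose(comm(z, x), 2j * y))
```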
The parameter Ω is the strength of the tunnelling from one dot to the other, while ε is the difference between the energies of the quasibound states in the two dots.
Fig. 4.13 Differences in behaviour between unconditional and conditional evolutions. The initial DQD state is z = −1 (dot 1). The parameters are Ω = 1, ε = 0, θ = π and D′ = Γ = Ω/2, and time is in units of Ω⁻¹. (Recall that Γ = |𝒳|² while D′ = |𝒯 + 𝒳|².) (a) Unconditional evolution of ⟨z(t)⟩. (b) Conditional evolution of z_c(t), interrupted by quantum jumps, corresponding to the stochastically generated QPC detection record shown in (c). Reprinted Figure 3 with permission from H.-S. Goan and G. J. Milburn, Phys. Rev. B 64, 235307, (2001). Copyright 2008 by the American Physical Society.
effect, or quantum Zeno effect.⁴ In fact, the electron can still make a transition to dot 1 (the z = −1 state), but the rate of this is suppressed from O(Ω), in the no-measurement case, to O(Ω²/D′). In the limit Ω/D′ → 0, the transition rate goes to zero.
The quantum diffusion limit. We saw in Section 4.9.1 that the quantum diffusion equations can be obtained from the quantum jump description under the assumption that |𝒯| ≫ |𝒳| (or, equivalently, |D′ − D| ≪ D).
Exercise 4.40 Derive the diffusive SME for the double-dot case, and hence the diffusive stochastic Bloch equations:

dx_c(t) = −[ε dt + √(ηΓ) sin θ dW(t)]y_c(t) − (Γ/2)x_c(t)dt
 + √(ηΓ) cos θ z_c(t)x_c(t)dW(t),  (4.335)
dy_c(t) = [ε dt + √(ηΓ) sin θ dW(t)]x_c(t) − Ω z_c(t)dt − (Γ/2)y_c(t)dt
 + √(ηΓ) cos θ z_c(t)y_c(t)dW(t),  (4.336)
dz_c(t) = Ω y_c(t)dt − √(ηΓ) cos θ [1 − z_c²(t)]dW(t).  (4.337)
The measured current gives information about which dot is occupied, as shown in the final term of Eq. (4.337), as expected. However, for sin θ ≠ 0, it also gives information about the rotation around the z axis, as shown in the other two equations. That is, the effective detuning ε has a deterministic term and a stochastic term proportional to the noise in the current.
In Figs. 4.14(a)–(d), we plot the conditional quantum-jump evolution of z_c(t) and the corresponding detection record dN_c(t), for various values of |𝒯|/|𝒳|. Each jump (discontinuity) in the z_c(t) curves corresponds to the detection of an electron through the QPC barrier. One can clearly observe that, with increasing |𝒯|/|𝒳|, the rate of jumps increases, but the amplitude of the jumps decreases. When D′ = 0 each jump collapses the DQD electron into dot 2 (z = 1), but as D′ approaches D (from below) the jumps in z become smaller, although they are always positive. That is because, whenever there is a detection of an electron passing through the QPC, dot 2 is more likely to be occupied than dot 1.
The case of quantum diffusion, using Eqs. (4.335)–(4.337), is plotted in Fig. 4.14(e). In this case, infinitesimal jumps occur infinitely frequently. We can see that the behaviour of z_c(t) for |𝒯| = 5|𝒳| in the quantum-jump case shown in Fig. 4.14(d) is already very close to that of quantum diffusion shown in Fig. 4.14(e). Note that for the case θ = π (which we have used in the simulations) the unconditional evolution does not depend on the parameter |𝒯|. Thus all of these unravellings average to the same unconditioned evolution shown in Fig. 4.13(a).
The QPC current spectrum. We now calculate the stationary spectrum of the current
fluctuations through the QPC measuring the coherently coupled DQD. This quantity is
⁴ The former name alludes to the saying 'a watched pot never boils'; the latter alludes to the 'proof' by the Greek philosopher Zeno of Elea that motion is impossible. See Ref. [GWM93] for a review of this effect and an analysis using quantum trajectory theory.
Fig. 4.14 Transition from quantum jumps to quantum diffusion. The parameters are Ω = 1, ε = 0, θ = π and Γ = Ω/2, and time is in units of Ω⁻¹ (recall that Γ = |𝒳|²). In (a)–(d) are shown the quantum-jump conditional evolutions of z_c(t) and the corresponding detection moments with the following |𝒯|/|𝒳| ratios: (a) 1, (b) 2, (c) 3 and (d) 5. In (e) the conditional evolution of z_c(t) in the quantum-diffusive limit (|𝒯|/|𝒳| → ∞) is shown. The variable ξ(t) = dW/dt, the noise in the QPC current in the quantum-diffusive limit, is scaled so as to have unit variance on the plot. Reprinted Figure 4 with permission from H.-S. Goan and G. J. Milburn, Phys. Rev. B 64, 235307, (2001). Copyright 2008 by the American Physical Society.
S(ω) = ∫_{−∞}^{∞} dτ e^{iωτ} E[I(t + τ), I(t)]_ss.  (4.338)
This two-time correlation function for the current has been calculated for the case of
quantum diffusion in Ref. [Kor01a]. Here we will present the quantum-jump case from
Ref. [GM01], where I (t) = e dN(t)/dt.
Using the SME (4.329) and following the derivation in Section 4.3.2, we find that

E[I(t)] = e E[dN(t)]/dt = eη Tr[{D + (D′ − D)n̂₁}ρ(t)],  (4.339)

while, for τ ≥ 0,

E[I(t + τ)I(t)] = e² E[dN_c(t + τ)dN(t)]/(dt)²
 = e²η² Tr[{D + (D′ − D)n̂₁}e^{Lτ}{J[𝒯 + 𝒳n̂₁]ρ(t)}]
 + e²η Tr[{D + (D′ − D)n̂₁}ρ(t)]δ(τ).  (4.340)
In this form, we have related the ensemble averages of a classical random variable to the
quantum averages with respect to the qubit state matrix. The case < 0 is covered by the
fact that the two-time autocorrelation function is symmetric by definition.
Now we are interested in the steady-state case in which t → ∞, so that ρ(t) → ρ_ss = I/2 (see Exercise 4.39). Thus we can simplify Eq. (4.340) using the following identities for an arbitrary operator B: Tr[e^{Lτ}B] = Tr[B] and Tr[B e^{Lτ}(I/2)] = Tr[B]/2. Hence we obtain the steady-state R(τ) for τ ≥ 0 as

R(τ) = eĪδ(τ) + e²η²(D′ − D)²{Tr[n̂₁ e^{Lτ}(n̂₁/2)] − (Tr[n̂₁/2])²},  (4.341)

where Ī = eη(D + D′)/2 is the steady-state current.
Exercise 4.41 Verify Eq. (4.341).
The first term in Eq. (4.341) represents the shot-noise component. It is easy to evaluate the second term analytically for the ε = 0 case, yielding

R(τ) = eĪδ(τ) + [(ΔI)²/4] (λ₊e^{λ₋τ} − λ₋e^{λ₊τ})/(λ₊ − λ₋),  (4.342)

where λ± = −(Γ/4) ± √((Γ/4)² − Ω²), and ΔI = eη(D′ − D) is the difference between the two average currents.
Exercise 4.42 Derive Eq. (4.342).
Hint: This can be done by solving the equations for the unconditioned Bloch vector (Eqs. (4.335)–(4.337) with ε = 0 and the dW terms omitted) with the appropriate initial conditions to represent the initial 'state' n̂₁/2. This state is not normalized, but the norm is unchanged by the evolution, so one can take out a factor of 1/2 and use the normalized initial state n̂₁.
Fourier transforming this, as in Eq. (4.318), yields the spectrum of the current fluctuations as

S(ω) = S₀ + [Γ Ω²(ΔI)²/4]/[(ω² − Ω²)² + (Γ/2)²ω²],  (4.343)

where S₀ = eĪ represents the shot noise. Note that, from Eq. (4.343), the noise spectrum at ω = Ω can be written as

[S(Ω) − S₀]/S₀ = (ΔI)²/(eΓĪ).  (4.344)

In terms of the QPC parameters this is

[S(Ω) − S₀]/S₀ = 2η(√D + √D′)²/(D + D′).  (4.345)
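The step from Eq. (4.342) to Eq. (4.343) can be verified numerically: integrate the non-singular part of R(τ) against e^{iωτ} and compare with the resonant form quoted for S(ω) − S₀. The sketch below assumes those two expressions and uses illustrative values of Γ, Ω and ΔI:

```python
import numpy as np

Gamma, Omega, dI = 1.0, 1.0, 1.0     # illustrative parameters
disc = (Gamma / 4)**2 - Omega**2
lp = -Gamma / 4 + np.sqrt(disc + 0j)  # lambda_+ (complex when underdamped)
lm = -Gamma / 4 - np.sqrt(disc + 0j)  # lambda_-

def R2(tau):
    """Non-singular part of R(tau), Eq. (4.342), extended symmetrically."""
    t = np.abs(tau)
    return ((dI**2 / 4) * (lp * np.exp(lm * t) - lm * np.exp(lp * t))
            / (lp - lm)).real

def S_num(w, tmax=80.0, dt=1e-3):
    """2 * integral_0^inf R2(tau) cos(w tau) dtau, by the trapezoid rule."""
    tau = np.arange(0.0, tmax + dt, dt)
    vals = R2(tau) * np.cos(w * tau)
    return 2.0 * dt * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])

def S_formula(w):
    """Eq. (4.343) minus the shot-noise floor S0."""
    return (Gamma * Omega**2 * dI**2 / 4
            / ((w**2 - Omega**2)**2 + (Gamma / 2)**2 * w**2))

print(S_num(Omega), S_formula(Omega))
```

At the resonance ω = Ω the excess noise reduces to (ΔI)²/Γ, which the numerical integral reproduces.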
Fig. 4.15 A plot of the noise spectrum of the QPC current monitoring a double quantum dot. The spectrum is normalized by the shot-noise level, and is shown for various ratios of the measurement strength Γ to the tunnelling strength Ω, with Γ/(4Ω) equalling (a) 0.01, (b) 0.5 and (c) 2. For discussion, see the text. Reprinted Figure 5 with permission from H.-S. Goan and G. J. Milburn, Phys. Rev. B 64, 235307, (2001). Copyright 2008 by the American Physical Society.
later derived from a microscopic model in Ref. [Kor01b]. This model was restricted to a
weakly responding detector (the diffusive limit discussed above), but has been extended to
the case of a strongly responding detector (quantum jumps) [Kor03]. Korotkov's approach (which he called 'Bayesian') is completely equivalent to the quantum trajectory approach used above [GMWS01]. Quantum trajectories were first used in the solid-state context in
Ref. [WUS+ 01], which included a derivation from a (rather simplistic) microscopic model.
From the beginning [Kor99, WUS+ 01], these theories of continuous monitoring in
mesoscopic systems have allowed for non-ideal monitoring. That is, even if the initial state
were pure, the conditional state would not in general remain pure; there is no description
using a stochastic Schrödinger equation. The microscopic model in Ref. [WUS+ 01] is
inherently non-ideal, while, in Ref. [Kor99], Korotkov introduced a phenomenological
dephasing rate, which he later derived by introducing extra back-action noise from the
detector [Kor01b]. Another sort of non-ideality was considered in Ref. [GM01], where the
authors introduce inefficient detection by a QPC, as used in Section 4.9.1. Here, efficiency has the same sense as in quantum optics: some proportion 1 − η of the detector output is lost. This is of course equivalent to introducing extra decoherence as produced by an
unmonitored detector. As shown generally in Section 4.8.3, in the diffusive limit, the same
conditioning equation results if extra white noise ('dark noise') is added to the detector
output before it is recorded.
The theory of quantum trajectories for mesoscopic systems has recently been extended to
allow for the noisy filtering characteristic of amplifiers used in such experiments [OWW+ 05,
OGW08]. This was done using the same theory as that presented in Section 4.8.4, but taking
into account correlations between noise that disturbs the system and noise in the recorded
current. Such realistic quantum trajectories are essential for optimal feedback control,
because the optimal algorithms are based upon the state conditioned on the measurement
record, as will be discussed in Section 6.3.3. Note that the authors of Ref. [RK03] do consider
the effect of a finite-bandwidth filter on a feedback algorithm, but that is a quite distinct
idea. There, the feedback algorithm is not based on the state of the system conditioned on
the filtered current, and indeed no such conditional state is calculated.
5
Quantum feedback control
5.1 Introduction
In the preceding chapter we introduced quantum trajectories: the evolution of the state of a
quantum system conditioned on monitoring its outputs. As discussed in the preface, one of
the chief motivations for modelling such evolution is for quantum feedback control. Quantum feedback control can be broadly defined as follows. Consider a detector continuously
producing an output, which we will call a current. Feedback is any process whereby a physical mechanism makes the statistics of the current at a later time depend upon the current at earlier times. Feedback control is feedback that has been engineered for a
particular purpose, typically to improve the operation of some device. Quantum feedback
control is feedback control that requires some knowledge of quantum mechanics to model.
That is, there is some part of the feedback loop that must be treated (at some level of
sophistication) as a quantum system. There is no implication that the whole apparatus must
be treated quantum mechanically.
The structure of this chapter is as follows. The first quantum feedback experiments
(or at least the first experiments specifically identified as such) were done in the mid-1980s by two groups [WJ85a, MY86]. They showed that the photon statistics of a beam
of light could be altered by feedback. In Section 5.2 we review such phenomena and
give a theoretical description using linearized operator equations. Section 5.3 considers the
changes that arise when one allows the measurement to involve nonlinear optical processes.
As well as explaining key results in quantum-optical feedback, these sections introduce
important concepts for feedback in general, such as stability and robustness, and important
applications such as noise reduction. These sections make considerable use of material
from Ref. [Wis04].
From Section 5.4 onwards we turn from feedback on continuous fields to feedback
on a localized system that is continuously monitored. We give a general description for
feedback in such systems and show how, in the Markovian limit, the evolution including
the feedback can be described by a new master equation. We formulate our description
both in the Schrödinger picture and in the Heisenberg picture, and we discuss an elegant example for which the former description is most useful: protecting a Schrödinger-cat state
from decoherence. In Section 5.5 we redevelop these results for the particular case of a
measurement yielding a current with Gaussian white noise, such as homodyne detection.
We include the effects of a white-noise (thermal or squeezed) bath. In Section 5.6 we
apply this theory for homodyne-based feedback to a very simple family of quantum-optical
systems with linear dynamics. We show that, although Markovian feedback can be described
without reference to the conditional state, it is the conditional state that determines both
the optimal feedback strength and how the feedback performs. In Section 5.7 we discuss
a proposed application using (essentially) Markovian feedback to produce deterministic
spin-squeezing in an atomic ensemble. Finally, in Section 5.8 we discuss other concepts
and other applications of quantum feedback control.
5.2 Feedback with optical beams using linear optics
5.2.1 Linearized theory of photodetection
The history of feedback in quantum optics goes back to the observation of sub-shot-noise
fluctuations in an in-loop photocurrent (defined below) in the mid-1980s by two groups
[WJ85a, MY86]. A theory of this phenomenon was soon developed by Yamamoto and
co-workers [HY86, YIM86] and by Shapiro et al. [SSH+ 87]. The central question they
were addressing was whether this feedback was producing real squeezing (defined below), a
question whose answer is not as straightforward as might be thought. These treatments were
based in the Heisenberg picture. They used quantum Langevin equations where necessary
to describe the evolution of source operators, but they were primarily interested in the
properties of the beams entering the photodetectors, rather than their sources.
The Heisenberg picture is most convenient if (a) one is interested primarily in the
properties of the beams and (b) an analytical solution is possible. To obtain analytical
results, it is necessary to treat the quantum noise only within a linearized approximation.
We begin therefore by giving the linearized theory for photodetection in the Heisenberg
picture.
Using the theory from Section 4.7, the operator for the photon flux in a beam at longitudinal position z₁ is

Î₁(t) = b̂†(z₁, t)b̂(z₁, t).  (5.1)

Writing b̂(z₁, t) = β₁ + δb̂₁(t), with β₁ real, and keeping only terms of first order in the fluctuations gives

Î₁(t) ≈ β₁² + β₁X̂₁(t),  (5.2)

where the amplitude quadrature fluctuation operator

X̂₁(t) = δb̂₁(t) + δb̂₁†(t)  (5.3)

is assumed to have zero mean. This approximation is essentially the same as that used in
Section 4.4.2 to treat homodyne detection in the large-local-oscillator limit. It assumes that
individual photon counts are unimportant, namely that the system fluctuations are evident
only in large numbers of detections. This approximation will be valid if the correlations of
interest in the system happen on a time-scale long compared with the inverse mean count
rate and if the fluctuations are relatively small:

⟨X̂₁(t + τ)X̂₁(t)⟩ ≪ β₁²  for τ ≠ 0.  (5.4)
In all that follows we will consider only stationary stochastic processes, where the two-time correlation functions depend only on the time difference τ. We cannot consider τ = 0 in Eq. (5.4), because the variance diverges due to vacuum fluctuations:¹

lim_{τ→0} ⟨X̂₁(t + τ)X̂₁(t)⟩ = lim_{τ→0} δ(τ).  (5.5)

Note, however, that in this limit the linearized correlation function agrees with that from Eq. (5.1):

lim_{τ→0} ⟨Î(t)Î(t + τ)⟩ = lim_{τ→0} ⟨Î(t)⟩δ(τ) = lim_{τ→0} β₁²δ(τ).  (5.6)
Just as we defined x and y quadratures for a system in Chapter 4, here it is also useful to define the phase quadrature fluctuation operator

Ŷ₁(t) = −i[δb̂₁(t) − δb̂₁†(t)].  (5.7)

For free fields, where (taking the speed of light to be unity as usual)

b̂(z, t + τ) = b̂(z − τ, t),  (5.8)

these fluctuation operators obey

[X̂₁(t), X̂₁(t′)] = [Ŷ₁(t), Ŷ₁(t′)] = 0,  (5.9)
[X̂₁(t), Ŷ₁(t′)] = 2iδ(t − t′).  (5.10)

It is useful to define the Fourier-transformed operator

X̃₁(ω) = ∫_{−∞}^{∞} dt e^{iωt} X̂₁(t),  (5.11)

and similarly for Ỹ₁(ω). Note that we use a tilde but drop the hat for notational convenience. Then it is simple to show that

[X̃₁(ω), Ỹ₁(ω′)] = 4πi δ(ω + ω′).  (5.12)
¹ For thermal or squeezed white noise (see Section 4.8.2) this δ-function singularity is multiplied by a non-unit constant.
For stationary statistics, as we are considering, ⟨X̂₁(t)X̂₁(t′)⟩ is a function of t − t′ only. From this it follows that

⟨X̃₁(ω)X̃₁(ω′)⟩ ∝ δ(ω + ω′).  (5.13)

We can therefore define the spectrum

S₁^X(ω) ≡ ∫_{−∞}^{∞} dτ e^{iωτ}⟨X̂₁(t + τ)X̂₁(t)⟩  (5.14)
 = ⟨X̃₁(ω)X̂₁(0)⟩.  (5.15)

Note that the final expression involves both X̃₁ and X̂₁. Equation (5.15) is the same as the spectrum defined for a homodyne measurement of the x quadrature in Section 4.4.4 if we take the system quadrature x to have zero mean. In the present case, the spectrum can be experimentally determined as

S₁^X(ω) = ⟨Î₁(t)⟩⁻¹ ∫ e^{iωt}⟨Î₁(t), Î₁(0)⟩dt.  (5.16)

It can be shown that

S₁^X(ω)S₁^Y(ω) ≥ 1.  (5.17)

This can be regarded as an uncertainty relation for continuum fields. A coherent continuum field, for which b̂₁(t)|β⟩ = β|β⟩, has, for all ω,

S₁^Q(ω) = 1,  (5.18)
where Q = X or Y (or any intermediate quadrature). This is known as the standard quantum
limit or shot-noise limit. A squeezed continuum field is one such that, for some ω and some Q,

S₁^Q(ω) < 1.  (5.19)
This terminology is appropriate for the same reason as for single-mode squeezed states: the
reduced noise in one quadrature gets squeezed out into the other quadrature, because of
Eq. (5.17).
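The shot-noise limit (5.18) can be illustrated directly: a coherent beam gives Poissonian photocounts, whose noise spectrum, estimated as in Eq. (5.16) and normalized to the mean flux, is flat and equal to 1. The count rate and binning in this sketch are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
rate, dt, N = 1000.0, 1e-3, 2**16        # mean count rate, bin width, bins

counts = rng.poisson(rate * dt, size=N)  # photocounts per bin
fluct = counts - counts.mean()           # photocurrent fluctuations

# Periodogram normalized by the shot-noise level (mean counts per bin):
spec = np.abs(np.fft.rfft(fluct))**2 / (N * rate * dt)
S_est = spec[1:].mean()                  # average over nonzero frequencies
print(S_est)
```

For Poissonian statistics the variance per bin equals the mean, so S_est ≈ 1 at all frequencies; amplitude-squeezed light would instead dip below 1 over its squeezing bandwidth.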
Fig. 5.1 A diagram for a travelling-wave feedback experiment. Travelling fields are denoted b̂ and photocurrent fluctuations δI. The first beam-splitter transmittance, ε₁, is variable; the second, ε₂, is fixed. The two vacuum field inputs are denoted ν̂ and μ̂. From Quantum Squeezing, 2004, Chapter 6, 'Squeezing and Feedback', H. M. Wiseman, Figure 6.1, Springer-Verlag, Berlin, Heidelberg. Redrawn and revised and adapted with kind permission of Springer Science+Business Media.
We consider an input field b̂₀ with mean amplitude β:

b̂₀(t) = β + δb̂₀(t).  (5.20)

We take the amplitude and phase noises to be independent and characterized by arbitrary spectra S₀^X(ω) and S₀^Y(ω), respectively.
This field is then passed through a beam-splitter of transmittance ε₁(t). By unitarity, the diminution of the transmitted field by a factor √(ε₁(t)) must be accompanied by the addition of vacuum noise from the other port of the beam-splitter (see Section 4.4.1). The transmitted field is

b̂₁(t) = √(ε₁(t − τ₁)) b̂₀(t − τ₁) + √(ε̄₁(t − τ₁)) ν̂(t − τ₁).  (5.21)

Here τ₁ = z₁ − z₀ and we are using the notation

ε̄_j ≡ 1 − ε_j.  (5.22)
The annihilation operator ν̂(t) represents the vacuum fluctuations. The vacuum is a special case of a coherent continuum field, with vanishing mean amplitude ⟨ν̂(t)⟩ = 0, and so is completely characterized by its spectrum

S_ν^Q(ω) = 1.  (5.23)

Since the vacuum fluctuations are uncorrelated with any other field, and have stationary statistics, the time argument for ν̂(t) is arbitrary (when it first appears).
The beam-splitter transmittance ε₁(t) in Eq. (5.21) is time-dependent. This time-dependence can be achieved experimentally by a number of means. For example, if the incoming beam is elliptically polarized then an electro-optic modulator (a device with a refractive index controlled by a current) will alter the orientation of the ellipse. A polarization-sensitive beam-splitter will then control the amount of the light which is transmitted, as done, for example, in [TWMB95]. As the reader will no doubt have anticipated, the current used to control the electro-optic modulator can be derived from a later detection of the light beam, giving rise to feedback. Writing ε₁(t) = ε₁ + δε₁(t), and assuming that the modulation of the transmittance is small (δε₁(t) ≪ ε₁, ε̄₁), one can linearize Eq. (5.21) as

b̂₁(t) ≈ √ε₁ b̂₀(t − τ₁) + β δε₁(t − τ₁)/(2√ε₁) + √ε̄₁ ν̂(t − τ₁).  (5.24)

The field transmitted through the second beam-splitter is

b̂₂(t) = √ε₂ b̂₁(t − τ₂) + √ε̄₂ μ̂(t − τ₂),  (5.25)

where τ₂ = z₂ − z₁ and μ̂(t) represents vacuum fluctuations like ν̂(t). The reflected beam operator is

b̂₃(t) = √ε̄₂ b̂₁(t − τ₂) − √ε₂ μ̂(t − τ₂).  (5.26)
Using the approximation (5.24), the linearized quadrature fluctuation operators for b̂₂ are

X̂₂(t) = √(ε₂ε₁) X̂₀(t − T₂) + √(ε₂/ε₁) β δε₁(t − T₂)
 + √(ε₂ε̄₁) X̂_ν(t − T₂) + √ε̄₂ X̂_μ(t − T₂),  (5.27)
Ŷ₂(t) = √(ε₂ε₁) Ŷ₀(t − T₂)
 + √(ε₂ε̄₁) Ŷ_ν(t − T₂) + √ε̄₂ Ŷ_μ(t − T₂),  (5.28)

where T₂ = τ₂ + τ₁. Here, for simplicity, we have shifted the time argument of the vacuum quadrature operators by τ₁. This is permissible because the vacuum noise is a stationary process (regardless of any other noise processes). Similarly, for b̂₃ we have

X̂₃(t) = √(ε̄₂ε₁) X̂₀(t − T₂) + √(ε̄₂/ε₁) β δε₁(t − T₂)
 + √(ε̄₂ε̄₁) X̂_ν(t − T₂) − √ε₂ X̂_μ(t − T₂),  (5.29)
Ŷ₃(t) = √(ε̄₂ε₁) Ŷ₀(t − T₂) + √(ε̄₂ε̄₁) Ŷ_ν(t − T₂)
 − √ε₂ Ŷ_μ(t − T₂).  (5.30)
The mean fields for b̂₂ and b̂₃ are √(ε₁ε₂)β and √(ε₁ε̄₂)β, respectively. Thus, if these fields are incident upon photodetectors, the respective linearized photocurrent fluctuations are, as explained in Section 5.2.1,

δI₂(t) = √(ε₁ε₂) β X̂₂(t),  (5.31)
δI₃(t) = √(ε₁ε̄₂) β X̂₃(t).  (5.32)

We now allow the first beam-splitter transmittance to be controlled by the photocurrent from the second detector:

δε₁(t) = [g/(β²ε₂)] ∫₀^∞ h(t′) δI₂(t − τ − t′)dt′,  (5.33)

where τ > 0 is the delay in the feedback loop, h(t) is a response function normalized such that ∫₀^∞ h(t)dt = 1, and g is the low-frequency loop gain.
5.2.3 Stability

Clearly the feedback can affect only the amplitude quadrature X̂. Putting Eq. (5.33) into Eq. (5.27) yields

X̂₂(t) = √(ε₂ε₁) X̂₀(t − T₂) + g ∫₀^∞ h(t′)X̂₂(t − T − t′)dt′
 + √(ε₂ε̄₁) X̂_ν(t − T₂) + √ε̄₂ X̂_μ(t − T₂),  (5.34)

where T = T₂ + τ is the total delay in the feedback loop.
We can rewrite this as

X̂₂(t) = g ∫₀^∞ h(t′)X̂₂(t − T − t′)dt′ + f(t),  (5.35)

where f(t) represents all of the (stationary) noise processes in Eq. (5.34). Now the solution to this equation can be found by taking the Laplace transform:

[1 − g h_L(s)exp(−sT)] X₂^L(s) = f^L(s),  (5.36)

so that

X₂^L(s) = f^L(s)/[1 − g h_L(s)exp(−sT)].  (5.37)

If the noise f(t) is spectral-bounded, then X̂(t) will also be spectral-bounded for all times. (Here by spectral-bounded we mean that the spectrum, as defined in Eq. (5.14), is bounded from above.) This will be the case if and only if

Re[s] < 0  (5.38)

for every solution s of the characteristic equation

g h_L(s)exp(−sT) = 1.  (5.39)

Since h(t) is non-negative and normalized, for Re[s] ≥ 0 we have |h_L(s)exp(−sT)| ≤ 1. Thus under this assumption the characteristic equation cannot be satisfied for |g| < 1. That is, the |g| < 1 regime will always be stable. On the other hand, if g > 1 then there is an s with a positive real part that will solve Eq. (5.39).
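The unstable root for g > 1 can be exhibited concretely. Taking, purely for illustration, the single-pole response h(t) = Λe^{−Λt} (so h_L(s) = Λ/(Λ + s)), the function F(s) = g h_L(s)e^{−sT} − 1 satisfies F(0) = g − 1 > 0 and F(s) → −1 as s → ∞ on the real axis, so for g > 1 a real positive root exists and bisection finds it:

```python
import math

def F(s, g, Lam=1.0, T=0.5):
    """Characteristic function g*hL(s)*exp(-sT) - 1 for h(t) = Lam*exp(-Lam*t)."""
    return g * Lam / (Lam + s) * math.exp(-s * T) - 1.0

def real_root(g, lo=0.0, hi=50.0):
    """Bisection for a real root of F on [lo, hi]; None if no sign change."""
    if F(lo, g) * F(hi, g) > 0:
        return None
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if F(lo, g) * F(mid, g) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

s_unstable = real_root(2.0)     # g > 1: positive root => exponential growth
print(s_unstable, real_root(0.5))
```

For g = 0.5 no sign change occurs on the positive real axis, consistent with the stability of the |g| < 1 regime.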
Box 5.1. Consider the exponential response function h(t) = Λe^{−Λt}, for which h_L(s) = Λ/(Λ + s) and |h̃(ω)|² is a Lorentzian of full-width at half-maximum 2Λ. For negative feedback (g = −|g|), writing sT = −λ + iφ, the characteristic equation (5.39) becomes

ΛT − λ + iφ = −|g|ΛT e^{λ}(cos φ − i sin φ).  (5.41)

The imaginary and real parts of this give, respectively,

φ = +|g|ΛT e^{λ} sin φ,  (5.42)
λ − ΛT = |g|ΛT e^{λ} cos φ.  (5.43)

Dividing Eq. (5.43) by Eq. (5.42) gives

λ = ΛT + φ cot φ.  (5.44)

Thus, as long as cot φ is positive, the real part of s (that is, −λT⁻¹) will be negative, as is required for stability. Substituting Eq. (5.44) into Eq. (5.42) or Eq. (5.43) to eliminate λ yields

φ/sin φ = |g|ΛT e^{ΛT} e^{φ cot φ}.  (5.45)

Stability requires that this equation have no solution with cot φ ≤ 0, which holds provided

|g|ΛT e^{ΛT} < π/2,  (5.46)

so that, for ΛT ≪ 1,

Λ < π/(2T|g|) ≪ T⁻¹.  (5.47)

That is, a finite delay time T and large negative feedback g ≪ −1 puts an upper bound on the bandwidth B = 2Λ of the feedback (here defined as the full-width at half-maximum of |h̃(ω)|²).
Exercise 5.4 Prove this by considering the left-hand side of Eq. (5.39) as a function of s on the interval [0, ∞) on the real line. In particular, consider its value at s = 0 and its value as s → ∞.
Thus it is a necessary condition for stability to have g < 1. If g < 1, the stability of the feedback depends on T and the shape of h(t). However, it turns out that it is possible to have arbitrarily large negative low-frequency feedback (that is, g ≪ −1), for any feedback-loop delay T, provided that h(t) is broad enough. That is, the price to be paid for strong low-frequency negative feedback is a reduction in the bandwidth of the feedback, namely the width of |h̃(ω)|². A simple example of this is considered in Box 5.1, to which the following exercise pertains.
Exercise 5.5 Convince yourself of the statements following Eq. (5.46) by graphing both sides of Eq. (5.45) for different values of |g|ΛT e^{ΛT}.
In the Fourier domain, the solution (5.37) becomes

X̃₂(ω) = exp(iωT₂) [√(ε₂ε₁)X̃₀(ω) + √(ε₂ε̄₁)X̃_ν(ω) + √ε̄₂X̃_μ(ω)] / [1 − g h̃(ω)exp(iωT)].  (5.48)

From this the in-loop amplitude quadrature spectrum is easily found from Eqs. (5.13) and (5.14) to be

S₂^X(ω) = [ε₁ε₂S₀^X(ω) + ε₂ε̄₁S_ν^X(ω) + ε̄₂S_μ^X(ω)] / |1 − g h̃(ω)exp(iωT)|²
 = {1 + ε₁ε₂[S₀^X(ω) − 1]} / |1 − g h̃(ω)exp(iωT)|².  (5.49)
The most dramatic effect is, of course, for large negative feedback. For sufficiently large |g| it is clear that one can make

S₂^X(ω) < 1  (5.50)

for some ω. This effect has been observed experimentally many times with different systems
involving feedback; see for example Refs. [WJ85a, MY86, YIM86, MPV94, TWMB95].
Without a feedback loop this sub-shot-noise photocurrent would be seen as evidence for
squeezing. However, there are several reasons to be very cautious about applying the description 'squeezing' to this phenomenon relating to the in-loop field. Two of these reasons are theoretical, and are discussed in the following two subsections. The more practical reason relates to the out-of-loop beam b̂3.
From Eq. (5.29), the X quadrature of the beam b̂3 is, in the Fourier domain,

$$\tilde X_3(\omega) = e^{i\omega T_2}\Big[\sqrt{\eta_1(1-\eta_2)}\,\tilde X_0(\omega) + \sqrt{(1-\eta_1)(1-\eta_2)}\,\tilde X_{v_1}(\omega) - \sqrt{\eta_2}\,\tilde X_{v_2}(\omega)\Big] + \sqrt{(1-\eta_2)/\eta_2}\;g\tilde h(\omega)e^{i\omega T}\,\tilde X_2(\omega). \qquad(5.51)$$

Here we have substituted for the modulation in terms of X̃2. Now using the above expression (5.48) gives

$$\tilde X_3(\omega) = e^{i\omega T_2}\left\{\frac{\sqrt{\eta_1(1-\eta_2)}\,\tilde X_0(\omega) + \sqrt{(1-\eta_1)(1-\eta_2)}\,\tilde X_{v_1}(\omega)}{1-g\tilde h(\omega)e^{i\omega T}} - \frac{\sqrt{\eta_2}\,\big[1-g\tilde h(\omega)e^{i\omega T}/\eta_2\big]\,\tilde X_{v_2}(\omega)}{1-g\tilde h(\omega)e^{i\omega T}}\right\}. \qquad(5.52)$$
This yields the spectrum

$$S_3^X(\omega) = \frac{1+\eta_1(1-\eta_2)\big[S_0^X(\omega)-1\big]}{\big|1-g\tilde h(\omega)e^{i\omega T}\big|^2} + \frac{-2\,\mathrm{Re}\big[g\tilde h(\omega)e^{i\omega T}\big] + g^2\big|\tilde h(\omega)\big|^2/\eta_2}{\big|1-g\tilde h(\omega)e^{i\omega T}\big|^2}. \qquad(5.53)$$
The denominators are identical to those in the in-loop case, and the numerator in the first line has the same form, but the additional term in the numerator of the second line indicates that there is extra noise in the out-of-loop signal.
The expression (5.53) can be rewritten as

$$S_3^X(\omega) = 1 + \frac{\eta_1(1-\eta_2)\big[S_0^X(\omega)-1\big] + g^2\big|\tilde h(\omega)\big|^2(1-\eta_2)/\eta_2}{\big|1-g\tilde h(\omega)e^{i\omega T}\big|^2}. \qquad(5.54)$$
From this it is apparent that, unless the initial beam is amplitude-squeezed (that is, unless S₀ˣ(ω) < 1 for some ω), the out-of-loop spectrum will always be greater than the shot-noise limit of unity. In other words, it is not possible to extract the apparent squeezing in the feedback loop by using a beam-splitter. In fact, in the limit of large negative feedback
(which gives the greatest noise reduction in the in-loop signal), the low-frequency out-of-loop amplitude spectrum approaches a constant. That is,

$$\lim_{g\to-\infty} S_3^X(0) = \eta_2^{-1}. \qquad(5.55)$$

Thus the more light one attempts to extract from the feedback loop, the higher above shot noise the spectrum becomes. Indeed, this holds for any frequency such that h̃(ω) ≠ 0, but recall that for large |g| the bandwidth of |h̃(ω)|² must go to zero (see Box 5.1).
This result is counter to an intuition based on classical light signals, for which the effect
of a beam-splitter is simply to split a beam in such a way that both outputs would have the
same statistics. The reason why this intuition fails is precisely because this is not all that
a beam-splitter does; it also introduces vacuum noise, which is anticorrelated at the two
output ports. The detector for beam b̂2 measures the amplitude fluctuations X̂2, which are a combination of the initial fluctuations X̂0 and the two vacuum fluctuations X̂_{v1} and X̂_{v2}. The first two of these are common to the beam b̂3, but the last, X̂_{v2}, appears with opposite sign in X̂3. As the negative feedback is turned up, the first two components are successfully suppressed, but the last is actually amplified.
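The contrast between the in-loop and out-of-loop spectra can be checked numerically. The following is a sketch using the spectra of Eqs. (5.49) and (5.54), under stated simplifying assumptions (coherent input S₀ = 1, ideal broadband response h̃(ω) = 1, zero delay T = 0, illustrative transmittances).

```python
# Hedged numerical sketch of Eqs. (5.49) and (5.54) at one frequency.
# Assumptions: coherent input S0 = 1, h~(w) = 1, T = 0, so the loop
# factor reduces to (1 - g). Parameter values are illustrative.
eta1, eta2 = 0.9, 0.4

def S2(g, S0=1.0):
    # in-loop amplitude spectrum, Eq. (5.49)
    return (1 + eta1 * eta2 * (S0 - 1)) / abs(1 - g) ** 2

def S3(g, S0=1.0):
    # out-of-loop amplitude spectrum, Eq. (5.54)
    num = eta1 * (1 - eta2) * (S0 - 1) + g**2 * (1 - eta2) / eta2
    return 1 + num / abs(1 - g) ** 2

g = -100.0                        # strong negative feedback
assert S2(g) < 1.0                # apparent "squeezing" in the loop
assert S3(g) > 1.0                # but the extracted beam is above shot noise
# Eq. (5.55): the low-frequency out-of-loop noise approaches 1/eta2
assert abs(S3(-1e6) - 1 / eta2) < 1e-3
```

As the text explains, the anticorrelated vacuum noise X̂_{v2} is amplified by the loop, which is why S₃ˣ rises even as S₂ˣ falls.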
The feedback has no effect on the phase quadrature of the in-loop beam, so that

$$S_2^Y(\omega) = 1 + \eta_1\eta_2\big[S_0^Y(\omega)-1\big]. \qquad(5.56)$$

It is impossible to measure this spectrum without disturbing the feedback loop, because all of the light in the b̂2 beam must be incident upon the photodetector in order to measure X̂2. However, it is possible to measure the phase quadrature of the out-of-loop beam by homodyne detection. This was done in [TWMB95], which verified that this quadrature is also unaffected by the feedback, with

$$S_3^Y(\omega) = 1 + \eta_1(1-\eta_2)\big[S_0^Y(\omega)-1\big]. \qquad(5.57)$$

For simplicity, consider the case in which the initial beam is coherent, with S₀ˣ(ω) = S₀ʸ(ω) = 1. Then S₂ʸ(ω) = 1 and

$$S_2^X(\omega)\,S_2^Y(\omega) = \big|1-g\tilde h(\omega)e^{i\omega T}\big|^{-2}. \qquad(5.58)$$
This can clearly be less than unity. This represents a violation of the uncertainty relation
(5.17) which follows from the commutation relations (5.12). In fact it is easy to show
(as done first by Shapiro et al. [SSH+ 87]) from the solution (5.48) that the commutation
relations (5.12) are false for the field b̂2 and must be replaced by

$$\big[\tilde X_2(\omega), \tilde Y_2(\omega')\big] = \frac{4\pi i\,\delta(\omega+\omega')}{1-g\tilde h(\omega)e^{i\omega T}}. \qquad(5.59)$$

In the time domain, however, the standard commutation relations still hold for nearby times:

$$\big[X_2(t), Y_2(t')\big] = 2i\,\delta(t-t')\quad\text{for }|t-t'| < T. \qquad(5.60)$$
The field b̂2 is only in existence for a time T₂ before it is detected. Because T₂ < T, this means that the two-time commutation relations between different parts of field b̂2 are actually preserved for any time such that those parts of the field are in existence, travelling through space towards the detector. It is only at time separations greater than the feedback-loop delay time T that non-standard commutation relations hold. To summarize, the commutation relations between any of the fields at different spatial points always hold, but there is no reason to expect the time or frequency commutation relations to hold for an in-loop field. Without these relations, it is not clear how squeezing should be defined. Indeed, it has been suggested [BGS+99] that 'squashing' would be a more appropriate term for in-loop 'squeezing', because the uncertainty has actually been squashed, rather than squeezed out of one quadrature and into another.
A second theoretical reason against the use of the word squeezing to describe the sub-shot-noise in-loop amplitude quadrature is that (provided that beam b̂0 is not squeezed) the entire apparatus can be described semiclassically. In a semiclassical description there is no
noise except classical noise in the field amplitudes, and shot noise is a result of a quantum
detector being driven by a classical beam of light. That such a description exists might
seem surprising, given the importance of vacuum fluctuations in the explanation of the
spectra in Section 5.2.4. However, the semiclassical explanation, as discussed for example
in Refs. [SSH+ 87] and [TWMB95], is at least as simple. Nevertheless, this theory is less
general than the quantum theory (it cannot treat an initial squeezed beam) so we do not
develop it here.
the correlations of X(t). For a perfect QND measurement of X̂2 and X̂3, the spectra will reproduce those of the conventional (demolition) photodetectors which measure these beams. This confirms that these detectors (assumed perfect) are indeed recording the true quantum fluctuations of the light impinging upon them.
What is more interesting is to consider a QND measurement on X̂1. That is because the set-up in Fig. 5.1 is equivalent (as mentioned above) to a set-up without the second beam-splitter, but instead with an in-loop photodetector of efficiency η₂. In this version, the beams b̂2 and b̂3 do not physically exist. Rather, b̂1 is the in-loop beam and X̂2 is the operator for the noise in the photocurrent produced by the detector. As shown above, X̂2 can have vanishing noise at low frequencies for g → −∞. However, this is not reflected in the noise in the in-loop beam, as recorded by our hypothetical QND device. Following the methods of Section 5.2.4, the spectrum of X̂1 is
$$S_1^X(\omega) = \frac{1+\eta_1\big[S_0^X(\omega)-1\big] + g^2\big|\tilde h(\omega)\big|^2(1-\eta_2)/\eta_2}{\big|1-g\tilde h(\omega)e^{i\omega T}\big|^2}. \qquad(5.61)$$

In the limit of large negative feedback this approaches, at low frequencies,

$$\lim_{g\to-\infty} S_1^X(0) = \frac{1-\eta_2}{\eta_2}, \qquad(5.62)$$

which is not zero for any detection efficiency η₂ less than unity. Indeed, for η₂ < 0.5 it is above shot noise.
The reason why the in-loop amplitude quadrature spectrum is not reduced to zero for large negative feedback is that the feedback loop is feeding back noise X̂_{v2}(t) in the photocurrent fluctuation operator X̂2(t) that is independent of the fluctuations in the amplitude quadrature X̂1(t) of the in-loop light. The smaller η₂, the larger the amount of extraneous noise in the photocurrent, and the larger the noise introduced into the in-loop light. In order to minimize the low-frequency noise in the in-loop light, there is an optimal feedback gain.
This optimal gain, and the resulting minimum in-loop noise, are

$$g\tilde h(\omega)e^{i\omega T} = -\frac{\eta_2}{1-\eta_2}\Big\{1+\eta_1\big[S_0^X(\omega)-1\big]\Big\}, \qquad(5.63)$$

$$S_1^X(\omega)_{\rm opt} = \frac{(1-\eta_2)\big\{1+\eta_1\big[S_0^X(\omega)-1\big]\big\}}{\eta_2\big\{1+\eta_1\big[S_0^X(\omega)-1\big]\big\} + 1-\eta_2}. \qquad(5.64)$$

The fact that the detection efficiency does matter in the attainable squashing (in-loop squeezing) shows that these are true quantum fluctuations.

Returning to the out-of-loop beam b̂3, the noise at frequency ω is minimized by choosing the feedback gain such that

$$g\tilde h(\omega)e^{i\omega T} = -\eta_1\eta_2\big[S_0^X(\omega)-1\big]. \qquad(5.65)$$

This gives the lowest noise level in the amplitude of b̂3 at that frequency,

$$S_3^X(\omega)_{\rm opt} = 1 + \frac{\eta_1(1-\eta_2)\big[S_0^X(\omega)-1\big]}{1+\eta_1\eta_2\big[S_0^X(\omega)-1\big]}. \qquad(5.66)$$
(5.66)
For large classical noise we have feedback proportional to S0X () and an optimal noise
value of 1/2 , as this approaches the limit of Eq. (5.55). The interesting regime [TWMB95]
is the opposite one, where S0X () 1 is small, or even negative. The case of S0X () 1 < 0
corresponds to a squeezed input beam. Putting squeezing through a beam-splitter reduces
the squeezing in both output beams. In this case, with no feedback the residual squeezing
in beam b 3 would be
S3X ()|g=0 = 1 + 2 1 [S0X () 1],
(5.67)
which is closer to unity than S0X (). The optimal feedback (the purpose of which is to
reduce noise) is, according to Eq. (5.65), positive. That is to say, destabilizing feedback
actually puts back into beam b 3 some of the squeezing lost through the beam-splitter. Since
the required round-loop gain (5.65) is less than unity, the feedback loop remains stable (see
Section 5.2.3).
This result highlights the nonclassical nature of squeezed fluctuations. When an amplitude-squeezed beam strikes a beam-splitter, the intensity at one output port is anticorrelated with that at the other; hence the need for positive feedback. Of course, the feedback can never put more squeezing into the beam than was present at the start. That is, S₃ˣ(ω)_opt always lies between S₀ˣ(ω) and S₃ˣ(ω)|_{g=0}. However, if we take the limit η₁ → 1 and S₀ˣ(ω) → 0 (perfect squeezing to begin with) then all of this squeezing can be recovered, for any η₂.
$$\hat H = \frac{\chi}{2}\,\hat x_a\hat y_c, \qquad(5.68)$$

where

$$\hat x_a = \hat a + \hat a^\dagger;\qquad \hat y_c = -i\hat c + i\hat c^\dagger. \qquad(5.69)$$

As described in [AMW88], this Hamiltonian could in principle be realized by two simultaneous processes, assuming that modes a and c have the same frequency. The first process would be simple linear mixing of the modes (e.g. by an intracavity beam-splitter). The second process would require an intracavity crystal with a χ⁽²⁾ nonlinearity, pumped by a classical field at twice the frequency of modes a and c. The Hamiltonian (5.68) commutes with the x̂_a quadrature of mode a, and causes this to drive the x̂_c quadrature of mode c. Thus measuring the X̂ᵈ_out quadrature of the output field d̂_out from mode c will give a QND measurement of â + â†, which is approximately a QND measurement of X̂ᵇ_in.
The QLEs in the interaction frame for the quadrature operators are

$$\frac{d}{dt}\hat x_a = -\frac{\gamma}{2}\hat x_a + \sqrt{\gamma}\,\hat X^b_{\rm in}, \qquad(5.70)$$

$$\frac{d}{dt}\hat x_c = -\frac{\kappa}{2}\hat x_c + \sqrt{\kappa}\,\hat X^d_{\rm in} + \chi\hat x_a. \qquad(5.71)$$

Solving these in the Fourier domain gives

$$\tilde x_a(\omega) = \frac{\sqrt{\gamma}\,\tilde X^b_{\rm in}(\omega)}{\gamma/2+i\omega}, \qquad(5.72)$$

$$\tilde x_c(\omega) = \frac{\sqrt{\kappa}\,\tilde X^d_{\rm in}(\omega) + \chi\sqrt{\gamma}\,\tilde X^b_{\rm in}(\omega)/(\gamma/2+i\omega)}{\kappa/2+i\omega}. \qquad(5.73)$$
[Figure: schematic of the experiment, with vacuum input v̂, travelling beams b̂0, b̂1, b̂2, intracavity modes â and ĉ, fields d̂_in and d̂_out, and a homodyne detector.]

Fig. 5.2 A diagram for a travelling-wave feedback experiment based on a QND measurement. Travelling fields are denoted b̂ and d̂. The first beam-splitter transmittance η₁ is variable. A cavity (drawn as a ring cavity for convenience) supports two modes, a (solid line) and c (dashed line). The decay rates for these two modes are γ and κ, respectively. They are coupled by a nonlinear optical process indicated by the crystal labelled χ. The perfect homodyne detection at the detector yields a photocurrent proportional to X̂ᵈ_out = d̂_out + d̂_out†. Quantum Squeezing, 2004, pp. 171122, Chapter 6, Squeezing and Feedback, H. M. Wiseman, Figure 6.2, Springer-Verlag, Berlin, Heidelberg. Redrawn and revised and adapted with kind permission of Springer Science+Business Media.
The quality of the QND measurement is characterized by the parameter

$$Q = 4\chi/\sqrt{\gamma\kappa}. \qquad(5.74)$$

From Eqs. (5.72) and (5.73), the measured output quadrature is

$$\tilde X^d_{\rm out}(\omega) = \frac{\kappa/2-i\omega}{\kappa/2+i\omega}\,\tilde X^d_{\rm in}(\omega) + \frac{Q\,\gamma\kappa/4}{(\gamma/2+i\omega)(\kappa/2+i\omega)}\,\tilde X^b_{\rm in}(\omega), \qquad(5.75)$$

which, for frequencies small compared with γ and κ, reduces to

$$\tilde X^d_{\rm out}(\omega) \approx \tilde X^d_{\rm in}(\omega) + Q\,\tilde X^b_{\rm in}(\omega), \qquad(5.76)$$

which shows that a measurement (by homodyne detection) of the X quadrature of d̂_out can indeed effect a measurement of the low-frequency variation in X̂ᵇ_in.
To see that this measurement is a QND measurement, we have to calculate the statistics of the output field from mode a, that is, b̂_out. From the solution (5.72) we find

$$\tilde X^b_{\rm out}(\omega) = \frac{\gamma/2-i\omega}{\gamma/2+i\omega}\,\tilde X^b_{\rm in}(\omega). \qquad(5.77)$$

That is, for frequencies small compared with γ, the output field is identical to the input field, as required for a QND measurement. Of course, we cannot expect the other quadrature to remain unaffected, because of the uncertainty principle. Indeed, we find
$$\tilde Y^b_{\rm out}(\omega) = \frac{\gamma-2i\omega}{\gamma+2i\omega}\,\tilde Y^b_{\rm in}(\omega) - \frac{\gamma}{\gamma+2i\omega}\,\frac{\kappa Q\,\tilde Y^d_{\rm in}(\omega)}{\kappa+2i\omega}, \qquad(5.78)$$

which shows that noise has been added to Ŷᵇ_in. Indeed, in the good-measurement limit which gave the result (5.76), we find the phase-quadrature output to be dominated by noise:

$$\tilde Y^b_{\rm out}(\omega) \approx -Q\,\tilde Y^d_{\rm in}(\omega). \qquad(5.79)$$
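The all-pass character of the amplitude transfer and the Q-scaling of the added phase noise can be checked numerically. This sketch uses the transfer functions quoted in Eqs. (5.77), (5.78), (5.83) and (5.85); the rates chosen are illustrative assumptions.

```python
import numpy as np

# Hedged check of the QND transfer relations: at frequencies small
# compared with gamma the amplitude quadrature passes through unchanged,
# while the phase quadrature picks up noise of order Q.
gamma, kappa, Q = 10.0, 20.0, 8.0

def amp_transfer(w):                    # Eq. (5.77)
    return (gamma / 2 - 1j * w) / (gamma / 2 + 1j * w)

def p(w):                               # two-mode response, Eq. (5.83)
    return gamma * kappa / ((gamma + 2j * w) * (kappa + 2j * w))

w = np.logspace(-3, 3, 200)
assert np.allclose(np.abs(amp_transfer(w)), 1.0)   # all-pass: |transfer| = 1
assert abs(amp_transfer(1e-3) - 1.0) < 1e-3        # identity at low frequency
assert abs(p(0.0) - 1.0) < 1e-12                   # p~(0) = 1
# Eq. (5.85): added phase noise Q^2 |p~|^2 is largest at low frequency
SY_added = Q**2 * np.abs(p(w)) ** 2
assert SY_added.max() <= Q**2 + 1e-9
```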
Now consider adding the feedback, with the amplitude modulation written as

$$\hat b_1(t) = \hat b_0(t) + \lambda_1(t), \qquad(5.80)$$

where b̂0 is the beam incident on the modulated beam-splitter, as in Section 5.2.2. In the present case, b̂1(t) is then fed into the QND device, as shown in Fig. 5.2, so b̂_in(t) = b̂1(t) again, and the modulation is controlled by the photocurrent from an (assumed perfect) homodyne measurement of X̂ᵈ_out:

$$\lambda_1(t) = \frac{g}{Q}\int_0^\infty h(t')\,\hat X^d_{\rm out}(t-\tau_0-t')\,dt'. \qquad(5.81)$$

Here τ₀ is the delay time in the feedback loop, including the time of flight from the cavity for mode c to the homodyne detector, and h(t) is as before.
On substituting Eqs. (5.80) and (5.81) into the results of the preceding subsection we find

$$\tilde X^b_{\rm out}(\omega) = \frac{\gamma-2i\omega}{\gamma+2i\omega}\;\frac{\tilde X_0(\omega) + g\tilde h(\omega)e^{i\omega T}Q^{-1}\,\dfrac{\kappa-2i\omega}{\kappa+2i\omega}\,\tilde X^d_{\rm in}(\omega)}{1-g\,\tilde p(\omega)\tilde h(\omega)e^{i\omega T}}, \qquad(5.82)$$

where
$$\tilde p(\omega) = \frac{\gamma\kappa}{(\gamma+2i\omega)(\kappa+2i\omega)} \qquad(5.83)$$

represents the frequency response of the two cavity modes. If we assume that the field d̂_in is in the vacuum state then we can evaluate the spectrum of amplitude fluctuations in X̂ᵇ_out to be

$$S^X_{\rm out}(\omega) = \frac{S^X_{\rm in}(\omega) + g^2\big|\tilde h(\omega)\big|^2Q^{-2}}{\big|1-g\tilde p(\omega)\tilde h(\omega)e^{i\omega T}\big|^2}. \qquad(5.84)$$

The phase-quadrature spectrum is unaffected by the feedback:

$$S^Y_{\rm out}(\omega) = S^Y_{\rm in}(\omega) + Q^2\big|\tilde p(\omega)\big|^2. \qquad(5.85)$$
[Figure: two schematics, each showing a χ⁽²⁾ crystal pumped by a classical field producing correlated signal and idler beams; the detected idler photocurrent drives an electro-optic modulator (EOM) acting on the signal, which emerges as sub-shot-noise light.]

Fig. 5.3 A diagram showing two ways of producing sub-shot-noise light from parametric down-conversion. (a) Feedback, as first used by Walker and Jakeman [WJ85b]. (b) Feedforward, as first used by Mertz et al. [MHF+90]. Figure 1 adapted with permission from J. Mertz et al., Phys. Rev. A 44, 3229 (1991). Copyrighted by the American Physical Society.
photocurrent from the idler can be fed forwards to control the amplitude fluctuations in the
signal (for example by using an electro-optic modulator as described in Section 5.2.2). This
feedforward was realized experimentally by Mertz et al. [MHF+90, MHF91], achieving similar results to those obtained by feedback. The two schemes are contrasted in Fig. 5.3.
Thus, unless one is concerned with light inside a feedback loop, there is no difference in
theory between feedback and feedforward. Indeed, the squeezing produced by QND-based
feedback discussed in Section 5.3.2 could equally well have been produced by QND-based
feedforward. However, from a practical point of view, feedback has the advantage of being
more robust with respect to parameter uncertainties.
Consider the QND-based feedback in Section 5.3.2, and for simplicity allow the QND cavity to be very heavily damped and the feedback to be very fast, so that we may make the replacement p̃(ω)h̃(ω)e^{iωT} → 1. Then, for a coherent or vacuum input, Eq. (5.84) gives

$$S^X_{\rm out} = \frac{1+g^2Q^{-2}}{(1-g)^2}, \qquad(5.86)$$

which attains its minimum value

$$S^X_{\rm out;min} = (1+Q^2)^{-1} \qquad(5.87)$$

at g = −Q². For large Q (high-quality QND measurement) this is much less than unity. Let us say that the experimenter does not know Q precisely, or cannot control g precisely, so that in the experiment the actual feedback loop has

$$g = -Q^2(1+\epsilon), \qquad(5.88)$$

where ε is a small relative error. To second order in ε this gives a new squeezed noise level of

$$S^X_{\rm out} = (1+Q^2)^{-1}\left[1+\frac{\epsilon^2Q^2}{(1+Q^2)^2}\right]. \qquad(5.89)$$
Exercise 5.9 Show this.
The relative size of the extra noise term decreases with increasing Q and, as long as |ε| ≪ Q, the increase in the noise level is negligible.
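Exercise 5.9 can also be checked numerically against the exact expression. The following sketch assumes the simplified low-frequency spectrum of Eq. (5.86) with an illustrative Q.

```python
# Hedged check of the feedback robustness result, Eq. (5.89).
Q = 10.0

def S_fb(g):
    """Low-frequency output spectrum, Eq. (5.86)."""
    return (1 + g**2 / Q**2) / (1 - g) ** 2

g_opt = -Q**2
S_min = 1 / (1 + Q**2)
assert abs(S_fb(g_opt) - S_min) < 1e-12          # Eq. (5.87)
# the optimum is a genuine minimum
assert S_fb(g_opt * 1.1) > S_min and S_fb(g_opt * 0.9) > S_min

eps = 0.01
S_err = S_fb(-Q**2 * (1 + eps))                  # miscalibrated gain, Eq. (5.88)
S_pred = S_min * (1 + eps**2 * Q**2 / (1 + Q**2) ** 2)   # Eq. (5.89)
assert abs(S_err - S_pred) / S_min < eps**3      # agreement to second order
```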
Now consider feedforward. Under the above conditions, the measured current is represented by the operator

$$\hat X^d_{\rm out} = Q\hat X^b_{\rm in} + \hat X^d_{\rm in}. \qquad(5.90)$$

This is fed forwards to create a coherent field of amplitude (g/Q)X̂ᵈ_out, which is added to the output of the system. Here g is the feedforward gain, and the new output of the system will be

$$\hat X^b_{\rm out} = \hat X^b_{\rm in} + g\big(\hat X^b_{\rm in} + \hat X^d_{\rm in}/Q\big). \qquad(5.91)$$
This has the amplitude spectrum

$$S^X_{\rm out} = (1+g)^2 + g^2Q^{-2}, \qquad(5.92)$$

which has the minimum value

$$S^X_{\rm out;min} = (1+Q^2)^{-1}, \qquad(5.93)$$

exactly the same as in the feedback case (as expected), but with an open-loop gain of g = −Q²/(1+Q²).

The difference between feedback and feedforward comes when we consider systematic errors. Again assuming a relative error in g of ε, so that

$$g = -\frac{Q^2(1+\epsilon)}{1+Q^2}, \qquad(5.94)$$

we find

$$S^X_{\rm out} = (1+Q^2)^{-1}\big[1+Q^2\epsilon^2\big]. \qquad(5.95)$$
Now the relative size of the extra term actually increases as the quality of the measurement increases. In order for this term to be negligible, the systematic error must be extremely small: |ε| ≪ Q⁻¹. Thus the feedforward approach is much less robust with respect to systematic errors due to uncertainties in the system parameters, or to an inability to control the modulation exactly. This is a generic advantage of feedback over feedforward, and justifies our emphasis on the former in this book.
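The robustness comparison can be made quantitative with a short numerical sketch, assuming the same illustrative Q for both schemes and the expressions of Eqs. (5.89), (5.92), (5.93) and (5.95):

```python
# Hedged comparison of the sensitivity to a relative gain error eps in
# feedback versus feedforward.
Q, eps = 10.0, 0.01
S_min = 1 / (1 + Q**2)

def S_ff(g):
    """Feedforward output spectrum, Eq. (5.92)."""
    return (1 + g) ** 2 + g**2 / Q**2

g_ff = -Q**2 / (1 + Q**2)
assert abs(S_ff(g_ff) - S_min) < 1e-12          # Eq. (5.93): same optimum

# second-order noise penalties from a relative gain error eps:
penalty_fb = eps**2 * Q**2 / (1 + Q**2) ** 2    # feedback, Eq. (5.89)
penalty_ff = eps**2 * Q**2                      # feedforward, Eq. (5.95)
assert abs(S_ff(g_ff * (1 + eps)) - S_min * (1 + penalty_ff)) < 1e-12
# feedforward is worse by a factor (1 + Q^2)^2
assert abs(penalty_ff / penalty_fb - (1 + Q**2) ** 2) < 1e-9
```

For Q = 10 the feedforward penalty is larger by a factor of about 10⁴, matching the |ε| ≪ Q versus |ε| ≪ Q⁻¹ conditions in the text.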
In the example above the distinction between feedback and feedforward is obvious.
In the former case the measurement record (the current) used for control is affected by
the controls applied at earlier times; in the latter it is not. This distinction will always
apply for a continuous (in time) control protocol. However, discrete protocols may also be
considered, and indeed one can consider the case of a single measurement result being used
to control the system. In this case, one could argue that all control protocols are necessarily
feedforward. However, the term feedback is often used in that case also, and we will follow
that loose usage at times.
states) method will be seen in this chapter to be more useful in a number of applications,
and often to have more explanatory power. These advantages are further developed in the
next chapter. Hence we begin our treatment of feedback control of a quantum system by
reconsidering quantum trajectories. In this section we consider jumpy trajectories (as arise
from direct detection in quantum optics).
Consider again a system obeying the master equation

$$\dot\rho = -i[\hat H,\rho] + \mathcal D[\hat c]\rho. \qquad(5.96)$$
As derived in Section 4.2, the simplest unravelling for this master equation is in terms of
quantum jumps. In quantum optics, these correspond to δ-function spikes in the photocurrent I(t) that are interpreted as the detection of a photon emitted by the system. We restate Eq. (4.40), the SME for the conditioned state ρ_I(t):

$$d\rho_I(t) = \Big\{dN(t)\,\mathcal G[\hat c] - dt\,\mathcal H\big[i\hat H + \tfrac12\hat c^\dagger\hat c\big]\Big\}\rho_I(t), \qquad(5.97)$$

where the point process dN(t) = I(t)dt is defined by

$$E[dN(t)] = dt\,\mathrm{Tr}\big[\hat c^\dagger\hat c\,\rho_I(t)\big], \qquad(5.98)$$

$$dN(t)^2 = dN(t). \qquad(5.99)$$
The current I (t) = dN/dt could be used to alter the system dynamics in many different
ways. Some examples from quantum optics are the following: modulating the pump rate
of a laser, the amplitude of a driving field, the cavity length, or the cavity loss rate. The
last three examples could be effected by using an electro-optic modulator (a device with
a refractive index controlled by a current), possibly in combination with a polarization-dependent beam-splitter. The most general expression for the effect of the feedback would be

$$[\dot\rho_I(t)]_{\rm fb} = \mathcal F\big[t, I_{[0,t)}\big]\,\rho_I(t). \qquad(5.100)$$

Here I_{[0,t)} represents the complete photocurrent record from the beginning of the experiment up to the present time. Thus the superoperator F[t, I_{[0,t)}] (which may be explicitly time-dependent) is a functional of the current for all past times. This functional dependence describes the response of the feedback loop, which may be nonlinear, and realistically must include some smoothing in time. The complete description of this feedback is given simply by adding Eq. (5.100) to Eq. (5.97). In general the resulting equation would have to be solved by numerical simulation.
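Such a simulation can be sketched in a few lines. The following is a minimal, hypothetical example (not from the book): a decaying cavity under direct detection, with a feedback unitary exp(iπâ†â) applied immediately after each jump. All parameter values are illustrative assumptions; for a coherent input the mean photon number along a trajectory decays deterministically, which provides a check.

```python
import numpy as np

# Minimal jump-trajectory simulation with feedback (illustrative sketch).
rng = np.random.default_rng(1)
N, gamma, dt, steps = 30, 1.0, 1e-3, 1000

n_op = np.diag(np.arange(N))
a = np.diag(np.sqrt(np.arange(1, N)), k=1)          # annihilation operator
U_fb = np.diag(np.exp(1j * np.pi * np.arange(N)))   # feedback kick e^{i pi n}

alpha = 1.5                                         # initial coherent amplitude
psi = np.zeros(N, complex); psi[0] = 1.0
for n in range(1, N):
    psi[n] = psi[n - 1] * alpha / np.sqrt(n)
psi *= np.exp(-abs(alpha) ** 2 / 2)
psi /= np.linalg.norm(psi)                          # absorb truncation error
rho = np.outer(psi, psi.conj())

for _ in range(steps):
    p_jump = gamma * dt * np.real(np.trace(n_op @ rho))
    if rng.random() < p_jump:                       # detection event
        rho = a @ rho @ a.conj().T
        rho /= np.trace(rho).real
        rho = U_fb @ rho @ U_fb.conj().T            # feedback after the jump
    else:                                           # no-jump evolution
        M = np.eye(N) - 0.5 * gamma * dt * n_op
        rho = M @ rho @ M.conj().T
        rho /= np.trace(rho).real

n_final = np.real(np.trace(n_op @ rho))
assert abs(np.trace(rho).real - 1.0) < 1e-9
assert abs(n_final - abs(alpha) ** 2 * np.exp(-gamma * steps * dt)) < 0.05
```

This first-order (Euler) scheme is crude; more careful integrators would be needed for quantitative work, but the structure — jump, feedback operation, smooth no-jump evolution — is the one described in the text.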
To make progress towards understanding quantum feedback control, it is helpful to make simplifying assumptions. Firstly, let us consider a linear functional giving feedback

$$[\dot\rho_I(t)]_{\rm fb} = \int_0^\infty h(s)\,I(t-s)\,ds\;\mathcal K\rho_I(t), \qquad(5.101)$$

where K is an arbitrary Liouville superoperator. Later in this section we will consider the Markovian limit, in which the response function h(s) goes to δ(s). To find this limit, it is first useful to consider the case h(s) = δ(s − τ), where the feedback has a fixed delay τ. Then the feedback evolution is

$$[\dot\rho_I(t)]_{\rm fb} = I(t-\tau)\,\mathcal K\rho_I(t). \qquad(5.102)$$
(5.102)
Because there is no smoothing response function in Eq. (5.102), the right-hand side of
the equation is a mathematically singular object, with I (t) being a string of -functions. If
it is meant to describe a physical feedback mechanism, then it is necessary to interpret the
equation as an implicit stochastic differential equation, as explained in Section B.6. This
is indicated already in the notation of using a fluxion on the left-hand side. The alternative
interpretation as the explicit equation
[dI (t)]fb = dN (t )KI (t)
(5.103)
yields nonsense.
Exercise 5.10 Show that Eq. (5.103) does not even preserve positivity.
In order to combine Eq. (5.102) with Eq. (5.97), it is necessary to convert it from an
implicit to an explicit equation. As explained in Section B.6, this is easy to accomplish
because of the linearity (with respect to ) of Eq. (5.102). The result is
I (t) + [dI (t)]fb = exp[K dN (t )]I (t).
(5.104)
Using the rule (5.99) and adding this evolution to that of the SME (5.97) gives the total
conditioned evolution of the system
dt H iH + 12 c c + dN (t ) eK 1 I (t). (5.105)
dI (t) = dN(t)G[c]
It is not possible to turn this stochastic equation into a master equation by taking an ensemble average, as was possible with Eq. (5.97). This is because the feedback noise term (with argument t − τ) is not independent of the state at time t. Physically, it is not possible to derive a master equation because the evolution including feedback (with a time delay) is not Markovian.
In the Markovian limit τ → 0⁺, however, the feedback acts immediately after each jump, and the jump operation is simply modified by the feedback:

$$d\rho_I(t) = \left\{dN(t)\left[\frac{e^{\mathcal K}\mathcal J[\hat c]}{\mathrm{Tr}\big[\hat c^\dagger\hat c\,\rho_I(t)\big]} - 1\right] - dt\,\mathcal H\big[i\hat H + \tfrac12\hat c^\dagger\hat c\big]\right\}\rho_I(t), \qquad(5.107)$$

where J[ĉ]ρ = ĉρĉ†. Taking the ensemble average then gives the master equation

$$\dot\rho = -i[\hat H,\rho] + e^{\mathcal K}\,\hat c\rho\hat c^\dagger - \tfrac12\big\{\hat c^\dagger\hat c,\rho\big\} \equiv \mathcal L\rho. \qquad(5.108)$$

Exercise 5.12 Derive Eqs. (5.107) and (5.108), and show that the latter is of the Lindblad form.
Hint: Remember that e^K is an operation.
That is, we have a new master equation incorporating the effect of the feedback. This master equation could have been guessed from an understanding of quantum jumps. However, the derivation here has the advantage of making clear the relation of the superoperator K to experiment, via Eq. (5.102). In the special case in which Kρ = −i[Ẑ, ρ], the conditioned SME with feedback can also be expressed as a SSE of the form of Eq. (4.19), with ĉ replaced by e^{−iẐ}ĉ.
Producing nonclassical light. Just as feedback based on absorptive photodetection cannot
create a free squeezed beam (as shown in Section 5.2) by linear optics, so feedback based
on direct detection cannot create a nonclassical state of a cavity mode by linear optics. By
linear optics we mean processes that take coherent states to coherent states: linear damping,
driving and detuning. By a nonclassical state we mean one that cannot be expressed as a
mixture of coherent states. This concept of nonclassicality is really just a statement of what
sort of quantum states are easy to produce, like the concept of the standard quantum limit.
In the present case, we can understand this limitation on feedback by considering the
quantum trajectories. Both the jump and the no-jump evolution for a freely decaying cavity
take a coherent state to a coherent state, in the former case with no change and in the latter
with exponential decay of its amplitude.
Exercise 5.13 Show this. Hint: Consider the measurement operators for direct detection,

$$\hat M_1(dt) = \sqrt{dt}\,\hat a,\qquad \hat M_0(dt) = \exp\big({-\hat a^\dagger\hat a}\,dt/2\big), \qquad(5.110)$$

and recall Exercise 3.29.
If the post-jump feedback evolution eK also takes a coherent state to a coherent state (or to
a mixture of coherent states), it is clear that a nonclassical state can never be produced.
However, just as in the case of beams, feedback can cause the in-loop photocurrent to have
nonclassical statistics. For direct detection the simplest form of nonclassical statistics is
sub-Poissonian statistics. That is, the number of photons detected in some time interval has
a variance less than its mean. For a field in a coherent state, the statistics will be Poissonian,
and for a process that produces a mixture of coherent states (of different intensities) the
statistics will be super-Poissonian.
In the quantum trajectory formalism, the explanation for anomalous (e.g. sub-Poissonian) in-loop photocurrent statistics lies in the modification of the jump measurement operator by the feedback, as in Eq. (5.107). That is, the in-loop photocurrent autocorrelation function (from which the statistics can be determined) is modified from Eq. (4.50) to

$$E[I(t')I(t)] = \mathrm{Tr}\big[\hat a^\dagger\hat a\,e^{\mathcal L(t'-t)}e^{\mathcal K}\hat a\rho(t)\hat a^\dagger\big] + \mathrm{Tr}\big[\hat a^\dagger\hat a\,\rho(t)\big]\,\delta(t'-t), \qquad(5.111)$$

where L is as defined in Eq. (5.108) with ĉ = â.
Exercise 5.14 Show this using the same style of argument as in Section 4.3.2.
It is the effect of the feedback on the in-loop current via e^K, not the overall evolution including feedback via e^{L(t'−t)}, that may cause the sub-Poissonian in-loop statistics, even if only linear optics is involved.
Consider the superposition of two coherent states

$$|\alpha;\theta\rangle_{\rm cat} = \big[2\big(1+e^{-2|\alpha|^2}\cos\theta\big)\big]^{-1/2}\big(|\alpha\rangle + e^{i\theta}|{-\alpha}\rangle\big). \qquad(5.112)$$

Under the damping master equation

$$\dot\rho = \gamma\mathcal D[\hat a]\rho, \qquad(5.113)$$

the quantum coherence terms in ρ decay as exp(−2γ|α|²t), while the coherent amplitudes themselves decay as exp(−γt/2) (see Section 3.9.1).
If we consider a direct-detection unravelling of this master equation, then the no-jump
evolution leads solely to the decay of the coherent amplitudes.
Exercise 5.15 Show this.
Thus it is the jumps that are responsible for the destruction of the superposition. This makes sense, since the rate of decay of the coherence terms scales as the rate of jumps. We can see explicitly how this happens from the following:

$$\hat a\,|\alpha;\theta\rangle_{\rm cat} \propto |\alpha\rangle - e^{i\theta}|{-\alpha}\rangle \propto |\alpha;\theta+\pi\rangle_{\rm cat}. \qquad(5.114)$$

That is, upon a detection the phase θ of the superposition changes by π, which leads to the decoherence.
Exercise 5.16 Show that, for |α| ≫ 1, an equal mixture of |α;θ⟩_cat and |α;θ+π⟩_cat has no coherence, since it is the same as an equal mixture of |α⟩ and |−α⟩.
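Exercise 5.16 can be verified numerically in a truncated Fock basis. This sketch assumes the cat states of Eq. (5.112); the truncation dimension and amplitude are illustrative.

```python
import numpy as np

# Hedged numerical check of Exercise 5.16 in a truncated Fock basis.
N, alpha, theta = 40, 3.0, np.pi / 2

def coherent(al):
    psi = np.zeros(N, complex); psi[0] = 1.0
    for n in range(1, N):
        psi[n] = psi[n - 1] * al / np.sqrt(n)
    return psi * np.exp(-abs(al) ** 2 / 2)

def cat(al, th):                       # Eq. (5.112), renormalized
    psi = coherent(al) + np.exp(1j * th) * coherent(-al)
    return psi / np.linalg.norm(psi)

proj = lambda v: np.outer(v, v.conj())
rho_cats = 0.5 * (proj(cat(alpha, theta)) + proj(cat(alpha, theta + np.pi)))
rho_mix = 0.5 * (proj(coherent(alpha)) + proj(coherent(-alpha)))
# the cross terms cancel up to O(exp(-2|alpha|^2)) normalization effects
assert np.linalg.norm(rho_cats - rho_mix) < 1e-6
```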
Of course, if one keeps track of the detections then one knows which cat state the system is in at all times. (A similar analysis for the case of homodyne detection is done in Ref. [CKT94].) It would be preferable, however, to have a deterministic cat state. If one had arbitrary control over the cavity state then this could always be done by feedback following each detection, since any two pure states are always unitarily related. However, this observation is not particularly interesting unless the feedback can be implemented with a practically realizable interaction. As Horoshko and Kilin pointed out, for the state |α; π/2⟩_cat this is the case, since a simple rotation of the state in phase-space has the following effect:
$$e^{i\pi\hat a^\dagger\hat a}\,|\alpha;\theta\rangle_{\rm cat} \propto |\alpha;-\theta\rangle_{\rm cat}. \qquad(5.115)$$

Thus, if we apply the feedback Hamiltonian

$$\hat H_{\rm fb}(t) = \pi I(t)\,\hat a^\dagger\hat a, \qquad(5.116)$$

with I(t) the photocurrent from direct detection of the cavity output, then, following each detection that causes θ to change from π/2 to −π/2, the feedback changes θ back to π/2. Thus the effect of the jumps is nullified, and the time-evolved state is simply |αe^{−γt/2}; π/2⟩_cat.
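The jump-plus-feedback cycle of Eqs. (5.114)–(5.116) is easy to verify numerically. This sketch works in a truncated Fock basis with an illustrative amplitude (an assumption, not from the book):

```python
import numpy as np

# Hedged check: a photon detection (jump a) flips theta from pi/2 to
# -pi/2, and the feedback rotation exp(i*pi*n) restores the original cat.
N, alpha = 40, 2.0

def coherent(al):
    psi = np.zeros(N, complex); psi[0] = 1.0
    for n in range(1, N):
        psi[n] = psi[n - 1] * al / np.sqrt(n)
    return psi * np.exp(-abs(al) ** 2 / 2)

def cat(al, th):
    psi = coherent(al) + np.exp(1j * th) * coherent(-al)
    return psi / np.linalg.norm(psi)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)
U_fb = np.diag(np.exp(1j * np.pi * np.arange(N)))

before = cat(alpha, np.pi / 2)
jumped = a @ before
jumped /= np.linalg.norm(jumped)
# the jump alone flips theta: pi/2 -> -pi/2 ...
assert abs(np.vdot(cat(alpha, -np.pi / 2), jumped)) > 1 - 1e-9
# ... and the feedback rotation restores the original cat (up to phase)
restored = U_fb @ jumped
assert abs(np.vdot(before, restored)) > 1 - 1e-9
```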
Exercise 5.17 Verify Eq. (5.115), and also that |αe^{−γt/2}; π/2⟩_cat is a solution of the feedback-modified master equation

$$\dot\rho = \gamma\mathcal D\big[e^{i\pi\hat a^\dagger\hat a}\hat a\big]\rho.$$
Practicalities of optical feedback. Physically, the Hamiltonian (5.116) requires the ability
to control the frequency of the cavity mode. Provided that it is done slowly enough, this can
be achieved simply by changing the physical properties of the cavity, such as its length, or
243
the refractive index of some material inside it. Here slowly enough means slow compared
with the separation of the eigenstates of the Hamiltonian, so that the adiabatic theorem
[BF28] applies. Assuming we can treat just a single mode in the cavity (as will be the case if it is small enough), this energy separation equals the resonance frequency ω₀. On the other hand, the δ-function in Eq. (5.116) implies a modulation that is fast enough for one to ignore any other evolution during its application. As we have seen, the faster of the two evolution rates in the problem (without feedback) is 2γ|α|². Thus the time-scale T for the modulation of the cavity frequency must satisfy

$$\gamma|\alpha|^2 \ll T^{-1} \ll \omega_0. \qquad(5.117)$$
Now γ ≪ ω₀ is necessary for the derivation of the master equation (5.113). Moreover, a typical ratio of these time-scales (the quality factor of the cavity) is of order 10⁸. Thus, if both signs ≪ in Eq. (5.117) are assumed to be satisfied by ratios of 10², the scheme could protect Schrödinger cats with |α| ≈ 100, which is arguably macroscopic. In practice, other physical limitations are going to be even more important.
First, realistic feedback will not be Markovian, but will have some time delay τ. For the Markovian approximation to be valid, this must be much less than the time-scale for photon loss: τ⁻¹ ≫ γ|α|². Even with very fast detectors and electronics it would be difficult to make the effective delay less than 10 ns [SRO+02]. Also, even very good optical cavities have γ at least of order 10⁴ s⁻¹. Again assuming that the above inequality is satisfied by a factor of 10², the limit now becomes |α| ≈ 10.
Second, realistic detectors do not have unit efficiency, as discussed in Section 4.8.1.
For photon counting, as required here, η = 0.9 is an exceptionally good figure at present. Taking into account inefficiency, the feedback-modified master equation is

$$\dot\rho = \eta\gamma\mathcal D\big[e^{i\pi\hat a^\dagger\hat a}\hat a\big]\rho + (1-\eta)\gamma\mathcal D[\hat a]\rho. \qquad(5.118)$$

Even with η = 0.9 the decay rate for the coherence terms, 2γ(1−η)|α|², will still be greater than the decay rate for the coherent amplitude, γ/2, unless |α| is about 1.5 or smaller. In other words, until ultra-high-efficiency photodetectors are manufactured, it is only Schrödinger kittens that may be protected to any significant degree.
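The three size estimates above follow from simple arithmetic, which can be checked directly (values as quoted in the text):

```python
import numpy as np

# Hedged arithmetic check of the three cat-size estimates quoted above.
# Estimate 1: omega0/gamma = 1e8, both inequalities in Eq. (5.117)
# satisfied by factors of 1e2, gives |alpha| ~ 100.
omega0_over_gamma = 1e8
alpha_sq = omega0_over_gamma / 1e2 / 1e2        # gamma*|alpha|^2 = omega0/1e4
assert abs(np.sqrt(alpha_sq) - 100) < 1e-9

# Estimate 2: delay tau = 10 ns, gamma = 1e4 /s, margin 1e2, gives |alpha| ~ 10.
tau, gamma = 10e-9, 1e4
alpha_sq = 1 / (tau * gamma * 1e2)              # gamma*|alpha|^2 = 1/(1e2*tau)
assert abs(np.sqrt(alpha_sq) - 10) < 1e-9

# Estimate 3: with eta = 0.9, the coherence decay 2*gamma*(1-eta)*|alpha|^2
# stays below the amplitude decay gamma/2 only for |alpha| below ~1.6.
eta = 0.9
alpha_max = np.sqrt(0.5 / (2 * (1 - eta)))
assert abs(alpha_max - np.sqrt(2.5)) < 1e-12    # ~ 1.58
```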
Recall from Eq. (4.35) that the unitary operator generating the system and bath evolution for an infinitesimal interval is

$$\hat U_0(t+dt,t) = \exp\big[\hat c\,d\hat B_0^\dagger(t) - \hat c^\dagger d\hat B_0(t) - i\hat H\,dt\big]. \qquad(5.119)$$

Here dB̂₀ = b̂₀(t)dt, where b̂₀ is the annihilation operator for the input field, which is in the vacuum state, so that dB̂₀dB̂₀† = dt but all other second-order moments vanish. Using this, we obtain the QLE for an arbitrary system operator ŝ corresponding to the master equation (5.96):

$$d\hat s = i[\hat H,\hat s]dt + \big(\hat c^\dagger\hat s\hat c - \tfrac12\hat s\hat c^\dagger\hat c - \tfrac12\hat c^\dagger\hat c\hat s\big)dt - \big[d\hat B_0^\dagger\hat c - \hat c^\dagger d\hat B_0,\hat s\big]. \qquad(5.120)$$

The same evolution can be written with the white-noise operator b̂₀(t) in place of the increments:

$$\frac{d\hat s}{dt} = i[\hat H,\hat s] + \hat c^\dagger\hat s\hat c - \tfrac12\hat s\hat c^\dagger\hat c - \tfrac12\hat c^\dagger\hat c\hat s - \big[\hat b_0^\dagger(t)\hat c - \hat c^\dagger\hat b_0(t),\hat s\big]. \qquad(5.121)$$

The output field is then

$$\hat b_1(t) = \hat b_0(t) + \hat c(t). \qquad(5.122)$$
The output photon-flux operator (equivalent to the photocurrent derived from a perfect detection of that field) is Î₁(t) = b̂₁†(t)b̂₁(t). This suggests that the feedback considered in Section 5.4.1 could be treated in the Heisenberg picture by using the feedback Hamiltonian

$$\hat H_{\rm fb}(t) = \hat I_1(t-\tau)\,\hat Z(t), \qquad(5.123)$$

where each of these quantities is an operator. Here, the feedback superoperator K used in Section 5.4.1 would be defined by Kρ = −i[Ẑ, ρ]. The generalization to arbitrary K would be possible by involving auxiliary systems.
It might be thought that there is an ambiguity of operator ordering in Eq. (5.123), because Î₁ contains system operators. In fact, the ordering is not important, because b̂₁(t) commutes with all system operators at a later time, as discussed in Section 4.7.1, so Î₁(t) does also. Of course, b̂₁(t) will not commute with system operators for times after t + τ (when the feedback acts), but Î₁(t) still will, because it is not changed by the feedback interaction. (It commutes with the feedback Hamiltonian.) This fact would allow one to use the formalism developed here to treat feedback of a photocurrent smoothed by time-averaging. That is to say, there is still no operator-ordering ambiguity in the expression

$$\hat H_{\rm fb}(t) = \hat Z(t)\int_0^\infty h(s)\,\hat I_1(t-s)\,ds, \qquad(5.124)$$

or even for a general Hamiltonian functional of the current, as in Eq. (5.100). For a sufficiently broad response function h(s), there is no need to use stochastic calculus for the feedback; the explicit equation of motion due to the feedback would simply be

$$d\hat s(t) = i\big[\hat H_{\rm fb}(t),\hat s(t)\big]dt. \qquad(5.125)$$
However, this approach makes the Markovian limit difficult to find. Thus, as in Section 5.4.1, the response function will be assumed to consist of a time delay only, as in Eq. (5.123). In order to treat Eq. (5.123) it is necessary once again to use the stochastic calculus of Appendix B to find the explicit effect of the feedback. As in Section 3.11.1, the key is to expand the unitary operator for feedback

$$\hat U_{\rm fb}(t+dt,t) = \exp\big[-i\hat H_{\rm fb}(t)dt\big] \qquad(5.126)$$

to as many orders as necessary. Since this evolution commutes with the no-feedback evolution (5.121), the feedback simply adds the following extra term to Eq. (5.121):

$$[d\hat s]_{\rm fb} = e^{i\hat Z(t)\,dN_1(t-\tau)}\,\hat s\,e^{-i\hat Z(t)\,dN_1(t-\tau)} - \hat s \qquad(5.127)$$

$$\phantom{[d\hat s]_{\rm fb}} = dN_1(t-\tau)\big(e^{i\hat Z}\hat s\,e^{-i\hat Z} - \hat s\big), \qquad(5.128)$$

where dN̂₁(t−τ) = Î₁(t−τ)dt. This evaluates to

$$d\hat s = i[\hat H,\hat s]dt + \big(\hat c^\dagger\hat s\hat c - \tfrac12\hat s\hat c^\dagger\hat c - \tfrac12\hat c^\dagger\hat c\hat s\big)dt - \big[d\hat B_0^\dagger\hat c - \hat c^\dagger d\hat B_0,\hat s\big] + \big[\hat c^\dagger(t-\tau)+\hat b_0^\dagger(t-\tau)\big]\big(e^{i\hat Z}\hat s\,e^{-i\hat Z}-\hat s\big)\big[\hat c(t-\tau)+\hat b_0(t-\tau)\big]dt. \qquad(5.129)$$
Exercise 5.19 Verify that this is a valid non-Markovian QLE. That is to say, that, for
arbitrary system operators s1 and s2 , d(s1 s2 ) is correctly given by (ds1 )s2 + s1 (ds2 ) +
(ds1 )(ds2 ).
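The key algebraic step used above — that, because dN² = dN (Eq. (5.99)) takes only the values 0 and 1, the feedback exponential collapses to a single jump term — can be checked numerically with random matrices (an illustrative sketch, not from the book):

```python
import numpy as np

# Hedged check: for dN in {0, 1}, exp(iZ dN) s exp(-iZ dN) equals
# s + dN*(exp(iZ) s exp(-iZ) - s) exactly. Random 4x4 test matrices.
rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Z = A + A.conj().T                               # Hermitian Z
s = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

w, V = np.linalg.eigh(Z)
expm = lambda t: (V * np.exp(1j * w * t)) @ V.conj().T   # exp(i Z t)

for dN in (0, 1):
    lhs = expm(dN) @ s @ expm(-dN)
    rhs = s + dN * (expm(1) @ s @ expm(-1) - s)
    assert np.allclose(lhs, rhs)
```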
Again, all time arguments are t unless otherwise indicated. This should be compared
with Eq. (5.105). The obvious difference is that Eq. (5.105) explicitly describes direct
photodetection, followed by feedback, whereas the irreversibility in Eq. (5.129) does not
specify that the output has been detected. Indeed, the original Langevin equation (5.121)
is unchanged if the output is subject to homodyne detection, rather than direct detection.
This is the essential difference between the quantum fluctuations of Eq. (5.129) and the
fluctuations due to information gathering in Eq. (5.105).
Taking the expectation value in the vacuum state of the bath, the terms involving b₀ and dB₀ vanish, leaving

⟨ds⟩ = ⟨i[H, s] + c†sc − ½sc†c − ½c†cs + c†(t − τ)(e^{iZ} s e^{−iZ} − s)c(t − τ)⟩dt.    (5.130)

In the Markovian limit τ → 0, this can be written in terms of the system state ρ as

d⟨s⟩ = Tr[s dρ]    (5.131)
 = Tr{s(−i[H, ρ] + D[e^{−iZ}c]ρ)}dt.    (5.132)

This is precisely what would have been obtained from the Markovian feedback master equation (5.108) for Kρ = −i[Z, ρ].
Moreover, it is possible to set τ = 0 in Eq. (5.129) and still obtain a valid QLE:

ds = i[H, s]dt + (c†sc − ½sc†c − ½c†cs)dt − [dB₀†c − c†dB₀, s]
  + (c† + b₀†)(e^{iZ} s e^{−iZ} − s)(c + b₀)dt.    (5.133)

This equation is quite different from Eq. (5.129) because it is Markovian. This implies that, in this equation, it is no longer possible to move b₁ = c + b₀ around freely, since it now has the same time argument as the other operators, rather than an earlier one.
Exercise 5.20 Show that Eq. (5.133) is a valid QLE, bearing in mind that now it is b 0
rather than b 1 that commutes with all system operators.
This trick with time arguments and commutation relations enables the correct QLE describing feedback to be derived without worrying about the method of dealing with the τ → 0 limit used in Section 5.4.2. There are subtleties involved in using this method in the
Heisenberg picture, as will become apparent in Section 5.5.3.
For homodyne detection with efficiency η, the conditioned system state obeys the stochastic master equation

dρ_J(t) = −i[H, ρ_J(t)]dt + dt D[c]ρ_J(t) + √η dW(t)H[c]ρ_J(t).    (5.134)

The homodyne photocurrent, normalized so that the deterministic part does not depend on the efficiency, is

J_hom(t) = ⟨x⟩_J(t) + ξ(t)/√η,    (5.135)

where x ≡ c + c†.
If the feedback is effected by making the photocurrent multiply a superoperator K, so that

[ρ̇_J(t)]_fb = J_hom(t − τ)Kρ_J(t),    (5.136)

then the superoperator K must be such as to give valid evolution irrespective of the sign of the current. That is to say, it must give reversible evolution, with

Kρ = −i[F, ρ]    (5.137)

for some Hermitian operator F. The total conditioned evolution, including feedback, is then

ρ_J(t + dt) = [1 + J_hom(t − τ)dt K]{1 + H[−iH]dt + D[c]dt + √η dW(t)H[c]}ρ_J(t).    (5.138)

For τ finite, this becomes

dρ_J(t) = dt{H[−iH] + D[c] + ⟨c + c†⟩_J(t − τ)K + (1/2η)K²}ρ_J(t)
  + dW(t − τ)Kρ_J(t)/√η + √η dW(t)H[c]ρ_J(t).    (5.139)

In the Markovian limit τ → 0 this yields the stochastic master equation

dρ_J(t) = dt{−i[H, ρ_J] + D[c]ρ_J − i[F, cρ_J + ρ_J c†] + (1/η)D[F]ρ_J}
  + dW(t){√η H[c]ρ_J − (i/√η)[F, ρ_J]}.    (5.140)
For η = 1 and an initially pure state, this can alternatively be expressed as a SSE. Ignoring normalization, this is simply

d|ψ̄_J(t)⟩ = dt{−iH − ½(c†c + 2iFc + F²) + J_hom(t)(c − iF)}|ψ̄_J(t)⟩.    (5.141)

Exercise 5.21 Verify this, by finding the SME for ρ_J ∝ |ψ̄_J⟩⟨ψ̄_J| and then adding the terms necessary to preserve the norm.
The non-selective evolution of the system is easier to find from the SME (5.140). This
is a true Ito equation, so that taking the ensemble average simply removes the stochastic
term. This gives the homodyne feedback master equation

ρ̇ = −i[H, ρ] + D[c]ρ − i[F, cρ + ρc†] + (1/η)D[F]ρ.    (5.142)
An equation of this form was derived by Caves and Milburn [CM87] for an idealized model of position measurement plus feedback, with c replaced by x and η set to 1. The first
feedback term, linear in F , is the desired effect of the feedback which would dominate in
the classical regime. The second feedback term causes diffusion in the variable conjugate
to F . It can be attributed to the inevitable introduction of noise by the measurement step in
the quantum-limited feedback loop. The lower the efficiency, the more noise introduced.
The homodyne feedback master equation can be rewritten in the Lindblad form (4.28) as

ρ̇ = −i[H + ½(c†F + Fc), ρ] + D[c − iF]ρ + (1/η − 1)D[F]ρ ≡ Lρ.    (5.143)

In this arrangement, the effect of the feedback is seen to replace c by c − iF and to add an extra term to the Hamiltonian, plus an extra diffusion term that vanishes for perfect detection. In what follows, η will be assumed to be unity unless stated otherwise, since the generalization is usually obvious from previous examples.
The two-time correlation function of the current can be found from Eq. (5.140) to be, for t′ ≥ t,

E[J_hom(t′)J_hom(t)] = Tr{(c + c†)e^{L(t′−t)}[(c − iF)ρ(t) + H.c.]} + δ(t′ − t).    (5.144)
Exercise 5.22 Verify this using the method of Section 4.4.4.
Again, note that the feedback affects the term in square brackets, as well as the evolution by L for time t′ − t. This means that the in-loop photocurrent may have a sub-shot-noise spectrum, even if the intracavity light is classical. From the same reasoning as in Section 5.4.2, the feedback will not produce nonclassical dynamics for a damped harmonic oscillator (c ∝ a) if F is a Hamiltonian corresponding to linear optical processes, that is, if

F is linear in a and a†, or proportional to a†a.    (5.145)
where this is understood to be the γ → ∞ limit. Using Section 5.4, the feedback master equation is

ρ̇ = −i[H − γF + i(γ/2)(c† − c), ρ] + D[e^{−iF/γ}(c + γ)]ρ.    (5.146)

Expanding the exponential to second order in 1/γ and then taking the limit γ → ∞ reproduces Eq. (5.143). The correlation functions follow similarly as a special case.
Exercise 5.23 Show these results.
For a broadband squeezed and/or thermal bath, characterized by the parameters N and M, the no-feedback master equation is

ρ̇ = −i[H, ρ] + (N + 1)D[c]ρ + N D[c†]ρ + ½M[c†, [c†, ρ]] + ½M*[c, [c, ρ]].    (5.148)

Adding feedback as in Eq. (5.136), which is the same as introducing a feedback Hamiltonian

H_fb(t) = F J_hom(t − τ),    (5.149)

gives, in the Markovian limit, the feedback master equation

ρ̇ = (N + 1)(D[c]ρ − i[F, cρ + ρc†]) + N(D[c†]ρ + i[F, c†ρ + ρc])
  + M(½[c†, [c†, ρ]] + i[F, [c†, ρ]]) + M*(½[c, [c, ρ]] − i[F, [c, ρ]])
  + L D[F]ρ − i[H, ρ].    (5.150)
In the Heisenberg picture, the homodyne photocurrent for unit efficiency is the output quadrature

J_hom(t) = c(t) + c†(t) + b₀(t) + b₀†(t),    (5.151)

and the feedback Hamiltonian is

H_fb(t) = F(t)J_hom(t − τ).    (5.152)

The time delay ensures that the output quadrature operator J_hom(t − τ) commutes with all system operators at time t. Thus, it will commute with F(t) and there is no ambiguity in the operator ordering in Eq. (5.152). Treating the equation of motion generated by this Hamiltonian as a Stratonovich (or implicit) equation, the Ito (or explicit) equation is

[ds(t)]_fb = iJ_hom(t − τ)[F(t), s(t)]dt − ½[F(t), [F(t), s(t)]]dt.    (5.153)
Adding in the non-feedback evolution gives the total explicit equation of motion

ds = i[H, s]dt + (c†sc − ½sc†c − ½c†cs)dt − [dB₀†c − c†dB₀, s]
  + i[F, s]{[c(t − τ) + c†(t − τ)]dt + dB₀(t − τ) + dB₀†(t − τ)} − ½[F, [F, s]]dt.    (5.154)

In the Markovian limit, the feedback can instead be described by the Hamiltonian

H⁰_fb(t) = F(t)[b₀(t) + b₀†(t)],    (5.155)

because b₀(t) does commute with s(t).
Eq. (5.155) because b0 (t) + b 0 (t) is the quadrature of the vacuum input, which is independent of the system and so (it would seem) cannot describe feedback. However, Eq. (5.155)
is the correct Hamiltonian to use as long as we ensure that the feedback-coupling between
the system and the bath occurs after the usual coupling between system and bath. That is,
the total unitary operator evolving the system and bath at time t is
U(t + dt, t) = exp[−iH⁰_fb(t)dt]U₀(t + dt, t),    (5.156)

where U₀(t + dt, t) is defined in Eq. (5.119). In the Heisenberg picture, the system evolves via

s(t + dt) = U†(t + dt, t)s(t)U(t + dt, t)    (5.157)
 = U₀†(t + dt, t)e^{+iH⁰_fb(t)dt} s(t) e^{−iH⁰_fb(t)dt} U₀(t + dt, t).    (5.158)
Note that in Eq. (5.158) the feedback appears to act first because of the reversal of the
order of unitary operators in the Heisenberg picture. If desired, one could rewrite Eq. (5.158)
in a (perhaps) more intuitive order as
s(t + dt) = e^{+iH¹_fb(t)dt}[U₀†(t + dt, t)s(t)U₀(t + dt, t)]e^{−iH¹_fb(t)dt}.    (5.159)

Here

H¹_fb(t) = U₀†(t + dt, t)H⁰_fb(t)U₀(t + dt, t)    (5.160)
 = F(t + dt)J_hom(t).    (5.161)
That is, we regain the output quadrature (or homodyne photocurrent operator), as well as
replacing F (t) by F (t + dt). This ensures that, once again, there is no operator ambiguity
in Eq. (5.161) because the Jhom represents the result of the homodyne measurement at a
time t earlier (albeit infinitesimally) than the time argument for the system operator F .
Again, this makes sense physically because the feedback must act after the measurement.
Expanding the exponentials in Eq. (5.158) or Eq. (5.159), the quantum Ito rules give

ds = i[H, s]dt − [s, c†](½c dt + dB₀) + (½c† dt + dB₀†)[s, c]
  + i[F, s](c dt + dB₀) + (c† dt + dB₀†)i[F, s] − ½[F, [F, s]]dt.    (5.162)
Exercise 5.24 Verify this, and show that this is a valid Markovian QLE that is equivalent
to the homodyne feedback master equation (5.142).
Consider a cavity mode subject to damping, driving and parametric driving. Damping will be assumed to be always present, since we will assume homodyne detection of the output field from our system. We therefore take the damping rate to be unity. Constant linear driving simply shifts the stationary state away from ⟨x⟩ = 0, and will be ignored. Stochastic linear driving in the white-noise approximation causes diffusion in the x quadrature, at a rate l. Finally, if the strength of the parametric driving (H ∝ xy + yx) is χ (where χ = 1 would represent a parametric oscillator at threshold), then the master equation for the system is

ρ̇ = D[a]ρ + ¼ l D[a − a†]ρ + ¼ χ[a†² − a², ρ] ≡ L₀ρ.    (5.163)
For an equation of this form, the mean and variance of the x quadrature obey

d⟨x⟩/dt = −k⟨x⟩,    (5.164)

dV/dt = −2kV + D.    (5.165)
Exercise 5.25 Show that for the particular master equation above (the properties of which
will be denoted by the subscript 0) these equations hold, with
k₀ = ½(1 + χ),    (5.166)

D₀ = 1 + l.    (5.167)

Hint: Remember that d⟨x⟩/dt = Tr[x ρ̇] and that d⟨x²⟩/dt = Tr[x² ρ̇].
For a stable system with k > 0, there is a steady state with ⟨x⟩ = 0 and

V = D/(2k).    (5.168)
It turns out that the first two moments (the mean and variance) are actually sufficient to
specify the stationary state of the system because it is a Gaussian state. That is, its Wigner
function (see Section A.5) is Gaussian. The probability distribution for x (which is all we
are interested in here) is just the marginal distribution for the Wigner function, so it is
also Gaussian. Moreover, if the distribution is originally Gaussian (as for the vacuum, for
example), then it will be Gaussian at all times. This can be seen by considering the equation
of motion for the probability distribution for x,

℘(x) = ⟨x|ρ|x⟩.    (5.169)

This equation of motion can be derived from the master equation by considering the operator correspondences for the Wigner function (see Section A.5). Because here we have [x, y] = 2i, if we identify x with √2 Q then we must identify y with √2 P. On doing this we find that

℘̇(x) = [(∂/∂x)kx + ½D(∂²/∂x²)]℘(x).    (5.170)
This particular form, with linear drift −kx and constant diffusion D > 0, is known as an Ornstein–Uhlenbeck equation (OUE).
Exercise 5.26 Derive Eq. (5.170) from Eq. (5.163), and show by substitution that it has a
Gaussian solution with mean and variance satisfying Eqs. (5.164) and (5.165), respectively.
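The moment equations (5.164) and (5.165) are simple enough to integrate directly. The following sketch (an illustration added here, with arbitrarily chosen χ and l) checks that the variance relaxes to the steady-state value D/(2k) of Eq. (5.168).

```python
# Euler integration of the moment equations (5.164)-(5.165) for the OUE,
# checking the steady state V = D/(2k) of Eq. (5.168).
chi, l = 0.5, 0.2                 # parametric drive and x-diffusion (arbitrary)
k, Dv = 0.5 * (1 + chi), 1 + l    # k0 and D0 from Eqs. (5.166)-(5.167)

x, V, dt = 1.0, 1.0, 1e-4         # initial mean and variance (vacuum has V = 1)
for _ in range(200_000):          # integrate to t = 20 (many decay times 1/k)
    x += -k * x * dt
    V += (-2 * k * V + Dv) * dt

print(x, V, Dv / (2 * k))
assert abs(V - Dv / (2 * k)) < 1e-6
```

For these parameters V relaxes to D₀/(2k₀) = 0.8, below the vacuum value of 1, i.e. the x quadrature is squeezed, as expected since l < χ.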
In the present case, V₀ = (1 + l)/(1 + χ). If this is less than unity, the system exhibits squeezing of the x quadrature. It is useful to characterize the squeezing by the normally ordered variance (see Section A.5)

U ≡ ⟨a² + 2a†a + a†²⟩ − ⟨a + a†⟩².    (5.171)

Exercise 5.27 Show from this definition that U = V − 1.

For this system, the normally ordered variance takes the value

U₀ = (l − χ)/(1 + χ).    (5.172)
Now, if the system is to stay below threshold (so that the variance in the y quadrature does not become unbounded), then the maximum value for χ is unity.

Exercise 5.28 Show this from the master equation (5.163).

At this value, U₀ = −1/2 when the x-diffusion rate l = 0. Therefore the minimum value of squeezing which this linear system can attain as a stationary value is half of the theoretical minimum of U₀ = −1.
In quantum optics, the output light is often of more interest than the intracavity light.
Therefore it is useful to compute the output noise statistics. For squeezed systems,
the relevant quantity is the spectrum of the homodyne photocurrent, as introduced in Section 4.4.4,

S(ω) = lim_{t→∞} ∫_{−∞}^{∞} dτ E[J_hom(t + τ)J_hom(t)]e^{−iωτ}.    (5.173)

Given the drift and diffusion coefficients for the dynamics, the spectrum in the present case is

S(ω) = 1 + (D − 2k)/(ω² + k²).    (5.174)
Hint: Remember that, for example, Tr[x e^{Lτ}(aρ_ss)] can be evaluated using the state with initial condition ρ(0) = aρ_ss. Thus, since the mean of x obeys the linear equation (5.164), it follows that this expression simplifies to e^{−kτ} Tr[x aρ_ss].
The spectrum consists of a constant term representing shot noise plus a Lorentzian,
which will be negative for squeezed systems. The spectrum can be related to the intracavity
squeezing by subtracting the vacuum noise:
(1/2π)∫_{−∞}^{∞} dω [S(ω) − 1] = (D − 2k)/(2k) = U.    (5.175)
That is, the total squeezing integrated across all frequencies in the output is equal to the
intracavity squeezing. However, the minimum squeezing, which for a simple linear system
such as this will occur at zero frequency,2 may be greater than or less than U . It is useful
to define it by another parameter,
R ≡ S(0) − 1 = 2U/k.    (5.176)

For the system above, with no feedback,

R₀ = −4(χ − l)/(1 + χ)².    (5.177)
In the ideal limit (χ → 1, l → 0), the zero-frequency squeezing approaches the minimum value of R₀ = −1.
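The relation (5.175) between the integrated output spectrum and the intracavity squeezing can be checked numerically for this system. The sketch below (an added illustration; parameter values are arbitrary) integrates S(ω) − 1 from Eq. (5.174) on a broad frequency grid.

```python
# Check that the integrated squeezing spectrum, Eq. (5.175), recovers the
# intracavity U of Eq. (5.172), using S(w) = 1 + (D - 2k)/(w^2 + k^2).
import numpy as np

chi, l = 0.8, 0.1
k, Dv = 0.5 * (1 + chi), 1 + l
U_intracavity = (Dv - 2 * k) / (2 * k)      # equals (l - chi)/(1 + chi)

w = np.linspace(-2000.0, 2000.0, 4_000_001)  # broad, fine frequency grid
S_minus_1 = (Dv - 2 * k) / (w**2 + k**2)
dw = w[1] - w[0]
integral = S_minus_1.sum() * dw / (2 * np.pi)  # Riemann sum of (5.175)

print(integral, U_intracavity)
assert abs(integral - U_intracavity) < 1e-3
assert abs(U_intracavity - (l - chi) / (1 + chi)) < 1e-12
```

The small residual is due to truncating the Lorentzian tails at |ω| = 2000; widening the grid shrinks it further.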
Now consider adding feedback, with the feedback operator

F = λy/2.    (5.178)

As a separate Hamiltonian, this translates a state in the negative x direction for λ positive.
By controlling this Hamiltonian by the homodyne photocurrent, one thus has the ability to
change the statistics for x and perhaps achieve better squeezing. Substituting Eq. (5.178)
into the general homodyne feedback master equation (5.142) and adding the free dynamics
(5.163) gives
ρ̇ = L₀ρ − (λ/2)[a − a†, aρ + ρa†] + (λ²/4η)D[a − a†]ρ.    (5.179)
Here η is the proportion of output light used in the feedback loop, multiplied by the efficiency of the detection. For the x distribution ℘(x) one finds that it still obeys an OUE, but now with
k = k₀ + λ,    (5.180)

D = D₀ + 2λ + λ²/η.    (5.181)
² In reality, the minimum noise is never found at zero frequency, because virtually all experiments are subject to non-white noise of technical origin, which can usually be made negligible at high frequencies, but whose spectrum grows without bound as ω → 0. Often the spectrum scales as 1/ω, or 1/f, where f = ω/(2π), in which case it is known as 1/f noise.
The stationary unconditioned variance is then U = (2k₀U₀ + λ²/η)/(2(k₀ + λ)), as in Eq. (5.182), and this is minimized by the feedback strength

λ = −k₀ + √(k₀² + 2ηk₀U₀).    (5.184)
Note that this λ has the same sign as U₀. That is to say, if the system produces squeezed
light, then the best way to enhance the squeezing is to add a force that displaces the state
in the direction of the difference between the measured photocurrent and the desired mean
photocurrent. This positive feedback is the opposite of what would be expected classically,
and can be attributed to the effect of homodyne measurement on squeezed states, as will be
explained in Section 5.6.3. Obviously, the best intracavity squeezing will be when η = 1, in which case the intracavity squeezing can be simply expressed as

U_min = k₀(−1 + √(1 + R₀)).    (5.185)
Although linear optical feedback cannot produce squeezing, this does not mean that it cannot reduce noise. In fact, it can be proven that U_min ≤ U₀, with equality only if η = 0 or U₀ = 0.
Exercise 5.31 Show this for η = 1 using the result √(1 + R₀) ≤ 1 + R₀/2 (valid since R₀ ≥ −1). The result for any η follows by application of the mean-value theorem.
This result implies that the intracavity variance in x can always be reduced by homodyne-mediated linear optical feedback, unless it is at the vacuum noise level. In particular, intracavity squeezing can always be enhanced. For the parametric oscillator defined originally in Eq. (5.163), with l = 0 and η = 1, U_min = −χ, so that the (symmetrically ordered) x variance is V_min = 1 − χ. The y variance, which is unaffected by feedback, is seen from Eq. (5.163) to be (1 − χ)⁻¹. Thus, with perfect detection, it is possible to produce a minimum-uncertainty squeezed state with arbitrarily high squeezing as χ → 1. This is not unexpected, since parametric driving in an undamped cavity also produces minimum-uncertainty squeezed states (but there is no steady state). The feedback removes the noise that was added by the damping that enables the measurement used in the feedback.
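The inequality U_min ≤ U₀ of Exercise 5.31 can also be verified by brute force. The following sketch (an added illustration; η = 1 is assumed, as in the exercise) scans a grid of χ and l values.

```python
# Grid check of U_min <= U_0 (equality only at U_0 = 0), using
# U_0 = (l - chi)/(1 + chi), k_0 = (1 + chi)/2, R_0 = 2 U_0/k_0 and
# U_min = k_0 (-1 + sqrt(1 + R_0)) from Eq. (5.185), with eta = 1.
import numpy as np

for chi in np.linspace(0.0, 0.99, 100):
    for l in np.linspace(0.0, 2.0, 81):
        k0 = 0.5 * (1 + chi)
        U0 = (l - chi) / (1 + chi)
        R0 = 2 * U0 / k0                 # note R0 >= -1 on this grid
        Umin = k0 * (-1 + np.sqrt(1 + R0))
        assert Umin <= U0 + 1e-12
print("U_min <= U_0 holds on the whole grid")
```

Note that U_min has the same sign as U₀ everywhere on the grid, consistent with the statement that linear optical feedback cannot produce squeezing, only enhance it.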
Next, we turn to the calculation of the output squeezing. Here, it must be remembered that at least a fraction η of the output light is being used in the feedback loop. Thus, the output squeezing parameter in the presence of the feedback is

R = 2U/(k₀ + λ)    (5.186)
 = (2k₀U₀ + λ²/η)/(k₀ + λ)².    (5.187)
In all cases, R is minimized for a different value of λ from that which minimizes U. One finds

R_min = R₀/(1 + ηR₀)    (5.188)

when

λ = 2ηU₀.    (5.189)

Again, λ has the same sign as U₀. It follows immediately from Eq. (5.188) that, since R₀ ≥ −1 and η ≤ 1,

R_min ≥ R₀/(1 + R₀) for R₀ < 0.    (5.190)
That is to say, dividing the cavity output to add a homodyne-mediated classical feedback
loop cannot produce better output squeezing at any frequency than would be available
from an undivided output with no feedback. These no-go results are analogous to those
obtained for the feedback control of optical beams derived in Section 5.2.
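Equations (5.188) and (5.189) can be confirmed by minimizing R(λ) numerically. The sketch below assumes the form R(λ) = (2k₀U₀ + λ²/η)/(k₀ + λ)² of Eq. (5.187); the values of k₀, U₀ and η are arbitrary choices for the test.

```python
# Numerical minimization of the output squeezing parameter R(lam),
# checking lam = 2 eta U0 and R_min = R0/(1 + eta R0), Eqs. (5.188)-(5.189).
import numpy as np

k0, U0, eta = 0.9, -0.25, 0.8      # arbitrary stable squeezed-system values
R0 = 2 * U0 / k0

lam = np.linspace(-0.8, 2.0, 2_000_001)        # keep k0 + lam > 0
R = (2 * k0 * U0 + lam**2 / eta) / (k0 + lam) ** 2
i = np.argmin(R)

print(lam[i], R[i])
assert abs(lam[i] - 2 * eta * U0) < 1e-4
assert abs(R[i] - R0 / (1 + eta * R0)) < 1e-6
```

Repeating the scan for other (k₀, U₀, η) triples with k₀ + λ > 0 gives the same agreement.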
These results can also be understood using the theory of quantum trajectories, via the stochastic master equation for the conditioned density matrix ρ_J(t),

dρ_J(t) = dt{L₀ρ_J(t) + K[aρ_J(t) + ρ_J(t)a†] + (1/2η)K²ρ_J(t)}
  + dW(t){√η H[a] + K/√η}ρ_J(t).    (5.191)
Here L₀ is as defined in Eq. (5.163) and Kρ = −i[F, ρ], where F is defined in Eq. (5.178). Changing this to a stochastic FPE for the conditioned marginal Wigner function gives

d℘_J(x) = dt[(∂/∂x)(k₀ + λ)x + ½(D₀ + 2λ + λ²/η)(∂²/∂x²)]℘_J(x)
  + dW(t)√η[x − ⟨x⟩_J(t) + (1 + λ/η)(∂/∂x)]℘_J(x),    (5.192)

where ⟨x⟩_J(t) is the mean of the distribution ℘_J(x).
Exercise 5.33 Show this using the Wigner-function operator correspondences.
This equation is obviously no longer a simple OUE. Nevertheless, it still has a Gaussian as
an exact solution. Specifically, the mean xJ and variance VJ of the conditioned distribution
obey the following equations (recall that ξ(t) = dW/dt):

ẋ_J = −(k₀ + λ)x_J + ξ(t)[√η(V_J − 1) − λ/√η],    (5.193)

V̇_J = −2k₀V_J + D₀ − η(V_J − 1)².    (5.194)

Exercise 5.34 Show this by considering a Gaussian ansatz for Eq. (5.192).
Hint: Remember that, for any b, 1 + dW(t)b = exp[dW(t)b − dt b²/2].
Two points about the evolution equation for VJ are worth noting: it is completely deterministic (no noise terms); and it is not influenced by the presence of feedback.
The equation for the conditioned variance is more simply written in terms of the conditioned normally ordered variance U_J = V_J − 1:

U̇_J = −2k₀U_J − 2k₀ + D₀ − ηU_J².    (5.195)
If one were to choose λ = −k₀ + √(k₀² + η(D₀ − 2k₀)), then there would be no noise at all in the conditioned mean, and so one could set x_J = 0 in the steady state. This value of λ
is precisely that value derived as Eq. (5.184) to minimize the unconditioned variance
under feedback. Now one can see why this minimum unconditioned variance is equal to
the conditioned variance. The feedback works simply by suppressing the fluctuations in the
conditioned mean.
In general, the unconditioned variance will consist of two terms, the conditioned quantum
variance in x plus the classical (ensemble) average variance in the conditioned mean of x:
U = U_J + E[x_J²].    (5.196)

The latter term is found from Eq. (5.197) to be

E[x_J²] = [λ + k₀ − √(k₀² + η(D₀ − 2k₀))]²/(2η(k₀ + λ)).    (5.198)

Adding this to the steady-state conditioned variance

U_J = [−k₀ + √(k₀² + η(D₀ − 2k₀))]/η    (5.199)

gives

U = [λ²/η + (D₀ − 2k₀)]/(2(k₀ + λ)).    (5.200)
Exercise 5.35 Verify that this is identical to the expression (5.182) derived in the preceding
subsection using the unconditioned master equation.
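Exercise 5.35 amounts to an algebraic identity, which can be checked numerically over random parameters: the steady state of Eq. (5.195), plus the excess noise of Eq. (5.198), should reproduce Eq. (5.200). A sketch (an added illustration; parameters drawn at random, with stability enforced):

```python
# Check the decomposition U = U_J + E[x_J^2]: steady-state conditioned
# variance (5.195) plus excess noise (5.198) equals the unconditioned (5.200).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    k0 = rng.uniform(0.1, 2.0)
    D0 = rng.uniform(0.1, 4.0)
    eta = rng.uniform(0.1, 1.0)
    lam = rng.uniform(-0.9 * k0, 2.0)        # keep k0 + lam > 0
    if k0**2 + eta * (D0 - 2 * k0) <= 0:     # need a real conditioned variance
        continue
    root = np.sqrt(k0**2 + eta * (D0 - 2 * k0))
    UJ = (-k0 + root) / eta                               # Eq. (5.195) steady state
    Ex2 = (lam + k0 - root) ** 2 / (2 * eta * (k0 + lam))  # Eq. (5.198)
    U = (lam**2 / eta + (D0 - 2 * k0)) / (2 * (k0 + lam))  # Eq. (5.200)
    assert abs(UJ + Ex2 - U) < 1e-10
print("U = U_J + E[x_J^2] verified")
```

Note also that setting λ = ηU_J makes the excess term vanish, recovering the optimal feedback strength of Eq. (5.184).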
Using the conditioned equation, there is an obvious way to understand the feedback. The
homodyne measurement reduces the conditioned variance (except when it is equal to the
classical minimum of 1). The more efficient the measurement, the greater the reduction.
Ordinarily, this reduced variance is not evident because the measurement gives a random
shift to the conditional mean of x, with the randomness arising from the shot noise of the
photocurrent. By appropriately feeding back this photocurrent, it is possible to counteract
precisely this shift and thus observe the conditioned variance.
The sign of the feedback parameter λ is determined by the sign of the shift which the measurement gives to the conditioned mean x_J. For classical statistics (U ≥ 0), a higher than average photocurrent reading (ξ(t) > 0) leads to an increase in x_J (except if U = 0, in
which case the measurement has no effect). However, for nonclassical states with U < 0,
the classical intuition fails since a positive photocurrent fluctuation causes xJ to decrease.
This explains the counter-intuitive negative value of λ required in squeezed systems, which naively would be thought to destabilize the system and increase fluctuations. However, the value of the positive feedback required, given by Eq. (5.184), is such that the overall decay rate k₀ + λ is still positive.
It is worth remarking that the above conclusions are not limited to Markovian feedback,
which is all that we have analysed. One could consider a feedback Hamiltonian proportional
to an arbitrary (even nonlinear) function of the photocurrent J (t), and the equation for the
conditional variance (5.194) will remain exactly as it is. Only the equation for the mean
will be changed. Although this equation might not be solvable, Eq. (5.198) guarantees that
the unconditioned variance cannot be less than the conditional variance. Moreover, if the
feedback Hamiltonian is a linear functional of the photocurrent then the equation for the
mean will be solvable in Fourier space, provided that the feedback is stable. That is, for
linear systems one can solve for arbitrary feedback using the theory of quantum trajectories
in exactly the same manner as we did for QLEs in Section 5.2. The interested reader is
referred to Ref. [WM94c].
To conclude, one can state succinctly that conditioning can be made practical by feedback. The intracavity noise reduction produced by classical feedback can never be better
than that produced (conditionally) by the measurement. Of course, nonclassical feedback
(such as using the photocurrent to influence nonlinear intracavity elements) may produce
nonclassical states, but such elements can produce nonclassical states without feedback,
so this is hardly surprising. In order to produce nonclassical states by feedback with linear
optics, it would be necessary to have a nonclassical measurement scheme. That is to say,
one that does not rely on measurement of the extracavity light to procure information
about the intracavity state. For example, a QND measurement of one quadrature would
produce a squeezed conditioned state and hence allow the production of unconditional
intracavity (and extracavity) squeezing by feedback. Again, the interested reader is referred
to Ref. [WM94c]. This is essentially the same conclusion as that which was reached for
feedback on optical beams in Section 5.3. In the following section we consider feedback
based on QND measurements in an atomic (rather than optical) system.
5.7.1 Spin-squeezing
Consider an atom with two relevant levels, with the population difference operator being the Pauli operator σ_z. The collective properties of N such atoms prepared identically are conveniently described by a spin-J system for J = N/2. The collective angular-momentum operators are J_α = Σ_{k=1}^{N} ĵ_α^{(k)}, where α = x, y, z and where ĵ_α^{(k)} = σ̂_α^{(k)}/2 is the angular-momentum operator for the kth atom. These obey the cyclic commutation relations [J_x, J_y] = iJ_z.
Exercise 5.36 Verify this from the commutation relations for the Pauli matrices. See Box 3.1.
These imply the uncertainty relations

ΔJ_y ΔJ_z ≥ |⟨J_x⟩|/2,    (5.201)

plus cyclic permutations. The operator J_z represents half the total population difference and is a quantity that can be measured, for example by dispersive imaging techniques, as will be discussed.
For a coherent spin state (CSS) of a spin-J system, the elementary spins all point in
the same direction, with no correlations. An example of such a state is a Jx eigenstate
of maximum eigenvalue J = N/2. Such a state achieves the minimum of the uncertainty
relation (5.201), with the variance of the two components normal to the mean direction
(in this case, Jz and Jy ) equal to J /2. If quantum-mechanical correlations are introduced
among the atoms it is possible to reduce the fluctuations in one direction at the expense
of the other. This is the idea of a squeezed spin state (SSS) introduced by Kitagawa
and Ueda [KU93]. That is, the spin system is squeezed when the variance of one spin
component normal to the mean spin vector is smaller than the standard quantum limit (SQL)
of J /2.
There are many ways to characterize the degree of spin-squeezing in a spin-J system.
We will use the criteria of Sørensen and co-workers [SDCZ01] and Wang [Wan01], where the squeezing parameter is given by

ξ²_{n₁} = N(ΔJ_{n₁})²/(⟨J_{n₂}⟩² + ⟨J_{n₃}⟩²),    (5.202)

where J_n ≡ n · J and the nᵢ for i = 1, 2, 3 are orthogonal unit vectors. Systems with ξ²_n < 1 are
said to be spin-squeezed in the direction n. It has also been shown that this indicates that the
atoms are in an entangled state [SDCZ01]. This parameter also has the appealing property that, for a CSS, ξ²_n = 1 for all n [Wan01]. In all that follows, we consider spin-squeezing in the z direction and hence drop the subscript on ξ².
The ultimate limit to ξ² (set by the Heisenberg uncertainty relations) is of order 1/N. Since N is typically very large experimentally (of order 10¹¹), the potential for noise reduction is enormous. However, so far, the degree of noise reduction achieved experimentally has been modest, with ξ² ~ 10⁻¹ ≫ N⁻¹. The amount of entanglement in such states is relatively small, so it is a good approximation to assume that the atoms are unentangled when evaluating the denominator of Eq. (5.202). That is, for example, if the mean spin is in the x direction, we can say that ⟨J_x⟩ = J and that ⟨J_y⟩ = ⟨J_z⟩ = 0. For squeezing in the z direction, the squeezing parameter is thus given by

ξ² ≈ (ΔJ_z)²/(J/2).    (5.203)
A continuous QND measurement of J_z, with measurement strength M, gives rise to the stochastic master equation

dρ_Y = M D[J_z]ρ_Y dt + √M dW(t)H[J_z]ρ_Y.    (5.204)

Here ρ_Y is the state of the spin system conditioned on the current

Y(t) = 2√M⟨J_z⟩_Y + dW(t)/dt,    (5.205)

where unit detection efficiency has been assumed. We are using Y(t), rather than J(t) as has been our convention, in order to avoid confusion with the total spin J. For a probe beam in free space, the QND measurement strength (with dimensions of T⁻¹) is

M = P[Γ²/(8A I_sat)]².    (5.206)
The measurement changes the conditioned mean of J_z by

d⟨J_z⟩_Y = 2√M dW(t)⟨(ΔJ_z)²⟩_Y ≈ 2√M Y(t)dt ⟨(ΔJ_z)²⟩_Y.    (5.207)
Fig. 5.4 Schematic quasiprobability distributions for the spin states, represented by ellipses on the
Bloch sphere of radius J . The initial CSS, spin polarized in the x direction, is given by state 1. State 2
is one particular conditioned spin state after a measurement of Jz , while state 3 is the corresponding
unconditioned state due to averaging over all possible conditioned states. The effect of the feedback
is shown by state 4: a rotation about the y axis shifts the conditioned state 2 back to ⟨J_z⟩_Y = 0. The
ensemble average of these conditioned states will then be similar to state 4. This is a reproduction
of Fig. 2 of Ref. [TMW02a]. Based on Figure 2 from L. K. Thomsen et al., Continuous Quantum
Nondemolition Feedback and Unconditional Atomic Spin Squeezing, Journal of Physics B: At. Mol.
Opt. Phys. 35, 4937, (2002), IOP Publishing Ltd.
Fig. 5.5 A schematic diagram of an experimental apparatus that could be used for production of spinsqueezing via feedback. The laser probe passes through the ensemble of atoms and is detected by
balanced homodyne detection. The current Y (t) is fed back to control the magnetic field surrounding
the atoms.
To counteract the measurement back-action on the mean, one adds a feedback Hamiltonian H_fb(t) = F(t)Y(t), where F(t) = λ(t)J_y/√M and λ(t) is the feedback strength. We have assumed instantaneous feedback because that is the form required to cancel out Eq. (5.207). Such a Hamiltonian can be effected by modulating the magnetic field in the region of the sample, as in the apparatus shown schematically in Fig. 5.5.
Assuming as above that ⟨J_z⟩_Y ≈ 0, this feedback Hamiltonian leads to a shift in the mean of J_z of

d⟨J_z⟩_fb ≈ −λ(t)Y(t)dt ⟨J_x⟩_Y/√M.    (5.209)
Since the idea is to produce ⟨J_z⟩_Y = 0 via the feedback, the approximations above and in Eq. (5.207) apply, and we can find a feedback strength such that Eq. (5.207) is cancelled out by Eq. (5.209). The required feedback strength for our scheme is thus

λ(t) = 2M⟨(ΔJ_z)²⟩_Y/⟨J_x⟩_Y.    (5.210)
This use of feedback to cancel out the noise in the conditional mean is the same technique
as that which was found to be optimal in the linear system analysed in Section 5.6. The
difference here is that the optimal feedback strength (5.210) depends upon conditional
averages.
For experimental practicality it is desirable to replace the conditional averages in
Eq. (5.210) by a predetermined function λ(t) that could be stored in a signal generator. The evolution of the system including feedback can then be described by the master
equation

ρ̇ = M D[J_z]ρ − iλ(t)[J_y, J_zρ + ρJ_z] + (λ(t)²/M)D[J_y]ρ.    (5.211)

The choice of λ(t) was considered in detail in Ref. [TMW02a], where it was shown that a suitable choice enables the master equation (5.211) to produce Heisenberg-limited spin-squeezing (ξ² ∝ N⁻¹) at a time t ∼ M⁻¹.
By substituting the approximation for ⟨J_y²⟩ into the expressions for ⟨J_x⟩² we obtain

λ(t) ≈ M(1 + NMt)⁻¹,    (5.212)

so that squeezing with

ξ²(t) ≈ (1 + NMt)⁻¹    (5.213)

will be produced at time t, and this will be valid as long as Mt ≪ 1. Spontaneous emission, and other imperfections, are of course still being ignored.
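The scaling of Eq. (5.213) follows from the deterministic equation for the conditioned variance, which for a QND measurement of J_z reduces (in the Gaussian approximation) to V̇ = −4MV². The sketch below (an added illustration; N and M are arbitrary) integrates this and compares with (1 + NMt)⁻¹.

```python
# Integrate the Gaussian-approximation Riccati equation Vdot = -4 M V^2 for the
# conditioned variance of Jz, starting from the CSS value V(0) = J/2, and
# compare xi^2(t) = V/(J/2) with the formula (1 + N M t)^(-1) of Eq. (5.213).
import numpy as np

N, M = 1000, 0.01
J = N / 2
V, dt, T = J / 2, 1e-5, 2.0
for _ in np.arange(0, T, dt):
    V += -4 * M * V**2 * dt

xi2_numeric = V / (J / 2)
xi2_formula = 1 / (1 + N * M * T)
print(xi2_numeric, xi2_formula)
assert abs(xi2_numeric - xi2_formula) < 1e-3
```

The agreement reflects the exact solution 1/V(t) = 1/V(0) + 4Mt, which with V(0) = J/2 = N/4 gives exactly ξ²(t) = (1 + NMt)⁻¹.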
Exercise 5.39 Show that, if λ is held fixed, rather than varied, the variance of the conditioned mean ⟨J_z⟩_Y at time t is

E[⟨J_z⟩²_Y] = [e^{−4λJt} − (1 + 2JMt)⁻¹](J/2) + (1 − e^{−4λJt})λJ/(4M).    (5.214)

Hint: Remember that for linear systems ⟨J_z²⟩ = ⟨(ΔJ_z)²⟩_Y + E[⟨J_z⟩²_Y].
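One way to check Eq. (5.214) is to integrate the moment equation for E(t) = E[⟨J_z⟩²_Y] that follows from Eqs. (5.205), (5.207) and (5.209) with λ fixed, namely Ė = −4λJE + (2√M V − λJ/√M)² with V(t) = (J/2)/(1 + 2JMt). A numerical sketch (arbitrary parameters):

```python
# Euler-integrate the moment equation for E(t) = E[<Jz>_Y^2] at fixed lambda
# and compare with the closed form of Eq. (5.214).
import numpy as np

J, M, lam = 50.0, 0.02, 0.005
E, dt, T = 0.0, 1e-5, 3.0
t = 0.0
while t < T:
    V = (J / 2) / (1 + 2 * J * M * t)       # deterministic conditioned variance
    E += (-4 * lam * J * E + (2 * np.sqrt(M) * V - lam * J / np.sqrt(M)) ** 2) * dt
    t += dt

closed = ((np.exp(-4 * lam * J * T) - 1 / (1 + 2 * J * M * T)) * J / 2
          + (1 - np.exp(-4 * lam * J * T)) * lam * J / (4 * M))
print(E, closed)
assert abs(E - closed) / closed < 1e-3
```

At long times E(t) → λJ/(4M), the residual noise in the mean that fixed-strength feedback cannot remove, which is why the time-varying λ(t) of Eq. (5.212) is preferable.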
An experiment along the lines described above (with fixed) has been performed, with
results that appeared to reflect moderate spin-squeezing [GSM04]. Unfortunately the published results were not reproducible and exhibited some critical calibration inconsistencies.
The authors have since concluded that they must have been spurious and have retracted the
paper [GSM08], saying that 'analyzing Faraday spectroscopy of alkali clouds at high optical depth in precise quantitative detail is surprisingly challenging'. High optical depth leads to
significant difficulties with the accurate determination of effective atom number and degree
of polarization (and thus of the CSS uncertainty level), while technical noise stemming
from background magnetic fields and probe polarization or pointing fluctuations can easily
mask the atomic spin projection signal. An additional complication relative to the simple
theory given above is that the experiment was performed by probing cesium atoms on an
optical transition with many coupled hyperfine-Zeeman states, rather than the two levels
considered above. There is still a linear coupling of the probe field to the angular-momentum
operator ĵ_z defined on the entire hyperfine–Zeeman manifold, which can in principle be utilized to generate measurement-induced spin-squeezing. However, there is also a nonlinear atom–probe interaction that can corrupt the Faraday-rotation signals if it is not suppressed
by very careful probe-polarization alignment. For more details see Ref. [Sto06]. Continuing research has led to the development of technical noise-suppression techniques and new
modelling and calibration methods that enable accurate high-confidence determination of
the CSS uncertainty level [MCSM], providing hope for improved experimental results
regarding the measurement and control of spin-squeezing in the future.
5.8 Further reading
5.8.1 Coherent quantum feedback
It was emphasized in Section 5.2 that even when we use an operator to describe the fed-back current, as is necessary in the Heisenberg picture, we do not mean to imply that the feedback apparatus is truly quantum mechanical. That is, the feedback Hamiltonians we
feedback apparatus is truly quantum mechanical. That is, the feedback Hamiltonians we
use are model Hamiltonians that produce the correct evolution. They are not to be taken
seriously as dynamical Hamiltonians for the feedback apparatus.
However, there are situations in which we might wish to consider a very small apparatus,
and to treat it seriously as a quantum system, with no classical measurement device taking
part in the dynamics. We could still consider this to be a form of quantum feedback if the
Hamiltonian for the system of interest S and apparatus A were such that the following
applied.
1. S and A evolve for time tm under a joint Hamiltonian H coup .
2. S and A may then evolve independently.
3. S and A then evolve for time tc under another joint Hamiltonian H fb = FS JA .
The form of H fb ensures that, insofar as S is concerned, the dynamics could have been
implemented by replacing step 3 by the following two steps.
3. The apparatus observable J_A is measured, yielding result J.
4. S then evolves for time tc under the Hamiltonian H fb = FS J .
Back-action elimination. Recall from Section 1.4.2 that, for efficient measurements,
any measurement can be considered as a minimally disturbing measurement followed by
unitary evolution that depends on the result. This leads naturally to the idea, proposed
in Ref. [Wis95], of using feedback to eliminate this unnecessary unitary back-action. A
quantum-optical realization of a QND measurement of a quadrature using this technique
was also proposed there. Courty, Heidman and Pinard [CHP03] have proposed using this
principle to eliminate the radiation-pressure back-action in the interferometric measurement of mirror position. Their work has important implications for gravitational-wave
detection.
Decoherence control. The HoroshkoKilin scheme (see Section 5.4.3) for protecting a
Schrödinger cat from decoherence due to photon loss works only if the lost photons are
detected (and the information fed back). In the microwave regime lost photons cannot in
practice be detected, so an alternative approach is necessary. Several feedback schemes
have been suggested see Ref. [ZVTR03] and references therein. In Ref. [ZVTR03],
Zippilli et al. showed that the parity of the microwave cat state can be probed by passing
an atom through the cavity. The same atom, containing the result of the measurement,
can then be used to implement feedback on the mode during the latter part of its passage
through the cavity. This is thus an example of the coherent feedback discussed above. They
showed that the lifetime of the cat state can, in principle, be arbitrarily enhanced by this
technique.
Engineering invariant attractive subsystems. The preparation of a two-level quantum
system in a particular state by Markovian feedback was considered in Refs. [HHM98,
WW01, WWM01] (see also Section 6.7.2). A much more general approach is discussed
in Ref. [TV08], namely engineering attractive and invariant dynamics for a subsystem.
Technically, a subsystem is a system with Hilbert space H_S such that the total Hilbert space can be written as (H_S ⊗ H_F) ⊕ H_R. Attractive and invariant dynamics for the subsystem means that in the long-time limit the projection of the state onto H_R is zero, while the state on H_S ⊗ H_F has the form ρ_S ⊗ ρ_F, for a particular ρ_S. Ticozzi and Viola discuss the
conditions under which such dynamics can be engineered using Markovian feedback, for a
given measurement interaction and Markovian decoherence. This is potentially useful for
preparing systems for quantum information processing (see Chapter 7).
Linewidth narrowing of an atom laser. A continuous atom laser consists of a mode
of bosonic atoms continuously damped, so as to form a beam, and continuously replenished. Such a device, which has yet to be realized, will almost certainly have a spectral
linewidth dominated by the effect of the atomic interaction energy, which turns fluctuations in the condensate atom number into fluctuations in the condensate frequency. These
correlated fluctuations mean that information about the atom number could be used to
reduce the frequency fluctuations, by controlling a spatially uniform potential. Obtaining
information about the atom number by a quantum-non-demolition measurement (similar
to that discussed in Section 5.7) is a process that itself causes phase fluctuations, due
to measurement back-action. Nevertheless, it has been shown that Markovian feedback
based upon such a measurement could reduce the linewidth by many orders of magnitude
[WT01, TW02].
Cooling of a trapped ion (theory and experiment). The motion of a single ion in a Paul
trap [WPW99] can be treated as a harmonic oscillator. By using its internal electronic states
and coupling to external lasers, it can be cooled using so-called Doppler cooling [WPW99].
However, the equilibrium thermal occupation number (the number of phonons of motion)
is still large. It was shown by Steixner, Rabl and Zoller [SRZ05] that in this process some
of the light emitted from the atom can be detected in a manner that allows a measurement of
one quadrature of the ion's motion (similar to homodyne detection of an optical field). They
then show, using Markovian feedback theory as presented here, that the measured current
can be fed back to the trap electrodes to cool the motion of the ion. Moreover, this theory
has since been verified experimentally by the group of Blatt [BRW+ 06], demonstrating
cooling by more than 30% below the Doppler limit.
Linewidth narrowing of an atom by squashed light. It has been known for some time
[Gar86] that a two-level atom strongly coupled to a beam of broad-band squeezed light will
have the decay rate of one of its dipole quadratures changed by an amount proportional
to the normally ordered spectrum of squeezing (i.e. the decay rate will be reduced). This
could be seen by observing a narrow feature in the power spectrum of the emission of the
atom into the non-squeezed modes to which it is coupled. It was shown in Ref. [Wis98]
that the same phenomenon occurs for a squashed beam (see Section 5.2.5) as produced by
a feedback loop. Note that an atom is a nonlinear optical element, so that a semiclassical
theory of squashing cannot explain this effect, which has yet to be observed.
Applications of quantum feedback control in quantum information will be considered in
Chapter 7.
6
State-based quantum feedback control
6.1 Introduction
In the preceding chapter we introduced quantum feedback control, devoting most space
to the continuous feedback control of a localized quantum system. That is, we considered
feeding back the current resulting from the monitoring of that system to control a parameter
in the system Hamiltonian. We described feedback both in terms of Heisenberg-picture
operator equations and in terms of the stochastic evolution of the conditional state. The
former formulation was analytically solvable for linear systems. However, the latter could
also be solved analytically for simple linear systems, and had the advantage of giving an
explanation for how well the feedback could perform.
In this chapter we develop further the theory of quantum feedback control using the
conditional state. The state can be used not only as a basis for understanding feedback, but
also as the basis for the feedback itself. This is a simple but elegant idea. The conditional
state is, by definition, the observer's knowledge about the system. In order to control the system optimally, the observer should use this knowledge. Of course a very similar idea was discussed in Section 2.5 in the context of adaptive measurements. There, one's joint knowledge of a quantum system and a classical parameter was used to choose future measurements so as to increase one's knowledge of the classical parameter. The distinction is that in this chapter we consider state-based feedback to control the quantum system itself.
This chapter is structured as follows. Section 6.2 introduces the idea of state-based
feedback by discussing the first experimental implementation of a state-based feedback
protocol to control a quantum state. This experiment, in a cavity QED system, was in the
deep quantum regime, for which there is no classical analogue. By contrast, the remainder
of the chapter is oriented towards state-based control in linear quantum systems, for which
there is a classical analogue. Hence we begin this part with an analysis of state-based
feedback in general classical systems, in Section 6.3, and in linear classical systems, in
Section 6.4. These sections introduce ideas that will apply in the quantum case also, such as
optimal control, stability, detectability and stabilizability. We contrast Markovian feedback
control with optimal feedback control and also analyse a classical Markovian feedback
experiment. In Sections 6.5 and 6.6 we discuss state-based control in general quantum
systems and in linear quantum systems, respectively. As discussed in the preface, these
sections contain unpublished results obtained by one of us (H. M. W.) in collaboration with
Andrew Doherty and (latterly) Andy Chia. Finally, we conclude as usual with a discussion
of further reading.
g^(2)(τ) = G^(2)_ss(t, t + τ) / ⟨I(t)⟩²_ss = ⟨I(t + τ)I(t)⟩_ss / ⟨I(t)⟩²_ss.   (6.1)
For τ > 0, we can use the approach of Section 4.3.2 to rewrite this in terms of conditional measurements as

g^(2)(τ) = ⟨I(τ)⟩_c / ⟨I(0)⟩_ss,   (6.2)
where here the subscript c means conditioned on a detection at time 0, when the system
has reached its steady state.
That is, g^(2)(τ) is the probability of getting a second detection a time τ after the first detection (which occurs when the system has reached steady state), divided by the unconditioned probability for the second detection. From Eq. (6.1), the function for τ < 0 can be found by symmetry.
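As a sanity check on this conditional reading of g^(2), one can estimate it directly from simulated detection times. For Poissonian (coherent-light) statistics the conditional and unconditional detection rates coincide, so the estimate should be flat at unity; all rates and bin widths below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(7)

# Coherent light: detections form a Poisson process of constant rate.
rate, T = 20.0, 500.0
t = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))  # detection times

dtau = 0.05
taus = np.arange(dtau, 0.5, dtau)
g2 = np.empty(len(taus))
for i, tau in enumerate(taus):
    # mean number of detections in [tau, tau + dtau) after each detection,
    # normalized by the unconditional expectation rate * dtau (cf. Eq. (6.2))
    later = np.searchsorted(t, t + tau + dtau) - np.searchsorted(t, t + tau)
    g2[i] = later.mean() / (rate * dtau)

print(g2.round(2))   # all entries close to 1: no bunching or antibunching
```

For the cavity QED system discussed below, by contrast, the same estimator applied to the measured photocurrent yields the nontrivial g^(2)(τ) of Fig. 6.2.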
Fig. 6.1 A simplified diagram of the experimental set-up of Smith et al., as depicted by Fig. 6 in
Ref. [RSO+ 04]. Rubidium atoms in a beam are optically pumped into a ground state that couples to
a cavity mode before entering the cavity. Two avalanche photo-diodes (APDs) measure the intensity
of the light emitted by the cavity. The correlation between the detectors is processed using gating
electronics, a time-to-digital converter (TDC) and a histogramming memory and computer. Photodetections at APD 1 trigger a change in the intensity injected into the cavity via an electro-optic
modulator (EOM). The optics shown are relevant for control of the size of the intensity step and the
polarization of the light injected into the cavity. HWP and QWP denote half- and quarter-wave plate,
respectively, and PBS, polarization beam-splitter. Figure 6 adapted with permission from J. E. Reiner
et al., Phys. Rev. A 70, 023819, (2004). Copyrighted by the American Physical Society.
Since the stationary system state is almost a pure state, we know from quantum trajectory theory that, immediately following the first detection, the conditional state is |ψ_c(0)⟩ ∝ a|ψ_ss⟩, where a is the annihilation operator for the cavity mode. The correlation function (6.2) can thus be reformulated as

g^(2)(τ) = ⟨ψ_c(τ)|a†a|ψ_c(τ)⟩ / ⟨ψ_ss|a†a|ψ_ss⟩.   (6.3)

Here |ψ_c(τ)⟩ is the conditional state for τ > 0, which relaxes back to |ψ_ss⟩ as τ → ∞. In other words, measuring g^(2)(τ) for τ > 0 is directly probing a property (the mean photon number) of the conditional state.
The next step taken in Ref. [SRO+02] was to control the conditional state (prepared by a photodetection), rather than simply observing it. That is, by altering the system dynamics subsequent to the first photodetection the conditional state could be altered, and hence g^(2)(τ) changed for τ > 0. Specifically, it was shown that the dynamics of the conditional state could be frozen for an indefinite time, making g^(2)(τ) constant. The state could then
be released to resume its (oscillatory) relaxation to |ψ_ss⟩. This was done by changing the coherent driving of the cavity at a suitable time τ = T after the first detection.
H = iℏg(a†J − aJ†) + iℏE(a† − a)   (6.4)
(compare with Eq. (1.180)). Here we have assumed that all atoms are coupled with equal strength g, so that

J = Σ_{k=1}^{N} σ_k,   (6.5)

where σ_k is the lowering operator for atom k. We have also included coherent driving (E) of the cavity mode by a resonant laser.
Including damping of the atoms (primarily due to spontaneous emission through the
sides of the cavity) and cavity (primarily due to transmission through the end mirrors), we
can describe the system by the master equation (in the interaction frame)

ρ̇ = [E(a† − a) + g(a†J − aJ†), ρ] + κD[a]ρ(t) + γ Σ_k D[σ_k]ρ(t).   (6.6)
This describes a damped harmonic oscillator (the cavity) coupled to a damped anharmonic oscillator (the atoms). The anharmonicity is a result of the fact that J and J† obey different commutation relations from a and a†. This is necessary since the maximum number of atomic excitations is N, which we are assuming is finite.
The evolution generated by Eq. (6.6) is very rich [Ber94]. Much simpler, but still interesting, dynamics results in the limit E ≪ g [CBR91]. In particular, in this weak-driving limit the steady state of the system is approximately a pure state. To understand why this is the case, consider a more general system consisting of damped and coupled oscillators, which could be harmonic or anharmonic. Let us denote the ground state by |0⟩, and take the coupling rates and damping rates to be of order unity. For the system above, define a parameter

λ = 2E / (κ + 4Ω²/γ),   (6.7)

where Ω = g√N is the N-atom single-photon Rabi frequency. For the case of weak driving, λ ≪ 1. In this limit, λ is equal to the stationary value for ⟨a⟩, as we will see. We now show that the steady state of the system ρ_ss is pure to order λ². That is, one can use
the approximation

ρ_ss = |ψ_ss⟩⟨ψ_ss| + O(λ³),   (6.8)

where

|ψ_ss⟩ = |0⟩ + λ|1⟩ + λ²|2⟩ + O(λ³),   (6.9)

where |1⟩ and |2⟩ are states with norm of order unity having, respectively, one and two excitations (in the joint system of atoms and cavity mode). Here and in the remainder of this section we are only bothering to normalize the states to lowest order in λ.
Consider unravelling the master equation of the system by unit-efficiency quantum jumps
(corresponding to the emission of photons from the system). It is simple to verify that the
no-jump evolution will take the system into a pure state of the form of Eq. (6.9).
Exercise 6.1 Verify this for the master equation (6.6), by showing that the state

|ψ_ss⟩ = |0,0⟩ + λ(|1,0⟩ − r|0,1⟩) + λ²[(ζ₀/√2)|2,0⟩ − θ₀|1,1⟩ + (η₀/√2)|0,2⟩] + O(λ³)   (6.10)

is an eigenstate of −iH − (κ/2)a†a − (γ/2)Σ_k σ_k†σ_k, which generates the non-unitary evolution. Here |n, m⟩ is a state with n photons and m excited atoms, while r = 2Ω/γ and

ζ₀ = 1 − 4C²κξ / [N(γ + κ)(1 + 2C)],   (6.11)

θ₀ = rξ,   (6.12)

η₀ = r²ξ√(1 − 1/N),   (6.13)

ξ = (1 + 2C) / (1 + 2C[1 − (1/N)κ/(γ + κ)]),   (6.14)

C = 2Ω²/(γκ).   (6.15)

C is known as the co-operativity parameter. Note for later that, if N → ∞ with Ω fixed, then ξ → 1 and so ζ₀ → 1, θ₀ → r and η₀ → r².
Having established Eq. (6.9) as the stationary solution of the no-jump evolution, we will have obtained the desired result if we can show that the effect of the jumps is to add to ρ_ss terms of order λ³ and higher. That the extra terms from the jumps are of order λ³ can be seen as follows.
First, the rate of jumps for the system in state (6.9) is of order λ². This comes from the probability of excitation of the system, which is O(λ²), times the damping rates, which are O(1). That is to say, jumps are rare events.
Second, the effect of a jump will be once more to create a state of the form |0⟩ + O(λ). This is because any lowering operator destroys |0⟩, acts on λ|1⟩ to turn it into |0⟩ times a constant O(λ), and acts on λ²|2⟩ to turn it into a state with one excitation times a constant O(λ²).
Renormalizing gives the desired result: the state after the jump is different from |ψ_ss⟩ only by an amount of order λ at most.
Third, after a jump, the system will relax back to |ss at a rate of order unity. This is
because the real part of the eigenvalues of the no-jump evolution operator will be of order
the damping rates, which are of order unity. That is to say, the non-equilibrium state will
persist only for a time of order unity.
On putting these together, we see that excursions from |ψ_ss⟩ are only of order λ, and that the proportion of time the system spends making excursions is only of order λ². Thus Eq. (6.8) will hold, and, for the master equation (6.6), the stationary state is given by Eq. (6.10).
Exercise 6.2 Convince yourself, if necessary, of the three points above by studying the
particular example.
The measurement operator for a detection is thus M₁ = √(ηκ dt) a, for efficiency η < 1. From the above, we know that prior to a detection we can take the system to be in state |ψ_ss⟩. After the detection (which we take to be at time τ = 0) the state is, to O(λ),

|ψ_c(τ)⟩ = |0,0⟩ + λ[α(τ)|1,0⟩ + β(τ)|0,1⟩].   (6.16)

Here the conditioned cavity field evolution, α(τ), and the conditioned atomic polarization evolution, β(τ), have the initial values α₀ = ζ₀ and β₀ = −θ₀.
Exercise 6.3 Verify Eq. (6.16) for τ = 0.
The subsequent no-jump evolution of α(τ) and β(τ) is governed by the coupled differential equations

α̇(τ) = −(κ/2)α(τ) + Ωβ(τ) + E/λ,   (6.17)

β̇(τ) = −(γ/2)β(τ) − Ωα(τ),   (6.18)

where α(0) = α₀ and β(0) = β₀. As the system relaxes to equilibrium, we have from Eq. (6.10) α(∞) = 1 and β(∞) = −r. These equations can be found from the no-jump evolution generated by the pseudo-Hamiltonian H − i(κ/2)a†a − i(γ/2)Σ_k σ_k†σ_k.
We thus see that, to lowest order in the excitation, the post-jump evolution is equivalent to that of two coupled harmonic oscillators with damping and driving (remember that we are in the interaction frame, where the oscillation of each oscillator at its resonance frequency has been removed). This evolution can be understood classically. What is quantum in this system is all in the quantum jump that results from the detection.
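The post-jump relaxation can be illustrated by integrating Eqs. (6.17) and (6.18) numerically. The sketch below uses hypothetical parameters chosen to mimic the regime discussed later in this section (γ = κ, C ≈ 51, N ≈ 170), with post-jump initial values α₀ = ζ₀ and β₀ = −θ₀ in the notation of Exercise 6.1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (units with kappa = gamma = 1), mimicking the
# experimental regime quoted later in the text: C/N ~ 0.3 with N ~ 170.
kappa = gamma = 1.0
C, N = 51.0, 170
Omega = np.sqrt(C * gamma * kappa / 2)    # from C = 2 Omega^2/(gamma kappa)
r = 2 * Omega / gamma
drive = kappa / 2 + 2 * Omega**2 / gamma  # = E/lambda, so that alpha(inf) = 1

# Post-jump initial conditions (constants of Exercise 6.1)
xi = (1 + 2*C) / (1 + 2*C * (1 - (1/N) * kappa / (gamma + kappa)))
alpha0 = 1 + 2*C * (1 - xi)               # = zeta_0
beta0 = -r * xi                           # = -theta_0

def rhs(t, z):
    a, b = z
    return [-(kappa/2)*a + Omega*b + drive,   # Eq. (6.17)
            -(gamma/2)*b - Omega*a]           # Eq. (6.18)

sol = solve_ivp(rhs, [0, 20], [alpha0, beta0], dense_output=True, rtol=1e-8)
tau = np.linspace(0, 20, 2000)
g2 = sol.sol(tau)[0]**2                   # g2(tau) = [alpha(tau)]^2, Eq. (6.23)
print(round(g2[0], 2), round(g2[-1], 2))  # starts below 1, relaxes to 1
```

The trace oscillates at roughly the Rabi frequency Ω while decaying at a rate of order the damping rates, in qualitative agreement with Fig. 6.2.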
The quantum nature of this jump can be seen in the atomic polarization. Upon the detection of a photon from the cavity, this changes from −r to β₀. Since the system is
in a pure state, the only way a measurement upon one subsystem (the cavity mode) can
lead to a change in the state of the second subsystem (the atoms) is if they are entangled.
We have already noted above that, if N → ∞, then θ₀ → r, so there would be no change in the atomic polarization. That is because in this limit there is no difference between the
atomic system coupled to a harmonic oscillator and two coupled harmonic oscillators. Two
coupled harmonic oscillators, driven and damped, end up in coherent states, and so cannot
be entangled.
Exercise 6.4 Show this by substituting ρ = |α⟩⟨α| ⊗ |β⟩⟨β| into the master equation

ρ̇ = [(E/2)(a† − a) + Ω(a†b − ab†), ρ] + κD[a]ρ + γD[b]ρ,   (6.19)

and showing that the state remains a product of coherent states, with amplitudes obeying

α̇ = −(κ/2)α + Ωβ + E/2,    β̇ = −(γ/2)β − Ωα.   (6.20)
The steady-state values are, as stated above, α_ss = 1 and β_ss = −r. The four constants A_α, B_α, A_β and B_β are fixed by the initial conditions:

A_α + B_α = α₀ − α_ss,   (6.21)

A_β + B_β = β₀ − β_ss.   (6.22)
g^(2)(τ) = [α(τ)]².   (6.23)

That is, the correlation function measures the square of the conditioned field amplitude. It has an initial value of α₀² = ζ₀², which, from Eq. (6.11), is always less than unity. This in itself is a nonclassical effect; it is the antibunching discussed in Section 4.6.1. For coherent light g^(2)(0) = 1, while for any classical light source (which can be described as a statistical mixture of coherent states), g^(2)(0) can only increase from unity, giving bunching.
Exercise 6.6 Verify Eq. (6.23), and convince yourself that antibunching is a nonclassical
effect.
The conditional state can be frozen by changing the driving amplitude, at a time τ = T_n for which β(T_n) = −rα(T_n), from E to

E → α(T_n)E.   (6.24)
Fig. 6.2 Measured g^(2)(τ). τ = 0 is defined by a photodetection at APD 1. Data are binned into 1.0-ns bins. Figure 8 adapted with permission from J. E. Reiner et al., Phys. Rev. A 70, 023819, (2004). Copyrighted by the American Physical Society.
It is not possible to know the number of atoms coupled to the cavity mode at any given time, directly. Indeed, this concept is not even well defined. First, it will fluctuate because of the random arrival times and velocities of the atoms in the beam. Second, the cavity mode is Gaussian, and so has no sharp cut-off in the transverse direction. The coupling constant g also varies longitudinally in the cavity, because it is a standing-wave mode. However, an average g can be calculated from the cavity geometry, and was found to be g/(2π) = 3.7 MHz. This implies an effective N of about 170, which is quite large.
Recall that in the limit N → ∞ (with Ω fixed) there are no jumps in the system. However, from Eq. (6.11), the jump in the field amplitude scales as C/N, and C is large enough for this to be significant, with C/N ≈ 0.3. (This parameter is known as the single-atom co-operativity.) Thus a photon detection sets up a significant oscillation in the quantum state, which is detectable by g^(2)(τ). A typical experimental trace of this is shown in Fig. 6.2.
Referring back to Fig. 6.1, two APDs are necessary to measure g^(2)(τ) because the dead-time of the first detector after the detection at τ = 0 (i.e. the time during which it cannot respond) is long compared with the system time-scales. Since it is very unlikely that more
than two photons will be detected in the window of interest, the dead-time of the second
APD does not matter.
The large value of N has a greater impact on the feedback. From Eq. (6.14), and using the approximation γ ≈ κ, the size of the jump in the atomic polarization scales as √(2C)/(2N) ≈ 0.03. Thus, the size of the change in the driving field required to stabilize a conditional state, given by Eq. (6.24), is only a few per cent. Nevertheless, this small change in the driving amplitude is able to freeze the state, as shown in Fig. 6.3. When the driving is returned to its original amplitude, the relaxation of the state to |ψ_ss⟩ resumes, so
Fig. 6.3 The measured intensity correlation function with the feedback in operation. The grey box
indicates the application time of the square feedback pulse, which reduced the driving amplitude
by about 0.013. (This is somewhat larger than the value predicted by theory.) The pulse was turned on at τ = 45 ns, in agreement with theory, and turned off 500 ns later. Data are binned into 1.0-ns
bins. Figure 9 adapted with permission from J. E. Reiner et al., Phys. Rev. A 70, 023819, (2004).
Copyrighted by the American Physical Society.
the effect of the feedback is to insert a flat line of arbitrary length, at the value |α(T_n)|², into the photocurrent autocorrelation function. From Eq. (6.24), this straight line can be at most
which we will denote by bold-font small letters. This greatly simplifies the equations we
present, but necessitates a change in some of the conventions introduced in Chapter 1. In
particular, there we used a capital letter to denote a random variable, and the corresponding small letter to act as a dummy argument in a probability distribution (or a ket). We
maintain different notation for these distinct concepts, but use a new convention, explained
below.
A precisely known classical system can be described by a list of real numbers that can be expressed as a vector x = (x₁, x₂, ..., x_n)⊤. Here v⊤ indicates the transpose of vector v. We require that these variables form a complete set, by which we mean that any property o of the system is a function of (i.e. can be determined from) x, and that none of the elements x_k can be determined from the remainder of them. (If some of the elements x_k could be determined from the remainder of them, then x would represent an overcomplete set of variables.) For example, for a Newtonian system with several physical degrees of freedom one could have x = (q⊤, p⊤)⊤, where q is the vector of coordinates and p the vector of conjugate momenta. Taking x to be complete, we will refer to it as the configuration of the system, so that ℝⁿ is the system's configuration space. This coincides with the terminology introduced in Chapter 1.
We are interested in situations in which x is not known precisely, and is therefore a vector of random variables. An observer's state of knowledge of the system is then described by a probability density function ℘(x̌). Here we use x̌ to denote the argument of this function (a dummy variable) as opposed to x, the random variable itself. The probability density is a non-negative function normalized according to

∫ dⁿx̌ ℘(x̌) = 1.   (6.25)

Here dⁿx̌ ≡ Π_{m=1}ⁿ dx̌_m, and an indefinite integral indicates integration over all of configuration space. The state defines an expectation value for any property o(x), by

E[o] = ∫ dⁿx̌ ℘(x̌)o(x̌).   (6.26)

If the notion of expectation value is taken as basic, we can instead use it to define the probability distribution:

℘(x̌) = E[δ⁽ⁿ⁾(x̌ − x)].   (6.27)

As in Chapter 1, we refer to ℘(x̌) as the state of the system. Note that this differs from usual engineering practice, where x is sometimes called the state or (even more confusingly for quantum physicists) the 'state vector'. Since we will soon be concerned with feedback control, there is another potential confusion worth mentioning: engineers use the term 'plant' to refer to the configuration x and its dynamics, reserving 'system' for the operation of the combined plant and controller.
y(t)dt = ȳ(x(t))dt + dv(t),   (6.28)

[dv(t)]² = dt,   (6.29)

E[dv(t)] = 0,   (6.30)

dv(t)dv(t′) = 0 for t ≠ t′.   (6.31)

Note that Eqs. (6.29) and (6.30) mean that the result y(t) has an infinite amount of noise in it, and so does not strictly exist. However, Eq. (6.31) means that the noise is independent from one moment to the next, so that if y(t) is averaged over any finite time the noise in it will be finite. Nevertheless, in this chapter we are being a little more rigorous than previously, and will always write the product y dt (which does exist) rather than y.
Using the methods of Section 4.8.4, it is not difficult to show that the equation for the conditioned classical state (commonly known as the Kushner–Stratonovich equation) is

d℘(x̌) = dw(t){ȳ(x̌) − E[ȳ(x)]}℘(x̌).   (6.32)

This is a simple example of filtering the current to obtain information about the system, a term that will be explained in Section 6.4.3. Remember that E[ȳ(x_t)] means ∫ dⁿx̌ ℘(x̌; t)ȳ(x̌). Here dw(t) is another Wiener process defined by

dw(t) ≡ y(t)dt − E[y(t)dt]   (6.33)
= y(t)dt − E[ȳ(x_t)]dt   (6.34)
= dv(t) + ȳ(x_t)dt − E[ȳ(x_t)]dt.   (6.35)
(6.35)
281
This dw(t) is known as the innovation or residual. It is the unexpected part of the result y(t)dt, which by definition is the only part that can yield information about the system. It may appear odd to claim that dw is a Wiener process (and so has zero mean) when it is equal to another Wiener process dv plus something non-zero, namely ȳ(x_t)dt − E[ȳ(x_t)]dt. The point is that the observer (say Alice) whose state of knowledge is ℘(x̌) does not know x. There is no way therefore for her to discover the true noise dv. Insofar as she is concerned, ȳ(x_t) − E[ȳ(x_t)] is a finite random variable of mean zero, so it makes no difference if this is added to dv/dt, which has an unbounded variance as stated above. Technically, dw is related to dv by a Girsanov transformation [IW89].
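The filtering equation above can be sketched numerically. The following toy example (all numbers arbitrary) integrates Eq. (6.32) on a grid for the simplest possible case, a static configuration x with ȳ(x) = x, and shows the innovation-driven collapse of ℘ onto the true value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid over the dummy variable; the true configuration is unknown to the
# filter. Static system (a = 0, b = 0), so only Eq. (6.32) acts.
xg = np.linspace(-4.0, 4.0, 801)
dx = xg[1] - xg[0]
p = np.exp(-xg**2 / 2)
p /= p.sum() * dx                              # Gaussian prior state
x_true, dt = 1.3, 1e-3

for _ in range(int(40.0 / dt)):
    dv = np.sqrt(dt) * rng.standard_normal()
    y_dt = x_true * dt + dv                    # observed increment y(t)dt
    ybar = (p * xg).sum() * dx                 # E[ybar(x)]: the posterior mean
    dw = y_dt - ybar * dt                      # innovation, Eq. (6.33)
    p = p * (1.0 + (xg - ybar) * dw)           # Kushner-Stratonovich, Eq. (6.32)
    p = np.clip(p, 0.0, None)
    p /= p.sum() * dx                          # renormalize discretization error

mean = (p * xg).sum() * dx
print(round(mean, 1))   # close to x_true = 1.3
```

Note that the filter sees only the increments y(t)dt, never dv itself, exactly as discussed above.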
In general the system configuration will change in conjunction with yielding the measurement result y(t)dt. Allowing for deterministic change as well as a purely stochastic change, the system configuration will obey the Langevin equation

dx = a(x)dt + b(x)dv   (6.36)
= [a(x) − b(x)ȳ(x)]dt + b(x)y(t)dt.   (6.37)
Note that the noise that appears in this SDE is not the innovation dw, since that is an observer-dependent quantity that has no role in the dynamics of the system configuration (unless introduced by a particular observer through feedback, as will be considered later). It can be shown that these dynamics alter the SDE for the system state from the purely Bayesian Kushner–Stratonovich equation (6.32) to the following:
d℘(x̌) = dw(t)[{ȳ(x̌) − E[ȳ(x)]}℘(x̌) − Σ_k ∂_k b_k(x̌)℘(x̌)]
   − dt Σ_k ∂_k a_k(x̌)℘(x̌) + (dt/2) Σ_{k,k′} ∂_k ∂_{k′} b_k(x̌)b_{k′}(x̌)℘(x̌),   (6.38)

where ∂_k ≡ ∂/∂x̌_k.
Note that this equation has a solution corresponding to complete knowledge: ℘_c(x̌; t) = δ⁽ⁿ⁾(x̌ − x_t), where x_t obeys Eq. (6.36). This can be seen from the analysis in Section 4.8.4. For an observer who starts with complete knowledge, dv and dw are identical in this case.
If one were to ignore the measurement results, the resulting evolution is found from Eq. (6.38) simply by setting dw(t) equal to its expectation value of zero. Allowing for more than one source of noise, so that dx = a(x)dt + Σ_l b⁽ˡ⁾(x)dv⁽ˡ⁾, we obtain

d℘(x̌) = dt[−Σ_k ∂_k a_k(x̌) + (1/2) Σ_{k,k′} ∂_k ∂_{k′} D_{k,k′}(x̌)]℘(x̌)   (6.39)
≡ dt L℘(x̌),   (6.40)

where D(x̌) ≡ Σ_l b⁽ˡ⁾(x̌)[b⁽ˡ⁾(x̌)]⊤ is the diffusion matrix.
j = ∫_{t₀}^{t₁} h(x(t), u(t), t)dt.   (6.45)
Physically this is very reasonable, since it simply says that the total cost is additive over time. In this case it is possible to show that the separation principle holds.¹ That is,

¹ In control-theory texts (e.g. [Jac93]), the separation principle is often discussed only in the context of LQG systems, in which case it is almost identical to the concept of certainty equivalence; see Sec. 6.4.4. The concept introduced here has therefore been called a generalized separation principle [Seg77].
Fig. 6.4 A schematic diagram of the state-based feedback control scheme. The environment is the
source of noise in the system and also mediates the system output into the detector. The controller is
split into two parts: an estimator, which determines the state conditioned on the record y(t) from the
detector, and an actuator that uses this state to control the system input u(t). The state, here written
as p(t), would be the probability distribution (x; t) in the classical case and the state matrix (t) in
the quantum case.
u_opt(t) = U_h(℘_c(x̌; t), t).   (6.46)
In words, Alice should control the system on the basis of the control objective and her current knowledge of the system, and nothing else. The control at time t is simply a function of the state at time t, though it is of course still a functional of the measurement record, as in Eq. (6.43). But all of the information in the record y(t′) for t₀ ≤ t′ < t is irrelevant except insofar as it determines the present state ℘_c(x̌; t). This is illustrated in Fig. 6.4.
This is a very powerful result that gives an independent definition of ℘_c(x̌; t). For obvious reasons, this type of feedback control is sometimes called state-based feedback, or Bayesian feedback. Determining the function U_h of ℘_c(x̌; t) is nontrivial, but can be done using the technique of dynamic programming. This involves a backwards-in-time equation called the Hamilton–Jacobi–Bellman equation (or just the Bellman equation for the discrete-time case) [Jac93].
dx = Ax dt + Bu(t)dt + E dv_p(t).   (6.47)

Here A, B and E are constant matrices, while u(t) is a vector of arbitrary time-dependent functions. It is known as the input to the system. Finally, the process noise dv_p is a vector of independent Wiener processes. That is,

E[dv_p] = 0,    dv_p (dv_p)⊤ = I dt,   (6.48)

where I is the n × n identity matrix. Thus A is square (n × n), and is known as the drift matrix. The matrices B and E are not necessarily square, but can be taken to be of full column rank, so the number of columns of B and of E can be taken to be no greater than n. (See Box 6.1.)
Strictly, the Wiener process is an example of a time-dependent function, so u(t)dt could be extended to include dv_p(t) and the matrix E eliminated. This is a common convention, but we will keep the distinction because u(t) will later be taken to be the feedback term, which is known by the observer, whereas dv_p(t) is unknown.
As explained generally in Section B.5, we can turn the Langevin equation (6.47) into an equation for the state ℘(x̌). With u(t) known but dv_p(t) unknown, ℘(x̌) obeys a multi-dimensional OUE (Ornstein–Uhlenbeck equation; see Section 5.6) with time-dependent driving:

℘̇(x̌; t) = {−Σ_k ∂_k [Ax̌ + Bu(t)]_k + (1/2) Σ_{k,k′} ∂_k ∂_{k′} D_{k,k′}}℘(x̌; t),   (6.49)

where D ≡ EE⊤ is the diffusion matrix. The mean and covariance matrix of the state then evolve as

d⟨x⟩/dt = A⟨x⟩ + Bu(t),   (6.50)

dV/dt = AV + VA⊤ + D,   (6.51)

and a state that is initially Gaussian remains Gaussian,

℘(x̌; t) = g(x̌; ⟨x⟩_t, V_t),   (6.52)

where

g(x̌; x̄, V) ≡ exp[−(x̌ − x̄)⊤V⁻¹(x̌ − x̄)/2] / √((2π)ⁿ det V).   (6.53)
AA⁺A = A,
A⁺AA⁺ = A⁺,
(AA⁺)⊤ = AA⁺,
(A⁺A)⊤ = A⁺A.

If A is square and invertible then A⁺ = A⁻¹. If A is non-square, then A⁺ is also non-square, with the number of rows of A⁺ equal to the number of columns of A, and vice versa. The pseudoinverse finds the best solution x to the linear equation set Ax = b, in the sense that x = A⁺b minimizes the Euclidean norm ‖Ax − b‖².
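These defining (Penrose) conditions and the least-squares property are easy to confirm numerically; the sketch below applies NumPy's pinv to an arbitrary full-column-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2))      # non-square, full column rank
Ap = np.linalg.pinv(A)               # Moore-Penrose pseudoinverse (2 x 4)

# The four Penrose conditions
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)
assert np.allclose((A @ Ap).T, A @ Ap)
assert np.allclose((Ap @ A).T, Ap @ A)

# x = A^+ b is the least-squares solution of A x = b
b = rng.standard_normal(4)
x = Ap @ b
xls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x, xls))           # True
```

For a full-column-rank A the least-squares solution is unique, which is why the two routes agree exactly.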
Then it can be shown that the system state will forever remain a Gaussian state, with the moments evolving as given above. This can be shown by substitution, as discussed in Exercise 5.26.
The moment evolution equations can also be obtained directly from the Langevin equation for the configuration (6.47). For example, Eq. (6.50) can be derived directly from Eq. (6.47) by taking the expectation value, while

d(xx⊤) = (dx)x⊤ + x(dx⊤) + (dx)(dx⊤)   (6.54)
= [Ax + Bu(t)]dt x⊤ + E dv_p x⊤ + m.t. + D dt,   (6.55)

where m.t. stands for matrix transpose (of the preceding terms) and we have used Eq. (6.48). Taking the expectation value and subtracting

d(⟨x⟩⟨x⟩⊤) = (d⟨x⟩)⟨x⟩⊤ + ⟨x⟩(d⟨x⟩⊤)   (6.56)
= [A⟨x⟩ + Bu(t)]dt ⟨x⟩⊤ + m.t.   (6.57)

yields Eq. (6.51).
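Equations (6.50) and (6.51) can be checked against a direct Euler–Maruyama simulation of Eq. (6.47); the matrices below are arbitrary (stable) choices, with u = 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary stable system, no driving (u = 0): dx = A x dt + E dv_p
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
E = np.array([[1.0, 0.0], [0.3, 0.8]])
D = E @ E.T
dt, T, M = 1e-3, 4.0, 20000             # time step, horizon, ensemble size

x = np.ones((M, 2))                     # deterministic initial configuration
mean, V = np.ones(2), np.zeros((2, 2))
for _ in range(int(T / dt)):
    dv = np.sqrt(dt) * rng.standard_normal((M, 2))
    x += x @ A.T * dt + dv @ E.T        # Euler-Maruyama for Eq. (6.47)
    mean = mean + A @ mean * dt         # Eq. (6.50)
    V = V + (A @ V + V @ A.T + D) * dt  # Eq. (6.51)

print(np.abs(x.mean(axis=0) - mean).max(),   # both small: moment equations
      np.abs(np.cov(x.T) - V).max())         # agree up to sampling error
```

The ensemble mean and covariance track the ODE solutions to within the Monte Carlo sampling error and the O(dt) discretization bias.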
Stability. Consider the case u = 0; that is, no driving of the system. Then the system state will relax to a time-independent (stationary) state iff A is strictly stable. By this we mean that λ_max[A] < 0, where λ_max[A] denotes the maximum real part over the eigenvalues of A (see Box 6.1). For linear systems we use the terminology for the dynamics corresponding to that of the matrix A: stable if λ_max[A] ≤ 0, marginally stable if λ_max[A] = 0 and unstable if λ_max[A] > 0. Note, however, that the commonly used terminology 'asymptotically stable' describes the dynamics iff A is strictly stable. Returning to that case, the stationary state (ss) is then given by
⟨x⟩_ss = 0,   (6.58)

AV_ss + V_ss A⊤ + D = 0.   (6.59)
The linear matrix equation (LME) for V_ss can be solved analytically for n = 1 (trivially) or n = 2, for which the solution is [Gar85]

V_ss = [det(A)D + (A − tr(A)I)D(A − tr(A)I)⊤] / [−2 tr(A)det(A)],   (6.60)

where I is the 2 × 2 identity matrix. Note that here we are using tr for the trace of ordinary matrices, as opposed to Tr for the trace of operators that act on the Hilbert space for a quantum system. For n > 2, Eq. (6.59) can be readily solved numerically. If the dynamics is asymptotically stable then all observers will end up agreeing on the state of the system: ℘_ss(x̌) = g(x̌; 0, V_ss).
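Both the numerical route and the n = 2 closed form (6.60) are easy to exercise; the sketch below uses an arbitrary stable drift matrix and SciPy's continuous Lyapunov solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical example: damped harmonic oscillator with noise on p only.
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])          # strictly stable drift matrix
E = np.array([[0.0], [1.0]])
D = E @ E.T                           # diffusion matrix

# Numerical solution of A Vss + Vss A^T + D = 0
# (scipy solves A X + X A^H = Q, so pass Q = -D)
Vss = solve_continuous_lyapunov(A, -D)

# The n = 2 closed form, Eq. (6.60)
trA, detA = np.trace(A), np.linalg.det(A)
M = A - trA * np.eye(2)
V60 = (detA * D + M @ D @ M.T) / (-2.0 * trA * detA)

print(np.allclose(Vss, V60))                     # True
print(np.abs(A @ Vss + Vss @ A.T + D).max())     # ~ 0 (machine precision)
```

For this particular A and D the stationary covariance happens to be the identity matrix, as is easily checked by hand from Eq. (6.59).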
Stabilizability and controllability. As explained above, the above concept of stability has ready applicability to a system with noise (E ≠ 0) but with no driving (u(t) = 0). However, there is another concept of stability that has ready applicability in the opposite case; that is, when there is no noise (E = 0) but the driving u(t) may be chosen arbitrarily. In that
case, if we ignore uncertain initial conditions, the system state is essentially identical to its configuration, which obeys

ẋ = Ax + Bu(t).   (6.61)

Since u(t) is arbitrary and x is knowable to the observer, we can consider the case u = Fx, so that

ẋ = (A + BF)x.   (6.62)

If F can be chosen such that A + BF is strictly stable, then the system is said to be stabilizable: the observer can control the system to ensure that x → 0 in the long-time limit. As we will see later, the concept of stabilizability is useful even in the presence of noise.
Consider, for example, the free particle of mass m, with x = (q, p). Say the observer
Alice can directly affect only the momentum of the particle, using a time-dependent linear
potential.
Exercise 6.11 Show that this corresponds to the choices

A = [0 1/m; 0 0],  B = [0; 1].   (6.63)

Then, with arbitrary F = (f_q, f_p), we have

A + BF = [0 1/m; f_q f_p].   (6.64)
A system is said to be controllable if, by suitable choice of u(t), the configuration can be driven from any initial value to any final value in a finite time. It can be shown that a system is controllable iff the matrix

[B  AB  A²B  ⋯  A^{n−1}B]   (6.66)

has full row rank (see Box 6.1). It can be shown that this is also equivalent to the condition that [(sI − A)  B] has full row rank for all s ∈ ℂ. For proofs see Ref. [ZDG96].
In the above example of a free particle, the stabilizable system is also controllable because in fact the eigenvalues of Eq. (6.64) are those of an arbitrary real matrix. Note that this does not mean that A + BF is an arbitrary real matrix: two of its elements are fixed!
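These claims are easy to verify numerically. A minimal sketch (with the illustrative value m = 1): the controllability rank condition holds, and F = (f_q, f_p) can place the eigenvalues of Eq. (6.64) anywhere, since the characteristic polynomial is s² − f_p s − f_q/m:

```python
import numpy as np

m = 1.0  # illustrative mass
A = np.array([[0.0, 1.0 / m], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Controllability matrix [B, AB] must have full row rank (rank n = 2).
ctrb = np.hstack([B, A @ B])
rank = np.linalg.matrix_rank(ctrb)

# Pole placement: char. poly of A + BF is s^2 - f_p s - f_q/m, so any
# eigenvalue pair is achievable.  Place eigenvalues at -1, -2 (s^2 + 3s + 2):
f_p, f_q = -3.0, -2.0 * m
F = np.array([[f_q, f_p]])
poles = np.sort(np.linalg.eigvals(A + B @ F).real)
```

The chosen F makes A + BF strictly stable, illustrating stabilizability as a consequence of controllability.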
Consider now continuous monitoring of the system, which yields the measurement record with increment

y dt = Cx dt + dv_m.   (6.67)

This is usually known as the output of the system, but we will also refer to it as the measured current. Here C is not necessarily square, but can be taken to be of full row rank. The measurement noise dv_m is a vector of independent Wiener processes. That is,

E[dv_m] = 0,  E[dv_m dv_m^T] = I dt.   (6.68)
As explained in Section 6.3.2, the measurement noise need not be independent of the process noise (although in many control-theory texts this assumption is made). We can describe the correlations between the measurement and process noises by introducing another matrix Γ^T:

E[(E dv_p) dv_m^T] = Γ^T dt.   (6.69)

A cross-correlation matrix Γ is compatible with a given process noise matrix E iff we can define a matrix Ē such that

Ē Ē^T = E E^T − Γ^T Γ.   (6.70)

That is, iff D − Γ^T Γ is PSD.
Using the theory presented in Section 6.3.2, the Kushner–Stratonovich equation appropriate to this conditioning is

d℘_c(x) = dt{−∇^T[Ax + Bu(t)] + (1/2)∇^T D ∇}℘_c(x)
        + dw^T{C(x − ⟨x⟩_c) − Γ∇}℘_c(x),   (6.71)

where ∇ is the vector of derivatives with respect to the components of x, and the innovation is

dw = y dt − C⟨x⟩_c dt.   (6.72)
It can be shown that, like the unconditional equation (6.49), the conditional equation (6.71)
admits a Gaussian state as its solution. This can be shown using the Ito calculus as in
Exercise 5.34. However, it is easier to derive this solution directly from the Langevin
equation (6.47) and the current equation (6.67), as we now show.
The crucial fact underlying the derivation is that, if one has two estimates x̂_1 and x̂_2 for a random variable x, and these estimates have Gaussian uncertainties described by the covariance matrices V_1 and V_2, respectively, then the optimal way to combine these estimates yields a new estimate x̂_3, also with a Gaussian uncertainty V_3, given by

V_3 = (V_1^{-1} + V_2^{-1})^{-1},   (6.73)
x̂_3 = V_3(V_1^{-1} x̂_1 + V_2^{-1} x̂_2).   (6.74)

Here optimality is defined in terms of minimizing tr[MΘ], where Θ = E[(x̂_3 − x)(x̂_3 − x)^T] is the covariance matrix of the error in the final estimate and M is any PD matrix. This result from standard error analysis can also be derived from Bayes' rule, with g(x; x̂_1, V_1) being the prior state and g(x; x̂_2, V_2) the forward probability (or vice versa), and g(x; x̂_3, V_3) the posterior probability. The derivation of this result in the one-dimensional case was the subject of Exercise 1.5.
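The combination rule (6.73)–(6.74) can be sketched in a few lines; in one dimension it reduces to the familiar inverse-variance weighting of Exercise 1.5 (the numbers below are illustrative):

```python
import numpy as np

def combine(x1, V1, x2, V2):
    """Optimally combine two independent Gaussian estimates,
    Eqs. (6.73) and (6.74)."""
    V3 = np.linalg.inv(np.linalg.inv(V1) + np.linalg.inv(V2))
    x3 = V3 @ (np.linalg.solve(V1, x1) + np.linalg.solve(V2, x2))
    return x3, V3

# One-dimensional sanity check: estimates 0 (variance 1) and 1 (variance 3).
x3, V3 = combine(np.array([0.0]), np.array([[1.0]]),
                 np.array([1.0]), np.array([[3.0]]))
```

Here V_3 = (1 + 1/3)^{-1} = 3/4 and the combined mean is pulled toward the lower-variance estimate.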
Before starting the derivation it is also useful to write the problem in terms of independent noise processes. It is straightforward to check from Eq. (6.69) that this is achieved by defining

E dv_p = Γ^T dv_m + Ē dv_{p:m},   (6.75)

where dv_{p:m} is pure process noise, uncorrelated with dv_m. This allows the system Langevin equation to be rewritten as

dx = Ax dt + Bu(t)dt + Γ^T(y − Cx)dt + Ē dv_{p:m}(t).   (6.76)
Now, consider a Gaussian state ℘(x, t) = g(x; ⟨x⟩, V), and consider the effect of the observation of y(t)dt. Let x̂_1 be an estimate for x + dx, taking into account the dynamical (back-action) effect of y on x. From Eq. (6.76), this is

x̂_1 = ⟨x⟩ + (A − Γ^T C)⟨x⟩dt + Bu(t)dt + Γ^T y dt.   (6.77)

This estimate has a Gaussian uncertainty described by the covariance matrix

V_1 = V + dt[(A − Γ^T C)V + V(A − Γ^T C)^T + Ē Ē^T],   (6.78, 6.79)

where the final term comes from the independent (and unknown) final noise term in Eq. (6.76). The estimate x̂_1 for x + dx does not take into account the fact that y depends upon x and so yields information about it. Thus from Eq. (6.67) we can form another estimate. Taking C to be invertible for simplicity (the final result holds regardless),
x̂_2 = C^{-1}y   (6.80)
    = x + C^{-1} dv_m/dt,   (6.81)

with a Gaussian uncertainty described by the covariance matrix

V_2 = (C^T C dt)^{-1}   (6.82)

to leading order. Strictly, x̂_2 as defined is an estimate for x, not x + dx. However, the infinite noise in this estimate (6.82) means that the distinction is irrelevant.
Because dv_m is independent of dv_{p:m}, the estimates x̂_1 and x̂_2 are independent. Thus we can optimally combine these two estimates to obtain a new estimate x̂_3 and its variance V_3:

V_3 = V + dt[(A − Γ^T C)V + V(A − Γ^T C)^T + Ē Ē^T − V C^T C V],   (6.83)
x̂_3 = ⟨x⟩ + [A⟨x⟩ + Bu(t)]dt + (V C^T + Γ^T)(y − C⟨x⟩)dt.   (6.84)
Exercise 6.12 Verify these by expanding Eqs. (6.73) and (6.74) to O(dt).
Since x̂_3 is the optimal estimate for the system configuration, it can be identified with ⟨x⟩_c(t + dt), and V_3 with V_c(t + dt). Thus we arrive at the SDEs for the moments which define the Gaussian state ℘_c(x):

d⟨x⟩_c = [A⟨x⟩_c + Bu(t)]dt + (V_c C^T + Γ^T)dw,   (6.85)
V̇_c = AV_c + V_c A^T + D − (V_c C^T + Γ^T)(C V_c + Γ).   (6.86)

Note that the equation for V_c is actually not stochastic, and is of the form known as a Riccati differential equation. Equations (6.85) and (6.86) together are known as the (generalized) Kalman filter.
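Because Eq. (6.86) is deterministic, its behaviour is easy to explore numerically. A minimal scalar sketch (illustrative values a = −1, c = 1, e = 1, with no cross-correlation, Γ = 0): the Riccati equation relaxes to its positive stationary solution regardless of the initial uncertainty.

```python
import numpy as np

# Scalar case of the Riccati equation (6.86) with Gamma = 0:
#   dVc/dt = 2 a Vc + e^2 - c^2 Vc^2   (illustrative parameters)
a, c, e = -1.0, 1.0, 1.0
dt, Vc = 1e-3, 2.0            # deliberately poor initial uncertainty
for _ in range(20000):         # Euler integration over 20 time units
    Vc += dt * (2 * a * Vc + e**2 - c**2 * Vc**2)

# Stationary solution of 2 a W + e^2 = c^2 W^2 with W >= 0:
W = (a + np.sqrt(a**2 + c**2 * e**2)) / c**2
```

For these values W = √2 − 1, and the integrated V_c converges to it, as expected for a stabilizing solution.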
Detectability and observability. The concepts of stabilizability and controllability from control engineering introduced in Section 6.4.1 are defined in terms of one's ability to control a system. There is a complementary pair of concepts, detectability and observability, that quantify one's ability to acquire information about a system.

A system is said to be detectable if every dynamical mode that is not strictly stable is monitored. (See Box 6.1 for the definition of a dynamical mode.) That is, given a system described by Eqs. (6.47) and (6.67), detectability means that, if the drift matrix A leads to unstable or marginally stable motion, then y (whose deterministic part is Cx) should contain information about that motion. Mathematically, it means the following.

The pair (C, A) is detectable iff

Cx ≠ 0  ∀x ≠ 0: Ax = λx with Re(λ) ≥ 0.   (6.87)
Clearly, if a system is not detectable then any noise in the unstable or marginally stable
modes will lead to an increasing uncertainty in those modes. That is, there cannot be a
stationary conditional state for the system.
A simple example is a free particle for which only the momentum is observed. That is,

A = [0 1/m; 0 0],  C = (0, c),   (6.88)

for which (C, A) is not detectable, since C(1, 0)^T = 0 while A(1, 0)^T = 0 × (1, 0)^T. No information about the position will ever be obtained, so its uncertainty can only increase with time. By contrast, a free particle for which only the position is observed, that is,

A = [0 1/m; 0 0],  C = (c, 0),   (6.89)

is detectable, since (1, 0)^T is the only eigenvector of A, and C(1, 0)^T = c.
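The eigenvector test (6.87) is straightforward to apply numerically. The sketch below (illustrative values m = 1, c = 2) checks the two free-particle examples above by flagging eigenvalues with Re(λ) ≥ 0 whose eigenvectors are annihilated by C:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # free particle, m = 1 (illustrative)
c = 2.0

def undetectable_modes(A, C, tol=1e-9):
    """Eigenvalues lam with Re(lam) >= 0 whose eigenvectors x satisfy
    C x = 0, i.e. violations of the detectability condition (6.87)."""
    lams, vecs = np.linalg.eig(A)
    return [lam for lam, x in zip(lams, vecs.T)
            if lam.real >= -tol and np.linalg.norm(C @ x) < tol]

momentum_only = undetectable_modes(A, np.array([[0.0, c]]))  # C = (0, c)
position_only = undetectable_modes(A, np.array([[c, 0.0]]))  # C = (c, 0)
```

Monitoring momentum alone leaves the marginally stable position mode unmonitored, while monitoring position alone leaves no such mode.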
A very important result is the duality between detectability and stabilizability:

(C, A) detectable ⟺ (A^T, C^T) stabilizable.   (6.90)

This means that the above definition of detectability gives another definition for stabilizability, while the definition of stabilizability in Section 6.4.1 gives another definition for detectability.
A stronger concept related to information gathering is observability. Like controllability, it has a simple definition for the case in which there is no process noise and, in this case, no measurement noise either (although there must be uncertainty in the initial conditions, otherwise there is no information to gather). Thus the system is defined by ẋ = Ax + Bu(t) and y = Cx, and this is observable iff the initial condition x_0 can be determined with certainty from the measurement record {y(t)}_{t=t_0}^{t=t_1} in any finite interval. This can be shown to imply the following.

The pair (C, A) is observable iff

Cx ≠ 0  ∀x ≠ 0: Ax = λx.   (6.91)
That is, even strictly stable modes are monitored. For the example of the free particle above, observability and detectability coincide because there are no strictly stable modes.

Like the detectable–stabilizable duality, there exists an observable–controllable duality:

(C, A) observable ⟺ (A^T, C^T) controllable.   (6.92)

Thus the above definition of observability gives another definition for controllability, while the two definitions of controllability in Section 6.4.1 give another two definitions for observability.
In steady state, the conditioned covariance matrix V_c^ss is a solution of the algebraic Riccati equation obtained by setting V̇_c = 0 in Eq. (6.86), which can be written as

Ā V_c^ss + V_c^ss Ā^T + Ē Ē^T − V_c^ss C^T C V_c^ss = 0,   (6.93)

where

Ā ≡ A − Γ^T C.   (6.94)

If V_c^ss does exist, this means that, if two observers were to start with different initial Gaussian states to describe their information about the system, they would end up with the same uncertainty, described by V_c^ss.
It might be thought that this is all that could be asked for in a solution to Eq. (6.93). However, it should not be forgotten that there is more to the dynamics than the conditioned covariance matrix; there is also the conditioned mean ⟨x⟩_c. Consider two observers (Alice and Bob) with different initial means but, for simplicity, the same stationary covariance V_c^ss. Conditioning upon the same measurement record, the difference between their conditioned means, d_c = ⟨x⟩_c^A − ⟨x⟩_c^B, obeys

ḋ_c = M d_c,   (6.96)

where

M ≡ Ā − V_c^ss C^T C.   (6.97)

Thus for Alice and Bob to agree on the long-time system state it is necessary to have M strictly stable.
A solution V_c^ss to Eq. (6.93) that makes M strictly stable is known as a stabilizing solution. Because of their nice properties, we are interested in the conditions under which stabilizing solutions (rather than merely stationary solutions) to Eq. (6.93) arise. We will also introduce a new notation W to denote a stabilizing V_c^ss. Note that, from Eq. (6.93), if W exists then

−(MW + WM^T) = Ē Ē^T + W C^T C W.   (6.98)

Now the matrix on the right-hand side is PSD, and so is W. From this it can be shown that M is necessarily stable. But to obtain a stabilizing solution we require M to be strictly stable.
It can be shown that a stabilizing solution exists iff (C, Ā) is detectable and

Ē^T x ≠ 0  ∀x: Ā^T x = λx with Re(λ) = 0.   (6.99)

The latter condition holds if (Ā, Ē) is stabilizable (or, indeed, if (Ā, Ē) is controllable).

Exercise 6.13 Show that (C, Ā) is detectable iff (C, A) is detectable.
Hint: First show that (C, A) is detectable iff (C, A − Γ^T C) is detectable, and that the latter holds iff ∃L: A − Γ^T C + LC is strictly stable. Define L′ = L − Γ^T, to show that this holds iff ∃L′: A + L′C is strictly stable.
The second condition (6.99) above deserves some discussion. Recall that E is related to the process noise in the system: if there is no diffusion (D = 0) then E = 0. The condition means that there is process noise in all modes of Ā that are marginally stable. It might seem odd that the existence of noise helps make the system more stable, in the sense of having all observers agree on the best estimate for the system configuration x in the long-time limit. The reason why noise can help can be understood as follows. Consider a system with a marginally stable mode x̃, with the dynamics dx̃ = 0 and no process noise. Now say our two observers begin with inconsistent states of knowledge, say ℘_μ(x̃) = δ(x̃ − x̃_μ) with μ = A or B and x̃_A ≠ x̃_B. Then, with no process noise, they will never come to agreement, because the noise in y(t) enables each of them to maintain that their initial conditions are consistent with the measurement record. By contrast, if there is process noise in x̃ then Alice's and Bob's states will broaden, and then conditioning on the measurement record will enable them to come into agreement.
For a system with a stabilizing solution W, the terminology 'filter' for the equations describing the conditional state is easy to explain by considering the stochastic equation for the mean. For simplicity let u = 0. Then, in the long-time limit, Eq. (6.85) turns into

d⟨x⟩_c = M⟨x⟩_c dt + F^T y(t)dt,   (6.100)

where F = CW + Γ. This has the solution

⟨x⟩_c(t) = ∫_{t_0}^{t} e^{M(t−s)} F^T y(s)ds + e^{M(t−t_0)}⟨x⟩_c(t_0).   (6.101, 6.102)

Since λ_max[M] < 0, the Kalman filter for the mean is exactly a low-pass filter of the current y.
Possible conditional steady states. For a linear system with a stabilizing solution of the algebraic Riccati equation, we have from the above analysis a simple description for the steady-state conditioned dynamics. The conditioned state is a Gaussian that jitters around in configuration space without changing shape. That is, V_c is constant, while ⟨x⟩_c evolves stochastically. For u(t) ≡ 0, the evolution of ⟨x⟩_c is

d⟨x⟩_c = A⟨x⟩_c dt + F^T dw.   (6.103)

Thus the stationary unconditioned covariance matrix is

V_ss = E_ss[xx^T] = E_ss[V_c + ⟨x⟩_c⟨x⟩_c^T]   (6.104, 6.105)
     = W + E_ss[⟨x⟩_c⟨x⟩_c^T].   (6.106)

Using the equation

A E_ss[⟨x⟩_c⟨x⟩_c^T] + E_ss[⟨x⟩_c⟨x⟩_c^T] A^T + F^T F = 0,   (6.107)

which follows from Eq. (6.103), it is easy to verify that V_ss as given in Eq. (6.106) does indeed satisfy the LME (6.59), which we repeat here:

A V_ss + V_ss A^T + D = 0.   (6.108)
Since E_ss[⟨x⟩_c⟨x⟩_c^T] ≥ 0, it is clear that

V_ss − W ≥ 0.   (6.109)
That is, the conditioned state is more certain than the unconditioned state. It might be
thought that for any given (strictly stable) unconditioned dynamics there would always be
a way to monitor the system so that the conditional state is any Gaussian described by
a covariance matrix W as long as W satisfies Eq. (6.109). That is, any conditional state
that fits inside the unconditional state would be a possible stationary conditional state.
However, this is not the case. Since F^T F ≥ 0, it follows from Eq. (6.107) that W must satisfy the linear matrix inequality (LMI)

AW + WA^T + D ≥ 0,   (6.110)
which is strictly stronger than Eq. (6.109). That is, it is the unconditioned dynamics (A
and D), not just the unconditioned steady state Vss , that determines the possible asymptotic
conditioned states.
The LMI (6.110) is easy to interpret. Say the system has reached an asymptotic conditioned state, but from time t to t + dt we ignore the result of the monitoring. Then the covariance matrix for the state an infinitesimal time later is, from Eq. (6.51),

V(t + dt) = W + dt(AW + WA^T + D).   (6.111)
Now, if we had not ignored the results of the monitoring then by definition the conditioned covariance matrix at time t + dt would have been W. For this to be consistent with the state unconditioned upon the result y(t), the unconditioned state must be a convex (Gaussian) combination of the conditioned states. In simpler language, the conditioned states must fit inside the unconditioned state. This will be the case iff

V(t + dt) − W ≥ 0,   (6.112)

which is equivalent to the LMI (6.110).

Optimal feedback control. We turn now to the problem of choosing u(t) so as to optimize some measure of performance. In the LQG framework the performance is quantified by a cost that is quadratic in the configuration and the control, with integrand

h(x, u, t) = x^T P x + u^T Q u + 2δ(t − t_1) x^T P_1 x.   (6.113)
Here P1 and P are PSD symmetric matrices, while Q is a PD symmetric matrix. In general
P and Q could be time-dependent, but we will not consider that option. They represent
on-going costs associated with deviation of the system configuration x(t) and control
parameters u(t) from zero. The cost associated with P1 we call the terminal cost (recall
that t_1 is the final time for the control problem). That is, it is the cost associated with not achieving x(t_1) = 0. (The factor of two before the δ-function is so that this term integrates to x_{t_1}^T P_1 x_{t_1}.)
It is also convenient to place one final restriction on our control problem: that all noise be
Gaussian. We have assumed from the start of Section 6.4 that the measurement and process
[Figure 6.5 schematic: the system, driven by environmental process noise dv_p(t), produces the output y dt = Cx dt + dv_m(t); the LQG controller integrates d⟨x⟩_c = A⟨x⟩_c dt + Bu(t)dt + Z(t)(y − C⟨x⟩_c)dt and feeds back u = −K(t)⟨x⟩_c.]

Fig. 6.5 A schematic diagram of the LQG feedback control scheme. Compare this with Fig. 6.4. Here we have introduced Z(t) = V_c(t)C^T + Γ^T, where V_c(t) is the conditioned covariance matrix of x and E[(E dv_p) dv_m^T] = Γ^T dt. The gain K depends upon the control costs for the system and actuator. Note how the Kalman filter (the equation for d⟨x⟩_c) depends upon u(t), the output of the actuator.
noises are Gaussian, and we now assume that the initial conditions also have Gaussian
noise, so that the Riccati equation (6.86) applies. With these restrictions we have defined an LQG control problem: linear dynamics for x and linear mapping from x and u to output y; quadratic cost in x and u; and Gaussian noise, including initial conditions.
For LQG problems the principle of certainty equivalence holds. This is stronger than the separation principle, and means that the optimal input u(t) depends upon ℘_c(x; t) only through the best estimate of the system configuration ⟨x(t)⟩_c, as if there were no noise and we were certain that the system configuration was ⟨x(t)⟩_c. Moreover, the optimal u(t) depends linearly upon the mean:

u(t) = −K(t)⟨x(t)⟩_c,   (6.114)

where

K(t) = Q^{-1} B^T X(t).   (6.115)

(Recall that Q > 0, so that Q^{-1} always exists.) Here X(t) is a symmetric PSD matrix with the final condition X(t_1) = P_1, which is determined for t_0 ≤ t < t_1 by solving the time-reversed equation

dX/d(−t) = P + A^T X + XA − XBQ^{-1}B^T X.   (6.116)
Note that K is independent of D and C, and so is independent of the size of the process and measurement noise; this part of the feedback control problem is completely equivalent to the no-noise control problem (hence 'certainty equivalence'). The overall feedback control scheme is shown in Fig. 6.5.
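The backward integration of Eq. (6.116) can be sketched for a scalar system (illustrative values: a = 0, b = 1, p = q = 1, terminal cost p_1 = 0, none of which are from the text). Over a long control interval X relaxes to its stationary solution, giving an essentially constant gain K:

```python
import numpy as np

# Scalar LQG example: free integrator dx = u dt + noise (a = 0, b = 1),
# running costs p = q = 1 and terminal cost p1 = 0 (all illustrative).
a, b, p, q, p1 = 0.0, 1.0, 1.0, 1.0, 0.0

# Time-reversed Riccati equation (6.116):  dX/d(-t) = p + 2aX - X b q^-1 b X,
# integrated backwards from X(t1) = p1 over a long interval.
dt, X = 1e-3, p1
for _ in range(20000):
    X += dt * (p + 2 * a * X - X * b / q * b * X)

X_ss = q / b**2 * (a + np.sqrt(a**2 + b**2 * p / q))  # stationary PSD solution
K = b * X / q          # asymptotic gain, Eq. (6.115)
N = a - b * K          # closed-loop drift, Eq. (6.118)
```

For these values X relaxes to X_ss = 1, so K = 1 and the closed-loop drift N = −1 is strictly stable, even though the uncontrolled system (a = 0) is only marginally stable.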
Table 6.1. Relations between stabilizing solutions of the algebraic Riccati equation (ARE) for the cases of observing and controlling a linear system. Here s.s. means stabilizing solution, det. means detectable and stab. means stabilizable. The stabilizee is the quantity which is stabilized, and ss means steady state.

                      Observing                              Controlling
The ARE               Ā W + W Ā^T + Ē Ē^T = W C^T C W        A^T Y + Y A + P = Y B Q^{-1} B^T Y
has a unique s.s.     W: λ_max[M] < 0,                       Y: λ_max[N] < 0,
where                 M = Ā − W C^T C,                       N = A − B Q^{-1} B^T Y,
iff                   (C, Ā) det. and (Ā, Ē) stab.           (A, B) stab. and (P, A) det.
The stabilizee is     d_c = ⟨x⟩_c^A − ⟨x⟩_c^B,               E[⟨x⟩_c],
since in ss           ḋ_c = M d_c.                           d⟨x⟩_c = N⟨x⟩_c dt + F^T dw.
Asymptotic LQG problems. Note that Eq. (6.116) has the form of a Riccati equation, like that for the conditioned covariance matrix V_c. As in that case, we are often interested in asymptotic problems in which t_1 − t_0 is much greater than any relevant relaxation time. Then, if the Riccati equation (6.116) has a unique stationary solution X_ss that is PSD, X will equal X_ss for much the greater part of the control interval, having relaxed there from P_1 (which is thus irrelevant). In such cases, the optimal control input (6.114) will be time-dependent through ⟨x⟩_c, but K will be basically time-independent. It is often convenient to assume that P is positive definite, in which case X_ss will also be positive definite.
For such asymptotic problems it is natural to consider the stability and uniqueness of
solutions. There is a close relation between this analysis of stability and uniqueness and
that for the conditioned state in Section 6.4.3. In particular, the concept of a stabilizing
solution applies here as well. We show these relations in Table 6.1, but first we motivate a
few definitions. Just as we denote a stabilizing solution Vcss of the algebraic Riccati equation
Eq. (6.93) as W , so we will denote a stabilizing solution Xss of the Riccati equation (6.116)
in steady state by Y. That is, Y is a symmetric PSD matrix satisfying

A^T Y + Y A + P − Y B Q^{-1} B^T Y = 0,   (6.117)

such that

N = A − B Q^{-1} B^T Y   (6.118)

is strictly stable. The relevance of this is that, for this optimal control, the conditioned system mean obeys, in the long-time limit, the linear equation

d⟨x⟩_c = N⟨x⟩_c dt + F^T dw,   (6.119)

where F = CW + Γ as before.

Exercise 6.15 Show this from the control law in Eq. (6.114).
Since the eigenvalues of N^T are the same as those of N, a stabilizing solution Y ensures that the dynamics of the feedback-controlled system mean will be asymptotically stable.
The conditions under which Y is a stabilizing solution, given in Table 6.1, follow from
those for W , using the duality relations of Section 6.4.2. Just as in the case of Section 6.4.3
with the noise E, it might be questioned why putting a lower bound on the cost, by requiring
that (P , A) be detectable, should help make the feedback loop stable. The explanation is
as follows. If (P , A) were not detectable, that would mean that there were some unstable
or marginally stable modes of A to which no cost was assigned. Hence the optimal control
loop would expend no resources to control such modes, and they would drift or diffuse to
infinity. In theory this would not matter, since the associated cost is zero, but in practice any
instability in the system is bad, not least because the linearization of the system will probably
break down. Note that if P > 0 (as is often assumed) then (P , A) is always detectable.
In summary, for the optimal LQG controller to be strictly stable it is sufficient that (A, B) and (Ā, Ē) be stabilizable and that (C, Ā) and (P, A) be detectable. If we do not require the controller to be optimal, then it can be shown (see Lemma 12.1 of Ref. [ZDG96]) that stability can be achieved iff (A, B) is stabilizable and (C, A) is detectable.
The 'if' part (sufficiency) can be easily shown, since without the requirement of optimality there is a very large family of stable controllers. By assumption we can choose F such that A + BF is strictly stable and L such that A + LC is strictly stable. Then, if the observer uses a (non-optimal) estimate x̆ for the system mean defined by

dx̆ = Ax̆ dt + Bu dt − L(y − Cx̆)dt   (6.120)

(compare this with Eq. (6.85)) and uses the control law

u = Fx̆,   (6.121)

the resulting controller is stable. This can be seen by considering the equations for the configuration x and the estimation error e = x − x̆, which obey the coupled equations

dx = (A + BF)x dt − BF e dt + E dv_p,   (6.122)
de = (A + LC)e dt + E dv_p + L dv_m,   (6.123)

with the drift matrix of the coupled (x, e) system being

[A + BF  −BF; 0  A + LC].   (6.124)

This matrix is block upper triangular, so its eigenvalues are those of A + BF together with those of A + LC, all of which have negative real parts.
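This stability argument is easy to confirm numerically. An illustrative sketch (free-particle matrices with m = 1, and example choices of F and L, none taken from the text): the coupled drift matrix is block upper triangular, so its spectrum is the union of the spectra of A + BF and A + LC.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[-2.0, -3.0]])     # makes A + BF strictly stable (poles -1, -2)
L = np.array([[-3.0], [-2.0]])   # makes A + LC strictly stable (poles -1, -2)

ABF, ALC = A + B @ F, A + L @ C
# Drift matrix of the coupled (x, e) system, Eq. (6.122)-(6.123):
big = np.block([[ABF, -B @ F], [np.zeros((2, 2)), ALC]])
```

Both diagonal blocks, and hence the full controller dynamics, are strictly stable.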
One quantity we are particularly interested in is the integrand in the cost function, which
has the stationary expectation value
E_ss[h] = tr[P V_ss] + tr[Q K(V_ss − W)K^T].   (6.125)
Here the stationary expectation value means at a time long after t0 , but long before t1 , so
that both the initial conditions on x and the final condition P1 on the control are irrelevant.
For ease of notation, we simply use K to denote the stationary value for K(t). From the
above results it is not too difficult to show that Eq. (6.125) evaluates to

E_ss[h] = tr[Y B Q^{-1} B^T Y W] + tr[Y D].   (6.126)

Note that this result depends implicitly upon A, C and P through W and Y.
Exercise 6.17 Derive Eq. (6.125) and verify that it is equivalent to Eq. (6.126).
It might be thought that if control is cheap (Q → 0) then the gain K will be arbitrarily large, and hence N = A − BK will be such that the fluctuations in ⟨x⟩_c will be completely suppressed. That is, from Eq. (6.124), it might be thought that the distinction between the conditioned covariance matrix W and the unconditioned covariance matrix V_ss will vanish. However, this will be the case only if B allows a sufficient degree of control over the system. Specifically, it can be seen from Eq. (6.124) (or perhaps more clearly from Eq. (6.119)) that what is required is for the columns of F^T to be in the column space of B (see Box 6.1). This is equivalent to the condition that

rank[B] = rank[B  F^T].   (6.127)
We will call a system that satisfies this condition, for F = CW + Γ with W a stabilizing solution, pacifiable.

Note that, unlike the concepts of stabilizability and controllability, the notion of pacifiability relies not only upon the unconditioned evolution (matrices B and A), but also upon the measurement, via C and Γ (both explicitly in F and implicitly through W). Thus it cannot be said that pacifiability is stronger or weaker than stabilizability or controllability.
However, if B is full row rank then all three notions will be satisfied. In this case, for cheap control the solution Y of Eq. (6.117) will scale as Q^{1/2}, and we can approximate Y by the solution to the equation

Y B Q^{-1} B^T Y = P.   (6.128)

It might be questioned whether the quadratic cost associated with the inputs u is an accurate reflection of the control constraints in a given instance. For instance, in an experiment on
reflection of the control constraints in a given instance. For instance, in an experiment on
a microscopic physical system the power consumption of the controller is typically not a
concern. Rather, one tries to optimize ones control of the system within the constraints of
the apparatus one has built. For example one might wish to put bounds on E[uu ] in order
that the apparatus does produce the desired change in the system configuration, Bu dt, for
a given input u.2 That is, we could require
J − K(V_ss − V_c)K^T ≥ 0   (6.130)
for some PSD matrix J . The genuinely optimal control for this physical problem would
saturate the LMI (6.130). To discover the control law that achieves this optimum it would be
necessary to follow an iterative procedure to find the Q that minimizes j while respecting
Eq. (6.130).
Another sort of constraint that arises naturally in manipulating microscopic systems is
time delays and bandwidth problems in general. This can be dealt with in a systematic
manner by introducing extra variables that are included within the system configuration x,
as discussed in Ref. [BM04]. To take a simple illustration, for feedback with a delay time τ, the Langevin equation would be

dx = Ax dt + Bu(t − τ)dt + E dv_p(t).   (6.131)
To describe this exactly would require infinite order derivatives, and hence an infinite
number of extra variables. However, as a crude approximation (which is expected to be
reasonable for sufficiently short delays) we can make a first-order Taylor expansion, to write

dx = Ax dt + B[u(t) − τ u̇(t)]dt + E dv_p(t).   (6.132)

We then define

x′ = u(t),   (6.133)
u′ = −τ u̇(t),   (6.134)

such that u′ is to be considered the new control variable and x′ an extra system variable. Thus the system Langevin equation would be replaced by the pair of equations

dx = [Ax + Bx′]dt + Bu′(t)dt + E dv_p(t),   (6.135)
dx′ = B′u′(t)dt,   (6.136)

where B′ = −(1/τ)I. Note that there is no noise in the equation for x′, so the observer will have no uncertainty about these variables.
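The accuracy of the first-order Taylor approximation in Eq. (6.132) is easy to quantify. A sketch with an illustrative sinusoidal control u(t) = sin(ωt) and a short delay, ωτ = 0.05 (example values only): the error is bounded by the second-order Taylor remainder, of order (ωτ)²/2.

```python
import numpy as np

# Compare u(t - tau) with its first-order expansion u(t) - tau * u'(t)
# for u(t) = sin(w t), with w * tau small (illustrative values).
w, tau = 1.0, 0.05
t = np.linspace(0.0, 10.0, 2001)
exact = np.sin(w * (t - tau))
approx = np.sin(w * t) - tau * w * np.cos(w * t)
max_err = np.max(np.abs(exact - approx))
```

As expected, the worst-case error stays below (ωτ)²/2, supporting the claim that the approximation is reasonable for sufficiently short delays.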
2
Actually, it would be even more natural to put absolute bounds on u, rather than mean-square bounds. However, such nonquadratic bounds cannot be treated within the LQG framework.
If there were no costs assigned to either x′ or u′ then the above procedure would be nullified, since one could always choose u′ such that

u′(t) = −K⟨x⟩_c(t) − x′,   (6.137)

which would lead to the usual equation for LQG feedback with no delay. But note that this equation can be rewritten as

τ ẋ′ = x′ + K⟨x⟩_c(t).   (6.138)

This is an unstable equation for x′, so, as long as some suitable finite cost is assigned to x′ and/or u′, the choice (6.137) would be ruled out. Costs on the control u in the original problem translate into a corresponding cost on x′ in the new formulation, while a cost placed on u′ would reflect limitations on how fast the control signal u can be modified. In practice there is considerable arbitrariness in how the cost functions are assigned.
Markovian feedback. The simplest form of feedback is to take the control to be directly proportional to the instantaneous measured current:

u(t) = Ly(t),   (6.139)

with L a matrix that could be time-dependent, but for strict Markovicity would not be. The Langevin equation for the system configuration is then

dx = Ax dt + BLy dt + E dv_p(t)   (6.140)
   = (A + BLC)x dt + BL dv_m + E dv_p.   (6.141)
Note that for Markovian feedback it is not necessary to assume or derive Eq. (6.139); any
function of the instantaneous current y(t) that is not linear is not well defined. That is, if one
wishes to have Markovian system dynamics then one can only consider what engineers call
proportional feedback. It should be noted that y(t) has unbounded variation, so Markovian
control is no less onerous than optimal control with unbounded K(t), as occurs for zero
control cost, Q 0. In both cases this is an idealization, since in any physical realisation
both the measured current and the controller response would roll off in some way at high
frequency.
The motivation for considering Markovian feedback is that it is much simpler than optimal feedback. Optimal feedback requires processing or filtering the current y(t) in an optimal way to determine the state ℘_c(x) (or, in the LQG case, just its mean ⟨x⟩_c(t)). Markovian feedback is much simpler to implement experimentally. One notable example of the use of Markovian feedback is the feedback-cooling of a single electron in a harmonic trap (in the classical regime) [DOG03], which we discuss below. Markovian feedback is also much simpler to describe theoretically, since it requires only a model of the system, instead of a model of the system plus the estimator and the actuator.
The simplicity of Markovian feedback can be seen in that Eq. (6.141) can be turned directly into an OUE. The moment equations are as in Eqs. (6.50) and (6.51), but with drift and diffusion matrices

A′ = A + BLC,   (6.142)
D′ = D + BLL^T B^T + BLΓ + Γ^T L^T B^T,   (6.143)

so that the stationary covariance matrix satisfies

A′V_ss + V_ss A′^T + D′ = 0.   (6.144)
As for an asymptotic LQG problem with no control costs, the aim of the feedback would be to minimize tr[P V_ss] for some PSD matrix P. If B is full row rank and (C, A) is detectable, then by the definition of detectability it is possible to choose an L such that A′ is strictly stable. It might be thought that the optimal Markovian feedback would have L large, in order to make the eigenvalues of A′ as negative as possible. However, this is not the case in general, because L also affects the diffusion term: if L → ∞ then so does D′ (quadratically), so that V_ss → ∞ also. Thus there is in general an optimal value for L, to which we return after the following experimental example.
Experimental example: cooling a one-electron oscillator. The existence of an optimal feedback strength for Markovian feedback (in contrast to the case for state-based feedback) is well illustrated in the recent experiment performed at Harvard [DOG03]. Their system was a single electron in a harmonic trap of frequency ω = 2π × 65 MHz, coupled electromagnetically to an external circuit. This induces a current through a resistor, which dissipates energy, causing damping of the electron's motion at rate γ = 2π × 8.4 Hz. Because the resistor is at finite temperature T ≈ 5.2 K, the coupling also introduces thermal noise into the electron's motion, so that it comes to thermal equilibrium at temperature T. The damping rate is seven orders of magnitude smaller than the oscillation frequency, so a secular approximation (explained below) is extremely well justified.
We can describe the motion by the complex amplitude

α(t) = e^{iωt}[x(t) + ip(t)],   (6.145)

where x and p are the electron's position and momentum in suitably scaled units. Under the secular approximation, α obeys

dα = −(γ/2)α dt + √(γT)[dv_1(t) + i dv_2(t)]/√2,   (6.146)

where dv_1 and dv_2 are independent Wiener increments. This equation ensures that ℘_ss(α) is a Gaussian that is independent of the phase of α, has a mean of zero, and has

E_ss[|α|²] = E_ss[x² + p²] = 2E_ss[x²] = T.   (6.147)
This is as required by the equipartition theorem, since |α|² equals the total energy (we are using units for which k_B = 1). For cooling the electron we wish to reduce this mean |α|². From Eq. (6.146), the steady-state rate of energy loss from the electron due to the damping (which is balanced by energy gain from the noisy environment) is

P = γ E_ss[|α|²] = 2γ E_ss[x²].   (6.148)

This can be identified with E_ss[I²]R, the power dissipated in the resistor, so if I ∝ x we must have

I = √(2γ/R) x.   (6.149)

The voltage drop across the resistor is V = V_J + IR, where V_J is Johnson (also known as Nyquist) noise. Taking this noise to be white, as is the usual approximation, we have in differential form [Gar85]

V dt = √(2γR) x dt + √(2TR) dv_J.   (6.150)
But it is this voltage that drives the motion of the electron, so that the equation for x is

dx = ωp dt − √(γ/(2R)) V dt   (6.151)
   = ωp dt − γx dt − √(γT) dv_J(t).   (6.152)

In terms of α, the damping term here gives rise to the −(γ/2)α dt term in Eq. (6.146). Under the secular approximation the Johnson-noise term does not simply add a real noise to x at the trap frequency. Instead, this term gives the complex noise √(γT)[dv_1(t) + i dv_2(t)]/√2, as can be verified by making the secular approximation on the correlation function for the Gaussian noise process e^{iωt} dv_J(t).
Exercise 6.20 Verify this derivation of Eq. (6.146), and show that it has the steady-state
solution with the moments (6.147).
Note also that the equilibrium temperature T can be obtained by ignoring the free evolution (that is, deleting the ωp dt term in Eq. (6.152)), calculating E_ss[x²] and defining

T = 2E_ss[x²].   (6.153)

That is, the same expression holds as when the ωp dt term is retained and the secular approximation made, as in Eq. (6.147). This happy coincidence will be used later to simplify our description.
From Eq. (6.152), if the voltage were directly measured, the measurement noise would be perfectly correlated with the process noise. For the purpose of feedback, it is necessary to amplify this voltage, and in practice this introduces noise into the fed-back signal, characterized by an effective noise temperature T_g. Feeding back the amplified signal with a dimensionless gain g changes the equilibrium temperature of the electron to

T_e = (1 − g)T + g²T_g/(1 − g).   (6.157)

For T_g ≪ T, the new temperature T_e decreases linearly as g increases towards unity, until a turning point at g ≈ 1 − √(T_g/T), after which it increases rapidly with g. The minimal T_e,
[Figure 6.6: three panels plotted against the feedback gain g from 0 to 1: (a) the equilibrium temperature T_e (K); (b) the measured energy damping rate γ_e/(2π) (Hz); (c) 2πT_e/γ_e (K/Hz).]
Fig. 6.6 Experimental results for cooling of a single electron by Markovian feedback [DOG03]. T_e is the equilibrium temperature, while γ_e is the measured damping rate of the electron's energy. The lines or curves are the theoretical predictions. Note the existence of an optimal gain g. Figure 5 adapted with permission from B. D'Urso et al., Phys. Rev. Lett. 90, 043001, (2003). Copyrighted by the American Physical Society.
at the optimal gain value, is T_e ≈ 2√(T_g T). All of this was seen clearly in the experiment, with T_g ≈ 0.04 K giving a minimum T_e ≈ 0.85 K, a six-fold reduction in temperature. This
is shown in Fig. 6.6. The full expression (not given in Ref. [DOG03]) for the minimum
temperature with feedback is
T_e = 2(√(T T_g + T_g²) − T_g).    (6.158)
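As a quick numerical sanity check on the minimum temperature (6.158): taking the gain dependence to be T_e(g) = (1 − g)T + g²T_g/(1 − g) (an assumed form, consistent with the turning point at g ≈ 1 − √(T_g/T) quoted above), a direct minimization over g recovers the closed-form minimum. The temperatures below are illustrative, not the experimental values.

```python
import math

def Te(g, T, Tg):
    # Assumed gain dependence of the equilibrium temperature (see text).
    return (1 - g) * T + g ** 2 * Tg / (1 - g)

def argmin_ternary(f, lo, hi, iters=300):
    # Ternary search for the minimiser of a unimodal function on [lo, hi].
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

T, Tg = 5.0, 0.04   # illustrative bath and amplifier temperatures (K)
g_opt = argmin_ternary(lambda g: Te(g, T, Tg), 0.0, 1.0 - 1e-12)
Te_min = Te(g_opt, T, Tg)
Te_closed = 2 * (math.sqrt(T * Tg + Tg ** 2) - Tg)   # Eq. (6.158)
print(g_opt, Te_min, Te_closed)
```

For these numbers g_opt ≈ 0.91 and T_e,min ≈ 0.82 K, the same order as the experimental minimum quoted above.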
It is interesting to re-examine this system from the viewpoint of conditional dynamics. Ignoring the p dt term in Eq. (6.152), we have a one-dimensional system, so all of the matrices become scalars. From this equation and Eq. (6.154) it is easy to identify the following:
y(t) = [2R(T + T_g)]^{−1/2} V(t),    (6.159)
A = −γ,    (6.160)
C = [γ/(T + T_g)]^{1/2},    (6.161)
Γ = −T[γ/(T + T_g)]^{1/2},    (6.162)
Ã = A − ΓC = −γT_g/(T + T_g),    (6.163)
D = γT,    (6.164)
Ẽ² = D − Γ² = γT T_g/(T + T_g).    (6.165)
It is trivial to verify that this system satisfies the conditions for there to exist a stabilizing solution W for the stationary conditioned variance equation
ÃW + WÃ⊤ + ẼẼ⊤ = WC⊤CW,    (6.166)
which here reduces to
W² + 2T_g W − T T_g = 0,    (6.167)
giving
W = −T_g + √(T_g² + T T_g).    (6.168)
Remarkably, this expression, multiplied by two, is identical to the above expression for the minimum temperature (6.158). This identity is no coincidence. Recall that in steady state
T_e = 2E_ss[x²] = 2W + 2E_ss[x_c²].    (6.169)
Thus 2W is a lower bound for the temperature T_e of Eq. (6.157). At the optimal value of feedback gain, the feedback exactly cancels out the noise in the equation for the conditional mean. We saw the same phenomenon for the one-dimensional quantum system considered in Section 5.6. Thus with optimal Markovian feedback x_c(t) = 0 in steady state, and E_ss[x²] = W. Rather than show this explicitly, we show now that this can be done for any linear system that has a stabilizing solution W.
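The scalar analysis above can be checked numerically. The sketch below (illustrative temperatures; the damping rate γ cancels throughout) assumes the scalar identifications Ã = −γT_g/(T + T_g), Ẽ² = γTT_g/(T + T_g) and C² = γ/(T + T_g) consistent with Eqs. (6.160)–(6.165), verifies that W from Eq. (6.168) solves the stationary Riccati equation (6.166), and confirms that Markovian feedback chosen to cancel the conditional-mean noise leaves a closed-loop steady-state variance equal to W, so that 2W is indeed the minimum temperature.

```python
import math

T, Tg, gamma = 5.0, 0.04, 2.0   # illustrative values; gamma drops out

# Scalar system matrices as identified in Eqs. (6.160)-(6.165)
A = -gamma
C = math.sqrt(gamma / (T + Tg))
G = -T * math.sqrt(gamma / (T + Tg))   # Gamma, the noise cross-correlation
At = A - G * C                         # A-tilde
D = gamma * T
E2 = D - G ** 2                        # E-tilde squared

W = -Tg + math.sqrt(Tg ** 2 + T * Tg)  # Eq. (6.168)

# W solves the stationary conditioned-variance equation (6.166)
riccati = 2 * At * W + E2 - (C * W) ** 2

# Markovian feedback with B*L = -(W*C + Gamma) cancels the conditional-mean
# noise; the closed-loop steady-state variance then equals W.
L = -(W * C + G)
var_ss = ((L + G) ** 2 + D - G ** 2) / (-2 * (A + L * C))

print(riccati, var_ss, W, 2 * W)
```

The residual of the Riccati equation is zero to machine precision, and var_ss coincides with W, whose doubled value reproduces the minimum temperature (6.158).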
Understanding Markovian feedback by conditioning. Recall that we can write the conditional mean equation for an arbitrary linear system as
dx_c = [A − (V_c(t)C⊤ + Γ⊤)C]x_c dt + Bu(t)dt + [V_c(t)C⊤ + Γ⊤]y(t)dt.    (6.170)
Now, if we add Markovian feedback as defined above then the equation for the covariance matrix is of course unaffected, and that of the mean becomes
dx_c = [A − (V_c(t)C⊤ + Γ⊤)C]x_c dt + [V_c(t)C⊤ + Γ⊤ + BL]y(t)dt.    (6.171)
Now let us assume that B is such that for all times there exists an L satisfying
BL(t) = −[V_c(t)C⊤ + Γ⊤].    (6.172)
(Obviously that will be the case if B is full row rank.) Then, for this choice of L, the equation for the conditioned mean is simply
dx_c = [A − (V_c(t)C⊤ + Γ⊤)C]x_c dt.    (6.173)
This is a deterministic equation. All noise in the conditional mean has been cancelled out
by the feedback, so the unconditioned variance and conditioned variance are equal:
V(t) = V_c(t),    (6.174)
(6.175)
(6.176)
(6.177)
(6.178)
y dt = dw(t).    (6.179)
This may be a useful fact experimentally for fine-tuning the feedback to achieve the desired
result when the system parameters are not known exactly, as shown in the experiment
[BRW+ 06] discussed in Section 5.8.2.
To reiterate, under the conditions
1.
2.
3.
4.
the optimal Markovian feedback scheme is strictly stable and performs precisely as well as
does the optimal state-based feedback scheme.
In fact, we can prove that the above Markovian feedback algorithm can be derived as a limit of the optimal feedback algorithm that arises when P is positive definite and Q → 0. As discussed in Section 6.4.4, in this case the optimal control is such that BK acts as an infinite positive matrix on the column space of F. Recall that for optimal feedback the conditioned mean obeys
dx_c(t) = (M − BK)x_c(t)dt + F y(t)dt,    (6.180)
because u(t) = −Kx_c(t). Taking the eigenvalues of BK to positive infinity, the solution to Eq. (6.180) is simply
Bu(t)dt = −F y(t)dt,    (6.181)
℘(x),    (6.182)
which is the probability distribution for x. Note that a multi-dimensional δ^(n)(x − x̂) cannot be defined as an operator in D(H) because of the non-commutativity of the elements of x̂.
We will also introduce a new notation for the general Lindblad master equation:
ℏρ̇ = −i[Ĥ, ρ] + D[ĉ]ρ,    (6.183)
where
D[ĉ] ≡ Σ_{l=1}^{L} D[ĉ_l].    (6.184)
Note that we have introduced Planck's constant ℏ ≈ 10⁻³⁴ J s on the left-hand side of Eq. (6.183). This is simply a matter of redefinition of units and is necessary in order to connect quantum operators with their classical counterparts. For example, in this case it is necessary if Ĥ is to correspond to the classical Hamiltonian function. We will see later in Section 6.6 that keeping ℏ in the formulae, rather than setting ℏ = 1 as we have been doing, is useful for keeping track of what is distinctively quantum about quantum control of linear systems.
Equation (6.183) can be derived by generalizing the system–bath coupling introduced in Section 3.11 by introducing L independent baths. Then the Itô stochastic differential equation for the unitary operator that generates the evolution of the system and bath observables obeys
dÛ(t, t₀) = [−dt(ĉ†ĉ/2 + iĤ)/ℏ + (ĉ⊤dB̂†_in(t) − ĉ†dB̂_in(t))/√ℏ]Û(t, t₀).    (6.185)
Note here that ĉ† means the transpose of the vector as well as the Hermitian adjoint of the operators. The vector of operators dB̂_in(t) has all second-order moments equal to zero except for
dB̂_in(t) dB̂†_in(t) = I dt.    (6.186)
vectors. The most general Belavkin equation compatible with Eq. (6.183) is
dρ_c = dt D[ĉ]ρ_c/ℏ + H[−iĤ dt/ℏ + dz†(t)ĉ/√ℏ]ρ_c.    (6.187)
Here we are defining dz = (dz₁, . . ., dz_L)⊤, a vector of infinitesimal complex Wiener increments. Like dw, these are c-number innovations and dz† simply means (dz*)⊤. Recall that H is the nonlinear superoperator defined in Eq. (4.24).
The innovations vector dz satisfies E[dz] = 0, and has the correlations
dz dz† = H dt,   dz dz⊤ = Υ dt,    (6.188)
where Υ is a complex symmetric matrix. Here H (capital η) allows for inefficient detection.
The set of allowed Hs is
H = {H = diag(η₁, . . ., η_L): ∀l, η_l ∈ [0, 1]}.    (6.189)
Here η_l can be interpreted as the efficiency of monitoring the lth output channel. This allows for conditional evolution that does not preserve the purity of states when H ≠ I. It is convenient to combine Υ and H in an unravelling matrix
U = U(H, Υ) ≡ (1/2) ( H + Re[Υ]   Im[Υ]
                      Im[Υ]       H − Re[Υ] ).    (6.190)
The set U of valid Us can then be defined by
U = {U(H, Υ): Υ = Υ⊤, H ∈ H, U(H, Υ) ≥ 0}.    (6.191)
Ĵ dt = dB̂_out H + dB̂†_out Υ + dÂ† √(H − Υ†H⁻¹Υ) + dV̂ √(H(I − H)) + dV̂† √(H⁻¹ − I) Υ.    (6.193)
This is the generalization of Eq. (4.210) to allow for inefficient detection, with two vectors of ancillary annihilation operators dÂ and dV̂ understood to act on other (ancillary) baths in the vacuum state. Thus, for example,
dÂ(t) dÂ†(t) = dV̂(t) dV̂†(t) = I dt.    (6.194)
Exercise 6.22 Show that all of the components of J and J commute with one another, as
required.
Note that the restriction on Υ enforced by the requirement U ≥ 0 ensures that the appearances of the matrix inverse H⁻¹ in Eq. (6.193) do not cause problems even if H is not positive definite. This restriction also implies that all of the matrices under the radical signs in Eq. (6.193) are PSD, so that the square roots here can be unambiguously defined. Finally, note also that, for efficient monitoring (H = I), the ancillary operators dV̂ and dV̂† are not needed, but dÂ still is, as in Eq. (4.210).
Ĥ_fb(t) = F̂(u(t), t).    (6.195)
Say the aim of the control is to minimize a cost function that is additive in time. In the Heisenberg picture we can write the minimand as
j = ∫_{t₀}^{t₁} ⟨h(x̂, u, t)⟩ dt,    (6.196)
where x̂ and u are implicitly time-dependent as usual, and the expectation value is taken using the initial state of the system and bath, including any ancillary baths needed to define the current such as in Eq. (6.193).
In the Schrödinger picture, the current can be treated as a c-number, and we need only the system state. However, this state ρ_c(t) is conditioned and hence stochastic, so we must also take an ensemble average over this stochasticity. Also, the system variables must still be represented by operators, x̂, so that the final expression is
j = ∫_{t₀}^{t₁} E{Tr[ρ_c(t)h(x̂, u, t)]} dt.    (6.197)
As in Section 6.3.3, for an additive cost function the separation principle holds. This was
first pointed out by Belavkin [Bel83], who also developed the quantum Bellman equation
[Bel83, Bel88, Bel99] (see also Refs. [DHJ+ 00, Jam04] and Ref. [BvH08] for a rigorous
treatment). The quantum separation principle means that the optimal control strategy is
quantum-state-based:
u_opt(t) = U_h(ρ_c(t), t).    (6.198)
Even in the Heisenberg picture this equation will hold, but with hats on. The conditional state ρ̂_c(t) is a functional (or a filter in the broad sense of the word) of the output ŷ(t). So, in the Heisenberg picture, ŷ(t) begets ρ̂_c(t), which begets û_opt(t). Note the distinction between ρ̂_c(t) and ρ_c(t), the latter being a state conditioned upon a c-number measurement record.
(6.199)
However, from our point of view, there is nothing peculiarly quantum about this situation. The classical control problem, expressed in terms of the classical state, has a completely analogous form. For example, consider a classical system, with configuration (q, p), and a classical control Hamiltonian u(t)F(q, p). Then the equation for the classical state is
℘̇(q, p) = {H₀(q, p) + u(t)F(q, p), ℘},    (6.200)
where {·, ·} denotes the Poisson bracket.
commutation relation
[p̂_m, q̂_m] = −iℏ    (6.201)
with its partner, but commutes with all other positions and momenta. To connect with the classical theory, we write our complete set of observables as
x̂ = (q̂₁, p̂₁, q̂₂, p̂₂, . . ., q̂_N, p̂_N)⊤.    (6.202)
These obey the commutation relations
[x̂, x̂⊤] ≡ x̂x̂⊤ − (x̂x̂⊤)⊤ = iℏΩ,    (6.203)
where Ω is a (2N) × (2N) skew-symmetric matrix with the following block-diagonal form:
Ω = ⊕_{m=1}^{N} (  0   1
                  −1   0 ).    (6.204)
This matrix, called the symplectic matrix, is an orthogonal matrix. That is, it satisfies Ω⁻¹ = Ω⊤ = −Ω. This means that iΩ is Hermitian. In this situation, the configuration space is usually called phase-space and the term configuration space is reserved for the space in which q resides. However, we will not use configuration space with this meaning.
A consequence of the canonical commutation relation is the Schrödinger–Heisenberg uncertainty relation [Sch30], which for any given conjugate pair (q̂, p̂) is
V_q V_p − C²_qp ≥ (ℏ/2)².    (6.205)
Here the variances are V_q = ⟨(Δq̂)²⟩ and V_p similarly, while the covariance C_qp = ⟨(Δq̂)(Δp̂) + (Δp̂)(Δq̂)⟩/2. Note the symmetrization necessary in the covariance because of the non-commutation of the deviation terms, defined for an arbitrary observable ô as Δô ≡ ô − ⟨ô⟩. The original Heisenberg uncertainty relation (A.10) [Hei27] is weaker, lacking the term involving the covariance. Using the matrix Ω and the covariance matrix V, defined by
V_jk = ⟨(Δx̂_j)(Δx̂_k) + (Δx̂_k)(Δx̂_j)⟩/2,    (6.206)
we can write the Schrödinger–Heisenberg uncertainty relation as the linear matrix inequality [Hol82]
V + iℏΩ/2 ≥ 0.    (6.207)
This LMI can be derived immediately from Eqs. (6.203) and (6.206), since
V + iℏΩ/2 = ⟨(Δx̂)(Δx̂)†⟩,    (6.208)
and the matrix on the right-hand side is PSD by construction. Since Ω is a real matrix, we can thus also define V by
V = Re⟨(Δx̂)(Δx̂)†⟩.    (6.209)
Recall that this means the real part of each element of the matrix.
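The LMI (6.207) gives a simple numerical test for whether a candidate covariance matrix describes a valid quantum state. A minimal sketch for N = 1, with ℏ = 1 (the example matrices are illustrative assumptions):

```python
import numpy as np

hbar = 1.0
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic matrix for N = 1

def is_valid_state(V):
    """Schrodinger-Heisenberg uncertainty relation as an LMI: V + i*hbar*Omega/2 >= 0."""
    M = V + 0.5j * hbar * Omega          # Hermitian since V is real symmetric
    return bool(np.linalg.eigvalsh(M).min() > -1e-12)

vacuum  = np.diag([hbar / 2, hbar / 2])  # minimum-uncertainty state
thermal = np.diag([hbar, hbar])          # valid mixed Gaussian state
bad     = np.diag([hbar / 8, hbar / 2])  # violates Vq*Vp >= (hbar/2)^2
print(is_valid_state(vacuum), is_valid_state(thermal), is_valid_state(bad))
# prints: True True False
```

Checking the minimum eigenvalue of the Hermitian matrix V + iℏΩ/2 is exactly the LMI test; the third matrix fails because its variance product lies below (ℏ/2)².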
Exercise 6.23 Show from Eq. (6.207) that V (if finite) must be positive definite.
Hint: First show that, if r and h are the real and imaginary parts of an eigenvector of V + iℏΩ/2 with eigenvalue λ, then
( V       −ℏΩ/2 ) ( r )     ( r )
( ℏΩ/2     V    ) ( h )  = λ( h ).    (6.210)
Then show that this real matrix cannot be positive if V has a zero eigenvalue, using the fact that Ω has full rank.
It is convenient to represent a quantum state for this type of system not as a state matrix but as a Wigner function W(x) (see Section A.5). This is a pseudo-probability distribution over a classical configuration corresponding to the quantum configuration x̂. It is related to ρ by
W(x) = ⟨δ_W(x − x̂)⟩ = Tr[ρ δ_W(x − x̂)]    (6.211)
(cf. Eq. (6.27)), where
δ_W(x − x̂) = ∫ (d^{2N}k/(2π)^{2N}) exp[ik⊤(x − x̂)].    (6.212)
condition on V for it to describe a valid quantum state. This, and the fact that (6.207) is a LMI in V, will be important later.
Exercise 6.24 Show that the purity p = Tr[ρ²] of a state with Wigner function W(x) is given by
p = (2πℏ)^N ∫ d^{2N}x [W(x)]².    (6.215)
Hint: First generalize Eq. (A.117) to N dimensions.
Then show that for a Gaussian state this evaluates to
p = (det[V]/(ℏ/2)^{2N})^{−1/2}.    (6.216)
dx̂ = Ax̂ dt + E dv̂_p(t).    (6.217)
This has precisely the same form as the classical Langevin equation (6.47). Unlike in that case, the restrictions on quantum dynamics mean that the matrices A and E cannot be specified independently. To see this, we must derive Eq. (6.217) from the quantum Langevin equation generated by Eq. (6.185). The QLE for x̂, generalizing that of Eq. (3.172) to multiple baths, is
dx̂ = (i/ℏ)[Ĥ, x̂]dt + (1/2ℏ)Σ_l (ĉ†_l[x̂, ĉ_l] + [ĉ†_l, x̂]ĉ_l)dt + (1/√ℏ)Σ_l (dB̂†_l[x̂, ĉ_l] + [ĉ†_l, x̂]dB̂_l).    (6.218)
Here we take the Hamiltonian to be quadratic,
Ĥ = (1/2)x̂⊤Gx̂,    (6.219)
with G real and symmetric, and the vector of Lindblad operators to be linear in x̂:
ĉ = C̃x̂.    (6.220)
Evaluating the commutators in Eq. (6.218) using Eq. (6.203) then gives Eq. (6.217) with
A = Ω(G + Im[C̃†C̃]),    (6.221)
E dv̂_p(t) = i√ℏ Ω[C̃⊤dB̂†_in(t) − C̃†dB̂_in(t)].    (6.222)
The second expression can be interpreted as Gaussian quantum process noise, because of the stochastic properties of dB̂_in(t) as defined by Eq. (6.186).
Exercise 6.25 Derive Eqs. (6.221) and (6.222) from Eq. (6.218).
We have not specified separately E and dv_p(t) because the choice would not be unique. All that is required (for the moment) is that the above expression for E dv̂_p(t) gives the correct diffusion matrix:
D dt = Re{E dv̂_p(t)[E dv̂_p(t)]†}    (6.223)
     = ℏ Ω Re[C̃†C̃] Ω⊤ dt    (6.224)
     = ℏ Ω C̄⊤C̄ Ω⊤ dt.    (6.225)
In Eq. (6.223) we take the real part for the same reason as in Eq. (6.209): to determine the symmetrically ordered moments. In Eq. (6.225) we have introduced a new matrix,
C̄ ≡ (Re[C̃⊤], Im[C̃⊤])⊤,    (6.226)
that is, the real and imaginary parts of C̃ stacked vertically.
In terms of this, the drift matrix can be written as
A = Ω(G + C̄⊤S C̄),    (6.227)
where, in terms of the blocks defined by Eq. (6.226),
S = (  0   I
      −I   0 ).    (6.228)
Exercise 6.26 Verify that Eq. (6.225) is the correct expression for the diffusion matrix D.
Hint: Calculate the moment equations (6.50) and (6.51) for ⟨x̂⟩ and V using the Itô calculus from the quantum Langevin equation (6.218).
The calculations in the above exercise follow exactly the same form as for the classical Langevin equation. The non-commutativity of the noise operators actually plays no important role here, because of the linearity of the dynamics. Alternatively, the moment equations can be calculated (as in the classical case) directly from the equation for the state:
ℏρ̇ = −i[Ĥ, ρ] + D[ĉ]ρ.    (6.229)
To make an even closer connection to the classical case, this master equation for the state matrix can be converted into an evolution equation for the Wigner function using the operator correspondences in Section A.5. This evolution equation has precisely the form of the OUE (6.49). Thus, the Wigner function has a Gaussian solution if it has a Gaussian initial state. As explained above, this means that there is a classical analogue to the quantum state, which is precisely the probability distribution that arises from the classical Langevin equation (6.47).
Since the matrix C̃†C̃ is positive semi-definite by construction, Eqs. (6.221) and (6.224) imply a joint constraint on A and D:
Ω⊤DΩ/ℏ + (i/2)(Ω⁻¹A + A⊤Ω⁻¹) = C̃†C̃ ≥ 0.    (6.230)
Here we have used Eqs. (6.221) and (6.224). Thus
D − iℏ(AΩ + ΩA⊤)/2 ≥ 0.    (6.231)
As well as being a necessary condition, this LMI is also a sufficient condition on D for a given drift matrix A. That is, it guarantees that V(t) + iℏΩ/2 ≥ 0 for all t > t₀, provided that it is true at t = t₀. This is because the invertibility of Ω allows us to construct a Lindblad master equation explicitly from the above equations given valid A and D matrices.
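The LMI (6.231) is easy to test numerically. A sketch (ℏ = 1, illustrative parameters) for an oscillator with drift A = ωΩ − (γ/2)I: the Hamiltonian part drops out of AΩ + ΩA⊤, while the damping forces a minimum diffusion of ℏγ/2 per quadrature.

```python
import numpy as np

hbar = 1.0
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

def quantum_ok(A, D):
    """Fluctuation-dissipation LMI (6.231): D - i*hbar*(A@Omega + Omega@A.T)/2 >= 0."""
    M = D - 0.5j * hbar * (A @ Omega + Omega @ A.T)
    return bool(np.linalg.eigvalsh(M).min() > -1e-10)

omega, gamma = 2.0, 0.5
A = omega * Omega - 0.5 * gamma * np.eye(2)   # oscillator plus damping at rate gamma

ok_big   = quantum_ok(A, 0.30 * np.eye(2))    # D0 = 0.30 >= hbar*gamma/2 = 0.25
ok_small = quantum_ok(A, 0.20 * np.eye(2))    # D0 = 0.20 <  0.25
ok_ham   = quantum_ok(omega * Omega, np.zeros((2, 2)))   # pure Hamiltonian: D = 0 allowed
print(ok_big, ok_small, ok_ham)   # True False True
```

For the rotation ωΩ alone, AΩ + ΩA⊤ vanishes identically, so D = 0 passes: energy-conserving dynamics places no restriction on D, exactly as argued in the text below.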
The LMI (6.231) can be interpreted as a generalized fluctuation–dissipation relation for open quantum systems. A dissipative system is one that loses energy so that the evolution is strictly stable (here we are assuming that the energy is bounded from below; that is, G > 0). As discussed in Section 6.4.1, this means that the real parts of the eigenvalues of A must be negative. Any strictly stable A must have a non-vanishing value of AΩ + ΩA⊤ and in this case the LMI (6.231) places a lower bound on the fluctuations D about equilibrium.
Note that in fact exactly the same argument holds for a strictly unstable system (i.e. one for
which all modes are unstable).
By contrast, it is easy to verify that the contribution to A arising from the Hamiltonian Ĥ places no restriction on D. This is because energy-conserving dynamics cannot give rise to dissipation. To see this, note that Ω⁻¹(ΩG)Ω = GΩ = −(ΩG)⊤. This implies that ΩG has the same eigenvalues as the negative of its transpose, which is to say, the same eigenvalues as the negative of itself. Thus, if λ is an eigenvalue then so is −λ, and therefore it is impossible for all the eigenvalues of ΩG to have negative real parts. That is, A = ΩG cannot be a strictly stable system.
It might be questioned whether it is appropriate to call Eq. (6.231) a fluctuation–dissipation relation, because in equilibrium thermodynamics this term is used for a relation that precisely specifies (not merely bounds) the strength of the fluctuations for a given linear dissipation [Nyq28, Gar85]. The reason why our relation is weaker is that we have not made the assumption that our system is governed by an evolution equation that will bring it to thermal equilibrium at any particular temperature (including zero temperature). Apart from the Markovianity requirement, we are considering completely general linear evolution. Our formalism can describe the situation of thermal equilibrium, but also a situation of coupling to baths at different temperatures, and even more general situations. Thus, just as the Schrödinger–Heisenberg uncertainty relation can provide only a lower bound on uncertainties in the system observables, our fluctuation–dissipation relation can provide only a lower bound on fluctuations in their evolution.
Stabilizability and controllability. The concepts of stabilizability and controllability for linear quantum systems can be brought over without change from the corresponding classical definitions in Section 6.4.1. However, the term controllability is also used in the context of control of Hamiltonian quantum systems [RSD+95], with a different meaning. To appreciate the relation, let us write the system Hamiltonian, including the control term, as
Ĥ = Ĥ₀ + Σ_j Ĥ_j u_j(t),    (6.232)
where
Ĥ₀ = (1/2)x̂⊤Gx̂,    (6.233)
Ĥ_j = −x̂⊤ΩBe_j,    (6.234)
where the e_j's are orthonormal vectors such that u(t) = Σ_j u_j(t)e_j. Note that j in this section is understood to range from 1 to [B], the number of columns of B. From these operators we can form the following quantities:
Ĥ_j = −(x̂⊤Ω)Be_j,    (6.235)
(i/ℏ)[Ĥ₀, Ĥ_j] = (x̂⊤Ω)ABe_j,    (6.236)
(i/ℏ)[Ĥ₀, (i/ℏ)[Ĥ₀, Ĥ_j]] = −(x̂⊤Ω)A²Be_j,    (6.237)
and so on. Here we have used A = ΩG as appropriate for Hamiltonian systems. The complete set of these operators, plus Ĥ₀, plus real linear combinations thereof, is known as the Lie algebra generated by the operators {Ĥ₀, Ĥ₁, Ĥ₂, . . ., Ĥ_[B]}. (See Box 6.2.) It is these operators (divided by ℏ and multiplied by the duration over which they act) which generate the Lie group of unitary operators which can act on the system. Note that there is
closure: ∀A, B ∈ G, A ∘ B ∈ G;
associativity: ∀A, B, C ∈ G, (A ∘ B) ∘ C = A ∘ (B ∘ C);
existence of an identity element I ∈ G: ∀A ∈ G, A ∘ I = I ∘ A = A;
existence of an inverse: ∀A ∈ G, ∃A⁻¹ ∈ G: A ∘ A⁻¹ = A⁻¹ ∘ A = I.
For example, the set of real numbers, with ∘ being addition, forms a group. Also, the set of positive real numbers, with ∘ being multiplication, forms a group. Both of these examples are Abelian groups; that is, ∀A, B ∈ G, A ∘ B = B ∘ A. An example of a non-Abelian group is the set of unitary matrices in some dimension d > 1, with ∘ being the usual matrix product.
In physics, it is very common to consider groups that are continuous, called Lie groups. A common example is a group of matrices (that may be real or complex):
G = {exp(iY): Y ∈ g}.    (6.238)
Here g is the Lie algebra for the Lie group G. This set is called a Lie algebra because (i) it forms a vector space with a concept of multiplication (in this case, the usual matrix multiplication) that is distributive and (ii) it is closed under a particular sort of binary operation called the Lie bracket (in this case, equal to i times the commutator [Y, Z] ≡ YZ − ZY). Closure means that
∀Y, Z ∈ g, i[Y, Z] ∈ g.    (6.239)
Given a set of generators {X_k}, the Lie algebra they generate can be defined recursively: let X₀ = {X_k} and X_{n+1} = X_n ∪ {i[Y, Z]: Y, Z ∈ X_n}. Then
g = span(X_∞),    (6.240)
where span(S) is the set of all real linear combinations of matrices in the set S. In many cases (for example when the X_k are finite-dimensional matrices) the recursive definition will cease to produce distinct sets after some finite number of iterations, so that X_∞ = X_N for some N.
no point in considering commutators containing more than one Ĥ_j (j ≠ 0) since they will be proportional to a constant.
Now, the criterion for controllability for a linear system is that the controllability matrix (6.66) has full row rank. By inspection, this is equivalent to the condition that, out of the 2N[B] Hilbert-space operators in the row-vector
(x̂⊤Ω)[B  AB  A²B  · · ·  A^{2N−1}B],    (6.241)
(See Section 6.5.1 for the definition of D(H).) The significance of this concept of controllability is that any unitary evolution U can be realized by some control vector u(t) over
some time interval [t0 , t1 ]. Hence, operator-controllability means that from an initial pure
state it is possible to prepare an arbitrary quantum state of the system.
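The underlying classical rank test is a one-line computation. A sketch for a single oscillator (N = 1) driven by one actuator coupled to the momentum (an assumed example):

```python
import numpy as np

def controllable(A, B):
    """Kalman rank test: [B, AB, A^2 B, ...] must have full row rank."""
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return bool(np.linalg.matrix_rank(np.hstack(blocks)) == n)

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
G = np.eye(2)                  # harmonic oscillator: H0 = (1/2) x^T G x
A = Omega @ G                  # Hamiltonian drift A = Omega G
B = np.array([[0.0], [1.0]])   # single force actuator acting on p

ok_osc    = controllable(A, B)                 # free evolution mixes q and p
ok_static = controllable(np.zeros((2, 2)), B)  # with A = 0 only p is reachable
print(ok_osc, ok_static)   # True False
```

The oscillator is controllable through a single force because the Hamiltonian drift rotates the actuated direction through the whole phase space; with no drift the same actuator reaches only one direction.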
(6.242)
This follows automatically from the assumption (6.220) that we have already made, provided that we use a Wiener-process unravelling to condition the system. Once again, however, quantum mechanics places restrictions on the matrix C and the correlations of the measurement noise dv_m(t).
We saw in Section 6.5.2 that the most general output of a quantum system with Wiener noise is a vector of complex currents Ĵ defined in Eq. (6.193). This can be turned into a real vector by defining
ŷ = T⁺ (Re[Ĵ]⊤, Im[Ĵ]⊤)⊤ = Cx̂ + dv̂_m/dt,    (6.243)
with
C = 2T⊤C̄/√ℏ.    (6.244)
Here T is any real matrix satisfying
TT⊤ = U.    (6.245)
The number of columns of T (also equal to the dimension of ŷ) is equal to the rank of U. The number of rows of C̄, [C̄] (also equal to twice the dimension of ĉ), is equal to the number of rows (or columns) of U. This guarantees that the matrix T exists.
In Eq. (6.243) we have defined
dv̂_m = T⁺ (Re[Ĵ]⊤, Im[Ĵ]⊤)⊤ dt − Cx̂ dt,    (6.246)
where (cf. Eq. (6.193))
Ĵ dt = dB̂_in H + dB̂†_in Υ + dÂ† √(H − Υ†H⁻¹Υ) + dV̂ √(H(I − H)) + dV̂† √(H⁻¹ − I) Υ.    (6.247)
The measurement noise operator dv̂_m(t) has the following correlations:
dv̂_m(t) dv̂_m⊤(t) = I dt,    (6.248)
Re[E dv̂_p(t) dv̂_m⊤(t)] = Γ⊤ dt,    (6.249)
Γ⊤ = √ℏ Ω C̄⊤S T.    (6.250)
is, as expected,
dw = dv̂_m + C(x̂ − x_c)dt.    (6.251)
These quantum Kalman-filter equations can also be derived from the quantum version of the Kushner–Stratonovich equation, Eq. (6.187). Indeed, by using the Wigner function to represent the quantum state, the evolution can be expressed precisely as the Kushner–Stratonovich equation (6.71), involving the matrices A, B, D, C and Γ.
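The covariance of the quantum Kalman filter obeys a Riccati equation identical in form to the classical one. A scalar sketch with assumed, illustrative coefficients, integrating the equation by Euler steps and checking convergence to the stabilizing root:

```python
import math

# Scalar Riccati equation: dVc/dt = 2*At*Vc + E2 - C2*Vc**2
At, E2, C2 = -0.5, 1.0, 2.0   # A-tilde, E*E^T and C^T*C (illustrative scalars)

# Stabilizing (positive) root of 2*At*W + E2 - C2*W**2 = 0
W = (At + math.sqrt(At ** 2 + E2 * C2)) / C2

Vc = 5.0          # deliberately poor initial variance
dt = 1e-4
for _ in range(200000):   # integrate to t = 20
    Vc += dt * (2 * At * Vc + E2 - C2 * Vc ** 2)
print(Vc, W)   # both ~ 0.5
```

Whatever the initial uncertainty, conditioning drives the variance monotonically to the stationary value W, which is the scalar analogue of the stabilizing solution discussed below.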
Fluctuation–observation relations. The above analysis shows that, even including measurement, there is a linear classical system with all the same properties as our linear quantum system. As in the case of the unconditioned dynamics, however, the structure of quantum mechanics constrains the possible conditional dynamics. In the unconditioned case this was expressed as a fluctuation–dissipation relation. That is, any dissipation puts lower bounds on the fluctuations. In the present case we can express the constraints as a fluctuation–observation relation. That is, for a quantum system, any information gained puts lower bounds on fluctuations in the conjugate variables, which are necessary in order to preserve the uncertainty relations.
Recall that the Riccati equation for the conditioned covariance matrix can be written as
V̇_c = ÃV_c + V_cÃ⊤ + ẼẼ⊤ − V_cC⊤CV_c,    (6.252)
where
Ã ≡ A − Γ⊤C,    (6.253)
ẼẼ⊤ ≡ D − Γ⊤Γ = ℏ Ω C̄⊤[I − SUS⊤]C̄ Ω⊤.    (6.254)
From Eq. (6.252) we see that ẼẼ⊤ always increases the uncertainty in the system state. This represents fluctuations. By contrast, the term V_cC⊤CV_c always decreases the uncertainty. This represents information gathering, or observation. The fluctuation–observation relation is expressed by the LMI
ẼẼ⊤ − (ℏ²/4) Ω⊤C⊤CΩ ≥ 0.    (6.255)
Exercise 6.28 Show this, by showing that the left-hand side evaluates to the following matrix, which is clearly PSD:
ℏ Ω⊤C̄⊤ ( I − H    0
          0       I − H ) C̄Ω.    (6.256)
The first thing to note about Eq. (6.255) is that it is quantum in origin. If ℏ were zero, there would be no lower bound on the fluctuations. The second thing to notice is that observation of one variable induces fluctuations in the conjugate variable. This follows from the presence of the matrix Ω that postmultiplies C in Eq. (6.255). It is most easily seen in the case of motion in one dimension (N = 1). Say we observe the position q, so
that
y dt = √γ ⟨q̂⟩_c dt + dw,    (6.257)
where here γ is a scalar expressing the measurement strength. Then Eq. (6.255) says that
ẼẼ⊤ − (ℏ²γ/4) ( 0  0
                0  1 ) ≥ 0.    (6.258)
That is, there is a lower bound of γ(ℏ/2)² on the spectral power of momentum fluctuations.
The third thing to note about Eq. (6.255) is that, since D = ẼẼ⊤ + Γ⊤Γ, our relation implies the weaker relation
D − (ℏ²/4) Ω⊤C⊤CΩ ≥ 0.    (6.259)
As well as being a necessary condition on E given C, Eq. (6.255) is also a sufficient condition.
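The relation (6.255) and its PSD witness (6.256) can be verified numerically for a concrete unravelling. The sketch below assumes the conventions used in this section (ℏ = 1, N = L = 1, and Υ = 0 so that U = diag(H, H)/2 is automatically valid); the matrices C, Γ⊤ and ẼẼ⊤ are built from an arbitrary complex C̃ and efficiency η.

```python
import numpy as np

hbar = 1.0
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic matrix, N = 1
S = np.array([[0.0, 1.0], [-1.0, 0.0]])       # the 2L x 2L block matrix S, L = 1

Ct = np.array([[0.7 + 0.3j, 0.2 - 0.5j]])     # C-tilde (L x 2N), arbitrary
Cbar = np.vstack([Ct.real, Ct.imag])          # stacked real and imaginary parts
eta = 0.6                                     # detection efficiency
H = eta * np.eye(1)
U = 0.5 * np.block([[H, 0 * H], [0 * H, H]])  # unravelling matrix with Upsilon = 0
T = np.sqrt(U)                                # TT^T = U (valid since U is diagonal)

D  = hbar * Omega @ Cbar.T @ Cbar @ Omega.T   # diffusion matrix, cf. Eq. (6.225)
C  = 2 * T.T @ Cbar / np.sqrt(hbar)           # cf. Eq. (6.244)
Gt = np.sqrt(hbar) * Omega @ Cbar.T @ S @ T   # Gamma^T, cf. Eq. (6.250)
EE = D - Gt @ Gt.T                            # E*E^T, cf. Eq. (6.254)

lhs = EE - (hbar ** 2 / 4) * Omega.T @ C.T @ C @ Omega                 # Eq. (6.255)
witness = hbar * Omega.T @ Cbar.T @ ((1 - eta) * np.eye(2)) @ Cbar @ Omega  # Eq. (6.256)
print(np.linalg.eigvalsh(lhs).min() >= -1e-12, np.allclose(lhs, witness))
```

The left-hand side of (6.255) coincides with the manifestly PSD matrix of (6.256), which vanishes at unit efficiency: with η = 1 the quantum filter saturates the fluctuation–observation bound.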
Detectability and observability. The definitions of detectability and observability for linear quantum systems replicate those for their classical counterparts (see Section 6.4.2).
However, there are some interesting points to make about the quantum case.
First, we can define the notion of potential detectability. By this we mean that, given the
unconditioned evolution described by A and D, there exists a matrix C such that (C, A) is
detectable. Classically this is always the case because C can be specified independently of A
and D, so this notion would be trivial, but quantum mechanically there are some evolutions
that are not potentially detectable; Hamiltonian evolution is the obvious example.
We can determine which unconditional evolutions are potentially detectable from A and D as follows. First note that, from Eq. (6.244), the existence of an unravelling U such that (C, A) is detectable is equivalent to (C̄, A) being detectable. Indeed, C ∝ C̄ results from the unravelling U = I/2, so a system is potentially detectable iff the U = I/2 unravelling is detectable. Now, (C̄, A) being detectable is equivalent to (C̄⊤C̄, A) being detectable. But, from Eq. (6.225), C̄⊤C̄ = Ω⊤DΩ/ℏ. Since the above arguments, mutatis mutandis, also apply for potential observability, we can state the following.
A quantum system is potentially detectable (observable) iff (Ω⊤DΩ, A) is detectable (observable).
The second interesting point to make in this section is that, for quantum systems, if (C, Ã) is detectable then (Ã, Ẽ) is stabilizable.³ Consider for simplicity the case of efficient detection, where H = I. Then the left-hand side of Eq. (6.255) is zero, and we can choose
Ẽ = (ℏ/2) ΩC⊤.    (6.260)
Moreover, Ã = ΩG̃, where
G̃ = G + C̄⊤ ( −Im[Υ]   Re[Υ]
              Re[Υ]    Im[Υ] ) C̄.    (6.261)
But Ω⁻¹ÃΩ = G̃Ω = −(ΩG̃)⊤ = −Ã⊤, while CΩ ∝ Ẽ⊤. Thus, by virtue of the detectable–stabilizable duality, we have (Ã, Ẽ) stabilizable. Now for inefficient detection the fluctuations in the system are greater, so (Ã, Ẽ) will also be stabilizable in this case.
³ Note that in Ref. [WD05] it was incorrectly stated that (A, Ẽ) was stabilizable, but the conclusion drawn there, discussed in Section 6.6.4, is still correct.
Third, we note that, as was the case for controllability, we can give a Lie-algebraic formulation for observability, at least for the non-dissipative case in which C̃ can be taken to be real, so that Im[C̃†C̃] = 0 and A = ΩG. From Section 6.4.2, observability for the linear system is equivalent to the matrix
[C⊤  A⊤C⊤  (A⊤)²C⊤  · · ·  (A⊤)^{2N−1}C⊤]    (6.262)
having full row rank. Now, for this non-dissipative case (so called because the drift evolution is that of a Hamiltonian system), the method of Section 6.6.2 can be applied to give the following new formulation.
A non-dissipative linear quantum system is observable iff the Lie algebra generated by {Ĥ₀, ô₁, ô₂, . . ., ô_[C]} includes a complete set of observables.
Here Ĥ₀ is as above, while ô_l = e_l⊤Cx̂.
This definition does not generalize naturally to other sorts of quantum systems in the way that the definition of controllability does. If Ĥ₀ and {ô_l} are arbitrary operators, then the above Lie algebra does not correspond to the operators the observer obtains information about as the system evolves conditionally. Moreover, unlike in the linear case, the observability of a general system can be enhanced by suitable application of the control Hamiltonians {Ĥ_j}. Indeed, Lloyd [Llo00] has defined observability for a general quantum system such that it is achievable iff it is operator-controllable (see Section 6.6.2) and the observer can make at least one nontrivial projective measurement. (His definition of observability is essentially that the observer can measure any observable, and he does not consider continuous monitoring.)
As in the classical case, the stationary conditioned covariance matrix W_U is determined by the algebraic Riccati equation
ÃW_U + W_UÃ⊤ + ẼẼ⊤ − W_UC⊤CW_U = 0,    (6.263)
and a solution is called stabilizing if the matrix
Ã − W_UC⊤C    (6.264)
is strictly stable. In the above we have introduced a subscript U to emphasize that the stationary conditioned covariance matrix depends upon the unravelling U, since all of the matrices Ã, Ẽ and C depend upon U. We call an unravelling stabilizing if Eq. (6.263) has a stabilizing solution.
As discussed in Section 6.4.3, a solution is stabilizing iff (C, Ã) (or (C, A)) is detectable and condition (6.99) is satisfied. As stated there, the second condition is satisfied if (Ã, Ẽ) is stabilizable. But, as we saw in the preceding section, in the quantum case, this follows automatically from the first condition. That is, quantum mechanically, the conditions for the existence of a stabilizing solution are weaker than classically. Detectability of (C, A) is all that we require to guarantee a stabilizing solution.
In the quantum case we can also apply the notion of potential detectability from the preceding section. It can be shown [WD05] that if the system is potentially detectable then the stabilizing unravellings form a dense subset⁴ of the set of all unravellings. Now, for detectable unravellings, the solutions to the algebraic MRE (6.263) are continuous in Ã, Ẽ and C [LR91]. But these matrices are continuous in U, and hence W_U is continuous in U. Thus, as long as (i) one restricts oneself to a compact set of W_Us (e.g. a set of bounded W_Us); (ii) one is interested only in continuous functions of W_U; and (iii) the system is potentially detectable, then one can safely assume that any such W_U is a stabilizing solution.
Possible conditional steady states. We showed in the classical case that the only restriction on the possible stationary conditioned covariance matrices that a system described by A and D can have is the LMI
AW_U + W_UA⊤ + D ≥ 0.    (6.265)
This is also a necessary condition in the quantum case, by exactly the same reasoning, although in this case we also have another necessary condition on the covariance matrix given by the uncertainty relation (6.207), which we repeat here:
W_U + iℏΩ/2 ≥ 0.    (6.266)
If W_U is the covariance matrix of a pure state, then Eq. (6.265) is also a sufficient condition for W_U to be a realizable stationary conditioned covariance matrix. This can
⁴ If a set A is a dense subset of a set B, then for every element of B there is an element of A that is arbitrarily close, by some natural metric.
be seen as follows. For a Gaussian state, purity is equivalent to the condition
[(2/ℏ)W_UΩ]² = −I.    (6.267)
Then, if Eq. (6.265) is satisfied, the system at an infinitesimally later time t + dt will be a mixture of states, all with covariance matrix W_U, and with Gaussian-distributed means, as explained in Section 6.4.3. We call such an ensemble a uniform Gaussian pure-state ensemble. Then, by virtue of the Schrödinger–HJW theorem, there will be some way of monitoring the environment (that part of the bath that has become entangled with the system in the interval [t, t + dt)) such that the system is randomly collapsed to one of the pure-state elements of this ensemble, with the appropriate Gaussian weighting. That is, a pure state with covariance matrix W_U can be reprepared by continuing the monitoring, and therefore W_U must be the stationary conditioned covariance matrix under some monitoring scheme.
Thus we have a necessary condition (Eqs. (6.265) and (6.266)) and a sufficient condition (Eqs. (6.265) and (6.267)) for W_U to be the steady-state covariance matrix of some monitoring scheme.⁵ If W_U is such a covariance matrix, then there is an unravelling matrix U that will generate the appropriate matrices Ã, Ẽ and C so that W_U is the solution of Eq. (6.263). Moreover, as argued above, as long as W_U is bounded and (C, Ã) is detectable, this W_U can be taken to be stabilizing.
To find an unravelling (which may be non-unique) generating W_U as a stationary conditioned covariance matrix, it is simply necessary to put the U-dependence explicitly in Eq. (6.263). This yields the LME for U:

R^⊤ U R = D + A W_U + W_U A^⊤,  (6.268)

where R^⊤ = 2C W_U/√ℏ + √ℏ S C Σ. This can be solved efficiently (that is, in a time polynomial in the size of the matrices).
in the size of the matrices). It does not matter whether this equation has a non-unique
solution U , because in steady state the conditional state and its dynamics will be the same
for all U satisfying Eq. (6.268) for a given WU . This can be seen explicitly as follows. The
shape of the conditioned state in the long-time limit is simply W_U. The stochastic dynamics of the mean ⟨x⟩_c is given by

d⟨x⟩_c = [A⟨x⟩_c + Bu(t)]dt + F dw,  (6.269)
which depends upon U only through the stochastic term. Recall that F^⊤ = C W_U + Γ, which depends on U through C and Γ as well as W_U. However, statistically, all that matters is the
⁵ In Ref. [WD05] it was incorrectly stated that Eqs. (6.265) and (6.266) form the necessary and sufficient conditions. However, this does not substantially affect the conclusions of that work, since other constraints ensure that the states under consideration will be pure, as will be explained later.
covariance of the noise in Eq. (6.269), which, from Eq. (6.263), is given by

F dw dw^⊤ F^⊤ = F F^⊤ dt  (6.270)
= (A W_U + W_U A^⊤ + D)dt,  (6.271)

which is the same for all U satisfying Eq. (6.268) for a given W_U.
As noted in the classical case, there exist states with covariance matrix W satisfying V_ss − W ≥ 0 and yet not satisfying Eq. (6.265). This is also true in the quantum case, even with the added restriction that W correspond to a pure state by satisfying (6.267). That is,
there exist uniform Gaussian pure-state ensembles that represent the stationary solution ρ_ss of the quantum master equation but cannot be realized by any unravelling. In saying that the uniform Gaussian ensemble represents ρ_ss we mean that
ρ_ss = ∫ d²ᴺ⟨x⟩_c ℘(⟨x⟩_c) ρ_{W,⟨x⟩_c},  (6.272)

where ρ_{W,⟨x⟩_c} has the Gaussian Wigner function W(x) = g(x; ⟨x⟩_c, W), and the Gaussian distribution of means is

℘(⟨x⟩_c) = g(⟨x⟩_c; 0, V_ss − W).  (6.273)
In saying that the ensemble cannot be realized we mean that there is no way an observer can monitor the output of the system so as to know that the system is in the state ρ_{W,⟨x⟩_c}, such that W remains fixed in time but ⟨x⟩_c varies so as to sample the Gaussian distribution (6.273) over time.
(6.273) over time. On the other hand, there are certainly some ensembles that satisfy both
Eq. (6.265) and Eq. (6.267), which thus are physically realizable (PR) in this sense. This
existence of some ensembles representing ss that are PR and some that are not is an
instance of the preferred-ensemble fact discussed in Section 3.8.2.
Example: on-threshold OPO. To illustrate this idea, consider motion in one dimension with a single output channel (N = L = 1), described by the master equation

ρ̇ = −i[(q̂p̂ + p̂q̂)/2, ρ] + D[(q̂ + ip̂)/√ℏ]ρ.  (6.274)
where the output arising from the second term may be monitored. This could be realized in
quantum optics as a damped cavity (a harmonic oscillator in the rotating frame) containing
an on-threshold parametric down-converter, also known as an optical parametric oscillator
(OPO). Here p would be the squeezed quadrature and q the anti-squeezed quadrature. The
monitoring of the output could be realized by techniques such as homodyne or heterodyne
detection.
Exercise 6.29 Show that in this case we have

G = [[0, 1], [1, 0]],  C = (1, i)/√ℏ,  (6.275)

and hence

A = [[0, 0], [0, −2]],  (6.276)
D = ℏ [[1, 0], [0, 1]].  (6.277)

Writing the conditioned covariance matrix as

W = (ℏ/2) [[α, γ], [γ, β]],  (6.278)

the LMIs (6.266) and (6.265) become, respectively,

[[α, γ + i], [γ − i, β]] ≥ 0,  (6.279)
[[2, −2γ], [−2γ, 2 − 4β]] ≥ 0.  (6.280)

The first of these implies α > 0, β > 0 and αβ ≥ 1 + γ². The second then implies β ≤ (1 − γ²)/2.
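These two conditions are easy to check numerically. The following sketch (my own code, not the book's; it assumes units with ℏ = 1, so that W = (1/2)[[α, γ], [γ, β]], with the drift A = diag(0, −2) and diffusion D = I of the on-threshold OPO) tests both LMIs for a pure Gaussian covariance matrix:

```python
import numpy as np

SIGMA = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic matrix
A = np.diag([0.0, -2.0])                      # OPO drift matrix
D = np.eye(2)                                 # OPO diffusion matrix (hbar = 1)

def is_valid_state(W):
    """LMI (6.266): W + i*Sigma/2 >= 0 (the uncertainty relation)."""
    M = W + 0.5j * SIGMA
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))

def is_physically_realizable(W):
    """LMI (6.265)/(6.280): A W + W A^T + D >= 0."""
    M = A @ W + W @ A.T + D
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))

def W_pure(beta, gamma):
    """Pure-state covariance: alpha*beta = 1 + gamma**2 saturates (6.279)."""
    alpha = (1 + gamma**2) / beta
    return 0.5 * np.array([[alpha, gamma], [gamma, beta]])
```

For example, W_pure(0.4, 0) obeys β ≤ (1 − γ²)/2 and passes both tests, while W_pure(0.8, 0) is a perfectly valid pure state that is nonetheless not physically realizable.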
In Fig. 6.7 we show four quantum states ρ_{W,⟨x⟩_c} that are pure (they saturate Eq. (6.279)) and satisfy V_ss − W ≥ 0. That is, they fit inside ρ_ss. However, one of them does not satisfy Eq. (6.280). We see the consequence of that when we show the mixed states that these four pure states evolve into after a short time t = 0.2 in Fig. 6.8. (We obtain this by analytically solving the moment equation (6.51), starting with ⟨x⟩ = 0 for simplicity.) This clearly shows that, for the initial state that fails to satisfy Eq. (6.280), the mixed state at time t can no longer be represented by a mixture of the original pure state with random displacements, because the original state does not fit inside the evolved state. The ensemble formed from these states is not physically realizable. We will see later, in Section 6.6.6, how this has consequences in quantum feedback control.
Fig. 6.7 Representation of states in phase space for the system described in the text. The horizontal and vertical axes are q and p, respectively, and the curves representing the states are one-standard-deviation contours of the Wigner function W(q, p). That is, they are curves defined by the parametric equation W(q, p) = e⁻¹ W(⟨q⟩, ⟨p⟩), where (⟨q⟩, ⟨p⟩) is the centroid of the state. The stationary unconditioned state has a p-variance of ℏ/4 and an unbounded q-variance. A short segment of the Wigner function of this state is represented by the lightly shaded region between the horizontal lines at p = ±√ℏ/2. The ellipses represent pure states, with area πℏ/2. They are possible conditioned states of the system, since they fit inside the stationary state. For states realizable by continuous monitoring of the system output, the centroid of the ellipses wanders stochastically in phase space, which is indicated in the diagram by the fact that the states are not centred at the origin. The state in the top-right corner is shaded differently from the others because it cannot be physically realized in this way, as Fig. 6.8 demonstrates.
One new feature that arises in the quantum case is the following. Classically, for a
minimally disturbing measurement, the stronger the measurement, the better the control.
Consider the steady-state case for simplicity. If we say that C = √γ C₁, with C₁ fixed and γ characterizing the measurement strength, then the stationary conditioned covariance matrix W is given by the ARE

A W + W A^⊤ + D = γ W C₁^⊤ C₁ W.  (6.281)

The diffusion matrix itself depends upon the measurement strength,

D = D₀ + γ D₁,  (6.282)

where D₀ describes noise unrelated to the measurement and D₁ the measurement back-action.
Fig. 6.8 Representation of the four pure states from Fig. 6.7, plus the mixed states they evolve into after a time t = 0.2. For ease of comparison we have centred each of these states at the origin, and we have omitted the shading for the stationary state, but the other details are the same as for the preceding figure. Note that, apart from the top-right state, the initial states (heavy shading) all fit inside the evolved states (light shading). Hence they are all physically realizable (PR). The top-right initial state does not fit inside its evolved state, and so is not PR; the parts of it that do not fit inside the evolved state (light shading) are unshaded, while the part that does fit inside appears with medium shading, as in Fig. 6.7. The four initial states that appear here are defined as follows. Top-left: the state with minimum q-variance that fits inside the stationary state. Bottom-left: the state arising from the U = I/2 unravelling. Top-right: the state with minimum ⟨(q̂ − p̂)²⟩ that fits inside the stationary state. Bottom-right: the state with minimum ⟨(q̂ − p̂)²⟩ that is PR.
With D = D₀ + γD₁ as in Eq. (6.282), the ARE becomes the algebraic MRE

A W + W A^⊤ + D₀ + γ D₁ = γ W C₁^⊤ C₁ W.  (6.283)

Here, the eigenvalues of W are not monotonically decreasing with γ, and neither (in general) is E_ss[h]. The cost may actually monotonically increase with γ, or there may be some optimum γ that minimizes E_ss[h].
Example: the harmonic oscillator. We can illustrate the above idea, as well as other concepts in LQG control, using the example of the harmonic oscillator with position measurement, controlled by a spatially invariant (but time-varying) force. This was considered by Doherty and Jacobs [DJ99], who also discussed a physical realization of this system in cavity QED. We do not assume that the oscillator frequency ω is much larger than the measurement rate γ, so it is not appropriate to work in the interaction frame. Indeed, we take the oscillator frequency to be unity, and for convenience we will also take the particle mass to be unity.
We can model this by choosing

G = [[1, 0], [0, 1]],  C = √(4γ/ℏ) (1, 0),  U = [[1, 0], [0, 0]].  (6.284)

That is, C = √γ C₁, where

C₁ = (2, 0)/√ℏ.  (6.285)
In the above we have assumed that D₀ = 0 (that is, that there are no noise sources apart from the measurement back-action). This allows a simple solution to the algebraic MRE (6.283):

W = (ℏ/(4γ)) [[√(2χ), χ], [χ, (1 + χ)√(2χ)]],  (6.286)

where χ ≡ √(1 + 4γ²) − 1.
Exercise 6.32 Show that, as well as solving Eq. (6.283), this W saturates the LMI (6.266),
and hence corresponds to a pure state.
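As a numerical companion to Exercise 6.32, the sketch below (my own code, under the matrices reconstructed above, with γ the measurement strength) verifies that W solves the MRE and describes a pure state:

```python
import numpy as np

hbar, gam = 1.0, 0.7                 # units with hbar = 1; gam is gamma
chi = np.sqrt(1 + 4 * gam**2) - 1

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.sqrt(4 * gam / hbar) * np.array([[1.0, 0.0]])   # C = sqrt(gam) * C1
D = gam * hbar * np.diag([0.0, 1.0])                   # back-action only (D0 = 0)

# Stationary conditioned covariance, Eq. (6.286)
W = (hbar / (4 * gam)) * np.array(
    [[np.sqrt(2 * chi), chi], [chi, (1 + chi) * np.sqrt(2 * chi)]]
)

residual = A @ W + W @ A.T + D - W @ C.T @ C @ W   # MRE residual: should vanish
purity = np.linalg.det(W) - (hbar / 2) ** 2        # zero iff the state is pure
```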
When γ ≪ 1, the measurement of position is slow compared with the oscillation of the particle. In this limit, χ ≈ 2γ² and W ≈ (ℏ/2)I. That is, the conditioned state is a coherent state of the oscillator, and the conditioned variance in position is ℏ/2. In physical units, this (the standard quantum limit for the position variance) is ℏ/(2mω).
Now consider feedback control of the oscillator, for the purpose of minimizing the energy in steady state. That is, we choose the cost function P = I, and (from the control constraint mentioned above) B = (0, 1)^⊤, so that Q is just a scalar. Then Eq. (6.117) for Y becomes

[[0, −1], [1, 0]] Y + Y [[0, 1], [−1, 0]] + [[1, 0], [0, 1]] = Y [[0, 0], [0, 1/Q]] Y.  (6.287)
Exercise 6.33 Show that, for Q ≪ 1, this ARE has the approximate solution

Y ≈ [[1 + √Q, √Q], [√Q, √Q]].  (6.288)

The optimal feedback gain is then

L = −Q⁻¹ B^⊤ Y ≈ −(1, 1)/√Q,  (6.289)

with B = (0, 1)^⊤. Thus the optimal feedback, which adds to the equations of motion the term

(d/dt)(⟨q⟩, ⟨p⟩)^⊤|_fb = −(1/√Q)(0, ⟨q⟩ + ⟨p⟩)^⊤,  (6.290)

is asymptotically stable.
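The claim in Exercise 6.33 can be checked numerically. The sketch below (my own code, assuming A = [[0, 1], [−1, 0]], P = I and B = (0, 1)^⊤ as above) solves the three scalar equations of the ARE exactly and compares them with the Q ≪ 1 approximation:

```python
import numpy as np

def solve_Y(Q):
    """Exact ARE solution via its three scalar component equations."""
    y2 = Q * (np.sqrt(1 + 1 / Q) - 1)   # from  -2*y2 + 1 = y2**2 / Q
    y3 = np.sqrt(Q * (1 + 2 * y2))      # from   2*y2 + 1 = y3**2 / Q
    y1 = y3 * (1 + y2 / Q)              # from   y1 - y3 = y2*y3 / Q
    return np.array([[y1, y2], [y2, y3]])

def are_residual(Y, Q):
    """Residual of A^T Y + Y A + P - Y B Q^{-1} B^T Y."""
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    E = np.array([[0.0, 0.0], [0.0, 1.0 / Q]])   # B Q^{-1} B^T
    return A.T @ Y + Y @ A + np.eye(2) - Y @ E @ Y

Q = 1e-6
Y = solve_Y(Q)
Y_approx = np.array([[1 + np.sqrt(Q), np.sqrt(Q)], [np.sqrt(Q), np.sqrt(Q)]])
```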
Now for this problem we have

F^⊤ = C W = √(ℏ/(4γ)) (√(2χ), χ).  (6.291)
Therefore, under the optimal feedback, the approximate (for Q ≪ 1) equation for the unconditioned variance (6.124) is

[[0, 1], [−1 − Q^(−1/2), −Q^(−1/2)]] (V_ss − W) + m.t. = −(ℏ/(4γ)) [[2χ, χ√(2χ)], [χ√(2χ), χ²]].  (6.292)
In order to counter the largeness of Q^(−1/2), we must have

V_ss − W = (ℏχ/(4γ)) [[1, −1], [−1, 1]] + O(Q^(1/2)),  (6.293)

so that

V_ss = (ℏ/(4γ)) [[χ + √(2χ), 0], [0, χ + (1 + χ)√(2χ)]] + O(Q^(1/2)).  (6.294)
Note that, even though we have set the control cost Q to zero, V_ss does not approach W. The classical fluctuations (6.293) are of the same order (ℏ) as the quantum noise W. This is because the control constraint, that B = (0, 1)^⊤, means that the system is not pacifiable. This follows from Eq. (6.127), since rank[B] is one, but rank[B F] is two.
Under this optimal feedback control, the integrand in the cost function (6.126) evaluates to

E_ss[h] = tr[Y B Q⁻¹ B^⊤ Y W] + tr[Y D]  (6.295)
= (ℏ/γ)[√(χ/2)(1 + χ/2) + χ/2] + O(Q^(1/2)).  (6.296)
Consider again a harmonic oscillator with position measurement, now with a control Hamiltonian u(t)p̂, so that the conditioned state obeys the SME

dρ_c = −i[(q̂² + p̂²)/2 + u(t)p̂, ρ_c]dt + (γ/ℏ)dt D[q̂]ρ_c + √(γ/ℏ) dw(t) H[q̂]ρ_c.  (6.297)
On moving to the interaction frame with respect to Ĥ₀ = (q̂² + p̂²)/2 we have

dρ_c = −i[u(t)(p̂ cos t − q̂ sin t), ρ_c]dt + (γ/ℏ)dt D[q̂ cos t + p̂ sin t]ρ_c + √(γ/ℏ) dw(t) H[q̂ cos t + p̂ sin t]ρ_c.  (6.298)

Under the secular approximation, D[q̂ cos t + p̂ sin t] ≈ ½D[q̂] + ½D[p̂]. Recall from Section 6.4.5 that we cannot average oscillating terms that multiply dw(t). Rather, we must consider the average of the correlation functions of dw₁(t) ≡ √2 dw(t)cos t and dw₂(t) ≡ √2 dw(t)sin t, namely dw_i(t)dw_j(t) = δ_ij dt. Similarly, we cannot assume that u(t) is slowly varying and average over oscillating terms that multiply u(t). Instead we should define u₁(t) = u(t)cos t and u₂(t) = u(t)sin t (and we expect that these will have slowly varying parts). Thus we obtain the approximate SME
dρ_c = −i[u₁(t)p̂ − u₂(t)q̂, ρ_c]dt + (γ/(2ℏ))dt(D[q̂] + D[p̂])ρ_c + √(γ/(2ℏ)) dw₁ H[q̂]ρ_c + √(γ/(2ℏ)) dw₂ H[p̂]ρ_c.  (6.299)
Exercise 6.35 Show that for this system we have C = √(2γ/ℏ) I, A = 0, D = (γℏ/2)I and B = I. Thus verify that W = (ℏ/2)I and that the system is pacifiable.
covariance matrix W_U will be positive definite. Thus the control cost associated with the system will always be non-zero, and will depend upon the unravelling U.

Consider an asymptotic LQG problem. Then the cost to be minimized (by choice of unravelling) is

m = E_ss[h] = tr[Y B Q⁻¹ B^⊤ Y W_U] + tr[Y D],  (6.300)

subject to the necessary condition (6.266) for W_U to be physically realizable, which we repeat here:

W_U + iℏΣ/2 ≥ 0.  (6.301)

Recall also from Section 6.6.4 that there is a sufficient condition on a pure-state W_U for it to be physically realizable, namely that it satisfy the second LMI

A W_U + W_U A^⊤ + D ≥ 0.  (6.302)
Now the problem of minimizing a linear function (6.300) of a matrix (here WU ) subject to
the restriction of one or more LMIs for that matrix is a well-known mathematical problem.
Significantly, it can be solved numerically using the efficient technique of semi-definite
programming [VB96]. This is a generalization of linear programming and a specialization
of convex optimization. Note that here efficient means that the execution time for the
semi-definite program scales polynomially in the system size n. As pointed out earlier, an
unravelling U that gives any particular permissible WU can also be found efficiently by
solving the linear matrix equation (6.268).
Example: on-threshold OPO. We now illustrate this with an example. Consider the system
described in Section 6.6.4, a damped harmonic oscillator at threshold subject to dyne
detection (such as homodyne or heterodyne). Since optimal performance will always be
obtained for efficient detection, such detection is parameterized by the complex number υ, such that |υ| ≤ 1, with the unravelling matrix given by

U = (1/2) [[1 + Re υ, Im υ], [Im υ, 1 − Re υ]].  (6.303)

Homodyne detection of the cavity output corresponds to υ = e^(2iθ), with θ the phase of the measured quadrature,

x̂_θ = q̂ cos θ + p̂ sin θ.  (6.304)

That is, θ = 0 corresponds to obtaining information only about q, while θ = π/2 corresponds to obtaining information only about p. In heterodyne detection information about both quadratures is obtained equally, and υ = 0 so that U = I/2.
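The dyne unravelling matrix is easy to tabulate. The sketch below (my own helper; the dyne parameter is called `upsilon`) reproduces the homodyne and heterodyne special cases:

```python
import numpy as np

def U_dyne(upsilon):
    """Unravelling matrix for efficient dyne detection, Eq. (6.303)."""
    u = complex(upsilon)
    assert abs(u) <= 1 + 1e-12, "dyne parameter must satisfy |upsilon| <= 1"
    return 0.5 * np.array([[1 + u.real, u.imag], [u.imag, 1 - u.real]])

U_hom_q = U_dyne(np.exp(2j * 0.0))   # homodyne, theta = 0: q-information only
U_het = U_dyne(0.0)                  # heterodyne: both quadratures equally
```

For θ = π/2 (so υ = e^{iπ} = −1) the same helper gives U = diag(0, 1), i.e. information about p only.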
Now let us say that the aim of the feedback control is to produce a stationary state where q̂ = p̂ as nearly as possible. (There is no motivation behind this aim other than to illustrate the technique.) The quadratic cost function to be minimized is thus ⟨(q̂ − p̂)²⟩_ss. That is,

P = [[1, −1], [−1, 1]].  (6.305)
In this optical example it is simple to displace the system in its phase space by application
of a coherent driving field. That is, we are justified in taking B to be full row rank, so that
the system will be pacifiable.
Any quadratic cost function will be minimized for a pure state, so we may assume that
Eq. (6.301) is saturated, with αβ = 1 + γ². Ignoring any control costs, we have Q → 0. Thus, from Eq. (6.129), the minimum cost m achievable by optimal control is
m = E_ss[h] = tr[P W_U].  (6.306)
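In this two-parameter example a brute-force grid search can stand in for the semi-definite program. The sketch below (my own code, with ℏ = 1; `b` and `g` play the roles of β and γ in the pure-state parameterization above, subject to the PR constraint b ≤ (1 − g²)/2) minimizes tr[PW]:

```python
import numpy as np

P = np.array([[1.0, -1.0], [-1.0, 1.0]])   # cost matrix for <(q - p)^2>

def cost(b, g):
    """tr[P W] for the pure state W = (1/2)[[(1+g^2)/b, g], [g, b]]."""
    W = 0.5 * np.array([[(1 + g**2) / b, g], [g, b]])
    return float(np.trace(P @ W))

# Grid search over the physically realizable pure states
best = min(
    (cost(b, g), b, g)
    for g in np.linspace(-0.9, 0.9, 181)
    for b in np.linspace(0.01, 0.49, 97)
    if b <= (1 - g**2) / 2
)
m_min = best[0]
```

On this grid the optimum lies on the boundary b = (1 − g²)/2, strictly below the heterodyne value of 1.25 obtained at g = 0.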
The feedback is effected via a Hamiltonian of the form

Ĥ_fb(t) = ℏ f̂^⊤ y(t),  (6.308)

where

f̂^⊤ = x̂^⊤ Σ B L/ℏ.  (6.309)
Generalizing the analysis of Section 5.5, the ensemble-average evolution including the
feedback is described by the master equation (6.310). Remember that the matrix T is defined such that T T† = U. Equation (6.310) is not
limited to linear systems. That is, it is valid for any ĉ with ĉ_l ∈ L(H), any Ĥ ∈ D(H), any f̂ with f̂_l ∈ D(H) and any U ∈ U given by Eq. (6.190).
Exercise 6.36 Referring back to Section 5.5, convince yourself of the correctness of
Eq. (6.310) and show that it is of the Lindblad form.
For linear systems, the master equation (6.310) can be turned into an OUE for the Wigner
function, as could be done for the original master equation as explained in Section 6.6.2.
However, just as for the original evolution (with no feedback), it is easier to calculate the
evolution of x in the Heisenberg picture, including the feedback Hamiltonian (6.308). The
result is precisely Eq. (6.141), with hats placed on the variables. Thus the classical results
for Markovian feedback all hold for the quantum case.
Under the conditions stated at the beginning of this section, it is thus clear that the
optimal measurement sensitivity (if it exists) and the optimal unravelling are the same
for Markovian feedback as for state-based feedback. The optimal unravelling is found by
solving the semi-definite program of minimizing
m = Ess [h] = tr[P WU ]
(6.311)
subject to the LMIs (6.302) and (6.301). Recall that the feedback-modified drift matrix is

M = A + BLC = A − W_U C^⊤C − Γ^⊤C.  (6.312)
If the system variables correspond to momenta and positions of particles, then it is easy to imagine implementing a time-dependent potential linear in the qs (i.e. a time-dependent but space-invariant force), but not so a time-dependent Hamiltonian term linear in the ps. In such circumstances state-based feedback may be strictly superior to Markovian feedback.
This can be illustrated by the harmonic oscillator with position measurement, as considered in Section 6.6.5. Say B = (0, 1)^⊤, describing the situation in which only a position-dependent potential can be controlled. Taking m = ω = 1 as before, the feedback-modified drift matrix is

M = A + BLC  (6.315)
= [[0, 1], [−1, 0]] + [[0], [1]] LC  (6.316)
= [[0, 1], [−1 + LC, 0]].  (6.317)

Thus the only effect the feedback can have in this situation is to modify the frequency of the oscillator from unity to √(1 − LC). It cannot damp the motion of the particle at all.
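This can be confirmed numerically: whatever the feedback gain, the closed-loop eigenvalues stay on the imaginary axis (a sketch of mine):

```python
import numpy as np

def max_real_part(LC_values):
    """Largest |Re(eigenvalue)| of M = [[0, 1], [-1 + LC, 0]] over a gain sweep."""
    worst = 0.0
    for LC in LC_values:
        M = np.array([[0.0, 1.0], [-1.0 + LC, 0.0]])
        worst = max(worst, np.abs(np.linalg.eigvals(M).real).max())
    return worst

# For LC < 1 the eigenvalues are +/- i*sqrt(1 - LC): a frequency shift, no damping.
worst = max_real_part(np.linspace(-2.0, 0.9, 30))
```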
How do we reconcile this analysis with the experimental result, discussed in Section 5.8.2, demonstrating cooling of an ion using Markovian feedback? The answer lies in the secular approximation, as used in Section 6.6.5 for this sort of system. The rapid (1 MHz) oscillation of the ion means that the signal in the measured current y(t) also has rapid sinusoidal oscillations. In the experiment the current was filtered through a narrow (B = 30 kHz) band-pass filter centred at the ion's oscillation frequency. This gives rise to two currents: the cosine and the sine components of the original y(t). The innovations in these currents correspond exactly to the two noise terms dw₁ and dw₂ in the SME (6.299) under the secular approximation. As shown in that section, the system in the secular approximation is pacifiable. Moreover, because the bandwidth B was much greater than the characteristic relaxation rate of the ion (400 Hz) it is natural (in this rotating frame) to regard these current components as raw currents y₁(t) and y₂(t) that can be fed back directly, implementing a Markovian feedback algorithm. Thus we see that the limitations of Markovian feedback can sometimes be overcome if one is prepared to be lenient in one's definition of the term 'Markovian'.
This was the case for target states on the equator of the Bloch sphere, for which the Markovian feedback algorithm produced
a completely mixed state in steady state. This deficiency can be overcome using state-based
feedback [WMW02]. Moreover, it was proven rigorously (i.e. without reliance on numerical
evidence from stochastic simulations) that state-based feedback is superior to Markovian
feedback in the presence of imperfections such as inefficient detection or dephasing.
A final application of state-based control is in deterministic Dicke-state preparation. As
discussed in Section 5.7, Markovian feedback can (in principle) achieve deterministic spin-squeezing close to the Heisenberg limit. This is so despite the fact that the approximations
behind the feedback algorithm [TMW02b], which are based on linearizing about the mean
spin vector and treating the two orthogonal spin components as continuous variables, break
down in the Heisenberg limit. The breakdown is most extreme when the state collapses
to an eigenstate of Jz (a Dicke state) with eigenvalue zero. This can be visualized as the
equatorial ring around the spin-J Bloch sphere of Fig. 5.4, for which the spin vector has
zero mean. Without feedback, the QND measurement alone will eventually collapse the
state into a Dicke state, but one that can be neither predicted nor controlled. However,
Stockton et al. show using stochastic simulations that state-based feedback does allow the
deterministic production of a Jz = 0 Dicke state in the long-time limit [SvHM04].
Applications of state-based quantum feedback control in quantum information will be
considered in Chapter 7.
The risk-sensitive cost is defined in terms of

R(t) = exp[μ Ĉ(t)].  (6.318)

Here Ĉ(t) = ∫₀ᵗ ĥ(s)ds, while μ > 0 is a risk parameter. In the limit μ → 0, [R(T) − 1]/μ → Ĉ(T), so the problem reduces to the usual (risk-neutral) sort of control problem.
A useful and elegant example of risk-sensitive control is LEQG [Whi81]. This is akin to
the LQG control discussed above (an example of risk-neutral control), in that it involves
linear dynamics and Gaussian noise. But, rather than having a cost function that is the
expectation of a time-integral of a quadratic function of system and control variables, it has a
cost function that is the exponential of a time-integral of a quadratic function. This fits easily in James' formalism, on choosing ĥ(s) to be a quadratic function of system observables and control variables (which are also observables in the quantum Langevin treatment [Jam05]).
Just as for the LQG case, many results from classical LEQG theory follow over to quantum
LEQG theory [Yam06]. This sort of risk-sensitive control is particularly useful because
the linear dynamics (in either LQG or LEQG) is typically an approximation to the true
dynamics. Because risk-sensitive control avoids large excursions, it can ensure that the
system does not leave the regime where linearization is a good approximation. That is, the
risk-sensitive nature of the control helps ensure its validity.
A different approach to dealing with uncertainties in the dynamics of systems is the robust
estimator approach adopted by Yamamoto [Yam06]. Consider quantum LQG control, but
with bounded uncertainties in the matrices A and C. Yamamoto finds a non-optimal linear
filter such that the mean square of the estimation error is guaranteed to be within a certain
bound. He then shows by example that linear feedback based on this robust observer results
in stable behaviour in situations in which both standard (risk-neutral) LQG and (risk-sensitive) LEQG become unstable. Yet another approach to uncertainties in dynamical parameters is to describe them using a probability distribution. One's knowledge of these parameters is then updated simultaneously, and in conjunction, with one's knowledge of
the system. The interplay between knowledge about the system and knowledge about its
dynamics leads to a surprising range of behaviour under different unravellings. This is
investigated for a simple quantum system (resonance fluorescence with an uncertain Rabi
frequency) in Ref. [GW01].
7
Applications to quantum information processing
7.1 Introduction
Any technology that functions at the quantum level must face the issues of measurement
and control. We have good reasons to believe that quantum physics enables communication
and computation tasks that are either impossible or intractable in a classical world [NC00].
The security of widely used classical cryptographic systems relies upon the difficulty of
certain computational tasks, such as breaking large semi-prime numbers into their two
prime factors in the case of RSA encryption. By contrast, quantum cryptography can be
absolutely secure, and is already a commercial reality. At the same time, the prospect of a
quantum computer vastly faster than any classical computer at certain tasks is driving an
international research programme to implement quantum information processing. Shor's
factoring algorithm would enable a quantum computer to find factors exponentially faster
than any known algorithm for classical computers, making classical encryption insecure.
In this chapter, we investigate how issues of measurement and control arise in this most
challenging quantum technology of all, quantum computation.
The subjects of information theory and computational theory at first sight appear to belong
to mathematics rather than physics. For example, communication was thought to have been
captured by Shannon's abstract theory of information [SW49, Sha49]. However, physics
must impact on such fundamental concepts once we acknowledge the fact that information
requires a physical medium to support it. This is a rather obvious point; so obvious, in
fact, that it was only recently realized that the conventional mathematics of information
and computation are based on an implicit classical intuition about the physical world. This
intuition unnecessarily constrains our view of what tasks are tractable or even possible.
Shannon's theory of information and communication was thoroughly grounded in classical physics. He assumed that the fundamental unit of information is a classical bit, which
is definitely either in state zero or in state one, and that the process of sending bits
through channels could be described in an entirely classical way. This focus on the classical
had important practical implications. For example, in 1949 Shannon used his formulation of
information theory to prove [Sha49] that it is impossible for two parties to communicate
with perfect privacy, unless they have pre-shared a random key as long as the message they
wish to communicate.
Insofar as Shannon's theory is concerned, any physical quantity that can take one of two
distinct values can support a bit. One physical instantiation of a bit is as good as any other; we might say that bits are fungible. Clearly, bits can exist in a quantum world. There are
many quantum systems that are adequate to the task: spin of a nucleus, polarization of a
photon, any two stationary states of an atom etc., but, as the reader well knows, there is a
big difference between a classical bit and a two-level quantum system: the latter can be in
an arbitrary superposition of its two levels.
One might think that such a superposition is not so different from a classical bit in a
mixture, describing a lack of certainty as to whether it is in state zero or one, but actually the
situations are quite different. The entropy of the classical state corresponding to an uncertain
bit value is non-zero, whereas the entropy of a pure quantum superposition state is zero. To
capture this difference, Schumacher coined the term qubit for a quantum bit [Sch95]. Like
bits, qubits are fungible and we can develop quantum information theory without referring
to any particular physical implementation. This theory seeks to establish abstract principles
for communication and computational tasks when information is encoded in qubits. For a
thorough introduction to this subject we refer the reader to the book by Nielsen and Chuang
[NC00].
It will help in what follows to state a few definitions. In writing the state of a qubit,
we typically use some preferred orthonormal basis, which, as in Chapter 1, we denote {|0⟩, |1⟩} and call the logical basis or computational basis. The qubit Hilbert space could
be the entire Hilbert space of the system or just a two-dimensional subspace of the total
Hilbert space. In physical terms, the logical basis is determined by criteria such as ease
of preparation, ease of measurement and isolation from sources of decoherence (as in
the pointer basis of Section 3.7). For example, if the qubit is represented by a spin of a
spin-half particle in a static magnetic field, it is convenient to regard the computational
basis as the eigenstates of the component of spin in the direction of the field, since the
spin-up state can be prepared to a good approximation by allowing the system to come to
thermal equilibrium in a large enough magnetic field. If the physical system is a mesoscopic
superconducting system (see Section 3.10.2), the computational basis could be two distinct
charge states on a superconducting island, or two distinct phase states, or some basis in
between these. A charge qubit is very difficult to isolate from the environment and thus it
may be preferable to use the phase basis. On the other hand, single electronics can make
the measurement of charge particularly easy. In all of these cases the qubit Hilbert space
is only a two-dimensional subspace of an infinite-dimensional Hilbert space describing the
superconducting system.
Once the logical basis has been fixed, we can specify three Pauli operators, X, Y and Z,
by their action on the logical states |z⟩, z ∈ {0, 1}:

Z|z⟩ = (−1)^z |z⟩,  (7.1)
Y|z⟩ = i(−1)^z |1 − z⟩,  (7.2)
X|z⟩ = |1 − z⟩.  (7.3)
Here, we are following the convention common in the field of quantum information [NC00].
Note the different notation from what we have used previously (see Box 3.1) of σ̂_x, σ̂_y and σ̂_z. In particular, here we do not put hats on X, Y and Z, even though they are operators.
When in this chapter we do use X and Y , these indicate operators with continuous spectra,
as in earlier chapters. Another convention is to omit the tensor product between Pauli
operators. Thus, for a two-qubit system, ZX means Z X. Note that the square of any
Pauli operator is unity, which we denote I .
This chapter is structured as follows. Section 7.2 introduces a widely used primitive of
quantum information processing: teleportation of a qubit. This involves discrete (in time)
measurement and feedforward. In Section 7.3, we consider the analogous protocol for
variables with continuous spectra. In Section 7.4, we introduce the basic ideas of quantum
errors, and how to protect against them by quantum encoding and error correction. In
Section 7.5 we relate error correction to the quantum feedback control of Chapter 5 by
considering continuously detected errors. In Section 7.6 we consider the conventional error
model (i.e. undetected errors), but formulate the error correction as a control problem with
continuous measurement and Hamiltonian feedback. In Section 7.7 we consider the same
problem (continuous error correction) but without an explicit measurement step; that is,
we treat the measurement and control apparatus as a physical system composed of a small
number of qubits. In Section 7.8 we turn to quantum computing, and show that discrete
measurement and control techniques can be used to engineer quantum logic gates in an
optical system where the carriers of the quantum information (photons) do not interact.
In Section 7.9, we show that this idea, called linear optical quantum computation, can
be augmented using techniques from continuous measurement and control. In particular,
adaptive phase measurements allow one to create, and perform quantum logic operations
upon, qubits comprising arbitrary superpositions of zero and one photon. We conclude as
usual with suggestions for further reading.
7.2 Quantum teleportation of a qubit
We begin with one of the protocols that set the ball rolling in quantum information: quantum
teleportation of a qubit [BBC+ 93]. This task explicitly involves both quantum measurement
and control. It also requires an entangled state, which is shared by two parties, the sender
and the receiver. The sender, Alice, using only classical communication, must send an
unknown qubit state to a distant receiver, Bob. She can do this in such a way that neither
of them learns anything about the state of the qubit. The protocol is called teleportation
because the overall result is that the qubit is transferred from Alice to Bob even though there
is no physical transportation of any quantum system from Alice to Bob. It is illustrated in
Fig. 7.1 by a quantum circuit diagram, the first of many in this chapter.
7.2.1 The protocol
The key resource (which is consumed) in this quantum teleportation protocol is the bipartite
entangled state. Alice and Bob initially each have one qubit of a two-qubit maximally
Fig. 7.1 A quantum circuit diagram for quantum teleportation of an arbitrary qubit state |ψ⟩_C from Alice to Bob, using an entangled Bell state |Φ⟩ shared by Alice and Bob. The single lines represent quantum information in qubits, with time increasing from left to right. The two boxes containing dials represent a measurement of the operator contained within (ZZ and XX, respectively), with possible outcomes ±1. The double lines represent classical bits: the outcomes of the measurements and the controls which implement (or not) the quantum gates X and Z, respectively. For details see the text.
entangled state,

|Φ⟩ = (|0⟩_A|0⟩_B + |1⟩_A|1⟩_B)/√2.  (7.4)
This is often known as a Bell state, because of the important role such states play in Bell's
theorem [Bel64] (see Section 1.2.1). In addition, Alice has in her possession another qubit,
which we will refer to as the client, prepared in an arbitrary state (it could even be entangled
with other systems). For ease of presentation, we will assume that the client qubit is in a
pure state
|ψ⟩_C = α|0⟩_C + β|1⟩_C.  (7.5)
This state is unknown to Alice and Bob; it is known only to the client who has entrusted it
to Alice for delivery to Bob. The total state of the three systems is then
|Ψ⟩ = (1/√2)(α|0⟩_C + β|1⟩_C)(|0⟩_A|0⟩_B + |1⟩_A|1⟩_B).  (7.6)
At this stage of the protocol, Alice has at her location two qubits, the client qubit, in an
unknown (to her) state, and one of an entangled pair of qubits. The other entangled qubit is
held at a distant location by Bob.
The next stage requires Alice to measure two physical quantities, represented by commuting
operators, on her two qubits. These quantities are joint properties of her two qubits,
with operators Z_A Z_C and X_A X_C.
345
Exercise 7.1 Show that these operators commute, that they both have eigenvalues ±1 and
that the simultaneous eigenstates are

|+;+⟩ = (|00⟩ + |11⟩)/√2,    (7.7)
|+;−⟩ = (|00⟩ − |11⟩)/√2,    (7.8)
|−;+⟩ = (|01⟩ + |10⟩)/√2,    (7.9)
|−;−⟩ = (|01⟩ − |10⟩)/√2.    (7.10)

Here the first label refers to the eigenvalue for ZZ and the second label to the
eigenvalue of XX, and the order of the qubits is AC as above.
This is known as a Bell measurement, because the above eigenstates are Bell states.
On rewriting the state of the three qubits, Eq. (7.6), in terms of these eigenstates for
qubits A and C, we find
|Ψ⟩ = (1/2)[|+;+⟩(α|0⟩_B + β|1⟩_B) + |+;−⟩(α|0⟩_B − β|1⟩_B)
      + |−;+⟩(α|1⟩_B + β|0⟩_B) + |−;−⟩(β|0⟩_B − α|1⟩_B)].    (7.11)
Exercise 7.3 Suppose the client state is itself entangled with another system, Q. Convince
yourself that, after teleportation, this will result in Bob's qubit being entangled in the same
way with Q.
Clearly the teleportation protocol just described is a rather simple form of
measurement-based control, in which the results of measurement upon a part of the total
system are used to effect a local unitary transformation on another part of the system. While
Alice and Bob share entangled qubits they must always be regarded as acting on a single
quantum system, no matter how distant they are in space. Only at the end of the protocol
can Bob's qubit be regarded as an independent quantum system.
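The protocol just described can be checked end-to-end with a small numerical simulation. The following Python sketch is our own construction (the helper names and the qubit ordering C, A, B are our choices): it prepares a random client state, performs the two commuting Bell-operator measurements, applies the conditional X and Z corrections, and verifies that Bob's reduced state matches the client state.

```python
import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

rng = np.random.default_rng(1)
# Random (unknown) client state alpha|0> + beta|1>
psi_c = rng.normal(size=2) + 1j * rng.normal(size=2)
psi_c /= np.linalg.norm(psi_c)

# Total state, qubit order (C, A, B): |psi>_C (|00>_AB + |11>_AB)/sqrt(2)
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
state = np.kron(psi_c, bell).astype(complex)

ZZ = kron(Z, Z, I2)   # Z_C Z_A
XX = kron(X, X, I2)   # X_C X_A

def measure(state, obs):
    """Project onto the +-1 eigenspaces of obs, sampling via the Born rule."""
    P_plus = (np.eye(8) + obs) / 2
    p_plus = np.real(state.conj() @ P_plus @ state)
    if rng.random() < p_plus:
        return +1, P_plus @ state / np.sqrt(p_plus)
    P_minus = (np.eye(8) - obs) / 2
    return -1, P_minus @ state / np.sqrt(1 - p_plus)

zz, state = measure(state, ZZ)
xx, state = measure(state, XX)

# Feedforward: X on Bob's qubit if ZZ gave -1, then Z if XX gave -1
if zz == -1:
    state = kron(I2, I2, X) @ state
if xx == -1:
    state = kron(I2, I2, Z) @ state

# Bob's reduced state, compared with the client state
rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2, 2, 2)
rho_B = np.trace(np.trace(rho, axis1=0, axis2=3), axis1=0, axis2=2)
fidelity = np.real(psi_c.conj() @ rho_B @ psi_c)
print(f"outcomes (ZZ, XX) = ({zz}, {xx}), fidelity = {fidelity:.6f}")
```

Whatever the random outcomes, the corrected fidelity is 1, reflecting that the client state is transferred exactly.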
The quality of the protocol can be quantified by the fidelity

F = ⟨ψ|ρ_B|ψ⟩,    (7.12)

which is the probability for the client to find Bob's system in the desired state |ψ⟩, if he
were to check.

Exercise 7.4 Show that F = 1 iff ρ_B = |ψ⟩⟨ψ|.
How much less than unity can the fidelity be before we stop calling this process quantum
teleportation? To turn the question around, what is the maximum fidelity that can be obtained
without using a quantum resource (i.e. an entangled state)?
It turns out that the answer to this question hangs on what it means to say that the
client state is unknown to Alice and Bob. One answer to this question has been given by
Braunstein et al. [BFK00] by specifying the ensemble from which client states are drawn.
To make Alices and Bobs task as difficult as possible, we take the ensemble to weight all
pure states equally.
Exercise 7.5 Convince yourself that the task of Alice and Bob is easier if any other ensemble
is chosen. In particular, if the ensemble comprises two orthogonal states (known to Alice
and Bob), show that they can achieve a fidelity of unity without any shared entangled state.
We may parameterize qubit states on the Bloch sphere (see Box 3.1) by Ω = (θ, φ) according to

|ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩.    (7.13)

The uniform ensemble of pure states then has the probability distribution [1/(4π)]dΩ =
[1/(4π)]dφ sin θ dθ.
347
For this ensemble, there are various ways of achieving the best possible classical
teleportation (that is, without using entanglement). One way is for Alice to measure Ẑ on
the client qubit and to tell Bob the result, and for Bob to prepare the corresponding
eigenstate. From Eq. (7.13), the probabilities for Alice to obtain the results ±1 are
cos²(θ/2) and sin²(θ/2), respectively. Thus, the state that Bob will reconstruct is, on average,

ρ = cos²(θ/2)|0⟩⟨0| + sin²(θ/2)|1⟩⟨1|.    (7.14)
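The best classical average fidelity of 2/3 for this measure-and-prepare strategy can be checked by a short Monte Carlo sketch (ours, not from the text), sampling pure states uniformly over the Bloch sphere via u = cos θ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Uniform pure states: u = cos(theta) uniform on [-1, 1]
u = rng.uniform(-1.0, 1.0, size=n)
p0 = (1.0 + u) / 2.0          # cos^2(theta/2) = probability of outcome +1

# Alice measures Z; Bob prepares the matching eigenstate.
# Fidelity of |0><0| with |psi> is cos^2(theta/2); of |1><1| it is sin^2(theta/2).
fid = p0 * p0 + (1 - p0) * (1 - p0)   # averaged over the two outcomes

print(f"Monte-Carlo average fidelity: {fid.mean():.4f}  (exact value 2/3)")
```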
Exercise 7.6 Show that the same state results if Alice and Bob follow the quantum
teleportation protocol specified in Section 7.2.1, but with their entangled state |Φ⟩ replaced by
the classically correlated state

ρ_AB = (1/2)(|00⟩⟨00| + |11⟩⟨11|).    (7.15)

The average fidelity for this strategy is

F̄ = ∫ (dΩ/4π) ⟨ψ|ρ|ψ⟩ = 2/3.    (7.16)
(7.17)

and are assumed to form a complete set of observables (see Section 6.6). This allows us to
define a particularly convenient choice of entangled state for Alice and Bob:

|Φ⟩_AB = e^{iŶ_A X̂_B/2} |X := X₀⟩_A |Y := Y₀⟩_B.    (7.18)
(7.19)

|ψ⟩_B^{X̃Ỹ} ∝ ∫ dx |x⟩_B e^{iỸx/2} ⟨X̃ + x|ψ⟩_C.    (7.20)
Exercise 7.9 Show this, by first using the Baker–Campbell–Hausdorff theorem (A.118) to
show that

e^{iŶ_C X̂_A/2} e^{iŶ_A X̂_B/2} = e^{iŶ_A X̂_B/2} e^{iŶ_C X̂_A/2} e^{−iŶ_C X̂_B/2}.    (7.21)
Using the last part of this exercise a second time, we can write Eq. (7.20) in a
basis-independent manner as

|ψ⟩_B^{X̃Ỹ} ∝ e^{iỸ X̂_B/2} e^{iX̃ Ŷ_B/2} |ψ⟩_B.    (7.22)
|Φ_λ⟩_AB = √(1 − λ²) Σ_{n=0}^{∞} λⁿ |n⟩_A|n⟩_B,    (7.23)

where λ ∈ [0, 1). This state is generated from the ground (vacuum) state |0, 0⟩ by the
unitary transformation

Û(r) = e^{r(â†b̂† − âb̂)},    (7.24)

where λ = tanh r and â and b̂ are the annihilation operators for Alice's and Bob's mode,
respectively. Compare this with the unitary transformation defining the one-mode squeezed
state (A.103).
The two-mode squeezed state (7.23) approximates the EPR state in the limit λ → 1
(r → ∞). This can be seen from the expression for |Φ_λ⟩_AB in the basis of X̂_A and X̂_B:

Φ(x_A, x_B) = ⟨x_A|_A ⟨x_B|_B |Φ_λ⟩_AB    (7.25)
            = (2π)^{−1/2} exp[−(e^{2r}/8)(x_A − x_B)² − (e^{−2r}/8)(x_A + x_B)²].    (7.26)
This should be compared with the corresponding equation for the EPR state (7.18),

Ψ(x_A, x_B) ∝ ⟨x_A|_A ⟨x_B|_B e^{iŶ_A X̂_B/2} |X := 0⟩_A |Y := 0⟩_B    (7.27)
            ∝ δ(x_A − x_B).    (7.28)

From Eq. (7.26), the quadrature correlations of the two-mode squeezed state are

⟨(X̂_A − X̂_B)²⟩ = ⟨(Ŷ_A + Ŷ_B)²⟩ = 2e^{−2r},    (7.29)

so that in the limit r → ∞ the perfect EPR correlations are reproduced. This result can be
more easily derived in a pseudo-Heisenberg picture.
Exercise 7.11 Consider the unitary operator Û(r) as an evolution operator, with r as a
pseudo-time. Show that, in the pseudo-Heisenberg picture,

(∂/∂r)(X̂_A − X̂_B) = −(X̂_A − X̂_B),    (7.30)
(∂/∂r)(Ŷ_A + Ŷ_B) = −(Ŷ_A + Ŷ_B).    (7.31)

Hence, with r = 0 corresponding to the vacuum state |0, 0⟩, show that, in the state (7.23),
the correlations (7.29) result.
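The EPR correlations can also be checked numerically from the symplectic (Heisenberg-picture) action of Û(r) on the quadrature covariance matrix. This sketch is ours and assumes the vacuum-variance convention ⟨X²⟩ = ⟨Y²⟩ = 1 used above:

```python
import numpy as np

# Covariance matrix of (X_A, Y_A, X_B, Y_B) in the vacuum: the identity
V0 = np.eye(4)

def two_mode_squeeze(V, r):
    """Symplectic transform for U(r) = exp[r(a'b' - ab)]:
    X_A -> X_A cosh r + X_B sinh r, Y_A -> Y_A cosh r - Y_B sinh r, and A<->B."""
    c, s = np.cosh(r), np.sinh(r)
    S = np.array([[c, 0, s, 0],
                  [0, c, 0, -s],
                  [s, 0, c, 0],
                  [0, -s, 0, c]])
    return S @ V @ S.T

for r in (0.0, 0.5, 1.0, 2.0):
    V = two_mode_squeeze(V0, r)
    # <(X_A - X_B)^2> and <(Y_A + Y_B)^2> read off the covariance matrix
    dX = V[0, 0] + V[2, 2] - 2 * V[0, 2]
    sY = V[1, 1] + V[3, 3] + 2 * V[1, 3]
    print(f"r = {r}: {dX:.4f}, {sY:.4f}  vs  2e^(-2r) = {2*np.exp(-2*r):.4f}")
```

Both correlation variances equal 2e^{−2r}, vanishing as r → ∞ as required.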
If we use the finite resource (7.23), but follow the same teleportation protocol as for the
ideal EPR state, the final state for Bob is still pure, and has the wavefunction (in the X_B
representation)

ψ_B^{(X̃,Ỹ)}(x) = ∫ dx′ e^{(i/2)x′Ỹ} Φ(x, x′) ψ_C(X̃ + x′),    (7.32)

where ψ_C(x) is the wavefunction for the client state and Φ(x, x′) is given by Eq. (7.26).
Clearly in the limit r → ∞ the teleportation works as before.
Exercise 7.12 Show that when the client state is an oscillator coherent state |α⟩, with
α ∈ ℝ, the teleported state at B is

ψ_B^{(X̃,Ỹ)}(x) = (2π)^{−1/4} exp[−(1/4)(x − tanh r(2α − X̃))² + (i/2)xỸ tanh r].    (7.33)
For finite squeezing the state at B is not (even after the appropriate displacements in
phase space) an exact replica of the client state. We are interested in the fidelity,

F = |⟨α| e^{−(i/2)gỸX̂_B} e^{−(i/2)gX̃Ŷ_B} |ψ^{(X̃,Ỹ)}⟩|².    (7.34)
In the ideal teleportation g = 1, but here we allow for the gain g to be non-unity. For finite
squeezing, it is in fact usually optimal to choose g ≠ 1.
Exercise 7.13 Show that for the client state a coherent state, |α⟩, the optimal choice of the
gain is g = tanh r, in which case the fidelity is given by

F = e^{−(1−g)²|α|²}.    (7.35)
From this expression it is clear that the fidelity is the same for all coherent states under this
classical protocol. Thus the average fidelity would then be given by

F̄ = ∫ d²β ℘(β|α) |⟨α|β⟩|²    (7.38)
   = ∫ d²β π^{−1} e^{−2|β−α|²} = 1/2.    (7.39)
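The classical benchmark of 1/2 for coherent states follows from the well-known measure-and-prepare strategy: Alice heterodynes, obtaining an outcome β with probability density π⁻¹e^{−|β−α|²}, and Bob prepares |β⟩, giving per-shot fidelity e^{−|β−α|²}. A Monte Carlo sketch of ours reproduces it:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

# Heterodyne outcome: beta = alpha + complex Gaussian noise of unit total variance
alpha = 1.3 + 0.7j    # any client amplitude; the result is independent of it
noise = rng.normal(0, np.sqrt(0.5), n) + 1j * rng.normal(0, np.sqrt(0.5), n)
beta = alpha + noise

# Bob prepares |beta>; fidelity with |alpha> is exp(-|alpha - beta|^2)
F = np.exp(-np.abs(beta - alpha) ** 2)
print(f"average classical fidelity: {F.mean():.4f}  (limit 1/2)")
```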
Strictly, it would be impossible to demonstrate an average fidelity greater than 0.5 for
the coherent-state ensemble using the quantum teleportation protocol of Section 7.3.2.
The reason for this is that for that protocol the teleportation fidelity depends upon the
coherent amplitude |α| as given by Eq. (7.35). Because this decays exponentially with
|α|², if one averaged over the entire complex (α) plane, one would obtain a fidelity close
to zero. In practice (as discussed in the following subsection) only a small part of the
complex plane near the vacuum state (|α| = 0) was sampled.
decoherence of the entangled resource due to phase fluctuations will affect Eq. (7.35), see
Ref. [MB99]. For a discussion of other criteria for characterizing CV quantum teleportation,
see Refs. [RL98, GG01].
In the experiment, Alice combined the client mode C with her mode A at a 50:50
beam-splitter, which transforms the quadratures according to

Û†_bs (X̂_C, Ŷ_C, X̂_A, Ŷ_A)ᵀ Û_bs = (1/√2)(X̂_C + X̂_A, Ŷ_C + Ŷ_A, X̂_A − X̂_C, Ŷ_A − Ŷ_C)ᵀ.    (7.40)
From this it is clear that the post-beam-splitter quadrature measurements described above
are equivalent to a pre-beam-splitter measurement of X̂_C − X̂_A and Ŷ_C + Ŷ_A. These
quadratures can be measured using homodyne detection, as discussed in Section 4.7.6. Such
measurements of course absorb all of the light, leaving Alice with only classical information
(the measurement results). In a realistic device, inefficiency and dark noise introduce extra
noise into these measurements, as discussed in Section 4.8.
On receipt of Alices measurement results, Bob must apply the appropriate unitary
operator, a displacement, to complete the protocol. Displacements are easy to apply in
quantum optics using another mode, prepared in a coherent state with large amplitude, and
a beam-splitter with very high reflectivity for mode B. This is discussed in Section 4.4.1.
In the experiment the two modes used were actually at different frequencies, and the role
Fig. 7.2 Circuit diagram for a C-NOT interaction, which here represents the interaction of a two-state
system with a two-state environment. In this case the value of the environment bit controls (•) a bit-flip
error (⊕) on the target (the system bit). That is, if the environment bit has value 1, the value of the
system bit changes, otherwise nothing happens. As discussed later, the same interaction or gate can
be applied to quantum bits, and this figure follows the conventions of Fig. 7.1.
Let the nature of the coupling be such as to transform the variables according to

B → B′ = B ⊕ Ξ,    (7.41)
Ξ → Ξ′ = Ξ,    (7.42)

where ⊕ denotes binary addition (XOR). The environment bit Ξ is distributed according to

℘(Ξ = 1) = μ = 1 − ℘(Ξ = 0),    (7.43)

while the state of the system {℘(b)} is arbitrary. (See Section 1.1.2 for a review of notation.)
This interaction or logic gate is depicted in Fig. 7.2. Distinct physical systems (bits or
qubits) are depicted as horizontal lines and interactions are depicted by vertical lines. In
this case the interaction is referred to as a controlled-NOT or C-NOT gate, because the state
of the lower system (environment) controls the state of the upper system according to the
function defined in Eq. (7.41). The environment variable is unchanged by the interaction;
see Eq. (7.42).
This model becomes the binary symmetric channel of classical information theory
[Ash90] when we regard B as the input variable to a communication channel with output
variable B′. The received variable B′ will reproduce the source variable B iff
Ξ = 0. If Ξ = 1, the received variable has undergone a bit-flip error. This occurs with
probability μ, due to the noise or uncertainty in the environmental variable.
The same model can be used as a basis for defining errors in a qubit. The system variable
B is analogous to (I − Z)/2, where Z is the system Pauli operator Z_S. Likewise the
environment variable Ξ is analogous to (I − Z)/2, where here Z is the environment Pauli
operator Z_E. The state of the environment is then taken as the mixed state

ρ_E = (1 − μ)|0⟩⟨0| + μ|1⟩⟨1|.    (7.44)
A bit-flip error on the system is analogous to swapping the eigenstates of Z, which can
be achieved by applying the system Pauli operator X. Thus we take the interaction to be
specified by the unitary transformation

Û = X ⊗ (I − Z)/2 + I ⊗ (I + Z)/2 = (1/2)(XI − XZ + II + IZ).    (7.45)
Fig. 7.3 Quantum circuit diagram for a C-NOT interaction that represents the interaction of a two-state
system with a two-state environment. In this case the environment acts to produce (or not) a phase-flip
error on the system qubit. Like all our quantum circuit diagrams, this figure follows the conventions
of Fig. 7.1.
Here the order of operators is system then environment, and in the second expression we
have dropped the tensor product, as discussed in Section 7.1.
Exercise 7.15 Show that the state (7.44) of the environment is left unchanged by this
interaction, in analogy with Eq. (7.42).
The system qubit after the interaction is given by

ρ_S′ = Tr_E[Û(ρ_S ⊗ ρ_E)Û†]    (7.46)
     = μ Xρ_S X + (1 − μ)ρ_S.    (7.47)

Exercise 7.16 Show that the same evolution results if the environment is initially in the
pure state √(1−μ)|0⟩ + √μ|1⟩.

In this form the interpretation of the noisy channel as an error process is quite clear: ρ_S′
is the ensemble made up of a fraction μ of qubits that have suffered a bit-flip error and a
fraction 1 − μ that have not.
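The bit-flip channel can be reproduced directly from the controlled-NOT model. The following numpy sketch (ours; the helper name is an assumption) couples the system to the mixed environment state, traces out the environment, and compares with μXρX + (1 − μ)ρ:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def cnot_env_controlled(mu, rho_S):
    """Couple a system qubit to an environment in (1-mu)|0><0| + mu|1><1|
    via the environment-controlled NOT, then trace out the environment.
    Tensor order: system (x) environment."""
    P0 = np.array([[1, 0], [0, 0]], dtype=complex)
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)
    U = np.kron(X, P1) + np.kron(I2, P0)     # flip the system iff env is |1>
    rho_E = (1 - mu) * P0 + mu * P1
    rho = U @ np.kron(rho_S, rho_E) @ U.conj().T
    # Partial trace over the environment (second tensor factor)
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

mu = 0.3
rho_S = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # some qubit state
lhs = cnot_env_controlled(mu, rho_S)
rhs = mu * X @ rho_S @ X + (1 - mu) * rho_S
print(np.allclose(lhs, rhs))   # True
```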
Exercise 7.17 Show that the unitary operator in Eq. (7.45) can be generated by the
system–environment interaction Hamiltonian

Ĥ = (1/4)(I − X)(I − Z)    (7.48)

for time t = π.
From the discussion so far, it might seem that there is no distinction between errors for
classical bits and qubits. This is certainly not the case. A new feature arises in the quantum
case on considering the example depicted in Fig. 7.3. This is the same as the previous
example in Fig. 7.2, except that the direction of the C-NOT gate has been reversed. In a
classical description this would do nothing at all to the system bit. The quantum case is
different. The interaction is now described by the unitary operator
Û = (1/2)(IX − ZX + II + ZI).    (7.49)
As in the previous example, we can take the initial state of the environment to be such that
it is left unchanged by the interaction,

ρ_E = (1 − μ)|+⟩⟨+| + μ|−⟩⟨−|,    (7.50)

where |±⟩ = (|0⟩ ± |1⟩)/√2, and the same evolution results if the environment is initially
in the pure superposition √(1−μ)|+⟩ + √μ|−⟩.
The reduced state of the system at the output is now seen to be
ρ_S′ = μ Zρ_S Z + (1 − μ)ρ_S.    (7.51)
ρ_S′ = μ Yρ_S Y + (1 − μ)ρ_S,    (7.53)

where Y = iXZ (here this product is an ordinary matrix product, not a tensor product).
This error is a simultaneous bit-flip and phase-flip error. All errors can be regarded as some
combination of these elementary errors. In reality, of course, a given decoherence process
will not neatly fall into these categories of bit-flip or phase-flip errors. However, the theory
of quantum error correction shows that if we can correct for these elementary types of
errors then we can correct for an arbitrary single-qubit decoherence process [NC00].
Table 7.1. The four syndromes for the measurement of ZZI and IZZ, the errors they
indicate, and the corresponding correcting unitaries

ZZI    IZZ    Error         Correcting unitary
+1     +1     None          None
−1     +1     On qubit 1    XII
+1     −1     On qubit 3    IIX
−1     −1     On qubit 2    IXI
on two and even less likely to have occurred on all three. (A crucial assumption here is
the independence of errors across the different bits. Some form of this assumption is also
necessary for the quantum case.) The occurrence of an error can be detected by measuring
the parity of the bit values, that is, whether they are all the same or not. If one is different,
then a majority vote across the bits as to the value of X is very likely to equal the original
value, even if an error has occurred on one bit. This estimate for X can then be used to
change the value of the minority bit. This is the process of error correction.
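The classical three-bit majority-vote scheme just described can be sketched as follows (our own simulation; the error probability and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 0.1            # bit-flip probability per physical bit
n_trials = 100_000

bit = rng.integers(0, 2, size=n_trials)            # source bits
codeword = np.repeat(bit[:, None], 3, axis=1)      # encode: b -> (b, b, b)
flips = rng.random((n_trials, 3)) < eps            # independent errors
received = codeword ^ flips

decoded = (received.sum(axis=1) >= 2).astype(int)  # majority vote
p_err = np.mean(decoded != bit)

# Majority vote fails iff at least 2 of the 3 bits flip:
p_theory = 3 * eps**2 * (1 - eps) + eps**3
print(f"simulated {p_err:.4f} vs theory {p_theory:.4f} vs unencoded {eps}")
```

For small ε the logical error rate ≈ 3ε² is far below the unencoded rate ε, which is the point of the encoding.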
These ideas can be translated into the quantum case as follows. We encode the qubit
state in a two-dimensional subspace of the multi-qubit tensor-product space, known as the
code space. The basis states for the code space, known as code words, are entangled states
in general. For a three-qubit code to protect against bit-flip errors we can choose the code
words to be simply
|0⟩_L = |000⟩,   |1⟩_L = |111⟩.    (7.54)

An arbitrary pure state of the logical qubit then has the form |ψ⟩_L = α|000⟩ + β|111⟩.
Suppose one of the physical qubits undergoes a bit-flip. It is easy to see that, no matter
which qubit flips, the error state is always orthogonal to the code space and simultaneously
orthogonal to the other two error states.
Exercise 7.18 Show this.
This is the crucial condition for the error to be detectable and correctable, because it makes it
possible, in principle, to detect which physical qubit has flipped, without learning anything
about the logical qubit, and to rotate the error state back to the code space. Unlike in the
classical case, we cannot simply read out the qubits in the logical basis, because that would
destroy the superposition. Rather, to detect whether and where the error occurred, we must
measure the two commuting operators ZZI and I ZZ. (We could also measure the third
such operator, ZIZ, but that would be redundant.) The result of this measurement is the
error syndrome. Clearly there are two possible outcomes for each operator (±1), giving
four error syndromes. These are summarized in Table 7.1.
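The full bit-flip code cycle — encode, corrupt, measure the syndrome, correct — can be sketched in numpy (our construction, following Table 7.1). Since the corrupted states are eigenstates of both stabilizer generators, the syndrome "measurement" here is just a deterministic expectation value:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

e0 = np.array([1, 0], dtype=complex)
e1 = np.array([0, 1], dtype=complex)
zero_L = np.kron(np.kron(e0, e0), e0)    # |000>
one_L = np.kron(np.kron(e1, e1), e1)     # |111>

alpha, beta = 0.6, 0.8
psi = alpha * zero_L + beta * one_L      # encoded logical qubit

ZZI, IZZ = kron(Z, Z, I2), kron(I2, Z, Z)
X_on = {1: kron(X, I2, I2), 2: kron(I2, X, I2), 3: kron(I2, I2, X)}

# Table 7.1: syndrome (ZZI, IZZ) -> correcting unitary
correct = {(+1, +1): np.eye(8, dtype=complex),
           (-1, +1): X_on[1], (-1, -1): X_on[2], (+1, -1): X_on[3]}

all_ok = True
for err in [np.eye(8, dtype=complex)] + list(X_on.values()):
    corrupted = err @ psi
    s1 = round(float(np.real(corrupted.conj() @ ZZI @ corrupted)))
    s2 = round(float(np.real(corrupted.conj() @ IZZ @ corrupted)))
    all_ok &= np.allclose(correct[(s1, s2)] @ corrupted, psi)
print("all single bit-flips corrected:", bool(all_ok))
```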
The above encoding is an example of a stabilizer code [Got96]. In general this is defined
as follows. First, we define the Pauli group for n qubits as
P_n = {±1, ±i} × {I, X, Y, Z}^{⊗n}.    (7.55)
Fig. 7.4 The conventional error-correction protocol using the stabilizer formalism. After the state
has been encoded, an error occurs through coupling with the environment. To correct this error,
the encoded state is entangled with a meter in order to measure the stabilizer generators, and then
feedback is applied on the basis of those measurements. Figure 1 adapted with permission from C.
Ahn et al., Phys. Rev. A 67, 052310, (2003). Copyrighted by the American Physical Society.
That is, any member may be denoted as a concatenation of letters (such as ZZI above
for n = 3) times a phase factor of ±1 or ±i. Note that this is a discrete group (here a set
of operators closed under multiplication), not a Lie group – see Box 6.2. It can be shown
that there exist subgroups of 2^{n−k} commuting Pauli operators S ⊂ P_n for all n ≥ k ≥ 0.
Say that −I is not an element of S and that k ≥ 1. Then it can be shown that S defines
the stabilizer of a nontrivial quantum code. The code space C(S) is the simultaneous +1
eigenspace of all the operators in S. Then the subspace stabilized is nontrivial, and the
dimension of C(S) is 2^k. Hence this system can encode k logical qubits in n physical
qubits. In the above example, we have n = 3 and k = 1.
The generators of the stabilizer group are defined to be a subset of this group such that
any element of the stabilizer can be described as a product of generators. Note that this
terminology differs from that used to define generators for Lie groups – see Box 6.2. It
can be shown that n − k generators suffice to describe the stabilizer group S. In the above
example, we can take the generators of S to be ZZI and IZZ, for example. As this example
suggests, the error-correction process consists of measuring the stabilizer. This projection
discretizes whatever error has occurred into one of 2^{n−k} error syndromes labelled by the
2^{n−k} possible outcomes of the stabilizer generator measurements. This information is then
used to apply a unitary recovery operator that returns the state to the code space. A diagram
of how such a protocol would be implemented in a physical system is given in Fig. 7.4.
To encode a single (k = 1) logical qubit against bit-flip errors, only three (n = 3) physical
qubits are required. However, to encode against arbitrary errors, including phase-flips, a
larger code must be used. The smallest universal encoding uses code words of length n = 5
[LMPZ96]. Since this has k = 1, the stabilizer group has four generators, which can be
chosen to be
XZZXI,  IXZZX,  XIXZZ,  ZXIXZ.    (7.56)
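The defining properties of these generators — pairwise commutation and a 2^k = 2-dimensional joint +1 eigenspace — can be verified directly (a sketch of ours):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z                      # consistent with Y = iXZ in the text
P = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def pauli_string(s):
    return reduce(np.kron, [P[c] for c in s])

gens = [pauli_string(s) for s in ('XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ')]

# The stabilizer generators commute pairwise...
for i in range(4):
    for j in range(i + 1, 4):
        assert np.allclose(gens[i] @ gens[j], gens[j] @ gens[i])

# ...and the simultaneous +1 eigenspace has dimension 2^(n - 4) = 2, i.e. k = 1
proj = reduce(lambda A, B: A @ B, [(np.eye(32) + g) / 2 for g in gens])
print("code space dimension:", round(float(np.real(np.trace(proj)))))
```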
However, unlike the above example, this is not based on the usual classical codes (called
linear codes), which makes it hard to generalize. The smallest universal encoding based on
combining linear codes is the n = 7 Steane code [Ste96].
7.4.3 Detected errors
It might be thought that if one had direct knowledge of whether an error occurred, and
precisely what error it was, then error correction would be trivial. Certainly this is the case
classically: if one knew that a bit had flipped then one could just flip it back; no encoding
is necessary. The same holds for the reversible (unitary) errors we have been considering,
such as bit-flip (X), phase-flip (Z) or both (Y ). For example, if one knew that a Z-error had
occurred on a particular qubit, one would simply act on that qubit with the unitary operator
Z. This would completely undo the effect of the error since Z² = I; again, no encoding is
necessary. From the model in Section 7.4.1, one can discover whether or not a Z-error has
occurred simply by measuring the state of the environment in the logical basis.
However, we know from earlier chapters to be wary of interpreting the ensemble resulting
from the decoherence process (7.47) in only one way. If we measure the environment in
the |±⟩ basis then we do indeed find a Z-error with probability μ, but if we measure the
environment in a different basis (which may be forced upon us by its physical context, as
described in Chapter 3) then a different sort of error will be found. In particular, for the case
μ = 1/2 and the environment initially in a superposition state, we reproduce exactly the
situation of Section 1.2.6. That is, if we measure the environment in the logical basis (which
is conjugate to the |±⟩ basis), then we discover not whether or not the qubit underwent a
phase-flip, but rather which logical state the qubit is in.
Exercise 7.19 Verify this.
Classically such a measurement does no harm of course, but in the quantum case it changes
the system state irreversibly. That is, there is no way to go back to the (unknown)
pre-measurement state of the qubit. Moreover, there are some sorts of errors, which we will
consider in Section 7.4.4, that are inherently irreversible. That is, there is no way to detect
the error without obtaining information about the system and hence collapsing its state.
These considerations show that the effect of detected errors is nontrivial in the quantum
case. Of course, we can correct any errors simply by ignoring the result of the measurement
of the environment and using a conventional quantum error correcting protocol, as explained
in Section 7.4.2. However, we can do better if we use quantum encoding and make use of
the measurement results. That is, we can do the encoding using fewer physical qubits. The
general idea is illustrated in Fig. 7.5. A simple example is Z-measurement as discussed
above. Since this is equivalent to phase-flip errors, it can be encoded against using the
three-qubit code of Eq. (7.54). However, if we record the results of the Z-measurements,
then we can correct these errors using just a two-qubit code, as we now show. This is not
just a hypothetical case; accidental Z-measurements are intrinsic to various schemes for
linear optical quantum computing [KLM01].
Fig. 7.5 A modified error-correction protocol using the stabilizer formalism but taking advantage of
the information obtained from measuring the environment. That is, in contrast to Fig. 7.4, the error
and measurement steps are the same. The correction is, of course, different from Fig. 7.4 also. Figure 1
adapted with permission from C. Ahn et al., Phys. Rev. A 67, 052310, (2003). Copyrighted by the
American Physical Society.
We can still use the stabilizer formalism introduced above to deal with the case of
detected errors. For Z-measurements, we have n = 2 and k = 1, so there is a single
stabilizer generator, which can be chosen to be XX. This gives a code space spanned
by the code words

|0⟩_L = (|00⟩ + |11⟩)/√2,    (7.57)
|1⟩_L = (|01⟩ + |10⟩)/√2.    (7.58)

(7.59)
(7.60)
(7.61)
(7.62)

ρ̇ = γ D[Z]ρ.    (7.63)

Exercise 7.22 Show that this equation also describes monitoring of whether the system is
in logical state 0, for example.
Hint: Note that Z + I = 2π̂₀, with π̂₀ = |0⟩⟨0|, and show that Eq. (7.63) can be unravelled
using the measurement operators

M̂₁ = √(4γ dt) π̂₀,    (7.64)
M̂₀ = 1 − 2γ π̂₀ dt.    (7.65)
emission event on the jth qubit, in an infinitesimal time interval, takes the form

M̂₁^j(dt) = √(γ_j dt)(X_j − iY_j)/2 ≡ √(γ_j dt) L̂_j,    (7.66)

where γ_j is the decay rate for the jth qubit, and we have defined the lowering operator
L̂_j = |1⟩_j⟨0|_j. The corresponding no-jump measurement operator is

M̂₀(dt) = 1 − Σ_j (γ_j/2)L̂_j†L̂_j dt − iĤ dt,    (7.67)

where, as in Section 4.2, we have allowed for the possibility of some additional Hamiltonian
dynamics. The master equation for the n-qubit system is thus

ρ̇ = Σ_j γ_j D[L̂_j]ρ − i[Ĥ, ρ].    (7.68)
Exercise 7.23 Show that, for Ĥ = 0, the coherence of the jth qubit, as measured by
⟨X_j(t)⟩ or ⟨Y_j(t)⟩, decays exponentially with lifetime T₂ = 2/γ_j, while the probability of
its occupying logical state |0⟩, as measured by ⟨Z_j(t) + 1⟩/2, decays exponentially with
lifetime T₁ = 1/γ_j. Here the lifetime is defined as the time for the exponential to decay
to e⁻¹.
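The T₁ and T₂ lifetimes in this exercise can be confirmed by a simple Euler integration of the single-qubit master equation (our sketch; step size and initial state are arbitrary choices, and we take γ = 1):

```python
import numpy as np

gamma = 1.0
L = np.array([[0, 0], [1, 0]], dtype=complex)   # |1><0|: |0> decays into |1>

def rhs(rho):
    # Lindblad term gamma * D[L] rho, with H = 0
    anti = L.conj().T @ L @ rho + rho @ L.conj().T @ L
    return gamma * (L @ rho @ L.conj().T - 0.5 * anti)

# Start in the superposition (|0> + |1>)/sqrt(2)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

dt, T = 1e-4, 1.0
for _ in range(int(T / dt)):    # Euler steps up to t = 1/gamma
    rho = rho + dt * rhs(rho)

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
coh = np.real(np.trace(X @ rho))                      # <X(t)>, initially 1
pop = np.real(np.trace((Z + np.eye(2)) / 2 @ rho))    # P(|0>), initially 1/2
print(f"<X(1/gamma)> = {coh:.4f}  (expected e^-1/2 = {np.exp(-0.5):.4f})")
print(f"P0(1/gamma)  = {pop:.4f}  (expected 0.5 e^-1 = {0.5*np.exp(-1):.4f})")
```

The coherence decays at half the rate of the population, i.e. T₂ = 2T₁ for pure radiative decay.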
In the following section we will show that the techniques of correcting detected errors
introduced in Section 7.4.3 can be adapted to deal with continuous detections, whether
non-demolition, as in Eq. (7.64), or demolition, as in Eq. (7.66).
we can protect a one-qubit code space perfectly, provided that the spontaneously emitting
qubit is known and a correcting unitary is applied instantaneously.
The code words of the code were previously introduced in Eqs. (7.57) and (7.58). If
the emission is detected, such that the qubit j from which it originated is known, it is
possible to correct back to the code space without knowing the state. This is because the code
and error fulfil the necessary and sufficient conditions for appropriate recovery operations
[KL97]:
⟨μ|Ê†Ê|ν⟩ = λ_E δ_{μν}.    (7.69)

Here Ê is the operator for the measurement (error) that has occurred and λ_E is a constant.
The states |μ⟩ form an orthonormal basis for the code space (they could be the logical states,
such as |0⟩_L and |1⟩_L in the case of a single logical qubit). These conditions differ from
the usual condition only by taking into account that we know a particular error Ê = L̂_j has
occurred, rather than having to sum over all possible errors.
Exercise 7.24 Convince yourself that error recovery is possible if and only if Eq. (7.69)
holds for all measurement (error) operators E to which the system is subject.
More explicitly, if a spontaneous emission on the first qubit occurs, |0⟩_L → |01⟩ and
|1⟩_L → |00⟩. Since these are orthogonal states, this fulfills the condition given in (7.69),
so a unitary exists that will correct this spontaneous emission error. One choice for the
correcting unitary is

Û₁ = (XI + ZX)/√2,    (7.70)

while for an emission on the second qubit one may use

Û₂ = (IX + XZ)/√2.    (7.71)
Exercise 7.25 Verify that these are unitary operators and that they correct the errors as
stated.
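The checks requested in this exercise can be done numerically (our sketch): both operators are unitary (indeed Hermitian, with square equal to the identity), and they return the error states quoted above to the code space, the +1 eigenspace of XX.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

U1 = (np.kron(X, I2) + np.kron(Z, X)) / np.sqrt(2)
U2 = (np.kron(I2, X) + np.kron(X, Z)) / np.sqrt(2)

# Unitarity, and U^2 = I (used again in Exercise 7.27)
for U in (U1, U2):
    assert np.allclose(U @ U.conj().T, np.eye(4))
    assert np.allclose(U @ U, np.eye(4))

# They map the post-emission states back into the XX = +1 code space
XX = np.kron(X, X)
P_code = (np.eye(4) + XX) / 2
e00, e01, e10 = np.eye(4)[:, 0], np.eye(4)[:, 1], np.eye(4)[:, 2]
for U, err in ((U1, e01), (U1, e00), (U2, e10), (U2, e00)):
    out = U @ err
    assert np.allclose(P_code @ out, out)   # corrected state lies in code space
print("U1, U2 unitary, square to I, and restore error states to the code space")
```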
As discussed above, in this jump process the evolution between jumps is non-unitary, and
so also represents an error. For this two-qubit system the no-jump infinitesimal measurement
operator Eq. (7.67) is

M̂₀(dt) = 1 − (γ₁/2)L̂₁†L̂₁ dt − (γ₂/2)L̂₂†L̂₂ dt − iĤ dt    (7.72)
        = II − (dt/4)[(γ₁ + γ₂)II + γ₁ZI + γ₂IZ] − iĤ dt.    (7.73)
The non-unitary part of this evolution can be corrected by assuming a driving Hamiltonian
of the form

Ĥ = −(γ₁ YX + γ₂ XY)/4.    (7.74)

This result can easily be seen by plugging (7.74) into (7.73) with a suitable rearrangement
of terms:

M̂₀(dt) = II[1 − (γ₁ + γ₂)dt/4] − (γ₁ dt/4)ZI(II − XX) − (γ₂ dt/4)IZ(II − XX).    (7.75)

Since II − XX annihilates the code space (the +1 eigenspace of XX), the no-jump evolution
acts there simply as the identity, times a normalization constant.
ρ + dρ = M̂₀(dt)ρ M̂₀†(dt) + dt Σ_{j=1}^{2} γ_j Û_j L̂_j ρ L̂_j† Û_j†,    (7.76)

which corresponds to the master equation

ρ̇ = −i[Ĥ, ρ] + Σ_{j=1}^{2} γ_j D[Û_j L̂_j]ρ.    (7.77)
From Section 5.4.2, the unitary feedback can be achieved by a feedback Hamiltonian of
the form

Ĥ_fb = I₁(t)V̂₁ + I₂(t)V̂₂.    (7.78)

Here I_j(t) = dN_j(t)/dt is the observed photocurrent from the emissions by the jth qubit,
while V̂_j is an Hermitian operator such that exp(−iV̂_j) = Û_j.

Exercise 7.27 Show that choosing V̂_j = (π/2)Û_j works.
Hint: Show that Û_j² = I, like a Pauli operator.
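The claim of this exercise can be verified numerically (our sketch, using a truncated series for the matrix exponential to stay dependency-free): since Û₁² = I, one has exp(−i(π/2)Û₁) = cos(π/2)I − i sin(π/2)Û₁ = −iÛ₁, which is Û₁ up to an irrelevant global phase.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
U1 = (np.kron(X, I2) + np.kron(Z, X)) / np.sqrt(2)

def expm_series(A, terms=40):
    """Matrix exponential by truncated Taylor series (adequate for small ||A||)."""
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

V = (np.pi / 2) * U1
left = expm_series(-1j * V)
print(np.allclose(left, -1j * U1))   # True: U1 up to the global phase -i
```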
This code is optimal in the sense that it uses the smallest possible number of qubits
required to perform the task of correcting a spontaneous emission error, since we know that
the information stored in one unencoded qubit is destroyed by spontaneous emission.
7.5.2 Feedback to correct spontaneous-emission diffusion
So far we have considered only one unravelling of spontaneous emission, by direct detection
giving rise to quantum jumps. However, as emphasized in Chapter 4, other unravellings
are possible, giving rise to quantum diffusion for example. In this subsection we consider
homodyne detection (which may be useful experimentally because it typically has a higher
efficiency than direct detection) and show that the same encoding allows quantum diffusion
also to be corrected by feedback.
As shown in Section 4.4, homodyne detection of the radiative emission of the two qubits
gives rise to currents with white noise,

J_j(t)dt = γ_j⟨e^{−iφ_j}L̂_j + e^{iφ_j}L̂_j†⟩ dt + √γ_j dW_j(t).    (7.79)

Choosing the Y-quadratures (φ_j = π/2 for each j) for definiteness, the corresponding
conditional evolution of the system is

dρ_J(t) = −i[Ĥ, ρ_J]dt + Σ_{j=1}^{2} γ_j D[L̂_j]ρ_J dt + Σ_{j=1}^{2} √γ_j dW_j(t) H[−iL̂_j]ρ_J.    (7.80)
We can now apply the homodyne-mediated feedback scheme introduced in Section 5.5.
With the feedback Hamiltonian

Ĥ_fb = √γ₁ F̂₁J₁(t) + √γ₂ F̂₂J₂(t),    (7.81)

the resulting Markovian master equation is

ρ̇ = −i[Ĥ, ρ] − i Σ_{j=1}^{2} [(ĉ_j†F̂_j + F̂_j ĉ_j)/2, ρ] + Σ_{j=1}^{2} D[ĉ_j − iF̂_j]ρ,    (7.82)

where ĉ_j = −i√γ_j L̂_j for the Y-quadrature unravelling chosen above.
This allows us to use the same code words, and Eqs. (7.70) and (7.71) suggest using the
following feedback operators:

F̂₁ = XI + ZX,    F̂₂ = IX + XZ.    (7.83)

Using also the same driving Hamiltonian (7.74) as in the jump case, the resulting master
equation is

ρ̇ = γ₁D[YI + iZX]ρ + γ₂D[IY + iXZ]ρ.    (7.84)

Exercise 7.28 Verify this, and show that it preserves the above code space.
Hint: First show that YI + iZX = YI(II − XX).
protects against the nontrivial no-emission evolution. Therefore the code space is protected.
Next, for a diffusive unravelling, we again choose homodyne measurement of the
Y-quadrature. The same driving Hamiltonian (7.85) is again required, and the feedback
operators generalize to

F̂_j = I^{⊗(j−1)} X I^{⊗(n−j)} + X^{⊗(j−1)} Z X^{⊗(n−j)}.    (7.86)
The analysis can be generalized to an arbitrary error operator on each qubit, written as

ĉ = κ + (a + ib)·σ̂    (7.87)
  ≡ κ + Â + iB̂,    (7.88)

where κ is a complex number, a and b are real vectors, and σ̂ = (X, Y, Z)ᵀ.

We now use the standard condition (7.69), where here we take Ê = ĉ + γ (see Section 4.4). Henceforth, γ is to be understood as real and positive, since the relevant phase
has been taken into account in the definition (7.88). From Eq. (7.69), we need to
consider

Ê†Ê = (|κ + γ|² + a² + b²)I + 2Re(κ + γ)Â + 2Im(κ + γ)B̂ − 2(a × b)·σ̂
    ≡ (|κ + γ|² + a² + b²)I + D̂.    (7.89)
The code space can be protected provided that we can find a stabilizer generator Ŝ for which

0 = {Ŝ, D̂}.    (7.90)

As long as this is satisfied, there is some feedback unitary e^{iV̂} that will correct the error.
As usual, even when the error with measurement operator √dt Ê does not occur, there
is still non-unitary evolution. As shown in Section 4.4, it is described by the measurement
operator

M̂₀ = 1 − (1/2)Ê†Ê dt − (|γ|/2)(e^{iφ}ĉ† − e^{−iφ}ĉ)dt − iĤ dt.    (7.91)

Now we choose the driving Hamiltonian

Ĥ = (i/2)D̂Ŝ + (i|γ|/2)(e^{iφ}ĉ† − e^{−iφ}ĉ).    (7.92)

This is an Hermitian operator because of (7.90).
Exercise 7.29 Show that, with this choice, M̂₀ is proportional to the identity plus a term
proportional to D̂(1 − Ŝ), which annihilates the code space.

Thus, for a state initially in the code space, the condition (7.90) suffices for correction of
both the jump and the no-jump evolution.
We now have to show that a single Ŝ exists for all qubits, even with different operators
ĉ_j. Since D̂_j (the operator associated with ĉ_j as defined in (7.89)) is traceless, it is always
possible to find some other Hermitian traceless one-qubit operator ŝ_j, such that {ŝ_j, D̂_j} = 0
and ŝ_j² = I. Then we may choose the single stabilizer generator

Ŝ = ŝ₁ ⊗ ŝ₂ ⊗ ··· ⊗ ŝ_n,    (7.93)

so that the stabilizer group² is {1, Ŝ}. Having chosen Ŝ, choosing Ĥ as

Ĥ = Σ_j [(i/2)D̂_j Ŝ + (i|γ_j|/2)(e^{iφ_j}ĉ_j† − e^{−iφ_j}ĉ_j)]    (7.94)

will, by our analysis above, provide a total evolution that protects the code space, and the
errors will be correctable; furthermore, this code space encodes n − 1 qubits in n.
Exercise 7.30 Show that the n-qubit jump process of Section 7.5.1 follows by choosing
κ = 0 and Ŝ = X^{⊗n}, and that D̂_j = γ_j Z_j.

Exercise 7.31 Show that the n-qubit diffusion process in Section 7.5.2 follows by similarly
choosing κ_j = 0, with |γ_j| → ∞ and φ_j = π/2.
Hint: See Ref. [AWM03].
7.5.5 Other generalizations
In the above we have emphasized that it is always possible to choose one stabilizer, and so
encode n 1 qubits in n qubits. However, there are situations in which one might choose
a less efficient code with more than one stabilizer. In particular, it is possible to choose
a stabilizer Ŝ_j for each error channel ĉ_j, with Ŝ_j ≠ Ŝ_k in general. For example, for the
spontaneous emission errors ĉ_j ∝ X_j − iY_j one could choose Ŝ_j as particular stabilizers

² Strictly, this need not be a stabilizer group, since Ŝ need not be in the Pauli group, but the algebra is identical, so the analysis is
unchanged.

of the universal five-qubit code. This choice is easily made, since the usual generators of
the five-qubit code are {XZZXI, IXZZX, XIXZZ, ZXIXZ} as discussed above. For
each qubit j, we may pick from this set a stabilizer Ŝ_j that acts as X on that qubit, since X
anticommutes with D̂_j ∝ Z_j.
In this case, since there are four stabilizer generators, only a single logical qubit can be
encoded. However, this procedure would be useful in a system where spontaneous emission
is the dominant error process. If these errors could be detected (with a high degree of
efficiency) then they could be corrected using the feedback scheme given above. Then other
(rarer) errors, including missed spontaneous emissions, could be corrected using standard
canonical error correction, involving measuring the stabilizer generators as explained in
Section 7.4.2. The effect of missed emissions from detector inefficiency is discussed in
Ref. [AWM03].
Another generalization, which has been investigated in Ref. [AWJ04], is for the case in
which there is more than one decoherence channel per qubit, but they are all still able to
be monitored with high efficiency. If there are at most two error channels per qubit then the encoding can be done with a single stabilizer (and hence n − 1 logical qubits) just as above. If there are more than two error channels per qubit then in general two stabilizers are required. That is, one can encode n − 2 logical qubits in n physical qubits, requiring just
one more physical qubit than in the previous case. The simplest example of this, encoding
two logical qubits in four physical qubits, is equivalent to the well-known quantum erasure
code [GBP97] which protects against qubit loss.
7.6 QEC using continuous feedback
We turn now, from correction of detected errors by feedback, to correction of undetected
errors by conventional error correction. As explained in Section 7.4.2, this usually consists
of projective measurement (of the stabilizer generators) at discrete times, with unitary
feedback to correct the errors. Here we consider a situation of continuous error correction,
which may be more applicable in some situations. That is, we consider continual weak
measurement of the stabilizer generators, with Hamiltonian feedback to keep the system
within the code space. This section is based upon Ref. [SAJM04].
For specificity, we focus on bit-flip errors for which the code words are given in Eq. (7.54),
and we assume a diffusive unravelling of the measurement of the stabilizer generators. These
measurements will have no effect when the system is in the code space and will give error-specific information when it is not. However, because the measurement currents are noisy, it
is impossible to tell from the current in an infinitesimal interval whether or not an error has
occurred in that interval. Therefore we do not expect Markovian feedback to be effective.
Rather, we must filter the current to obtain information about the error syndrome.
The optimal filter for the currents in this case (and more general cases) has been determined by van Handel and Mabuchi [vHM05]. Since the point of the encoding is to make the
quantum information invisible to the measurements, the problem reduces to a classical one
of estimating the error syndrome. It is known in classical control theory as the Wonham filter
[Won64]. Here we are using the word filter in the sense of Chapter 6: a way to process the
currents in order to obtain information about the system (or, in this case, about the errors).
The filtering process actually involves solving nonlinear coupled differential equations in
which the currents appear as coefficients for some of the terms. As discussed in Chapter 6,
it is difficult to do such processing in real time for quantum systems. This motivates the
analysis of Ref. [SAJM04], which considered a non-optimal, but much simpler, form of
filtering: a linear low-pass filter for the currents.
In this section we present numerical results from Ref. [SAJM04] showing that, in a
suitable parameter regime, a feedback Hamiltonian proportional to the sign of the filtered
currents can provide protection from errors. This is perhaps not surprising, because, as
seen in Section 7.4, the information about the error syndrome is contained in the signatures
of the stabilizer generator measurements (that is, whether they are plus or minus one), a
quantity that is fairly robust under the influence of noise.
The general form of this continuous error-correcting scheme is similar to the discrete
case. It has four basic elements.
1. Information is encoded using a stabilizer code suited to the errors of concern.
2. The stabilizer generators are monitored and a suitable smoothing of the resulting currents determined.
3. From consideration of the discrete error-correcting unitaries, a suitable feedback Hamiltonian that
depends upon the signatures of the smoothed measurement currents is derived.
4. The feedback is added to the system dynamics and the average performance of the QEC scheme
is evaluated.
Given m stabilizer generators and d possible errors on our system, the stochastic master equation describing the evolution of a system under this error-correction scheme is

dρ_c(t) = Σ_{k=1}^{d} γ_k D[E_k] ρ_c(t) dt
    + Σ_{l=1}^{m} { κ D[M_l] ρ_c(t) dt + √(ηκ) H[M_l] ρ_c(t) dW_l(t) }
    − i Σ_{k=1}^{d} G_k [F_k, ρ_c(t)] dt.   (7.95)

Note that we have set the system Hamiltonian, H (which allows for gate operations on the code space), to zero in Eq. (7.95). The first line describes the effects of the errors, where √γ_k E_k is the Lindblad operator for error k, with γ_k a rate and E_k dimensionless. The second line describes the measurement of the stabilizers M_l, with κ the measurement rate (assumed for simplicity to be the same for all measurements). We also assume the same efficiency η for all measurements, so that the measurement currents dQ_l/dt can be defined by

dQ_l = 2ηκ Tr[ρ_c M_l] dt + √(ηκ) dW_l.   (7.96)
The third line describes the feedback, with F_k a dimensionless Hermitian operator intended to correct error E_k. Each G_k is the feedback strength (a rate), a function of the smoothed (dimensionless) currents

R_l(t) = (1 − e^{−rT})^{−1} ∫_{t−T}^{t} r e^{−r(t−t′)} dQ_l(t′)/(2ηκ).   (7.97)

Here the normalization of this low-pass filter has been defined so that R_l(t) is centred around ±1. We take T to be moderately large compared with 1/r.

In a practical situation the γ_k's are outside the experimenter's control (if they could be controlled, they would be set to zero). The other parameters, κ, r and the characteristic size of the G_k (which we will denote by λ), can be controlled. The larger the measurement strength κ, the better the performance should be. However, as will be discussed in Section 7.6.1, in practice κ will be set by the best available measurement device. In that case, we expect there to be a region in the parameter space of r and λ where this error-control scheme will perform optimally. This issue can be addressed using simulations.
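The filtering just described is easy to illustrate with a classical toy simulation: a single stabilizer current whose mean flips sign when an error occurs at a known time, processed through the exponential low-pass filter of Eq. (7.97). The sketch below is our own (all parameter values are illustrative, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (nominal units)
kappa, eta, r, T, dt = 150.0, 1.0, 20.0, 0.15, 1e-4
steps = int(1.0 / dt)
Tn = int(T / dt)

# A single stabilizer "syndrome" <M>: +1 initially, flipping to -1 at
# t = 0.5 s to mimic a bit-flip error.
syndrome = np.ones(steps)
syndrome[steps // 2:] = -1.0

# Measurement current increments, dQ = 2*eta*kappa*<M> dt + sqrt(eta*kappa) dW
dW = rng.normal(0.0, np.sqrt(dt), steps)
dQ = 2 * eta * kappa * syndrome * dt + np.sqrt(eta * kappa) * dW

# Causal exponential filter with the normalization of Eq. (7.97), so that
# R(t) is centred around +1 or -1 for a constant syndrome.
kernel = r * np.exp(-r * dt * np.arange(Tn))[::-1]   # oldest sample first
norm = 1.0 / (1.0 - np.exp(-r * T))
R = np.array([norm * np.sum(kernel * dQ[i - Tn + 1:i + 1])
              for i in range(Tn - 1, steps)]) / (2 * eta * kappa)

# The sign of the filtered current recovers the syndrome despite the noise
est = np.sign(R)
```

Although the white noise dominates each infinitesimal increment of dQ, the sign of R(t) tracks the syndrome reliably, with a lag of order 1/r after the error.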
To undertake numerical simulations, one needs to consider a particular model. The
simplest situation to consider is protecting against bit-flips using the three-qubit bit-flip
code of Section 7.4.2. We assume the same error rate for the three errors, and efficient
measurements. This is described by the above SME (7.95), with γ_k = γ, η = 1, and

E_1 = XII,   E_2 = IXI,   E_3 = IIX,   (7.98)
M_1 = ZZI,   M_2 = IZZ.   (7.99)
A suitable choice for Fk is to set them equal to E k . Because the smoothed currents Rl
correspond to the measurement syndrome (the sign of the result of a strong measurement
of M l ), we want Gk to be such that the following apply.
1. G_1 is large when R_1 < 0 and R_2 > 0, indicating a flip of qubit 1.
2. G_2 is large when R_1 < 0 and R_2 < 0, indicating a flip of qubit 2.
3. G_3 is large when R_1 > 0 and R_2 < 0, indicating a flip of qubit 3.
4. All the G_k are zero when R_1 > 0 and R_2 > 0, indicating no error.

A simple choice satisfying these conditions is

G_1 = (λ/4)[1 − sgn(R_1)][1 + sgn(R_2)],   (7.100)
G_2 = (λ/4)[1 − sgn(R_1)][1 − sgn(R_2)],   (7.101)
G_3 = (λ/4)[1 + sgn(R_1)][1 − sgn(R_2)].   (7.102)
[Nine panels of fidelity versus t (seconds), one for each error rate from γ = 0.1 Hz to γ = 0.9 Hz.]
Fig. 7.6 Fidelity curves with and without error correction for several error rates γ. The thick solid curve is the average fidelity F_3(t) of the three-qubit code with continuous error correction. The parameters used were dt = 10⁻⁴ s, κ = 150 s⁻¹, λ = 150 s⁻¹, r = 20 s⁻¹ and T = 0.15 s. The dotted curve is the average fidelity F_1(t) of one qubit without error correction. The thin solid curve is the fidelity F_{3d}(t) achievable by discrete QEC when the duration between applications is t. Figure 2 adapted with permission from M. Sarovar et al., Phys. Rev. A 69, 052324, (2004). Copyrighted by the American Physical Society.
optimum values of r and λ increase with κ, for γ fixed. This is as expected, because the limit where κ, r and λ are large compared with γ should approximate that of frequent strong measurements with correction. It was found that the best performance was achieved for λ ∼ κ. However, as will be discussed in Section 7.6.1, in practice λ may (like κ) be bounded above by the physical characteristics of the device. This would leave only one parameter (r) to be optimized.
The performance of this error-correction scheme can be gauged by the average fidelity
F3 (t) between the initial encoded three-qubit state and the state at time t [SAJM04]. This
is shown in Fig. 7.6 for several values of the error rate γ (the time-units used are nominal;
a discussion of realistic magnitudes is given in Section 7.6.1). Each plot also shows the
fidelity curve F1 (t) for one qubit in the absence of error correction. A comparison of
these two curves shows that the fidelity is preserved for a longer period of time by the
error-correction scheme for small enough error rates. Furthermore, for small error rates
(γ < 0.3 s⁻¹) the F_3(t) curve shows a great improvement over the exponential decay in
the absence of error correction. However, we see that, past a certain threshold error rate,
the fidelity decay even in the presence of error correction behaves exponentially, and the
two curves look very similar; the error-correcting scheme becomes ineffective. In fact, well
past the threshold, the fidelity of the (supposedly) protected qubit becomes lower than that
of the unprotected qubit. This results from the feedback corrections being so inaccurate
that the feedback mechanism effectively increases the error rate.
The third line in the plots of Fig. 7.6 is of the average fidelity achievable by discrete QEC (using the same three-qubit code) when the time between the detection–correction operations is t. The value of this fidelity (F_{3d}(t)) as a function of time was analytically calculated in Ref. [ADL02] as

F_{3d}(t) = (1/4)(2 + 3e^{−2γt} − e^{−6γt}).   (7.103)
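Equation (7.103) can be sanity-checked numerically. The sketch below also compares it against a standard benchmark for one bare qubit under bit-flips at rate γ, F_1(t) = (1 + e^{−2γt})/2 (this expression for F_1 is our assumption, not quoted from the text):

```python
import numpy as np

def f3d(gamma, t):
    """Eq. (7.103): fidelity of discrete QEC with the three-qubit code,
    applied after an interval t."""
    return 0.25 * (2.0 + 3.0 * np.exp(-2.0 * gamma * t)
                   - np.exp(-6.0 * gamma * t))

def f1(gamma, t):
    """Assumed benchmark: fidelity of one bare qubit under bit-flips."""
    return 0.5 * (1.0 + np.exp(-2.0 * gamma * t))

gamma = 0.2                                    # error rate, s^-1
assert np.isclose(f3d(gamma, 0.0), 1.0)        # starts at perfect fidelity
assert np.isclose(f3d(gamma, 1e6), 0.5)        # decays to 1/2 at long times
# Short times: the code suppresses the infidelity to second order,
# 1 - F3d ~ 3(gamma*t)^2, versus first order, 1 - F1 ~ gamma*t.
t = 0.01
assert 1.0 - f3d(gamma, t) < 1.0 - f1(gamma, t)
```

The second-order suppression at small t is why the discrete-QEC curve in Fig. 7.6 starts out much flatter than the single-qubit curve, and why the two curves can cross at large t.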
A comparison between F3 (t) and F3d (t) highlights the relative merits of the two schemes.
The fact that the two curves cross each other for large t indicates that, if the time between
applications of discrete error correction is sufficiently large, then a continuous protocol will
preserve fidelity better than a corresponding discrete scheme.
All the F_3(t) curves show an exponential decay at very early times, t ≲ 0.1 s. This is
an artefact of the finite filter length and the specific implementation of the protocol in
Ref. [SAJM04]: the simulations did not produce the smoothed measurement signals Rl (t)
until enough time had passed to get a full buffer of measurements. That is, feedback started
only at t = T . We emphasize again that this protocol is by no means optimal.
The effect of non-unit efficiency was also simulated in Ref. [SAJM04], as summarized
by Fig. 7.7. The decay of fidelity with decreasing η indicates that inefficient measurements have a negative effect on the performance of the protocol, as expected. However, the curves are quite flat for 1 − η small. This is in contrast to the correction of detected errors by Markovian feedback as considered in Section 7.5, where the rate of fidelity decay would be proportional to 1 − η. This is because in the present case the measurement of the stabilizer
generators has no deleterious effect on the encoded quantum information. Thus a reduced
efficiency simply means that it takes a little longer to obtain the information required for
the error correction.
[Fidelity versus inefficiency 1 − η, with one curve for each of the error rates γ = 0.1, 0.2, 0.3 and 0.4 Hz.]
Fig. 7.7 Average fidelity after a fixed amount of time as a function of inefficiency 1 − η for several error rates. The parameters used were dt = 10⁻⁴ s, κ = 50 s⁻¹, λ = 50 s⁻¹, r = 10 s⁻¹ and T = 0.15 s. Figure 3 adapted with permission from M. Sarovar et al., Phys. Rev. A 69, 052324, (2004). Copyrighted by the American Physical Society.
can then be used to control the barrier between the wells, as well as the relative depth of the
two wells. It is possible to design the double-well system so that, when the well depths are
equal, there are only two energy eigenstates below the barrier. These states, |+⟩ and |−⟩, with energies E₊ and E₋, are symmetric and antisymmetric, respectively. The localized states describing the electron on the left or right of the barrier can thus be defined as

|L⟩ = (|+⟩ + |−⟩)/√2,   (7.104)
|R⟩ = (|+⟩ − |−⟩)/√2.   (7.105)

An initial state localized in one well will then tunnel to the other well at the frequency Δ = (E₊ − E₋)/ℏ.
Using |L⟩ and |R⟩ as the logical basis states |0⟩ and |1⟩, respectively, we can define Pauli matrices in the usual way. Then the Hamiltonian for the system can be well approximated by

H = (ε(t)/2) Z + (Δ(t)/2) X.   (7.106)
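The tunnelling dynamics implied by Eqs. (7.104)–(7.106) are easy to verify with a two-level simulation: with the wells balanced (ε = 0), an electron prepared in |L⟩ oscillates coherently into |R⟩ at frequency Δ. A minimal sketch of our own (ℏ = 1, arbitrary units):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def propagator(eps, delta, t):
    """U(t) = exp(-i H t) for H = (eps/2) Z + (delta/2) X, with hbar = 1."""
    H = 0.5 * eps * Z + 0.5 * delta * X
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# Logical basis: |0> = |L>, |1> = |R>
L = np.array([1.0, 0.0], dtype=complex)
R = np.array([0.0, 1.0], dtype=complex)

delta = 2.0 * np.pi               # tunnelling rate (arbitrary units)

# After half a tunnelling period the electron is entirely in the right well,
# since P_R(t) = sin^2(delta * t / 2) ...
p_R = abs(R.conj() @ propagator(0.0, delta, np.pi / delta) @ L) ** 2
# ... and after a full period it has returned to the left well.
p_L = abs(L.conj() @ propagator(0.0, delta, 2 * np.pi / delta) @ L) ** 2
```

Turning on a large bias ε ≫ Δ suppresses the oscillation, which is how the bias gate can effectively freeze the charge state.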
A (time-dependent) bias gate can control the relative well depth ε(t), and similarly a barrier gate can control the tunnelling rate Δ(t). Further details on the validity of this Hamiltonian and how well it can be realized in the P–P⁺ in Si system can be found in Ref. [BM03].
A number of authors have discussed the sources of decoherence in a charge qubit system
such as this one [BM03, FF04, HDW+ 04]. For appropriate donor separation, phonons can
be neglected as a source of decoherence. The dominant sources are fluctuations in voltages
on the surface gates controlling the Hamiltonian and electrons moving in and out of trap
states in the vicinity of the dot. The latter source of decoherence is expected to dominate at
low frequencies (long times), as for so-called 1/f noise. In any case, both sources can be modelled using the well-known spin–boson model (see Section 3.4.1). The key element of
this model for the discussion here is that the coupling between the qubit and the reservoir
is proportional to Z.
If the tunnelling term proportional to Δ(t)X in Eq. (7.106) were not present, decoherence of this kind would lead to pure dephasing. However, in a general single-qubit gate operation, both dephasing and bit-flip errors can arise in the spin–boson model. We use the decoherence
rate calculated for this model as indicative for the bit-flip error rate in the toy model used
above in which only bit-flips occur. Hollenberg et al. [HDW+ 04] calculated that, for a device
operating at 10 K, the error rate would be γ = 1.4 × 10⁶ s⁻¹. This rate could be made a factor of ten smaller by operating at lower temperatures and improving the electronics
controlling the gates.
We now turn to estimating the measurement strength κ for the P–P⁺ system. In order to
read out the qubit in the logical basis, we need to determine whether the electron is in the
left or the right well quickly and with high probability of success. The technique of choice
is currently based on radio-frequency single-electron transistors (RF-SETs) [SWK+ 98]. A
single-electron transistor is a very sensitive transistor whose operation relies upon single-electron tunnelling onto and off a small metallic island (hence its name). That is, the
differential resistance of the SET can be controlled by a very small bias voltage, which
in this case arises from the Coulomb field associated with the qubit electron. Depending
on whether the qubit is in the L or R state, this field will be different and hence the SET
resistance will be different. In the RF configuration (which enables 1/f noise to be filtered
from the signal) the SET acts as an Ohmic load in a tuned tank circuit. The two different
charge states of the qubit thus produce two levels of power reflected from the tank circuit.
The electronic signal in the RF circuit carries a number of noise components, including
amplifier noise, the Johnson noise of the circuit and random telegraph noise in the SET
bias conditions due to charges hopping randomly between charge trap states in or near the
SET. The quality of the SET is captured by the minimum charge sensitivity per root hertz, S. In Ref. [BRS+05] a value of S ≈ 5 × 10⁻⁵ e/√Hz was measured, for the conditions of observing the single-shot response to a charge change Δq = 0.05e. Here e is the charge on a single electron, and Δq means a change in the bias field for the SET corresponding to moving a charge of Δq from its original position (on the P–P⁺ system) to infinity. This is of order the field change expected for moving the electron from one P donor to the other. Thus the characteristic rate for measuring the qubit in the charge basis is of order (Δq/S)² = 10⁶ Hz. Thus we take κ = 10⁶ s⁻¹. For definiteness we will say that η = 1
(that is, a quantum-limited measurement), even though that is almost certainly not the case
(see for example Refs. [WUS+ 01, Goa03]). Note also that we are ignoring the difficulties
associated with measuring stabilizers such as ZZI . That is, we simply use the one-qubit
measurement rate for this joint multi-qubit measurement.
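The order-of-magnitude arithmetic behind this estimate of κ can be checked in a few lines (values as quoted above):

```python
S = 5e-5          # SET charge sensitivity, in units of e per root hertz
dq = 0.05         # charge change to be resolved, in units of e

kappa = (dq / S) ** 2     # characteristic measurement rate, in Hz
# (0.05 / 5e-5)^2 = 1000^2 = 1e6, i.e. kappa ~ 10^6 s^-1 as in the text
assert abs(kappa - 1e6) < 1.0

tau = 1.0 / kappa         # ~ 1 microsecond to resolve the charge state
```

Equivalently, with this sensitivity the SET needs of order a microsecond of integration to distinguish the two charge states, which is why κ and the error rate γ end up comparable for this architecture.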
We next need to estimate typical values for the feedback strength. The feedback Hamiltonian is proportional to an X operator, which corresponds to changing the tunnelling rate
for each of the double-dot systems that comprise each qubit. In Ref. [BM03], the maximum tunnelling rate was calculated to be about 10⁹ s⁻¹, for a donor separation of 40 nm. We take this to be the upper bound on λ.
To summarize, in the P–P⁺-based charge qubit with RF-SET readout, we have γ ∼ κ ∼ 10⁶ s⁻¹ and λ ≲ 10⁹ s⁻¹. The fact that the measurement strength and the error rate are of the same order of magnitude for this architecture is a problem for our error-correction scheme. This means that the rate at which we gain information is about the same as the rate at which errors happen, and it is difficult to operate a feedback correction protocol in such a regime.
Although it is unlikely that the measurement rate could be made significantly larger in the
near future, as mentioned above it is possible that the error rate could be made smaller by
improvements in the controlling electronics.
Fig. 7.8 A circuit for implementing error correction using the three-qubit bit-flip code without
measurement. The top three qubits form the encoded logical qubit and the bottom two form the
ancilla. The first four gates are C-NOT gates as described in Section 7.4.1. The last three are Toffoli
gates, which are similar but have two controls (shown by the open or filled circles). The target (large
encircled cross) undergoes a bit-flip iff the controls have the appropriate value (zero for an open
circle, one for a filled circle). Note that, to repeat the error-correction procedure, the ancilla qubits
must be replaced or reset to the |0⟩ state at the end of each run (at the far right of the circuit). Figure
1 adapted with permission from M. Sarovar and G. J. Milburn, Phys. Rev. A 72, 012306, (2005).
Copyrighted by the American Physical Society.
qubit indefinitely. Note that we are assuming that the operations involved in the circuit (the unitary gates and the ancilla reset) are instantaneous. In this section we address the obvious question: can we replace these instantaneous discrete operations by continuous processes?
question: can we replace these instantaneous discrete operations by continuous processes?
That is, can we use a finite apparatus to obtain a continuous version of coherent QEC (see
Section 5.8.1) just as there are continuous versions of conventional QEC with measurement
as discussed in Section 7.6?
The answer to this question is yes, as shown in Ref. [SM05]. Following that reference,
we need to modify two components of the circuit model.
1. The unitary gates which form the system–ancilla coupling are replaced by a finite-strength, time-independent Hamiltonian. This Hamiltonian will perform both the detection and the correction
operations continuously and simultaneously.
2. The ancilla reset procedure is replaced by the analogous continuous process of cooling. Each
ancilla qubit must be independently and continuously cooled to its ground state |0⟩.
The master equation for the five qubits (the three encoding the logical qubit, plus the two ancillas) is then

ρ̇ = γ Σ_{j=1}^{3} D[X_j]ρ − iΩ[H, ρ] + κ Σ_{a=1}^{2} D[|0⟩_a⟨1|]ρ,   (7.107)

where γ is the bit-flip error rate, Ω is the strength of the Hamiltonian H specified below, and κ is the rate of the ancilla cooling.
Here, the ordering of the tensor product for all operators in the equation runs down the
circuit as shown in Fig. 7.8 (i.e. the first three operators apply to the encoded qubit and the last two to the ancilla qubits). The detection Hamiltonian is

H_D = D_1 (X ⊗ I) + D_2 (X ⊗ X) + D_3 (I ⊗ X).   (7.108)
Here, D_1 = |100⟩⟨100| + |011⟩⟨011| is the projector onto the subspace where there has
been a bit-flip error on the first physical qubit, and D 2 and D 3 similarly for the second and
third physical qubits. These operators act on the three qubits encoding the logical qubit,
while the Pauli operators cause the appropriate bit-flips in the ancilla qubits. Similarly, the
correction Hamiltonian is
H_C = C_1 (P ⊗ I) + C_2 (P ⊗ P) + C_3 (I ⊗ P).   (7.109)
(7.109)
Here P ≡ (1 − Z)/2 = |1⟩⟨1|, the projector onto the logical one state of a qubit. We have also defined C_1 = X ⊗ (|00⟩⟨00| + |11⟩⟨11|), an operator that corrects a bit-flip on the first physical qubit (assuming that the second and third remain in the code space), and C_2 and C_3 similarly for the second and third physical qubits.
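The action of these detection and correction operators can be checked directly with small matrices. A sketch of our own, following the definitions above:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron(*ops):
    return reduce(np.kron, ops)

def ket(bits):
    """Computational-basis ket |bits> as a vector."""
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1.0
    return v

# D1 = |100><100| + |011><011|: projector onto "flip on qubit 1" subspace
D1 = np.outer(ket('100'), ket('100')) + np.outer(ket('011'), ket('011'))

# C1 = X (x) (|00><00| + |11><11|): flips qubit 1 back, provided qubits
# 2 and 3 are still in the code space
P23 = np.outer(ket('00'), ket('00')) + np.outer(ket('11'), ket('11'))
C1 = kron(X, P23)

# A bit-flip on qubit 1 of the code word |000> is detected by D1 ...
flipped = kron(X, I2, I2) @ ket('000')            # = |100>
assert np.isclose(flipped @ D1 @ flipped, 1.0)
# ... while the uncorrupted code word is invisible to it ...
assert np.isclose(ket('000') @ D1 @ ket('000'), 0.0)
# ... and C1 restores the corrupted code word.
assert np.allclose(C1 @ flipped, ket('000'))
```

The same pattern, with the roles of the three physical qubits permuted, gives D_2, D_3 and C_2, C_3.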
The operation in Fig. 7.8 of detection followed by correction can be realized by the
unitary U_DC = exp(−iπH_C/2) exp(−iπH_D/2).
Exercise 7.32 Verify this.
Now, by the Baker–Campbell–Hausdorff theorem (A.118), it follows that the unitary U_DC has a generator of the form

H = H_D + H_C + i[H_D, H_C],   (7.110)
[Fidelity versus s, with one curve for each of Ω = 1, 5, 10, 30 and 50.]
Fig. 7.9 Fidelity, after a fixed period of time (T = 10), of an encoded qubit (three-qubit code) undergoing continuous error correction using cooled ancillae. Here time is measured in arbitrary units, with γ = 1/20. The curves are for different Hamiltonian strengths (Ω) and the horizontal axis shows how the cooling rate κ is scaled with Ω; i.e. κ = sΩ, where s is varied along the horizontal axis. Figure 2 adapted with permission from M. Sarovar and G. J. Milburn, Phys. Rev. A 72, 012306, (2005). Copyrighted by the American Physical Society.
In Ref. [SM05], Eq. (7.107) was solved by numerical integration and the fidelity F(t) ≡ ⟨ψ|ρ̃(t)|ψ⟩ determined. Here ρ̃(t) is the reduced state of the encoded subsystem and ρ̃(0) = |ψ⟩⟨ψ| is the initial logical state. For a given error rate γ we expect there to be an optimal ratio between the Hamiltonian strength Ω and the cooling rate κ. Figure 7.9 shows the fidelity after a fixed period of time 1/(2γ) for several values of these parameters, and it is clear that the best performance is when κ ≈ 2.5Ω. This optimal point is independent of the ratio of Ω to γ and of the initial state of the encoded qubits. The following results were all obtained in this optimal parameter regime.
Figure 7.10 shows the evolution of fidelity with time for a fixed error rate γ and several values of Ω. This clearly shows the expected improvement in performance with an increase in the Hamiltonian strength. Large values of Ω and κ are required in order to maintain fidelity at reasonable levels. To maintain the fidelity above 0.95 up to time T = 1/(2γ) requires Ω/γ > 200. However, a comparison with the unprotected qubit's fidelity curve shows
a marked improvement in coherence, due to the error-correction procedure. Therefore,
implementing error correction even in the absence of ideal resources is valuable. This
was also evident in the scenario of error correction with measurement in the preceding
section.
[Fidelity versus time for Ω = 1, 5, 10 and 50, together with the curve for an unprotected qubit.]
Fig. 7.10 Fidelity curves for several Hamiltonian strengths Ω versus time. Time is measured in arbitrary units, with γ = 1/20. The solid curves are the fidelity of an encoded qubit (three-qubit code) with continuous error correction. The dashed curve is the fidelity of one qubit undergoing random bit-flips without error correction. Figure 3 adapted with permission from M. Sarovar and G. J. Milburn, Phys. Rev. A 72, 012306, (2005). Copyrighted by the American Physical Society.
Aside from describing a different implementation of error correction, the scheme above casts error correction in terms of the very natural process of cooling; it refines the viewpoint that error correction extracts the entropy that enters the system through errors. Error correction is not cooling to a particular state, such as a ground state, but rather to a subspace of Hilbert space, and the specially designed coupling Hamiltonian allows us to implement this cooling to a (nontrivial) subspace by a simple cooling of the ancilla qubits to their ground state.
[Fig. 7.11: a linear optical network U(G) acting on N signal modes and K ancilla modes; the ancilla modes are prepared with photon numbers n_1, ..., n_K and counted at the output, giving results m_1, ..., m_K.]
showed that non-deterministic photonic qubit gates are possible with linear optical networks
when some of the input modes (referred to as ancilla modes) are prepared in single-photon states before the optical network and directed to photon counters at the network
output. The conditional state of all non-ancilla modes (the signal modes), conditioned on a
particular count on the output ancilla modes, is given by a non-unitary transformation of the
input signal state and can simulate a highly nonlinear optical process. This transformation
is defined in terms of a conditional measurement operator acting on the signal modes
alone.
Consider the situation depicted in Fig. 7.11. In this device N + K modes pass through
a linear optical device, comprising only mirrors and beam-splitters. We describe this by
a unitary transformation (that is, we ignore losses through absorption etc.) so that the
total photon-number is conserved. The K ancilla modes are prepared in photon-number
eigenstates. At the output, photon-number measurements are made on the ancilla modes
alone. We seek the conditional state for the remaining N modes, given the ancilla photon-number count.
The linear optical device performs a unitary transformation on all the input states:
U(G) = exp[i a† G a],   (7.111)
where a is the vector of annihilation operators

a = (a_1, a_2, ..., a_N, a_{N+1}, ..., a_{N+K})ᵀ,   (7.112)

and G is an (N + K) × (N + K) Hermitian matrix. In the Heisenberg picture, the network maps each annihilation operator to a linear combination of annihilation operators:

U†(G) a U(G) = S(G) a,   where S(G) = exp(iG).   (7.113)
One should not confuse the unitary transformation U (G) (an operator) with the induced
unitary representation S(G) = exp(iG) (a matrix). Because S(G) is unitary, the transformation leaves the total photon number invariant:
U†(G) a†a U(G) = a† S†(G)S(G) a = a†a,   (7.114)

where the total photon-number operator is

a†a = Σ_k a_k† a_k.   (7.115)
The conditional measurement operator for the signal modes, given ancilla preparation n = (n_1, ..., n_K) and count m = (m_1, ..., m_K), is

M(m|n) = ⟨m|_anc U(G) |n⟩_anc,   (7.116)

where

|m⟩_anc = |m_1⟩_{N+1} |m_2⟩_{N+2} ··· |m_K⟩_{N+K}.   (7.117)
with

S(G) a = ( s_11  s_12  s_13 ) ( a_1 )
         ( s_21  s_22  s_23 ) ( a_2 ) .   (7.118)
         ( s_31  s_32  s_33 ) ( a_3 )
We will regard a_2 and a_3 as the ancilla modes, prepared in the single-photon state |1, 0⟩. That is, n_2 = 1 and n_3 = 0. We will condition on a count of m_2 = 1 and m_3 = 0. We want
⟨n|M(1, 0|1, 0)|n⟩ = ⟨n, 1, 0|U(G)|n, 1, 0⟩   (7.119)
= ⟨0, 0, 0| (a_1ⁿ a_2/√n!) U(G) ((a_1†)ⁿ a_2†/√n!) |0, 0, 0⟩.   (7.120)
Exercise 7.35 Show this, and also show that, since a_3 does not appear in Eq. (7.120), further simplification is possible, namely to

(7.121)

Hint: First show that ⟨0, 0, 0|U(G) = ⟨0, 0, 0|, and so replace a_1ⁿ a_2 U(G) by U(G)[U†(G) a_1ⁿ a_2 U(G)].
The result is

M(1, 0|1, 0) = s_12 s_21 a_1† A a_1 + s_22 A,   (7.122)

where

A = Σ_{n=0}^{∞} (s_11 − 1)ⁿ (a_1†)ⁿ a_1ⁿ/n!.   (7.123)
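A useful check on Eq. (7.123) is that the normally ordered series collapses on number states: since (a†)ᵐaᵐ|n⟩ = [n!/(n − m)!]|n⟩, the sum is binomial and A|n⟩ = s_11ⁿ|n⟩. This can be verified in a truncated Fock space (a sketch of our own, not from the text):

```python
import numpy as np
from math import factorial

dim = 10                                         # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, dim)), k=1)   # a|n> = sqrt(n)|n-1>
ad = a.T                                         # creation operator

def A_op(s11):
    """A = sum_m (s11 - 1)^m (a^dag)^m a^m / m!, truncated at m = dim."""
    A = np.zeros((dim, dim))
    for m in range(dim):
        A += ((s11 - 1.0) ** m / factorial(m)) * \
             np.linalg.matrix_power(ad, m) @ np.linalg.matrix_power(a, m)
    return A

s11 = 1.0 - np.sqrt(2.0)   # the value appearing in the NS gate, Eq. (7.129)
A = A_op(s11)

# A is diagonal in the number basis, with eigenvalues s11^n
for n in range(5):
    ket_n = np.zeros(dim)
    ket_n[n] = 1.0
    assert np.allclose(A @ ket_n, (s11 ** n) * ket_n)
```

In other words, A is the normally ordered exponential :exp[(s_11 − 1)a_1†a_1]:, which simply rescales the amplitude of each photon-number component by s_11ⁿ.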
|0⟩_L = |1⟩_1|0⟩_2,   |1⟩_L = |0⟩_1|1⟩_2.   (7.124)
The modes could be distinguished spatially (e.g. a different direction for the wave vector),
or they could be distinguished by polarization.
One single-qubit gate that is easily implemented uses a beam-splitter (for spatially distinguished modes) or a wave-plate (for modes distinguished in terms of polarization). These
linear optical elements involving two modes can be described by the unitary transformation
U(θ) = exp[iθ(a_1† a_2 + a_1 a_2†)],   (7.125)
which coherently transfers excitations from one mode to the other. Another simple single-qubit gate is a relative phase shift between the two modes, which can be achieved simply by altering the optical path-length difference. For spatially distinguished modes this can be done by altering the actual length travelled or using a thickness of refractive material (e.g. glass), whereas for polarization-distinguished modes a thickness of birefringent material (e.g. calcite) can be used. This gate can be modelled by the unitary U(φ) = exp[i(φ/2)(a_1† a_1 − a_2† a_2)].
By concatenating arbitrary rotations around the X and Z axes of the Bloch sphere of the
qubit, one is able to implement arbitrary single-qubit gates.
Exercise 7.37 Convince yourself of this.
Hint: For the mathematically inclined, consider the Lie algebra generated by X and Z (see Section 6.6.2). For the physically inclined, think about rotating an object in three-dimensional space.
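The correspondence between these two-mode generators and Pauli operators can also be checked numerically. The assumed correspondence (our own sketch, easily derived) is a_1†a_2 + a_1a_2† ↔ X and a_1†a_1 − a_2†a_2 ↔ Z on the dual-rail subspace span{|1, 0⟩, |0, 1⟩}:

```python
import numpy as np

def expi(G, s):
    """exp(i s G) for Hermitian G, via eigendecomposition."""
    w, V = np.linalg.eigh(G)
    return V @ np.diag(np.exp(1j * s * w)) @ V.conj().T

# Two modes, each truncated at one photon; basis order |n1 n2>: 00,01,10,11
a = np.array([[0.0, 1.0], [0.0, 0.0]])
I2 = np.eye(2)
a1, a2 = np.kron(a, I2), np.kron(I2, a)

# Dual-rail logical states: |0>_L = |1 0>, |1>_L = |0 1>
ket0L = np.zeros(4, dtype=complex); ket0L[2] = 1.0
ket1L = np.zeros(4, dtype=complex); ket1L[1] = 1.0

Gx = a1.conj().T @ a2 + a1 @ a2.conj().T   # acts as X on the logical subspace
Gz = a1.conj().T @ a1 - a2.conj().T @ a2   # acts as Z on the logical subspace

# A theta = pi/2 beam-splitter swaps the logical states (up to a phase)
U = expi(Gx, np.pi / 2)
assert np.isclose(abs(ket1L.conj() @ U @ ket0L), 1.0)

# U(phi) imprints a relative phase e^{-i phi} between |1>_L and |0>_L
Uphi = expi(Gz, 0.3 / 2)   # phi = 0.3 in U(phi) = exp[i(phi/2)(n1 - n2)]
phase = (ket1L.conj() @ Uphi @ ket1L) / (ket0L.conj() @ Uphi @ ket0L)
assert np.isclose(phase, np.exp(-1j * 0.3))
```

Since rotations about two distinct axes of the Bloch sphere generate all of SU(2), these two elements suffice for arbitrary single-qubit gates.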
A simple choice for an entangling two-qubit gate is the conditional sign-change (CS)
gate. In the logical basis it is defined by
|x⟩_L|y⟩_L → e^{iπxy} |x⟩_L|y⟩_L.   (7.126)
This was the sort of interaction considered in Ref. [Mil89], in which the logical basis
was the photon-number basis. It is then implementable by a so-called mutual-Kerr-effect nonlinear phase shift:

U_Kerr = exp[iπ a_1† a_1 a_2† a_2].   (7.127)
This requires the photons to interact via a nonlinear medium. In practice it is not possible to get a single-photon phase shift of π, which this transformation implies, without adding
a considerable amount of noise from the medium. However, as we now show, we can
realize a CS gate non-deterministically using the dual-rail encoding and the general method
introduced in the preceding subsection.
With dual-rail encoding, a linear optical network for a two-qubit gate will have, at most,
two photons in any mode. As we will show later, the CS gate can be realized if we can
realize the following transformation on a single mode in an arbitrary superposition of no,
one and two photons:

|ψ⟩ = α_0|0⟩_1 + α_1|1⟩_1 + α_2|2⟩_1  →  |ψ′⟩ = α_0|0⟩_1 + α_1|1⟩_1 − α_2|2⟩_1,   (7.128)
Fig. 7.12 The NS gate |ψ⟩ → λ|ψ′⟩ constructed from three beam-splitters with reflectivities r_i = sin θ_i, with θ_1 = θ_3 = 22.5°, θ_2 = 114.47°. Adapted by permission from Macmillan Publishers Ltd: Nature, E. Knill et al., 409, 46, Figure 1, copyright 2001.
for some complex number λ. The phase of λ corresponds to an unobservable global phase shift, while |λ|² is the probability of the measurement outcome under consideration. One solution is easily verified to be

    ( 1 − 2^{1/2}              2^{−1/4}             (3/2^{1/2} − 2)^{1/2} )
S = ( 2^{−1/4}                 1/2                  1/2 − 1/2^{1/2}       ) .   (7.129)
    ( (3/2^{1/2} − 2)^{1/2}    1/2 − 1/2^{1/2}      2^{1/2} − 1/2         )
Here λ = 1/2, so the success probability is 1/4. This is the best that can be achieved in a
linear optical system via a non-deterministic protocol without some kind of feedforward
protocol [Eis05]. An explicit linear optical network to realize this unitary transformation S
using three beam-splitters is shown in Fig. 7.12.
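The claimed properties of this matrix can be verified numerically: S is unitary, and the conditional amplitudes ⟨n, 1, 0|U(G)|n, 1, 0⟩, computed via matrix permanents in the standard way for linear optics, equal λ, λ and −λ for n = 0, 1, 2, with λ = 1/2. A sketch of our own check:

```python
import numpy as np
from itertools import permutations
from math import factorial

s2 = np.sqrt(2.0)
# The NS-gate matrix of Eq. (7.129)
S = np.array([
    [1.0 - s2,                2.0 ** -0.25,       np.sqrt(3.0 / s2 - 2.0)],
    [2.0 ** -0.25,            0.5,                0.5 - 1.0 / s2],
    [np.sqrt(3.0 / s2 - 2.0), 0.5 - 1.0 / s2,     s2 - 0.5],
])
assert np.allclose(S @ S.T, np.eye(3))   # unitary (real orthogonal here)

def perm(M):
    """Matrix permanent by brute force (fine for these small matrices)."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def amp(S, out, inp):
    """<out|U(G)|inp> for photon occupation lists, via the permanent."""
    rows = [i for i, n in enumerate(out) for _ in range(n)]
    cols = [j for j, n in enumerate(inp) for _ in range(n)]
    norm = np.prod([float(factorial(n)) for n in list(out) + list(inp)])
    return perm(S[np.ix_(rows, cols)]) / np.sqrt(norm)

# Conditional amplitudes for n = 0, 1, 2 signal photons, with the ancilla
# modes prepared in |1, 0> and a count of (1, 0) at the output:
lam = [amp(S, (n, 1, 0), (n, 1, 0)) for n in range(3)]
assert np.allclose(lam, [0.5, 0.5, -0.5])   # the NS sign flip, |lambda|^2 = 1/4
```

This confirms the transformation (7.128) with an overall amplitude λ = 1/2, and hence a success probability |λ|² = 1/4.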
In Fig. 7.13, we show how two non-deterministic NS gates can be used to implement a
CS gate in dual-rail logic. Here the beam-splitters are all 50:50 (θ = π/4). Since success requires both the NS gates to work, the overall probability of success is 1/16. A simplification of this scheme that uses only two photons was proposed by Ralph et al. [RLBW02].
Fig. 7.13 A CS two-qubit gate implemented using two non-deterministic NS gates. This gate has no
effect on the two-qubit input state, except to change the sign of the |1|1 component, as indicated.
Adapted by permission from Macmillan Publishers Ltd: Nature, E. Knill et al., 409, 46, Figure 1,
copyright 2001.
It is simplified first by setting the beam-splitter parameters r1 and r3 to zero in the NS gate
implementation (Fig. 7.12), and second by detecting exactly one photon at each logical
qubit output. The device is non-deterministic and succeeds with probability of 1/9, but is
not scalable insofar as success is heralded by the coincident detection of both photons at
distinct detectors: failures are simply not detected at all. It is this simplified gate that was
the first to be experimentally realized in 2003 [OPW+ 03], and it has become the work-horse
for LOQC experiments [KMN+ 07].
Fig. 7.14 Quantum teleportation of a C-NOT gate on to the state |ψ⟩ ⊗ |φ⟩ (with |ψ⟩ being the control). The C-NOT at the start of the circuit can be considered part of the preparation of the entangled resource used in the teleportation, which can be discarded and reprepared if this C-NOT fails. Other details are as in Fig. 7.1.
we implement a C-NOT gate between two qubits of the four-qubit entangled resource: the two qubits that will carry the teleported state. The result is to produce an entangled state (in general) at the output of the dual-rail teleporter, rather than the product state |ψ⟩ ⊗ |φ⟩. Moreover, by modifying the controls applied in the teleportation protocol the device can be made to output a state that is identical to that which would have been obtained by applying a C-NOT gate directly to the state |ψ⟩ ⊗ |φ⟩ (with |ψ⟩ being the control).
Exercise 7.38 Verify that the circuit in Fig. 7.14 works in this way.
This teleportation of the C-NOT gate works regardless of the initial state of the two qubits. As discussed above, the C-NOT gate can be realized using a non-deterministic NS gate. The point of the teleportation protocol is that, if it fails, we simply repeat the procedure with another two entangled states |t₁⟩|t₁⟩, until the preparation succeeds. When it has succeeded, we perform the protocol in Fig. 7.14. Note that the entangled state |t₁⟩ can also be prepared non-deterministically using an NS gate.
There is one remaining problem with using teleportation to achieve two-qubit gates in
LOQC: it requires the measurement of the operators XX and ZZ on a pair of qubits. This
Fig. 7.15 A simple non-deterministic teleportation protocol. The protocol works whenever the total
count at the output photon counters is unity.
can be achieved with these two qubits alone, using only single-qubit unitaries, single-qubit
measurements in the logical basis and two applications of a C-NOT gate.
Exercise 7.39 Try to construct the circuit that achieves this using the resources described.
The problem is that the C-NOT itself is a two-qubit gate! It would seem that this would
lead to an infinite regress, with an ever-decreasing probability of success. However, it
can be shown that, by using the appropriate entangled resource, the teleportation step can
be made near-deterministic. This near-deterministic teleportation protocol requires only
photon counting and the ability to perform local quantum control on the basis of these
measurement results.
Figure 7.15 shows the basic LOQC quantum teleportation protocol. Note that the states |0⟩ and |1⟩ here are photon-number states, not the dual-rail encoded logical states introduced in Section 7.8.2. That is, the teleporter actually works on a single-rail qubit |ψ⟩ = α|0⟩₀ + β|1⟩₀, transferring it from mode a₀ to mode a₂. A dual-rail qubit can be teleported by teleporting the two single-rail qubits in its two modes. In Section 7.9 we will consider LOQC based on single-rail qubits, for which the teleportation scheme of Fig. 7.15 can be used directly.
The teleportation scheme begins by preparing the ancilla state |t₁⟩ = (|01⟩₁₂ + |10⟩₁₂)/√2, by splitting a single photon across modes a₁ and a₂ at the first 50:50 beam-splitter in Fig. 7.15. The input mode a₀ is then combined with mode a₁ at the second 50:50 beam-splitter, and the number of photons in each of these modes is counted. The teleportation works whenever the total count on modes 0 and
1 is unity. To see this, it is instructive to consider what happens in the other cases. If the
total count at the output is 0, then we can infer that initially mode a0 must have been in the
vacuum state. Likewise, if the total count is 2 then we can infer that initially mode a0 must
have contained a single photon. In both cases the output photon count serves to measure
the number of photons in the input mode, destroying the quantum information there. This
is a failure of the teleportation, but a so-called heralded failure because we know it has
occurred.
Exercise 7.40 Show that the probability of this heralded failure is 1/2, independently
of |.
Hint: First show that, for this purpose, mode a₁ entering the second beam-splitter can be considered as being in either state |0⟩ or state |1⟩, each with probability 1/2.
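The heralded-failure probability of 1/2 can also be confirmed by brute force. The following sketch is our own illustration (the mode ordering and the real-rotation beam-splitter convention are assumptions, not the text's): it simulates the three-mode teleporter of Fig. 7.15 in a truncated Fock basis and tallies the outcomes of the photon counting on modes 0 and 1.

```python
import numpy as np

d = 3                                        # Fock cutoff per mode (photon number <= 2)
a = np.diag(np.sqrt(np.arange(1, d)), 1)     # single-mode annihilation operator
I = np.eye(d)

def expm_antiherm(K):
    """exp(K) for anti-Hermitian K, computed via the Hermitian matrix iK."""
    w, V = np.linalg.eigh(1j * K)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

# Mode ordering: a0 (input), a1, a2 (the two ancilla modes carrying |t1>).
a0 = np.kron(a, np.kron(I, I))
a1 = np.kron(I, np.kron(a, I))

# 50:50 beam-splitter on modes a0 and a1 (real rotation convention assumed).
U_bs = expm_antiherm((np.pi / 4) * (a0.conj().T @ a1 - a0 @ a1.conj().T))

alpha, beta = 0.6, 0.8                       # input single-rail qubit alpha|0> + beta|1>
e = np.eye(d)
psi_in = alpha * e[0] + beta * e[1]
t1 = (np.kron(e[0], e[1]) + np.kron(e[1], e[0])) / np.sqrt(2)   # |t1> on modes 1 and 2
psi = np.kron(psi_in, t1)

amp = (U_bs @ psi).reshape(d, d, d)          # amplitudes indexed by (n0, n1, n2)

# Success: exactly one photon counted in modes 0 and 1 in total.
p_success = sum(np.sum(np.abs(amp[k, l]) ** 2)
                for k in range(d) for l in range(d) if k + l == 1)

# Conditioned state of mode 2 for the outcome (k, l) = (1, 0).
cond = amp[1, 0] / np.linalg.norm(amp[1, 0])
```

For any α, β the success probability comes out as 1/2, and the conditioned state of mode 2 carries the input amplitudes (up to the sign discussed in Exercise 7.41).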
If the teleporter does not fail as just described, then it succeeds. That is, the input state
appears in mode 2 up to a simple transformation without having interacted with mode 2
after the preparation of the initial ancilla state.
Exercise 7.41 Taking the beam-splitter transformations to be

|01⟩ → (|01⟩ + |10⟩)/√2,   |10⟩ → (|10⟩ − |01⟩)/√2,  (7.130)

show that, conditioned on a total count of unity, the state of mode 2 is

α|0⟩₂ + β|1⟩₂  for k = 1, l = 0,
α|0⟩₂ − β|1⟩₂  for k = 0, l = 1.  (7.131)
In the second case, the state can be corrected back to |ψ⟩ by applying the operator Ẑ. In this single-rail case, this corresponds simply to an optical phase shift of π.

The probability of success of the above teleporter is 1/2, which is not acceptable. However, we can improve the probability of successful teleportation to 1 − 1/(n + 1) by generalizing the initial entangled resource from |t₁⟩₁₂ to an n-photon state on 2n modes:

|t_n⟩₁···₂ₙ = Σ_{j=0}^{n} |1⟩^{⊗j}|0⟩^{⊗(n−j)}|0⟩^{⊗j}|1⟩^{⊗(n−j)} / √(n + 1).  (7.132)
Teleportation is achieved by applying an (n + 1)-point Fourier transform F_{n+1} to the input mode a₀ together with the first n modes of |t_n⟩. This acts on the corresponding annihilation operators as

â_k → (1/√(n + 1)) Σ_{l=0}^{n} e^{2πikl/(n+1)} â_l.  (7.133)
Since this is linear, it can be implemented with passive linear optics; for details see
Ref. [KLM01]. After applying Fn+1 , we measure the number of photons in each of the
modes 0 to n.
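Because the transformation (7.133) is a unitary matrix acting on the mode operators, it can indeed be decomposed into beam-splitters and phase shifters. A quick numerical check of its unitarity (our own illustration, with n = 4 chosen arbitrarily):

```python
import numpy as np

n = 4                                   # protocol parameter; F acts on n + 1 modes
N = n + 1
k, l = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(2j * np.pi * k * l / N) / np.sqrt(N)   # matrix of Eq. (7.133)

unitary = np.allclose(F @ F.conj().T, np.eye(N))
```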
Suppose this measurement detects k photons altogether. It is possible to show that, if 0 < k < n + 1, then the teleported state appears in mode n + k and only needs to be corrected by applying a phase shift. The modes 2n − l are in the state |1⟩ for 0 ≤ l < n − k, and can be reused in future preparations requiring single photons. The remaining modes are in the vacuum state |0⟩. If k = 0 the input state is measured and projected to |0⟩₀, whereas if k = n + 1 it is projected to |1⟩₀. The combined probability of these two failure events is 1/(n + 1), regardless of the input. Note that both the necessary correction and which mode we teleported to are unknown until after the measurement.
Exercise 7.42 Consider the above protocol for n = 2. Show that

|t₂⟩ = (|0011⟩ + |1001⟩ + |1100⟩)/√3.  (7.134)

Say the results of photon counting on modes 0, 1 and 2 are r, s and t, respectively. Show that the teleportation is successful iff 0 < r + s + t < 3. Compute the nine distinct conditional states that occur in these instances and verify that success occurs with probability 2/3.
The problem with the approach presented above is that, for large n, the obvious networks
for preparing the required states have very low probabilities of success, but to attain the
stringent accuracy requirements for quantum computing [NC00] one does require large n.
However, it is possible to make use of the fact that failure is detected and corresponds to
measurements in the photon-number basis. This allows exponential improvements in the
probability of success for gates and state production with small n, using quantum codes
and exploiting the properties of the failure behaviour of the non-deterministic teleportation.
For details see Knill et al. [KLM01]. Franson [FDF+ 02] suggested a scheme by which the
probability of unsuccessfully teleporting the gate will scale as 1/n2 rather than 1/n for
large n. Unfortunately the price is that gate failure does not simply result in an accidental
qubit error, making it difficult to scale.
Some important improvements have been made to the original scheme, making quantum
optical computing experimentally viable. Nielsen [Nie04] proposed a scheme based on the
cluster-state model of quantum computation. This is an alternative name for the one-way
quantum computation introduced in Ref. [RB01], in which quantum measurement and
control completely replace the unitary two-qubit gates of the conventional circuit model.
A large entangled state (the cluster) is prepared, then measurements are performed on
individual qubits, and the results are used to control single-qubit unitaries on other qubits
in the cluster, which are then measured, and so on. The cluster state does not have to be
d|ψ̃_J(t)⟩ = [−½ â†â dt + e^{iΦ(t)} â J(t)dt] |ψ̃_J(t)⟩.  (7.135)

Here J(t) is the dyne current, which is ostensibly white noise, in order for ⟨ψ̃_J(t)|ψ̃_J(t)⟩ to be the appropriate weight for a particular trajectory (see Section 4.4.3).
Now say that the mode initially contains at most one photon: |ψ(0)⟩ = c₀|0⟩ + c₁|1⟩. Then there is a simple analytical solution for the conditioned state:

|ψ̃_J(t)⟩ = (c₀ + c₁R_t)|0⟩ + c₁e^{−t/2}|1⟩,  (7.136)

where R_t is a functional of the dyne photocurrent record up to time t:

R_t = ∫₀ᵗ e^{iΦ(s)} e^{−s/2} J(s)ds.  (7.137)
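The solution (7.136)-(7.137) can be checked against a direct Euler-Maruyama integration of Eq. (7.135) for a single (ostensible) noise realization. This is our own numerical sketch; the constant dyne phase Φ and the parameter values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)
dt, T = 1e-4, 1.0
steps = int(T / dt)
c0, c1 = 0.8, 0.6
Phi = 0.3                                   # constant local-oscillator phase (assumption)

psi = np.array([c0, c1], dtype=complex)     # amplitudes of |0>, |1>
R = 0.0 + 0.0j                              # the functional R_t of Eq. (7.137)
for k in range(steps):
    s = k * dt
    dW = rng.normal(0.0, np.sqrt(dt))       # ostensible statistics: J(t)dt = dW(t)
    a_psi = np.array([psi[1], 0.0], dtype=complex)   # annihilation acting on psi
    n_psi = np.array([0.0, psi[1]], dtype=complex)   # number operator acting on psi
    psi = psi - 0.5 * n_psi * dt + np.exp(1j * Phi) * a_psi * dW   # Eq. (7.135)
    R += np.exp(1j * Phi) * np.exp(-s / 2) * dW                    # Eq. (7.137)

exact = np.array([c0 + c1 * R, c1 * np.exp(-T / 2)])               # Eq. (7.136)
```

The integrated state agrees with the analytic solution to within the discretization error.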
The actual probability of the record J is then

℘(J) = ⟨ψ̃_J(∞)|ψ̃_J(∞)⟩ ℘_ost(J).  (7.138)

Here ℘_ost(J) is the ostensible probability of J; that is, the distribution it would have if J(t)dt were equal to a Wiener increment dW(t). Now, from the above solution (7.136), ℘(J) depends upon the system state only via the single complex functional A = R_∞. That is, all of the information about the system in the complete dyne record J is contained in the complex number A. We can thus regard the dyne measurement in this case as a measurement yielding the result A, with probability distribution

℘(A)d²A = |c₀ + c₁A|² ℘_ost(A)d²A.  (7.139)
Here ℘_ost(A) is the distribution for A implied by setting J(t)dt = dW(t). Thus the measurement can be described by the POM

Ê(A)d²A = (|0⟩ + A*|1⟩)(⟨0| + A⟨1|) ℘_ost(A)d²A.  (7.140)
In the above, the shape of the mode exiting from the cavity is a decaying exponential u(t) = e^{−t}. The mode-shape u(t) means, for example, that the mean photon number in the part of the output field emitted in the interval [t, t + dt) is |c₁|²u(t)dt.
Exercise 7.44 Verify this using the methods of Section 4.7.6.
We can generalize the above theory to dyne detection upon a mode with an arbitrary mode-shape u(t), such that u(t) ≥ 0 and U(∞) = 1, where

U(t) = ∫₀ᵗ u(s)ds.  (7.141)

We do this by defining a time-dependent decay rate γ(t) = u(t)/[1 − U(t)]. Then we can also consider modes with finite duration [0, T], in which case U(T) = 1. For a general mode-shape, Eq. (7.140) still holds, but with A = R_T and

R_t = ∫₀ᵗ e^{iΦ(s)} √(u(s)) J(s)ds.  (7.142)
For an adaptive phase measurement we choose the local-oscillator phase Φ(t) so that

e^{iΦ(t)} = iR_t/|R_t|.  (7.143)

Then Eq. (7.142) implies

dR_t = i (R_t/|R_t|) √(u(t)) J(t)dt.  (7.144)
Bearing in mind that [J(t)dt]² = dt (both ostensibly and actually), this has the solution

R_t = √(U(t)) e^{iθ(t)},  (7.145)

where

θ(t) = ∫₀ᵗ √(u(s)/U(s)) J(s)ds.  (7.146)
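The deterministic modulus in Eq. (7.145) is easy to confirm numerically: with the feedback choice e^{iΦ(t)} = iR_t/|R_t|, each increment of |R_t|² is u(t)[J(t)dt]², whose sum tends to U(t). A sketch (our own, with an exponential mode-shape assumed and an arbitrary phase for the first step):

```python
import numpy as np

rng = np.random.default_rng(7)
dt, T = 1e-4, 2.0
steps = int(T / dt)

R = 0.0 + 0.0j
for k in range(steps):
    t = k * dt
    u = np.exp(-t)                       # mode-shape u(t) = e^{-t}
    dW = rng.normal(0.0, np.sqrt(dt))    # ostensible dyne record, J(t)dt = dW(t)
    phase = 1j * R / abs(R) if abs(R) > 0 else 1.0   # e^{i Phi(t)}, Eq. (7.143)
    R += phase * np.sqrt(u) * dW         # Eq. (7.144)

U_T = 1.0 - np.exp(-T)                   # U(t) for u(t) = e^{-t}
```

Up to the discretization of [J(t)dt]² = dt, the final modulus satisfies |R|² ≈ U(T), while the phase arg R = θ carries the noise.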
At the conclusion of the measurement U = 1, so by Eq. (7.145) the result A = R lies on the unit circle: the adaptive scheme realizes a measurement of phase. The squared norm of the corresponding conditioned state is equal to the probability density for obtaining this outcome. We now discuss applications of this result.
homodyne detection has also been demonstrated [BBL04], but has a vanishing probability
of success for high-fidelity preparation. We now show that it is possible deterministically
to produce an arbitrary single-rail state from a single-photon state using linear optics and
adaptive phase measurements.
We begin by splitting a single photon into two modes at a beam-splitter with intensity reflectivity η, and making an adaptive phase measurement on the reflected mode. This leaves the transmitted mode in the state

√η |0⟩ + e^{iθ} √(1 − η) |1⟩,  (7.150)

where θ is the result of the phase measurement.
Exercise 7.47 Verify this, and show that the result θ is completely random (actually random, not just ostensibly random).
Now, by feedforward onto a phase modulator on the second mode, this random phase can be changed into any desired phase Φ. Thus we can deterministically produce the arbitrary state

√η |0⟩ + e^{iΦ} √(1 − η) |1⟩.  (7.151)
Fig. 7.16 Schematic of single-rail processing using quantum control. A single-rail input qubit is teleported, via a single-rail Bell-state measurement (BSM) on one half of a single-rail/dual-rail Bell state, followed by a conditional phase flip, onto a dual-rail qubit. An arbitrary dual-rail unitary is then applied, and an adaptive phase measurement (APM), again with a conditional phase flip, converts the result back into a single-rail output qubit.
achieved using a single-rail Bell state such as |0⟩|1⟩ + |1⟩|0⟩. In both cases only two of the four Bell states can be identified with linear optics, so the teleportation works 50% of the time, as illustrated in Fig. 7.16.

Now suppose we take a dual-rail Bell state and use an adaptive phase measurement to project one of its arms into a single-rail state. We obtain the state |0⟩|10⟩ + |1⟩|01⟩, which is Bell entanglement between dual- and single-rail qubits. If we now perform a Bell measurement between the single-rail half of the entanglement and an arbitrary single-rail qubit then (when successful) the qubit will be teleported onto the dual-rail part of the entanglement, thus converting a single-rail qubit into a dual-rail qubit.
We now have a way of (non-deterministically) performing an arbitrary rotation on an
unknown single-rail qubit. The idea is depicted schematically in Fig. 7.16. First we teleport
the single-rail qubit onto a dual-rail qubit. Then we perform an arbitrary rotation on the
dual-rail qubit. We then use an adaptive phase measurement to transform the dual-rail qubit
back into a single-rail qubit. The only non-deterministic step is the Bell measurement in
the teleportation, which in this simple scheme has a success probability of 50%. This is
a major improvement over previous schemes. As discussed in Section 7.8.3, the success
probability for this step can be increased arbitrarily by using larger entangled resources.
Also as discussed in that section, the fundamental two-qubit gate, the CS gate, is in fact
a single-rail gate. Thus, by employing quantum feedback control we are able to perform
universal quantum computation in LOQC using single-rail encoding.
and control, much as in cluster-state quantum computing as discussed above. In fact, the
quantum phase-estimation algorithm can be used as an (adaptive) protocol for estimating the phase φ in a single-qubit phase gate exp(iφZ/2), using only single-qubit operations (preparations, measurements and control), as long as it is possible for the gate to be applied multiple times to a given single qubit between preparation and measurement [GLM06].
The quantum phase-estimation algorithm enables a canonical measurement of phase (see
Section 2.4), but the nature of the prepared states means that it does not attain the Heisenberg limit for the phase variance (2.133). However, a simple generalization of the quantum
phase-estimation algorithm, using the principles of adaptive phase estimation discussed
in Section 2.5, enables a variance scaling at the Heisenberg limit to be achieved, with
an overhead factor of less than 2.5 [HBB+07]. Moreover, this was recently demonstrated experimentally by Higgins et al. using single-photon multi-pass interferometry [HBB+07], the first experiment to demonstrate Heisenberg-limited scaling for phase estimation. In this
chapter, we have concentrated on showing that quantum computing can benefit from an
understanding of quantum measurement and control, but this work demonstrates that the
converse is also true.
Appendix A
Quantum mechanics and phase-space
A pure quantum state is represented by a state vector, or ket, which can be expanded in an orthonormal basis {|i⟩} as

|ψ⟩ = Σ_i ψ_i |i⟩,  (A.1)

where, for all i, ψ_i ∈ ℂ (the complex numbers). The dual vector, or bra, is defined as

⟨ψ| = Σ_i ψ_i* ⟨i|.  (A.2)

If the state vector is to be normalized, we require the inner product, or bracket, to satisfy

⟨ψ|ψ⟩ = Σ_i |ψ_i|² = 1.  (A.3)
An operator on the Hilbert space can be written as

Â = Σ_{ij} A_{ij} |i⟩⟨j|,  (A.4)

where the A_{ij} are complex numbers. Operators are sometimes called q-numbers, meaning quantum numbers, as opposed to c-numbers, which are ordinary classical or complex numbers. Ignoring some subtle issues to do with infinite-dimensional Hilbert spaces, we can simply state that all physical quantities (commonly called observables) are associated with Hermitian operators. An Hermitian operator is one that is equal to its Hermitian adjoint, defined as

Â† = Σ_{ij} A*_{ji} |i⟩⟨j|.  (A.5)
The mean value of an observable Â for a system in the state |ψ⟩ is

⟨Â⟩ = ⟨ψ|Â|ψ⟩,  (A.6)

where, unless otherwise stated, we take the ket to be normalized. We derive this expression from more basic considerations in Section 1.2.2.

Note that Eq. (A.6) shows that the absolute phase of a state plays no physical role; e^{iφ}|ψ⟩ gives the same mean value for all observables as does |ψ⟩. Of course the relative phase of states in a superposition does matter. That is, for a state such as e^{iφ₁}|ψ₁⟩ + e^{iφ₂}|ψ₂⟩, the average value of physical quantities will depend in general upon φ₂ − φ₁.
Exercise A.1 Convince yourself of these statements.
Any Hermitian operator Λ̂ can be diagonalized as

Λ̂ = Σ_λ λ |λ⟩⟨λ|,  (A.7)

where the λ are its (real) eigenvalues and the |λ⟩ its orthonormal eigenstates. Even for a system in a pure state |ψ⟩, the variance in Λ,

Var[Λ̂] = ⟨ψ|(Λ̂ − ⟨Λ̂⟩)²|ψ⟩,  (A.8)
is in general greater than zero. This is the puzzling phenomenon of quantum noise; even
though we have a state of maximal knowledge about the system, there is still some
uncertainty in the values of physical quantities. Moreover, it is possible to derive so-called
uncertainty relations of the form

Var[Â]Var[B̂] ≥ |⟨ψ|[Â, B̂]|ψ⟩|²/4,  (A.9)

where [Â, B̂] ≡ ÂB̂ − B̂Â is called the commutator. If the commutator is a c-number (that is, it is proportional to the identity operator), this relation puts an absolute lower bound on the product of the two uncertainties.
Exercise A.3 The position Q̂ and momentum P̂ of a particle have operators that obey [P̂, Q̂] = −iℏ (see Section A.3). Using Eq. (A.9), derive Heisenberg's uncertainty relation

(ΔP)²(ΔQ)² ≥ (ℏ/2)²,  (A.10)

where ΔP̂ = P̂ − ⟨P̂⟩ and similarly for Q̂.
Correspondence between matrix notation and Dirac notation:

Concept | Matrix notation | Dirac notation
Hilbert space | H, of dimension D = dim(H) | H, of dimension D = dim(H)
State | column vector v | ket |ψ⟩
Dual vector | conjugate row vector (v)† | bra ⟨ψ|
Inner product | (v)†u | ⟨ψ|φ⟩
Orthogonality | v†u = 0 | ⟨ψ|φ⟩ = 0
Normalization | v†v = 1 | ⟨ψ|ψ⟩ = 1
Outer product | u(v)† | |φ⟩⟨ψ|
Observable | matrix A | operator Â
Eigenvectors/eigenstates | {v_λ}: Av_λ = λv_λ | {|λ⟩}: Â|λ⟩ = λ|λ⟩
Eigenvalues (complex in general) | {λ} | {λ}
Hermitian adjoint | (A†)_{jk} = A*_{kj} | Â†
Unitary | matrix (U)† = U⁻¹ | operator Û† = Û⁻¹
Hermitian | Λ = (Λ)†: real eigenvalues {λ}, orthogonal eigenvectors {v_λ} | Λ̂ = Λ̂†: real eigenvalues, orthogonal eigenstates
Orthonormal basis | {e_j}_{j=0}^{D−1}: e_j†e_k = δ_{jk} | {|j⟩}_{j=0}^{D−1}: ⟨j|k⟩ = δ_{jk}
Change of basis | e_j → Σ_k U_{jk} e_k | |j⟩ → Σ_k U_{jk} |k⟩
Identity | I = Σ_j e_j e_j† | 1̂ = Σ_j |j⟩⟨j|
Components | v_j | probability amplitude ψ_j = ⟨j|ψ⟩
Matrix element | A_{jk} | ⟨j|Â|k⟩
Trace | tr A = Σ_j A_{jj} | Tr[Â] = Σ_j ⟨j|Â|j⟩
Tensor-product space | H = H₁ ⊗ H₂, so D = dim(H) = D₁D₂ | (same)
Tensor-product basis | E_{kD₂+j} = (e_k)₁ ⊗ (e_j)₂ | |kD₂ + j⟩ = |k⟩₁|j⟩₂
Tensor-product states | v₁ ⊗ u₂ | |ψ⟩₁ ⊗ |φ⟩₂
Tensor-product operators | [A₁ ⊗ B₂][v₁ ⊗ u₂] = (A₁v₁) ⊗ (B₂u₂) | [Â₁ ⊗ B̂₂][|ψ⟩₁ ⊗ |φ⟩₂] = Â₁|ψ⟩₁ ⊗ B̂₂|φ⟩₂
We can combine the classical and quantum expectations in a single entity by defining a new operator. This is called (for historical reasons) the density operator, and is given by

ρ = Σ_{j=1}^{N} ℘_j |ψ_j⟩⟨ψ_j|.  (A.12)

The expectation value of an observable Â is then

⟨Â⟩ = Tr[Âρ],  (A.13)

and normalization of the probabilities ℘_j implies the normalization condition

Tr[ρ] = 1.  (A.14)
In the case in which the ensemble of state vectors has only one element, ρ represents a pure state. In that case it is easy to verify that ρ² = ρ. Moreover, this condition is sufficient for ρ to be a pure state, since ρ² = ρ means that ρ is a projection operator (these are discussed in Section 1.2.2). Using the normalization condition (A.14), it follows that ρ must be a rank-1 projection operator. That is to say, it must be of the form

ρ = |ψ⟩⟨ψ|  (A.15)

for some ket |ψ⟩. A state that cannot be written in this form is often called a mixed or impure state. The mixedness of ρ can be measured in a number of ways. For instance, the impurity is usually defined to be one minus the purity, where the latter is p = Tr[ρ²].
Another measure of mixedness is the von Neumann entropy,

S(ρ) = −Tr[ρ ln ρ] ≥ 0,  (A.16)

where the equality holds if and only if ρ is pure. To obtain a quantity with the dimensions of thermodynamic entropy, it is necessary to multiply it by Boltzmann's constant k_B.
An interesting point about the definition (A.12) is that it is not possible to go backwards
from to the ensemble of state vectors {j , |j : j = 1, 2, . . ., N}. Indeed, for any
Hilbert space, there is an uncountable infinity of ways in which any impure state matrix
can be decomposed into a convex (i.e. positively weighted) ensemble of rank-1 projectors.
This is quite different from classical mechanics, in which different ensembles of states of
complete knowledge correspond to different states of incomplete knowledge. Physically,
we can say that any mixed quantum state admits infinitely many preparation procedures.
The non-unique decomposition of a state matrix can be shown up quite starkly using a
two-dimensional Hilbert space: an electron drawn randomly from an ensemble in which
half are spin up and half are spin down is identical to one drawn from an ensemble in
which half are spin left and half spin right. No possible experiment can distinguish
between them.
Exercise A.6 Show this by showing that the state matrix under both of these preparation
procedures is proportional to the identity.
Hint: If the up and down spin basis states are |↑⟩ and |↓⟩, the left and right spin states are |←⟩ = (|↑⟩ + |↓⟩)/√2 and |→⟩ = (|↑⟩ − |↓⟩)/√2.
In this case, it is because the state matrix has degenerate eigenvalues that it is possible for
both of these ensembles to comprise orthogonal states. If has no degenerate eigenvalues,
it is necessary to consider non-orthogonal ensembles to obtain multiple decompositions.
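Exercise A.6 can be carried out in a few lines of numerics. The following is our own illustration of the two preparation procedures:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
left = (up + down) / np.sqrt(2)
right = (up - down) / np.sqrt(2)

# State matrices for the two preparation procedures.
rho_updown = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)
rho_leftright = 0.5 * np.outer(left, left) + 0.5 * np.outer(right, right)
```

Both ensembles yield the same state matrix, proportional to the identity, so no experiment can distinguish them.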
The evolution of a quantum state is given by the Schrödinger equation. For a pure state it reads (with ℏ = 1)

d|ψ(t)⟩/dt = −iĤ|ψ(t)⟩,  (A.17)

where Ĥ is the Hamiltonian, while for a state matrix it generalizes to

dρ(t)/dt = −i[Ĥ, ρ(t)].  (A.18)

For a time-independent Hamiltonian these have the solutions

|ψ(t)⟩ = Û(t, 0)|ψ(0)⟩,   ρ(t) = Û(t, 0)ρ(0)Û†(t, 0),  (A.19)

where Û(t, 0) = exp(−iĤt). This is called the unitary evolution operator, because it satisfies the unitarity conditions

Û†Û = ÛÛ† = 1̂.  (A.20)
If the Hamiltonian is time-dependent, the solution can instead be written as the Dyson series

Û(t, 0) = 1 + Σ_{n=1}^{∞} (−i)ⁿ ∫₀ᵗ ds_n Ĥ(s_n) ∫₀^{s_n} ds_{n−1} Ĥ(s_{n−1}) ··· ∫₀^{s₂} ds₁ Ĥ(s₁).  (A.21)

Exercise A.7 Show this, and show also that Û(t, 0) is unitary.
Hint: Assuming the solutions (A.19), derive the differential equation and initial conditions for Û(t, 0) and Û†(t, 0). Then show that Eq. (A.21) satisfies these, and that the unitarity conditions (A.20) are satisfied at t = 0 and are constants of motion.
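For a time-independent Hamiltonian the nested integrals in Eq. (A.21) evaluate to tⁿ/n!, so the series reduces to the exponential exp(−iĤt). A truncated numerical check (our own sketch, with a random Hermitian matrix standing in for Ĥ):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                 # a random Hermitian "Hamiltonian"
t = 1.0

# Truncated Dyson series: for time-independent H each term is (-iHt)^n / n!.
U_series = np.zeros((4, 4), dtype=complex)
term = np.eye(4, dtype=complex)
for n in range(40):
    U_series += term
    term = term @ (-1j * H * t) / (n + 1)

# Exact propagator via the spectral decomposition of H.
w, V = np.linalg.eigh(H)
U_exact = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

unitarity_err = np.linalg.norm(U_series.conj().T @ U_series - np.eye(4))
```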
In the HP, the equation of motion for an arbitrary operator Â is

(d/dt)Â(t) = i[Ĥ(t), Â(t)].  (A.22)

Note that, because Ĥ(t) commutes with itself at a particular time t, the Hamiltonian operator is one operator that is the same in both the HP and the SP. The solution of the HP equation is

Â(t) = Û†(t, 0)Â(0)Û(t, 0).  (A.23)
The two pictures are equivalent because all expectation values are identical:

Tr[Â(t)ρ(0)] = Tr[Û†(t, 0)Â(0)Û(t, 0)ρ(0)]
             = Tr[Â(0)Û(t, 0)ρ(0)Û†(t, 0)]
             = Tr[Â(0)ρ(t)].  (A.24)

Here the placement of the time argument t indicates which picture we are in.
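The chain of equalities (A.24) is just the cyclic invariance of the trace; a random-matrix check (our own illustration, at t = 1):

```python
import numpy as np

rng = np.random.default_rng(3)
D = 5

def rand_herm(D):
    M = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
    return (M + M.conj().T) / 2

H, A = rand_herm(D), rand_herm(D)
psi = rng.normal(size=D) + 1j * rng.normal(size=D)
rho0 = np.outer(psi, psi.conj()) / np.vdot(psi, psi)   # a pure initial state

w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T          # U(t, 0) at t = 1

A_t = U.conj().T @ A @ U                 # Heisenberg-picture operator
rho_t = U @ rho0 @ U.conj().T            # Schroedinger-picture state

lhs = np.trace(A_t @ rho0)
rhs = np.trace(A @ rho_t)
```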
Often it is useful to split a Hamiltonian Ĥ into Ĥ₀ + V̂(t), where Ĥ₀ is time-independent and easy to deal with, while V̂(t) (which may be time-dependent) is typically more complicated. Then the unitary operator (A.21) can be written as

Û(t, 0) = e^{−iĤ₀t} Û_IF(t, 0),  (A.25)

where

Û_IF(t, 0) = 1 + Σ_{n=1}^{∞} (−i)ⁿ ∫₀ᵗ ds_n V̂_IF(s_n) ∫₀^{s_n} ds_{n−1} V̂_IF(s_{n−1}) ··· ∫₀^{s₂} ds₁ V̂_IF(s₁).  (A.26)

Here V̂_IF(t) = e^{iĤ₀t} V̂(t) e^{−iĤ₀t} and IF stands for 'interaction frame'.

Exercise A.8 Show this, by showing that e^{−iĤ₀t}Û_IF(t, 0) obeys the same differential equation as Û(t, 0).
That is, one can treat V̂_IF(t) as a time-dependent Hamiltonian, and then add the evolution e^{−iĤ₀t} at the end. This can be used to define an interaction picture (IP), so called because V̂(t) is often the interaction Hamiltonian coupling two systems, while Ĥ₀ is the free Hamiltonian of the uncoupled systems. The IP is a sort of half-way house between the SP and HP, usually defined so that operators evolve according to the unitary e^{−iĤ₀t}, while states evolve according to the unitary Û_IF(t, 0). That is, one breaks up the expectation value for an observable Â at time t as follows:

⟨Â⟩(t) = Tr[Û†(t, 0)Â(0)Û(t, 0)ρ(0)]  (A.27)
= Tr[e^{iĤ₀t}Â(0)e^{−iĤ₀t} Û_IF(t, 0)ρ(0)Û†_IF(t, 0)].  (A.28)
It is often convenient to ignore the final exp(−iĤ₀t) altogether, and just use Û_IF(t, 0) as one's unitary evolution operator. The latter is often simpler, since V̂_IF may often be made time-independent (even if Ĥ is explicitly time-dependent) by a judicious division into Ĥ₀ and V̂. If it cannot, then a secular or rotating-wave approximation is often used to make it time-independent (see Exercise 1.30).
We refer to the method of just using Û_IF as 'working in the interaction frame'. This terminology is used in analogy with, for example, 'working in a rotating frame' to calculate projectile trajectories on a rotating Earth. Working in the interaction frame is very common in quantum optics, where it is often (but incorrectly) called working in the interaction picture. The interaction frame is not a picture in the same way as the Heisenberg or Schrödinger picture. The HP or SP (or IP) includes the complete Hamiltonian evolution, whereas working in the interaction frame ignores the boring free evolution. The interaction frame may contain either a Heisenberg or a Schrödinger picture, depending on whether Û_IF(t, 0) is applied to the system operators or the system state.
state. The HP in the IF has time-independent states and time-dependent operators:
(t) = (0);
= U (t, 0)A(0)
U IF (t, 0).
A(t)
IF
(A.29)
= A(0).
A(t)
(A.30)
Thus the SP state in the IF is the same as the IP state (as usually defined). But the SP operators in the IF are not the same as the IP operators, which are evolved by Û₀(t, 0) = e^{−iĤ₀t} as in Eq. (A.28).
We make frequent use of the interaction frame in this book, so it is necessary for the
reader to understand the distinctions explained above. In fact, because we use the
interaction frame so often, we frequently omit the IF subscript, after warning the reader
that we are working in the interaction frame. Thus the reader must be very vigilant, since
we often use the terms Heisenberg picture and Schrodinger picture with the phrase in
the interaction frame understood.
system and the apparatus by which it is measured. The states of composite systems in
quantum mechanics are described using the tensor product.
Consider two systems A and B prepared in the states |ψ⟩_A and |φ⟩_B, respectively. Let the dimensions of the Hilbert spaces for systems A and B be D_A and D_B, respectively. The state of the total system is the tensor-product state |Ψ⟩ = |ψ⟩_A ⊗ |φ⟩_B. More specifically, if we write the state of each component in an orthonormal basis, |ψ⟩_A = Σ_{j=0}^{D_A−1} a_j |j⟩_A and |φ⟩_B = Σ_{k=0}^{D_B−1} b_k |k⟩_B, then the state of the total system is

|Ψ⟩ = Σ_{j=0}^{D_A−1} Σ_{k=0}^{D_B−1} a_j b_k |j⟩_A ⊗ |k⟩_B.  (A.31)
Note that the dimension of the Hilbert space of the composite system C is D_C = D_A D_B. We can define a composite basis |l(k, j)⟩_C = |j⟩_A ⊗ |k⟩_B, where l(k, j) is a new index for the composite system. For example, we could have l = kD_A + j. Then an arbitrary pure state of C can be written as

|ψ⟩_C = Σ_{l=0}^{D_C−1} c_l |l⟩_C.  (A.32)
parts. Formally, we say that the joint pure state need not factorize. That is, there exist states |Ψ⟩_AB ∈ H_AB ≡ H_A ⊗ H_B such that

|Ψ⟩_AB ≠ |ψ⟩_A|φ⟩_B,  (A.33)

where |ψ⟩_A ∈ H_A and |φ⟩_B ∈ H_B. Note that we are omitting the tensor-product symbol between kets, as will be done when confusion is not likely to arise.
If we were to calculate the mean of an operator Â operating on states in H_A, then we would use the procedure

⟨Â⟩ = ⟨Ψ|_AB (Â ⊗ 1̂_B)|Ψ⟩_AB = Σ_j ⟨Ψ|_AB |j⟩_B Â ⟨j|_B |Ψ⟩_AB,  (A.34)

where {|j⟩_B} is any orthonormal basis for H_B. Rewriting this as the trace of a product of operators, we get

⟨Â⟩ = Tr[Â ρ_A],  (A.35)

where

ρ_A = Σ_j ⟨j|_B |Ψ⟩_AB ⟨Ψ|_AB |j⟩_B ≡ Tr_B[|Ψ⟩_AB⟨Ψ|_AB]  (A.36)

is called the reduced state matrix for system A. The operation Tr_B is called the partial trace over system B.
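In a concrete basis, a bipartite pure state is a D_A × D_B array of amplitudes Ψ[j, k], and the partial trace (A.36) becomes a matrix product. A numerical sketch (our own illustration, with random amplitudes):

```python
import numpy as np

rng = np.random.default_rng(5)
DA, DB = 3, 4
Psi = rng.normal(size=(DA, DB)) + 1j * rng.normal(size=(DA, DB))
Psi /= np.linalg.norm(Psi)               # amplitudes <jk|Psi>, normalized

# Reduced state: rho_A[j, j'] = sum_k Psi[j, k] Psi*[j', k]  (Eq. (A.36)).
rho_A = Psi @ Psi.conj().T

M = rng.normal(size=(DA, DA)) + 1j * rng.normal(size=(DA, DA))
A = (M + M.conj().T) / 2                 # a random observable on system A

mean_reduced = np.trace(A @ rho_A)       # Tr[A rho_A], Eq. (A.35)
psi_vec = Psi.reshape(-1)                # index l = j*DB + k
mean_full = np.vdot(psi_vec, np.kron(A, np.eye(DB)) @ psi_vec)   # Eq. (A.34)
```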
It should be noted that the result in Eq. (A.36) also has a converse, namely that any state
matrix A can be constructed as the reduced state of a (non-unique) pure state |AB in a
larger Hilbert space. This is sometimes called a purification of the state matrix A , and is
an example of the Gelfand-Naimark-Segal theorem [Con90].

Exercise A.9 Construct a |Ψ⟩_AB that is a purification of ρ_A, given that the latter has the preparation procedure {℘_j, |ψ_j⟩}.
For a bipartite system in a pure state, the entropy of one subsystem is a good measure of
the degree of entanglement [NC00]. In particular, the entropy of each subsystem is the
same. Note that the von Neumann entropy is not an extensive quantity, as is assumed in
thermodynamics. As the above analysis shows, the entropy of the subsystems may be
positive while the entropy of the combined system is zero. For systems with more than
two parts, or for systems in mixed states, quantifying the entanglement is a far more
difficult exercise, with many subtleties and as-yet unresolved issues.
The equality of the entropies of the subsystems of a pure bipartite system is known as the Araki-Lieb identity. It follows from an even stronger result: for a pure compound system, the eigenvalues of the reduced states of the subsystems are equal. This can be proven as follows. Let |λ⟩_A be the eigenstates of ρ_A:

ρ_A|λ⟩_A = λ|λ⟩_A.  (A.37)

Since these form an orthonormal set (see Box 1.1) we can write the state of the compound system using this basis for system A as

|Ψ⟩_AB = Σ_λ √λ |λ⟩_A|λ⟩_B,  (A.38)

where

|λ⟩_B ≡ ⟨λ|_A |Ψ⟩_AB / √λ.
Exercise A.10 From this definition of |λ⟩_B, show that the |λ⟩_B form an orthonormal set, and furthermore that

ρ_B|λ⟩_B = λ|λ⟩_B.  (A.39)

Thus the eigenvalues of the reduced states of the two subsystems are equal. The decomposition in Eq. (A.38), using the eigenstates of the reduced states, is known as the Schmidt decomposition.
Note that the orthonormal set {|λ⟩_B} need not be a complete basis for system B, since the dimension of B may be greater than the dimension of A. If the dimension of B is less than the dimension of A, then it also follows that the rank of ρ_A (that is, the number of non-zero eigenvalues it has) is limited to the dimensionality of B. Clearly, for a purification of ρ_A (as defined above), the dimensionality of B can be as low as the rank of ρ_A, but no lower.
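The equal-eigenvalue property, together with the rank bound, can be checked with random states. A sketch (ours), using the matrix forms of the two reduced states:

```python
import numpy as np

rng = np.random.default_rng(8)
DA, DB = 2, 5
Psi = rng.normal(size=(DA, DB)) + 1j * rng.normal(size=(DA, DB))
Psi /= np.linalg.norm(Psi)               # random bipartite pure state

rho_A = Psi @ Psi.conj().T               # D_A x D_A reduced state
rho_B = Psi.T @ Psi.conj()               # D_B x D_B: rho_B[k,k'] = sum_j Psi[j,k] Psi*[j,k']

ev_A = np.sort(np.linalg.eigvalsh(rho_A))[::-1]
ev_B = np.sort(np.linalg.eigvalsh(rho_B))[::-1]
```

The non-zero eigenvalues coincide, and ρ_B has at most D_A = 2 of them, as the text asserts.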
Consider an operator Q̂ having the real line as its spectrum. This could represent the position of a particle, for example. Because of its continuous spectrum, the eigenstates |q⟩ of Q̂ are not normalizable. That is, it is not possible to have ⟨q|q⟩ = 1. Rather, we use improper states, normalized such that

∫ dq |q⟩⟨q| = 1̂.  (A.40)

Squaring the above equation implies that the normalization for these states is

⟨q|q′⟩ = δ(q − q′).  (A.41)

The position operator is written as

Q̂ = ∫ dq |q⟩q⟨q|.  (A.42)

Here we are using the convention that the limits of integration are −∞ to ∞ unless indicated otherwise.
A pure quantum state |ψ⟩ in the position representation is a function of q,

ψ(q) = ⟨q|ψ⟩,  (A.43)

commonly called the wavefunction. The probability density for finding the particle at position q is |ψ(q)|², and this integrates to unity. The state |ψ⟩ is recovered from the wavefunction as follows:

|ψ⟩ = ∫ dq |q⟩⟨q|ψ⟩ = ∫ dq ψ(q)|q⟩.  (A.44)
It is worth remarking more about the nature of the continuum in quantum mechanics. The probability interpretation of the function ψ(q) requires that it belong to the set L²(ℝ). That is, the integral (technically, a Lebesgue integral) ∫|ψ(q)|²dq must be finite, so that it can be set equal to unity for a normalized wavefunction. Although the space of L²(ℝ) functions is infinite-dimensional, it is a countable infinity. That is, the basis states for the Hilbert space H = L²(ℝ) can be labelled by integers; an example basis is the set
A.3.2 Momentum
It turns out that, if Q̂ does represent the position of a particle, then its momentum is represented by another operator with the real line as its spectrum, P̂. Using ℏ = 1, the eigenstates for P̂ are related to those for Q̂ by

⟨q|p⟩ = (2π)^{−1/2} e^{ipq}.  (A.45)

Here the normalization factor is chosen so that, analogously to Eqs. (A.40) and (A.41), we have

∫ dp |p⟩⟨p| = 1̂,   ⟨p|p′⟩ = δ(p − p′).  (A.46)

Exercise A.11 Show Eq. (A.46), using the position representation and the result that ∫ dy e^{iyx} = 2πδ(x).
The momentum-representation wavefunction is thus simply the Fourier transform of the position-representation wavefunction:

ψ̃(p) = ⟨p|ψ⟩ = (2π)^{−1/2} ∫ dq e^{−ipq} ψ(q).  (A.47)

From the above it is easy to show that, in the position representation, P̂ acts on a wavefunction identically to the differential operator −i ∂/∂q. First, in the momentum representation,

P̂ = ∫ dp |p⟩p⟨p|.  (A.48)
Thus,

⟨q|P̂|ψ⟩ = ∫ dp ∫ dq′ ⟨q|p⟩ p ⟨p|q′⟩⟨q′|ψ⟩  (A.49)
        = (2π)^{−1} ∫ dp ∫ dq′ p e^{ip(q−q′)} ψ(q′).  (A.50)

Now p e^{ip(q−q′)} = i ∂e^{ip(q−q′)}/∂q′, so, using integration by parts and the fact (required by normalization) that ψ(q) vanishes at ±∞, we obtain

⟨q|P̂|ψ⟩ = −i(2π)^{−1} ∫ dp ∫ dq′ e^{ip(q−q′)} ∂ψ(q′)/∂q′  (A.51)
        = −i ∂ψ(q)/∂q.  (A.52)
It is now easy to find the commutator between Q̂ and P̂:

⟨q|[Q̂, P̂]|ψ⟩ = ⟨q|(Q̂P̂ − P̂Q̂)|ψ⟩  (A.53)
            = q(−i) ∂ψ(q)/∂q + i ∂[qψ(q)]/∂q  (A.54)
            = iψ(q) = i⟨q|ψ⟩.  (A.55)

Now ψ(q) here is an arbitrary function, apart from the assumption of differentiability and vanishing at ±∞. Thus it must be that

[Q̂, P̂] = i.  (A.56)

The fact that the commutator here is a c-number makes this an example of a canonical commutation relation.
⟨Q̂⟩ = q₀,  (A.59)
(ΔQ)² = σ²/2.  (A.60)

Note that the variance does not equal σ², as one might expect from Eq. (A.58), because ψ(q) ≠ |ψ(q)|².

The Fourier transform of a Gaussian is also Gaussian, and in the momentum representation

ψ̃(p) = (σ²/π)^{1/4} exp[−iq₀p − (p − p₀)²σ²/2].  (A.61)

From this it is easy to show that

⟨P̂⟩ = p₀,  (A.62)
(ΔP)² = 1/(2σ²).  (A.63)
In terms of the characteristic length σ, we define the operator

â = (Q̂/σ + iσP̂)/√2,  (A.66)

which obeys

[â, â†] = 1.  (A.67)

In terms of â, the Hamiltonian (A.64) takes the form

Ĥ = ω(â†â + 1/2).  (A.68)

From Eq. (A.67) it follows that

[â†â, â†] = â†,   [â†â, â] = −â,  (A.69)

so that, if |ν⟩ is an eigenstate of â†â with eigenvalue ν, then

(â†â)â†|ν⟩ = (ν + 1)â†|ν⟩,  (A.70)
(â†â)â|ν⟩ = (ν − 1)â|ν⟩.  (A.71)

Since ⟨ν|â†â|ν⟩ ≥ 0, repeated application of â must terminate in a state annihilated by â, so the eigenvalues ν of â†â are the non-negative integers n.
Thus we have derived the eigenvalues of the harmonic oscillator as (n + ½)ω. The corresponding eigenstates, when normalized, we denote |n⟩.
If the Hamiltonian (A.64) refers to a particle, these are states with an integer number of
elementary excitations of the vibration of the particle. They are therefore sometimes called
vibron number states, that is, states with a definite number of vibrons. If the harmonic
oscillation is that of a sound wave, then these states are called phonon number states. If
the oscillator is a mode of the electromagnetic field, they are called photon number states.
Especially in the last case, the ground state |0 is often called the vacuum state.
The operator N̂ = â†â is called the number operator. Because â† raises the number of excitations by one, with

    |n⟩ ∝ (â†)ⁿ|0⟩,    (A.72)

it is called the creation operator. Similarly, â lowers it by one, and is called the annihilation operator. To find the constants of proportionality, we must require that the number states be normalized, so that

    ⟨n|m⟩ = δₙₘ.    (A.73)

Then

    ⟨n|â†â|n⟩ = n⟨n|n⟩ = n.    (A.74)

But also

    ⟨n|â†â|n⟩ = ⟨φ|φ⟩,    (A.75)

where |φ⟩ = â|n⟩ ∝ |n − 1⟩. Thus the constant of proportionality must be such that

    |φ⟩ = â|n⟩ = e^{iθ}√n |n − 1⟩    (A.76)

for some phase θ. We choose the convention that θ = 0, so that

    â|n⟩ = √n |n − 1⟩.    (A.77)

Similarly, it can be shown that

    â†|n⟩ = √(n + 1) |n + 1⟩.    (A.78)
Exercise A.15 Show this, and show that the above two relations are consistent with |n⟩ being an eigenstate of â†â. Show also that the normalized number state is given by

    |n⟩ = (n!)⁻¹ᐟ²(â†)ⁿ|0⟩.

Note that â acting on the vacuum state |0⟩ produces nothing: a null state.
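These relations are easy to verify numerically in a truncated Fock basis, where the only non-zero matrix elements of â are ⟨n − 1|â|n⟩ = √n. The following sketch (an illustration, not from the text; the truncation N = 10 and the use of Python/NumPy are incidental choices) checks Eqs. (A.77) and (A.78) and the eigenvalue relation for â†â:

```python
import numpy as np

# Truncated Fock-space matrix of the annihilation operator:
# a[m, n] = sqrt(n) * delta_{m, n-1}, so that a|n> = sqrt(n)|n-1>.
# (Truncation at N levels is an approximation; relations involving
# the top level |N-1> are not faithful.)
N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.conj().T                            # creation operator

n = 4
ket_n = np.zeros(N); ket_n[n] = 1.0          # number state |n>
ket_nm1 = np.zeros(N); ket_nm1[n - 1] = 1.0  # |n-1>
ket_np1 = np.zeros(N); ket_np1[n + 1] = 1.0  # |n+1>

# Eqs. (A.77) and (A.78)
assert np.allclose(a @ ket_n, np.sqrt(n) * ket_nm1)
assert np.allclose(adag @ ket_n, np.sqrt(n + 1) * ket_np1)

# |n> is an eigenstate of the number operator a^dag a, eigenvalue n
assert np.isclose(ket_n @ (adag @ a) @ ket_n, n)

# a acting on the vacuum |0> gives the null state
vac = np.zeros(N); vac[0] = 1.0
assert np.allclose(a @ vac, 0.0)
```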
Of particular importance are the eigenstates of the annihilation operator:

    â|α⟩ = α|α⟩,    (A.79)

where α is a complex number (because â is not an Hermitian operator). These eigenstates are known as coherent states. There are no such eigenstates of the creation operator â†.
Exercise A.17 Show this.
Hint: Assume that there exist states |β⟩ such that â†|β⟩ = β|β⟩, and consider the inner product ⟨n|(â†)ⁿ⁺¹|β⟩. Hence show that the inner product of |β⟩ with any number state is zero.
It is easy to find an expression for |α⟩ in terms of the number states as follows. In general we have

    |α⟩ = Σₙ₌₀^∞ cₙ|n⟩.    (A.80)

Since â|α⟩ = α|α⟩ we get

    Σₙ₌₀^∞ √n cₙ|n − 1⟩ = Σₙ₌₀^∞ αcₙ|n⟩.    (A.81)

By equating the coefficients of the number states on both sides we get the recursion relation

    cₙ₊₁ = αcₙ/√(n + 1),    (A.82)

which, together with normalization, yields

    |α⟩ = exp(−|α|²/2) Σₙ (αⁿ/√(n!)) |n⟩.    (A.83)
The state |α := 0⟩ is the same state as the state |n := 0⟩. For finite α the coherent state has a non-zero mean photon number:

    ⟨α|â†â|α⟩ = (⟨α|â†)(â|α⟩) = |α|².    (A.84)

The number distribution (the probability of measuring a certain excitation number) for a coherent state is a Poissonian distribution of mean |α|²:

    ℘ₙ = |⟨n|α⟩|² = e^{−|α|²} |α|²ⁿ/n!.    (A.85)

This distribution has the property that the variance is equal to the mean. That is,

    ⟨(â†â)²⟩ − ⟨â†â⟩² = |α|².    (A.86)
Exercise A.18 Verify this, either from the distribution (A.85) or directly from the coherent state, using the commutation relations for â and â†.
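As an illustration of Exercise A.18, the Poissonian statistics can also be checked numerically from the expansion (A.83), truncated at a photon number large enough that the neglected tail is negligible (the value α = 1.7 + 0.3i below is an arbitrary choice, not from the text):

```python
import numpy as np
from math import factorial

# Number-state coefficients c_n of |alpha> from Eq. (A.83), truncated
# at N terms (N chosen so the neglected Poissonian tail is negligible).
alpha = 1.7 + 0.3j
N = 60
n = np.arange(N)
c = np.exp(-abs(alpha)**2 / 2) * alpha**n \
    / np.sqrt(np.array([factorial(k) for k in n], dtype=float))

pn = np.abs(c)**2                 # number distribution, Eq. (A.85)
mean = np.sum(n * pn)
var = np.sum(n**2 * pn) - mean**2

assert np.isclose(pn.sum(), 1.0)            # state is normalized
assert np.isclose(mean, abs(alpha)**2)      # Poissonian mean
assert np.isclose(var, abs(alpha)**2)       # variance = mean, Eq. (A.86)
```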
For a coherent state the quadrature moments follow easily:

    ⟨α|Q̂|α⟩ = σ√2 Re[α],    (A.87)
    ⟨α|P̂|α⟩ = (√2/σ) Im[α],    (A.88)
    ⟨α|(ΔQ̂)²|α⟩ = σ²/2,    (A.89)
    ⟨α|(ΔP̂)²|α⟩ = 1/(2σ²),    (A.90)
    ⟨α|(ΔQ̂ ΔP̂ + ΔP̂ ΔQ̂)/2|α⟩ = 0.    (A.91)

The overlap of two coherent states is

    ⟨β|α⟩ = exp(−|β|²/2 + β*α − |α|²/2),    (A.92)

from which it follows that |⟨β|α⟩|² = e^{−|β−α|²}. If α and β are very different (as they would be if they represent two macroscopically distinct fields) then the two coherent states are very nearly orthogonal. Another consequence of their non-orthogonality is that the coherent states form an overcomplete basis. Whereas for number states we have

    Σₙ |n⟩⟨n| = 1,    (A.93)

for coherent states we have

    π⁻¹ ∫ d²α |α⟩⟨α| = 1.    (A.94)

This has applications in defining the trace, for example

    Tr[ρ] = π⁻¹ ∫ d²α ⟨α|ρ|α⟩.    (A.95)
Coherent states remain coherent under the free evolution generated by

    Ĥ = ω â†â.    (A.96)

Here we have dropped the ω/2 from the Hamiltonian (A.67), since it has no physical consequences (at least outside general relativity). The amplitude |α| of the states remains the same; only the phase changes, at rate ω (as expected):

    exp(−iĤt)|α⟩ = |αe^{−iωt}⟩.    (A.97)
A coherent state can be generated from the vacuum state by a displacement:

    |α⟩ = D̂(α)|0⟩,    (A.98)

where

    D̂(α) = exp(αâ† − α*â)    (A.99)

is called the displacement operator. This is easiest to see as follows. First, note that if we define Ô(θ) = D̂†(θα)âD̂(θα), then

    dÔ/dθ = D̂†(θα)[â, αâ† − α*â]D̂(θα) = α.    (A.100)

Exercise A.21 Show this, by analogy with the Heisenberg equations of motion.

Since Ô(0) = â, we see that Ô(θ) = â + θα is the solution to Eq. (A.100). Now, applying this at θ = 1,

    âD̂(α)|0⟩ = D̂(α)D̂†(α)âD̂(α)|0⟩ = D̂(α)(â + α)|0⟩ = αD̂(α)|0⟩,    (A.101)

so that D̂(α)|0⟩ is indeed an eigenstate of â with eigenvalue α.
A displaced squeezed state can be defined similarly:

    |α, r, φ⟩ = D̂(α)|r, φ⟩,    (A.102)

where the squeezed vacuum state is

    |r, φ⟩ = exp{ r[e^{−2iφ}â² − e^{2iφ}(â†)²]/2 }|0⟩.    (A.103)
There are three commonly used distributions, called the P, Q and W distributions (or functions). Is there a distribution P(α, α*)¹ such that

    Tr[ρ fₙ(â, â†)] = ∫ d²α P(α, α*)fₙ(α, α*),    (A.106)

where fₙ is a normally ordered expression? The answer is yes, but in general P is an extremely singular function (i.e. more singular than a δ-function). The relation between the P function (as it is called) and ρ is

    ρ = ∫ d²α P(α, α*)|α⟩⟨α|.    (A.107)

Thus, if P is only as singular as a δ-function, then ρ is a mixture of coherent states.

Exercise A.22 Assuming a non-singular P function, verify Eq. (A.106) from Eq. (A.107).

¹ We write, for example, P(α, α*) rather than P(α), to avoid implying (wrongly) that these functions are analytical functions in the complex plane.
Similarly, is there a distribution Q(α, α*) such that

    Tr[ρ f_a(â, â†)] = ∫ d²α Q(α, α*)f_a(α, α*),    (A.108)

where f_a is an antinormally ordered expression? Again the answer is yes. Moreover, the Q function (as it is called) is always smooth and positive, and is given by

    Q(α, α*) = π⁻¹⟨α|ρ|α⟩.    (A.109)

The most useful of the three for our purposes is the Wigner function, which corresponds to symmetric operator ordering:

    W(α, α*) = π⁻² ∫ d²β Tr[ρ exp{β(â† − α*) − β*(â − α)}].    (A.112)
The Wigner function is always a smooth function, but it can take negative values. It was originally defined by Wigner as a function of position q and momentum p. From Eq. (A.66), with σ = 1, these are related to α by

    α = (1/√2)(q + ip).    (A.113)
In terms of these variables (using β = x − ik),

    W(q, p) = (2π)⁻² ∫dk ∫dx Tr{ρ exp[ik(Q̂ − q) + ix(P̂ − p)]}.    (A.114)

Note that the characteristic length σ of the harmonic oscillator does not enter into this expression.
A particularly appealing feature of the Wigner function is that its marginal distributions
are the true probability distributions. That is,
    ∫ dq W(q, p) = ℘(p) = ⟨p|ρ|p⟩,    (A.115)
    ∫ dp W(q, p) = ℘(q) = ⟨q|ρ|q⟩.    (A.116)
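These marginal relations can be checked numerically in a simple case. For the vacuum state (with σ = 1) the Wigner function evaluates to the Gaussian W(q, p) = π⁻¹exp(−q² − p²) — a standard result assumed rather than derived here — and its p marginal should reproduce ℘(q) = |⟨q|0⟩|² = π⁻¹ᐟ²e^{−q²}:

```python
import numpy as np

# Vacuum-state Wigner function (sigma = 1): the Gaussian
# W(q, p) = exp(-q^2 - p^2)/pi  (standard result, assumed here).
q = np.linspace(-6, 6, 401)
p = np.linspace(-6, 6, 401)
dq, dp = q[1] - q[0], p[1] - p[0]
Q, P = np.meshgrid(q, p, indexing="ij")
W = np.exp(-Q**2 - P**2) / np.pi

# Marginal over p reproduces the position distribution, Eq. (A.116):
# wp(q) = |<q|0>|^2 = exp(-q^2)/sqrt(pi).
marginal_q = W.sum(axis=1) * dp
assert np.allclose(marginal_q, np.exp(-q**2) / np.sqrt(np.pi), atol=1e-8)

# The Wigner function is normalized, as a joint distribution must be.
assert np.isclose(W.sum() * dq * dp, 1.0)
```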
For any two operators Â and B̂ whose commutator is a c-number, we have

    exp(Â + B̂) = exp(Â)exp(B̂)exp(−[Â, B̂]/2).    (A.118)

Using this, the Wigner function can be rewritten as

    W(q, p) = (2π)⁻² ∫dk ∫dx Tr[ρ e^{ik(Q̂−q)} e^{ix(P̂−p)}] e^{+ikx/2}    (A.119)
            = (2π)⁻² ∫dk ∫dx Tr[ρ e^{ix(P̂−p)} e^{ik(Q̂−q)}] e^{−ikx/2}.    (A.120)
From this, it is easy to prove the following useful operator correspondences:

    Q̂ρ → (q + (i/2)∂/∂p) W(q, p),    (A.121)
    ρQ̂ → (q − (i/2)∂/∂p) W(q, p),    (A.122)
    P̂ρ → (p − (i/2)∂/∂q) W(q, p),    (A.123)
    ρP̂ → (p + (i/2)∂/∂q) W(q, p).    (A.124)

Exercise A.26 Show these. This means showing, for example, that

    (2π)⁻² ∫dk ∫dx Tr[Q̂ρ e^{ix(P̂−p)} e^{ik(Q̂−q)}] e^{−ikx/2} = (q + (i/2)∂/∂p) W(q, p).    (A.125)
Note that here ρ is not restricted to being a state matrix. It can be an arbitrary operator with Wigner representation W(q, p), provided that the integrals converge and boundary terms can be ignored.
Appendix B
Stochastic differential equations
Consider a first-order differential equation driven by noise:

    Ẋ = α(X) + β(X)ξ(t).    (B.1)

Here, the time argument of X has been omitted, α and β are arbitrary real functions, and ξ(t) is a rapidly varying random process. This process, referred to as noise, is continuous in time, has zero mean and is a stationary process. The last descriptor means that all of its statistics, including in particular its correlation function

    E[ξ(t)ξ(t + τ)],    (B.2)

are invariant under a shift of the time origin; the correlation function thus depends only on τ. We normalize the noise so that

    ∫₋∞^∞ dτ E[ξ(t)ξ(t + τ)] = 1.    (B.3)
Note that Eq. (B.3) implies that [α] = [X]T⁻¹ and [β] = [X]T⁻¹ᐟ², where here [A] denotes the dimensionality of A and T is the time dimension.
We are interested in the case of Markovian SDEs, for which the correlation time of the noise must be zero. That is, we can replace Eq. (B.3) by

    ∫₋ε^ε dτ E[ξ(t)ξ(t + τ)] = 1,    (B.4)
for all ε > 0. In this limit, ξ(t) is called Gaussian white noise, which is completely characterized by the two moments

    E[ξ(t)ξ(t′)] = δ(t − t′),    (B.5)
    E[ξ(t)] = 0.    (B.6)
If one were to assume that the chain rule of standard calculus applies, so that for any smooth function f

    ḟ(X) = f′(X)Ẋ,    (B.7)

that the infinitesimal increment of X is equal to its rate of change multiplied by dt,

    dX = Ẋ dt = [α(X) + β(X)ξ(t)]dt,    (B.8)

and, further, that the stochastic term ξ(t) were independent of the system at the same time, then one would derive the expected increment in X from t to t + dt to be

    E[dX] = α(x)dt.    (B.9)

The second assumption here seems perfectly reasonable, since the noise is not correlated with any of the noise which has interacted with the system in the past, and so would be expected to be uncorrelated with the system. Applying the same arguments to f yields

    E[df] = f′(x)α(x)dt.    (B.10)

In particular, for f(X) = X² this implies that the variance of X evolves as

    d(E[X²] − E[X]²) = 2{E[Xα(X)] − E[X]E[α(X)]}dt,    (B.11)

exactly as it would in the absence of noise.
That is to say, the stochastic term has not introduced any noise into the variable X.
Obviously this result is completely contrary to what one would wish from a stochastic
equation. The lesson is that it is invalid to make simultaneously the following three
assumptions.
1. The chain rule of standard calculus applies (Eq. (B.7)).
2. The infinitesimal increment of a quantity is equal to its rate of change multiplied by dt (Eq. (B.8)).
3. The noise and the system at the same time are independent.
With a Stratonovich SDE the first assumption is true, and the usual explanation [Gar85] is
that the second is also true but that the third assumption is false. Alternatively (and this is
the interpretation we adopt), one can characterize a Stratonovich SDE by saying that the
second assumption is false (or true only in an implicit way) and that the third is still true.
In this way of looking at things, the fluxion Ẋ in a Stratonovich SDE is just a symbol that can be manipulated using the usual rules of calculus. It should not be turned into a ratio of differentials dX/dt. In particular, E[Ẋ] is not equal to dE[X]/dt in general. This point of view is useful for later generalization to jump processes in Section B.6, where one can still consider starting with an SDE containing non-singular noise, and then taking the singular limit. In the jump case, the third assumption is inapplicable, so the problem must lie with the second assumption. Since the term Stratonovich is restricted to the case of Gaussian white noise, we will also use a more general terminology, referring to any SDE involving Ẋ as an implicit equation.
A different choice of which postulates to relax is that of the Ito stochastic calculus.
With an Ito SDE, the first assumption above is false, the second is true in an explicit
manner and the third is also true (for Gaussian white noise, but not for jumps). The Ito
form has the advantage that it simply allows the increment in a quantity to be calculated,
and also allows ensemble averages to be taken easily. It has the disadvantage that one
cannot use the usual chain rule.
If we write the Ito version of Eq. (B.1) as

    dX(t) ≡ X(t + dt) − X(t) = α(X)dt + β(X)dW(t),    (B.12)

where

    dW(t) = ξ(t)dt,    (B.13)

and define

    W(t) = ∫_{t₀}^{t} ξ(s)ds,    (B.14)

then this has all of the properties of a Wiener process. That is, if we define ΔW(t) = W(t + Δt) − W(t), then this is independent of ΔW(s) for s < t, and has a Gaussian distribution with zero mean and variance Δt:

    Pr[ΔW(t) ∈ (w, w + dw)] = [2πΔt]⁻¹ᐟ² exp[−w²/(2Δt)]dw.    (B.15)

It is actually quite easy to see these results. First, the independence of ΔW(t) from ΔW(s) for s < t follows simply from Eq. (B.5). Second, it is easy to show that

    E[ΔW(t)²] = Δt,    (B.16)
    E[ΔW(t)] = 0.    (B.17)
Strictly, however, the white noise ξ(t) = dW(t)/dt does not exist as an ordinary function. This is another way of seeing why stochastic calculus is a tricky business and why we have to worry about the Ito versus Stratonovich definitions.
In Eq. (B.12) we have introduced a convention of indicating Ito equations by an explicit representation of an infinitesimal increment (as on the left-hand side of Eq. (B.12)), whereas Stratonovich equations will be indicated by an implicit equation with a fluxion on the left-hand side (as in Eq. (B.1)). If an Ito (or explicit) equation is given as

    dX = a(X)dt + b(X)dW(t),    (B.18)

then the corresponding Stratonovich equation is

    Ẋ = a(X) − ½b′(X)b(X) + b(X)ξ(t).    (B.19)
However, the nonsense result (B.11) is avoided because the chain rule does not apply to calculating df(X). The actual increment in f(X) is simple to calculate by using a Taylor expansion for f(X + dX). The difference from the usual chain rule is that second-order infinitesimal terms cannot necessarily be ignored. This arises because the noise is so singular that second-order noise infinitesimals are as large as first-order deterministic infinitesimals. Specifically, the infinitesimal Wiener increment dW(t) can be assumed to be defined by the following Ito rules:

    E[dW(t)²] = dt,    (B.21)
    E[dW(t)] = 0.    (B.22)

These can be obtained from Eqs. (B.16) and (B.17) simply by taking the infinitesimal limit Δt → dt.
Note that there is actually no restriction that dW(t) must have a Gaussian distribution. As long as the above moments are satisfied, the increment ΔW(t) over any finite time will be Gaussian, by the central limit theorem. By a similar argument, it is actually possible to omit the expectation value in Eq. (B.21) because, over any finite time, a time average effects an ensemble average of what is primarily a deterministic rather than stochastic quantity. This can be seen as follows. Consider the variable

    Σ = Σ_{j=0}^{N−1} [ΔW(tⱼ)]²,    (B.23)

where tⱼ = jΔt and Δt = t/N. Then it is easy to show that

    E[Σ] = t,    (B.24)
    E[Σ²] − E[Σ]² = t²/(N/2).    (B.25)
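A short simulation (an illustration, with arbitrary parameter values) confirms Eqs. (B.24) and (B.25): the sum Σ concentrates on t as N grows, which is the operational content of omitting the expectation in the Ito rule (B.21):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum of squared Wiener increments over [0, t], Eq. (B.23).  Its mean
# is t (Eq. (B.24)) and its variance is t^2/(N/2) (Eq. (B.25)), so for
# N -> infinity it becomes deterministic: effectively dW^2 = dt.
t, N, runs = 1.0, 500, 5000
dt = t / N
dW = rng.normal(0.0, np.sqrt(dt), size=(runs, N))
sigma = (dW**2).sum(axis=1)       # one sample of the sum per run

assert abs(sigma.mean() - t) < 5e-3
assert abs(sigma.var() - t**2 / (N / 2)) < 5e-4
```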
Expanding f(X + dX) to second order thus gives the Ito chain rule

    df(X) = f′(X)dX + ½f″(X)(dX)².    (B.26)

Specifically, with dX given by Eq. (B.18), and using the rule dW(t)² = dt,

    df(X) = [f′(X)a(X) + ½f″(X)b(X)²]dt + f′(X)b(X)dW(t).    (B.27)
With this definition, and with f(X) = X², one finds that the expected increase in the variance of X in a time dt is

    E[dX(t)²] − (E[dX(t)])² = b(x)²dt.    (B.28)

That is to say, the effect of the noise is to increase the variance of X. Thus, the correct use of the stochastic calculus evades the absurd result of Eq. (B.11).
To see where the Stratonovich form comes from, consider the simplest nontrivial case, with no deterministic drift:

    Ẋ = β(X)ξ(t).    (B.29)

Assuming that the chain rule of standard calculus applies, and that the noise at time t is independent of the system at that time, we have shown that naively turning this from an equation for the rate of change of X into an equation for the increment of X,

    X(t + dt) = X(t) + β(X)ξ(t)dt,    (B.30)

leads to the nonsense result of Eq. (B.11). Instead, the increment should be calculated as

    X(t + dt) = X(t) + ∫_t^{t+dt} ds β(X(s))ξ(s)    (B.31)
              ≈ X(t) + ξ(t)∫_t^{t+dt} ds β(X(s)).    (B.32)
Note that ξ(t) is assumed constant while X(s) changes. This is an expression of the fact that the noise ξ(t) cannot in reality be δ-correlated. As emphasized above, equations of the
and the idealization to white noise is made later. Thus the physical noise will have some
finite correlation time over which it remains relatively constant. This idealization is valid
if the physical correlation time is much smaller than the characteristic evolution time of
the system.
We now expand Eq. (B.31) to second order in dt. As in the Ito chain rule, this is all that is necessary. The result, using Eq. (B.32), is

    dX(t) = β(X)dW(t) + ½β′(X)β(X)dt,    (B.37)

where we have identified ξ(t)dt = dW(t) and used the rule dW(t)² = dt. This is precisely the Ito equation corresponding, by Eq. (B.19), to the Stratonovich equation (B.29).

In numerical simulations, the ensemble average of an arbitrary function f of X(t) is estimated as

    F̄[f(X(t))] = (1/M) Σ_{j=1}^{M} f(Xⱼ(t)),    (B.38)

where Xⱼ(t) is the result from the j-th run and M is the total number of runs. The error in the estimate F̄[f(X(t))] can be estimated by the usual statistical formula

    ΔF̄[f(X(t))] = √( { F̄[(f(X(t)))²] − (F̄[f(X(t))])² } / M ).    (B.39)
Thus M has to be chosen large enough for this to be below some acceptable level.
Two-time averages such as correlation functions, and the uncertainties in these estimates,
may be determined in a similar way.
The simplest way to simulate an Ito SDE such as Eq. (B.18) is by the Euler method:

    X(t_{j+1}) = X(tⱼ) + a(X(tⱼ))Δt + b(X(tⱼ))√Δt Sⱼ,    (B.40)

where Δt is a very small increment, with t_{j+1} − tⱼ = Δt, and Sⱼ is a random number with a standard normal distribution¹ generated by the computer for this time step. The S_{j+1} for the next time step is a new number, and the numbers in one run should be independent of those in any other run. If one were to use a more sophisticated integration routine than the Euler one, then the Stratonovich equation may be the one needed. See Ref. [KP00] for a discussion.
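As a concrete illustration of the Euler method (B.40), the sketch below integrates the Ornstein-Uhlenbeck equation dX = −γX dt + c dW(t), whose stationary distribution has mean zero and variance c²/(2γ). The equation and all parameter values are chosen for illustration only; they are not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler integration, Eq. (B.40), of the Ornstein-Uhlenbeck SDE
#   dX = -gamma*X dt + c dW,   stationary variance = c^2/(2*gamma).
gamma, c = 1.0, 0.5
dt, steps, runs = 1e-3, 5000, 10000
X = np.zeros(runs)                    # every run starts at X = 0

for _ in range(steps):
    S = rng.normal(size=runs)         # fresh standard normals, Eq. (B.40)
    X += -gamma * X * dt + c * np.sqrt(dt) * S

# After t = 5 relaxation times the ensemble is essentially stationary.
assert abs(X.mean()) < 0.02
assert abs(X.var() - c**2 / (2 * gamma)) < 0.015
```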
In some cases, it is possible to obtain analytical solutions to an SDE. By this we mean a closed integral form. Of course, this integral will not evaluate to a number, because it will contain the noise term ξ(t). However, it can be manipulated so as to give moments easily. Again, the question arises, which equation is actually integrated in these cases, the Ito one or the Stratonovich one? Here the answer is that in practice it does not matter. The only cases in which an analytical solution is possible are those in which the Ito equation has been (perhaps by an appropriate change of variable) put in the form

    dX = a(t)dt + b(t)dW,    (B.41)

that is, where a and b are not functions of X. In this case the Stratonovich equation is

    Ẋ = a(t) + b(t)ξ(t).    (B.42)

That is, it looks the same as the Ito equation, so one could naively integrate it instead, to obtain the solution

    X(t) = X(0) + ∫₀ᵗ a(s)ds + ∫₀ᵗ b(s)ξ(s)ds.    (B.43)
¹ That is, a Gaussian distribution with mean zero and variance unity.
Consider again the Ito SDE

    dX(t) = a(X)dt + b(X)dW(t),    (B.45)

and define ρ(x) = δ(X(t) − x). Expanding dρ(x) = δ(X(t) + dX(t) − x) − δ(X(t) − x) to second order in dX(t), and using the Ito rule dW(t)² = dt, gives

    dρ(x) = −(∂/∂x)δ(X − x)[a(X)dt + b(X)dW(t)] + ½(∂²/∂x²)δ(X − x) b(X)²dt.    (B.46)

Exercise B.3 Convince yourself that, for an arbitrary smooth function f(X),

    f(X)(∂/∂x)δ(X − x) = (∂/∂x)[δ(X − x)f(x)],    (B.47)
    f(X)(∂²/∂x²)δ(X − x) = (∂²/∂x²)[δ(X − x)f(x)].    (B.48)

Using these results in Eq. (B.46) gives

    dρ(x) = { −(∂/∂x)[a(x)dt + b(x)dW(t)] + ½(∂²/∂x²)b(x)²dt } ρ(x).    (B.49)
If ρ(x; t) = δ(X(t) − x) at some time, then by construction this will remain true for all times, by virtue of the stochastic equation (B.49). However, this equation (which we call a stochastic FPE) is more general than the SDE (B.45), insofar as it allows for initial uncertainty about X. Moreover, it allows the usual FPE to be obtained, by assuming that we do not know the particular noise process dW driving the stochastic evolution of X and ρ(x). Replacing dW in Eq. (B.49) by its expectation value gives the (deterministic) FPE

    ρ̇(x) = { −(∂/∂x)a(x) + ½(∂²/∂x²)b(x)² } ρ(x).    (B.50)

Note that this ρ(x) is not the same as that appearing in Eq. (B.49), because we are no longer conditioning the distribution upon knowledge of the noise process. In Eq. (B.50), the term involving first derivatives is called the drift term, and that involving second derivatives the diffusion term.
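The FPE (B.50) can also be integrated directly as a partial differential equation. The sketch below (an illustration, not from the text) does this with naive finite differences for the linear drift a(x) = −x and constant b = 0.5 — an Ornstein-Uhlenbeck process, whose exact Gaussian solution is known — and compares the evolved moments with the exact ones. All grid and parameter choices are illustrative.

```python
import numpy as np

# Finite-difference integration of the FPE (B.50) for the
# Ornstein-Uhlenbeck process a(x) = -x, b(x) = 0.5:
#   d(rho)/dt = d/dx (x rho) + D d^2(rho)/dx^2,   D = b^2/2.
# Exact moments: E[X](t) = m0 exp(-t),
#                Var(t)  = v0 exp(-2t) + D (1 - exp(-2t)).
D = 0.5**2 / 2
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
dt, T = 2e-3, 1.0                       # dt respects dt < dx^2/(2 D)

m0, v0 = 1.0, 0.09                      # initial Gaussian distribution
rho = np.exp(-(x - m0)**2 / (2 * v0)) / np.sqrt(2 * np.pi * v0)

for _ in range(int(T / dt)):
    drift = np.gradient(x * rho, dx)    # drift term d/dx (x rho)
    diff = np.zeros_like(rho)           # diffusion term (interior only)
    diff[1:-1] = (rho[2:] - 2 * rho[1:-1] + rho[:-2]) / dx**2
    rho = rho + dt * (drift + D * diff)

norm = rho.sum() * dx
mean = (x * rho).sum() * dx / norm
var = (x**2 * rho).sum() * dx / norm - mean**2

assert abs(norm - 1.0) < 1e-3
assert abs(mean - m0 * np.exp(-T)) < 1e-2
assert abs(var - (v0 * np.exp(-2 * T) + D * (1 - np.exp(-2 * T)))) < 1e-2
```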
explicit solution. Similarly, if one has an equation driven by physical noise that one then idealizes as a point process (that is, a time-series of δ-functions), one also ends up with an implicit equation that one has to make explicit. The implicit-explicit relation is more general than the Ito-Stratonovich one, for two reasons. First, for point-process noise the defining characteristic of an Ito equation, namely that the stochastic increment is independent of the current values of the system variables, need not be true. Secondly, when feedback is considered, this Ito rule fails even for Gaussian white noise. That is because the noise which is fed back is necessarily correlated with the system at the time it is fed back, and cannot be decorrelated by invoking Ito calculus.
Although point-process noise may be non-white (that is, it need not have a flat noise spectrum), it must still have an infinite bandwidth. If the correlation function for the noise were a smooth function of time, then there would be no need to use any sort of stochastic calculus; the normal rules of calculus would apply. But, for any noise with a singular correlation function, it is appropriate to make the implicit-explicit distinction. We write a general explicit equation (in one dimension) as

    dX = k(X)dM(t).    (B.51)
Here dM(t) is a stochastic increment, the simplest example of which is the point-process increment dN(t), defined by

    E[dN(t)] = λ(X)dt,    (B.52)
    dN(t)² = dN(t).    (B.53)

Here λ(X) is a positive function of the random variable X (here assumed known at time t). Equation (B.52) indicates that the mean of dN(t) is of order dt and may depend on the system. Equation (B.53) simply states that dN(t) equals either zero or one, which is why it is called a point process. From the stochastic evolution it generates, it is also known as a jump process. Because dN is infinitesimal (at least in its mean), we can say that all second- and higher-order products containing dt are o(dt). This notation means that such products (like dN dt, but not dN²) are negligible compared with dt. Obviously all moments of dN(t) are of the same order as dt, so the chain rule for f(X) will completely fail.
Unlike dW, which is independent of the system at the same time, dN does depend on the system, at least statistically, through Eq. (B.52). In fact, we can use the above equations to show that

    E[dN(t)f(X)] = E[λ(X)f(X)]dt,    (B.54)
for arbitrary functions f. We now define the corresponding implicit equation,

    Ẋ = χ(X)η(t),    (B.55)

where

    η(t) = dM(t)/dt.    (B.56)

Equation (B.55) is an implicit equation, in that it gives the increment in X only implicitly. It has the advantage that f(X) would obey an implicit equation as given by the usual chain rule,

    ḟ(X) = f′(X)χ(X)η(t).    (B.57)
Notice that the third distinction between Ito and Stratonovich calculus, namely that based on the independence of the noise term and the system at the same time, has not entered this discussion. This is because, even in the explicit equation (B.51), the noise may depend on the system. The independence condition is simply a peculiarity of Gaussian white noise. The implicit-explicit distinction is more general than the Stratonovich-Ito distinction. As we will show below, the relationship between the Stratonovich and Ito SDEs can easily be derived within this more general framework.
The general problem is to find the explicit form of an implicit SDE with arbitrary noise. For implicit equations, the usual chain rule (B.57) applies, and can be rewritten as

    ḟ = f′(X)χ(X)η(t) ≡ χ_f(f)η(t),    (B.58)

where this equation defines χ_f(f). Now, in order to solve Eq. (B.55), it is necessary to find an explicit expression for the increment in X. The correct answer may be found by expanding the Taylor series to all orders in dM. This can be written formally as

    X(t + dt) = exp[χ(x)dM(t)(∂/∂x)] x|_{x=X(t)}.    (B.60)
Here we have used the relation

    (d/ds)X(s) = χ(X(s)) (dM(t)/dt)|_{s=t},    (B.61)

which is the explicit meaning of the implicit Eq. (B.55). Note that η(t) is assumed to be constant, while X(s) is evolved, for the same reasons as explained following Eq. (B.32). If the noise η(t) is the limit of a physical process (which is the limit for which Eq. (B.55) is intended to apply), then it must have some finite correlation time over which it remains relatively constant. The noise can be considered δ-correlated if that time can be considered to be infinitesimal compared with the characteristic evolution time of the system X.
The explicit SDE is thus defined to be

    dX(t) = {exp[χ(x)dM(t)(∂/∂x)] − 1} x|_{x=X(t)}.    (B.62)
This expression will converge for all χ(X) for dM = dN or dM = dW, and is compatible with the chain-rule requirement (B.58) for the implicit form. This can be seen from calculating the increment in f(X) using the explicit form:

    df = f(X(t) + dX(t)) − f(X(t))
       = f(exp[χ(x)dM(t)(∂/∂x)] x|_{x=X(t)}) − f(X(t))
       = exp[χ(x)dM(t)(∂/∂x)] f(x)|_{x=X(t)} − f(X(t))
       = {exp[χ_f(f)dM(t)(∂/∂f)] − 1} f|_{f=f(X(t))},    (B.64)

as expected from Eq. (B.58). This completes the justification for Eq. (B.62) as the correct explicit form of the implicit Eq. (B.55).
For deterministic processes (dM(t) = dt), there is no distinction between the explicit and implicit forms, since only the first-order expansion of the exponential remains with dt infinitesimal. There is also no distinction if χ(x) is a constant. For Gaussian white noise, the formula (B.62) is the rule given in Section B.3 for converting from Stratonovich to Ito form. That is, if the Stratonovich SDE is Eq. (B.55) with dM(t) = dW(t), then the Ito SDE is

    dX(t) = χ(X)dW(t) + ½χ′(X)χ(X)dt.    (B.65)

Exercise B.5 Show this, using the Ito rule dW(t)² = dt.
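The content of Eq. (B.65) is easy to see in a simulation. For χ(X) = X, the Stratonovich equation Ẋ = Xξ(t) corresponds to the Ito equation dX = X dW + ½X dt, for which E[X(t)] = X(0)e^{t/2}. Below (an illustration, not from the text; all parameters arbitrary) the Euler method applied to the Ito form and a midpoint (Heun-type) rule applied to the Stratonovich form agree on this mean, while a naive Euler scheme that omits the correction term gives a constant mean instead:

```python
import numpy as np

rng = np.random.default_rng(2)

# chi(X) = X:  Stratonovich X' = X xi(t)  <=>  Ito dX = X dW + X dt/2,
# by Eq. (B.65), so E[X(t)] = X(0) exp(t/2).
t, steps, runs = 1.0, 1000, 50000
dt = t / steps
x_ito = np.ones(runs)     # Euler on the Ito form
x_str = np.ones(runs)     # midpoint (Heun-type) rule, Stratonovich form
x_naive = np.ones(runs)   # Euler on Stratonovich form, no correction

for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=runs)
    x_ito += x_ito * dW + 0.5 * x_ito * dt
    pred = x_str + x_str * dW            # predictor step
    x_str += 0.5 * (x_str + pred) * dW   # corrector: midpoint value of chi
    x_naive += x_naive * dW              # wrong for a Stratonovich SDE

target = np.exp(t / 2)
assert abs(x_ito.mean() - target) < 0.05
assert abs(x_str.mean() - target) < 0.05
assert abs(x_naive.mean() - 1.0) < 0.05  # naive scheme: E[X] stays at 1
```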
This rule implies that it is necessary to expand the exponential only to second order. This fact makes the inverse transformation (Ito to Stratonovich) easy. For the jump process, the rule dN(t)² = dN(t) means that the exponential must be expanded to all orders. This gives

    dX(t) = dN(t){exp[χ(x)(∂/∂x)] − 1} x|_{x=X(t)}.    (B.66)

In this case, the inverse transformation would not be easy to find in general, but there seems no physical motivation for requiring it.
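As an illustration of Eq. (B.66) (not from the text): for χ(x) = cx the operator identity exp[cx(∂/∂x)]x = xe^c shows that each jump multiplies X by e^c, so that dX = dN(t)(e^c − 1)X. For a constant rate λ(X) = λ, Eq. (B.54) then gives dE[X]/dt = λ(e^c − 1)E[X], which a direct simulation of the point process reproduces:

```python
import numpy as np

rng = np.random.default_rng(3)

# Jump SDE (B.66) with chi(x) = c*x: each jump takes X -> X e^c.
# With a constant rate lam, E[X(t)] = X(0) exp(lam (e^c - 1) t).
# Parameter values are illustrative.
lam, c, t = 2.0, 0.3, 1.0
steps, runs = 1000, 20000
dt = t / steps

X = np.ones(runs)
for _ in range(steps):
    dN = rng.random(runs) < lam * dt   # dN = 0 or 1, E[dN] = lam*dt
    X[dN] *= np.exp(c)                 # apply the jump where dN = 1

target = np.exp(lam * (np.exp(c) - 1.0) * t)
assert abs(X.mean() - target) < 0.05
```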
If, in the multidimensional case, the implicit equation is

    Ẋᵢ(t) = χᵢⱼ(X(t))ηⱼ(t),    (B.67)

then the explicit form is

    dXᵢ(t) = {exp[χₖⱼ(X)dMⱼ(t)(∂/∂Xₖ)] − 1} Xᵢ(t),    (B.68)

where repeated indices are summed over.
429
(B.69)
(B.70)
References
Coherent States: Past, Present, and Future (D. H. Feng, J. R. Klauder, and M. R. Strayer, eds.), World Scientific, Singapore, p. 75, 1994.
[CM87] C. M. Caves and G. J. Milburn, Quantum-mechanical model for continuous position measurements, Phys. Rev. A 36, 5543, (1987).
[CM91] S. L. Campbell and C. D. Meyer, Generalized inverses of linear transformations, Dover Publications, New York, 1991.
[CMG07] R. L. Cook, P. J. Martin, and J. M. Geremia, Optical coherent state discrimination using a closed-loop quantum measurement, Nature 446, 774, (2007).
[Con90] J. B. Conway, A course in functional analysis, 2nd edn, Springer, New York, 1990.
[Coo56] L. N. Cooper, Bound electron pairs in a degenerate Fermi gas, Phys. Rev. 104, 1189, (1956).
[CR98] A. N. Cleland and M. L. Roukes, Nanostructure-based mechanical electrometry, Nature 392, 160, (1998).
[CRG89] C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, Photons and atoms: Introduction to quantum electrodynamics, Wiley-Interscience, New York, 1989.
[CSVR89] H. J. Carmichael, S. Singh, R. Vyas, and P. R. Rice, Photoelectron waiting times and atomic state reduction in resonance fluorescence, Phys. Rev. A 39, 1200, (1989).
[CT06] T. M. Cover and J. A. Thomas, Elements of information theory, 2nd edn, Wiley-Interscience, New York, 2006.
[CW76] H. J. Carmichael and D. F. Walls, Proposal for the measurement of the resonant Stark effect by photon correlation techniques, J. Phys. B 9, L43, (1976).
[CWJ08] J. Combes, H. M. Wiseman, and K. Jacobs, Rapid measurement of quantum systems using feedback control, Phys. Rev. Lett. 100, 160503, (2008).
[CWZ84] M. J. Collett, D. F. Walls, and P. Zoller, Spectrum of squeezing in resonance fluorescence, Opt. Commun. 52, 145, (1984).
[DA07] D. D'Alessandro, Introduction to quantum control and dynamics, Chapman & Hall, London, 2007.
[Dat95] S. Datta, Electronic transport in mesoscopic systems, Cambridge University Press, Cambridge, 1995.
[Dav76] E. B. Davies, Quantum theory of open systems, Academic Press, London, 1976.
[DCM92] J. Dalibard, Y. Castin, and K. Mølmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68, 580, (1992).
[DDS+08] S. Deléglise, I. Dotsenko, C. Sayrin, J. Bernu, M. Brune, J.-M. Raimond, and S. Haroche, Reconstruction of non-classical cavity field states with snapshots of their decoherence, Nature 455, 510, (2008).
[DGKF89] J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis, State-space solutions to standard H₂ and H∞ control problems, IEEE Trans. Automatic Control 34, 831, (1989).
[DHJ+00] A. C. Doherty, S. Habib, K. Jacobs, H. Mabuchi, and S. M. Tan, Quantum feedback control and classical control theory, Phys. Rev. A 62, 012105, (2000).
[Die88] D. Dieks, Overlap and distinguishability of quantum states, Phys. Lett. A 126, 303-306, (1988).
[KU93] M. Kitagawa and M. Ueda, Squeezed spin states, Phys. Rev. A 47, 5138,
(1993).
[Lan08] P. Langevin, Sur la théorie du mouvement brownien, Comptes Rendus Acad. Sci. (Paris) 146, 550, (1908), English translation by D. S. Lemons and A. Gythiel, Am. J. Phys. 65, 1079, (1997).
[Lan88] R. Landauer, Spatial variation of currents and fields due to localized
scatterers in metallic conduction, IBM J. Res. Dev. 32, 306, (1988).
[Lan92] R. Landauer, Conductance from transmission: common sense points, Phys. Scripta T42, 110, (1992).
[LBCS04] M. D. LaHaye, O. Buu, B. Camarota, and K. C. Schwab, Approaching the
quantum limit of a nanomechanical resonator, Science 304, 74, (2004).
[LCD+ 87] A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and
W. Zwerger, Dynamics of the dissipative two-state system, Rev. Mod.
Phys. 59, 1, (1987).
[Lin76] G. Lindblad, On the generators of quantum dynamical semigroups,
Commun. Math. Phys. 48, 119, (1976).
[LJP+ 03] W. Lu, Z. Ji, L. Pfeiffer, K. W. West, and A. J. Rimberg, Real-time
detection of electron tunneling in a quantum dot, Nature 423, 422, (2003).
[Llo00] S. Lloyd, Coherent quantum feedback, Phys. Rev. A 62, 022108, (2000).
[LM02] A. I. Lvovsky and J. Mlynek, Quantum-optical catalysis: Generating
nonclassical states of light by means of linear optics, Phys. Rev. Lett. 88,
250401, (2002).
[LMPZ96] R. Laflamme, C. Miquel, J. P. Paz, and W. H. Zurek, Perfect quantum
error correcting code, Phys. Rev. Lett. 77, 198, (1996).
[LR91] P. Lancaster and L. Rodman, Solutions of continuous and discrete time
algebraic Riccati equations: A review, The Riccati Equation (S. Bittanti,
A. J. Laub, and J. C. E. Willems, eds.), Springer, Berlin, p. 11, 1991.
[LR02] A. P. Lund and T. C. Ralph, Nondeterministic gates for photonic
single-rail quantum logic, Phys. Rev. A 66, 032307, (2002).
[Lud51] G. Lüders, Concerning the state-change due to the measurement process, Ann. Phys. (Leipzig) 8, 322, (1951), English translation by K. Kirkpatrick in Ann. Phys. (Leipzig) 15, 633, (2006).
[LWP+ 05] N. K. Langford, T. J. Weinhold, R. Prevedel, K. J. Resch, A. Gilchrist,
J. L. O'Brien, G. J. Pryde, and A. G. White, Demonstration of a simple
entangling optical gate and its use in Bell-state analysis, Phys. Rev. Lett.
95, 210504, (2005).
[LZG+07] C.-Y. Lu, X.-Q. Zhou, O. Gühne, W.-B. Gao, J. Zhang, Z.-S. Yuan,
A. Goebel, T. Yang, and J.-W. Pan, Experimental entanglement of six
photons in graph states, Nature Phys. 3, 91, (2007).
[Mab08] H. Mabuchi, Coherent-feedback quantum control with a dynamic
compensator, Phys. Rev. A 78, 032323, (2008).
[Maj98] F. G. Major, The quantum beat: The physical principles of atomic clocks,
Lecture Notes in Physics, vol. 400, Springer, New York, 1998.
[MB99] G. J. Milburn and S. L. Braunstein, Quantum teleportation with squeezed
vacuum states, Phys. Rev. A 60, 937, (1999).
[MCD93] K. Mølmer, Y. Castin, and J. Dalibard, Monte Carlo wave-function
method in quantum optics, J. Opt. Soc. Am. B 10, 524, (1993).
[MCSM] A. E. Miller, O. Crisafulli, A. Silberfarb, and H. Mabuchi, On the
determination of the coherent spin-state uncertainty level, private
communication (2008).
Index
absorption
stimulated, 107
actuator, 283f, 296f
algebra
Lie, 318, 319, 324
algorithm
quantum Fourier transform, 396
quantum phase estimation, 396
Shor's, 341, 396
amplifier
operational, 196
ancillae, 21, 91
n-photon entangled, 388
qubit, 375
single-photon, 380, 387
vacuum field, 185, 186, 310
anticommutator, 110
apparatus
classical, 2, 98
quantum, 15, 25, 97, 98
approximation
Born, 99, 101, 104, 109, 116
Markov, 99, 105, 117
rotating-wave (RWA), 43, 104, 108, 109, 140,
303, 333, 337
Arthurs and Kelly model, 23
atom, 15
alkali, 261
hydrogen, 10
radiative decay of, 102
rubidium, 46
Rydberg, 133
three-level, 42
two-level, 16, 102, 128, 172, 259
atom lasers, 267
back-action
classical, 6, 33, 280, 282, 289
cavity QED, 42, 50, 133, 270
weakly driven, 272
chain rule
Ito, 422
Stratonovich, 419
channels
classical, 341
noisy, 354
quantum, 347
noisy, 355, 396
circuit diagram
quantum, 343
circuit QED, 139
clocks
atomic, 48, 96
co-operativity parameter, 273
single-atom, 277
codes
for detected errors, 359
linear, 359
quantum, 357
bit-flip, 357
erasure, 368
universal, 358, 368
redundancy, 356
stabilizer, 357, 360
generators of, 358
Steane, 359
coherence function
first-order, 165, 168, 194
second-order, 156, 270
communication
classical, 343, 345
quantum, 52, 341
commutator, 399
completeness
of a basis, 398
of conditional probabilities, 8
of configuration vector, 279, 348
of probability operators, 20
configuration, 1, 279
quantum, 308
control, xiii
bang-bang, 396
bilinear, 312
closed-loop, see feedback control
feedback, see feedback control
learning, xiv
open-loop, xiv, 396
quantum, xiii
single-qubit, 389, 397
controllability, 287
operator, 320
controller, 283f, 296f
cooling, 376
indirect, 379
Cooper-pair box, 138
correlation function, 423
direct detection, 155, 270
in-loop, 241
environment, 105, 116
Heisenberg picture, 183, 185, 186
heterodyne, 168
homodyne, 165
in-loop, 248
noise, 418
QPC current, 205, 212
reduced, 205
correlations
quantum, 234
squeezed light, 230
without correlata, 29
cost function, 54, 68, 87n2
additive, 282
quantum, 311
additive and quadratic, 295
quantum, 328
arbitrariness of, 301
cheap control, 299
terminal, 295
Coulomb blockade, 114
cryptography
classical, 341
quantum, 341
de Broglie wavelength, 123
decoherence, 15, 97, 121, 123, 125, 353
charge qubit, 374
double quantum dot, 208
quantum optical, 130
qubit, 138, 356, 362
decomposition
non-uniqueness of, 402
Schmidt, 407
density matrix, see state matrix
density operator, see state matrix
detectability, 290, 302
potential, 323, 325
detection
adaptive, 175
balanced, 84
direct, xiii, 154, 172
Heisenberg picture, 182
dyne, 83, 168, 309
Heisenberg picture, 186
adaptive, 83, 391, 392
single-photon, 391
effective bandwidth of, 201
finite bandwidth, 195, 215
heterodyne, 83, 166, 180, 335
Heisenberg picture, 185
homodyne, xiii, 83, 158, 178, 334, 352
Heisenberg picture, 184
imperfect, 190
inefficient, 190, 195, 222, 310
spectral, 178
with a noisy input field, 191
with dark noise, 194, 215
diffusion
anomalous, 112
momentum, 112, 123
quantum, xiii, 161, 168, 206, 210,
364
discrepancies
between observers, 292
eliminating, 293
discrimination
quantum state, 80, 85
experimental, 89, 92
unambiguous, 90, 92
distinguishability, 54
Dolinar receiver, 80
duality
detectability-stabilizability, 291
observability-controllability, 292
dynamical decoupling, 396
dynamics
free particle, 287, 291
linear, 284
uncertainties in, 339
effect, see probability operator
einselection, 122, 125
approximate, 123
Einstein summation convention,
169
electromagnetic field, see field
electromechanical systems
nano, 136
quantum, 136, 193
electron
harmonically trapped, 302
in a double quantum dot, 207, 372
in a quantum dot, 113, 201
spin-polarized, 116
emission
spontaneous, 102, 106, 361
stimulated, 107
ensembles
coherent state, 351
ignorance interpretation of, 125, 126
non-orthogonal, 125, 402
orthogonal, 126, 127, 346, 402
physically realizable, 126, 127, 129, 175, 327,
330f
preferred, 124, 125
pure state, 125, 401, 402
stationary, 173
uniform, 346
stochastically generated, 153, 423
uniform Gaussian, 326, 329f
entanglement, 15, 405
atom–field, 106, 275, 276
continuous variable, 348
measure of, 406
system–apparatus, 16, 28
system–environment, 99
entropy, 12, 36, 402, 406
environment, 15, 97, 354
bosonic, 113
fermionic, 113
equations
Belavkin, 309
Bellman, 283
quantum, 311
Bloch, 107, 128
Fokker–Planck, 253, 282, 424
deterministic, 425
stochastic, 196, 197, 425
Heisenberg, 403
Kushner–Stratonovich, 198, 280
superoperator, 200
Langevin, see Langevin equation
linear matrix, 286
master, see master equation
Maxwell, 107
Ornstein–Uhlenbeck, 253, 284, 302
Riccati, 290, 297
algebraic, 292, 297t
stabilizing solutions of, see stabilizing Riccati solutions
Schrödinger, 402
stochastic, 151, 162, 168
stochastic differential, see SDE
Zakai, 198
superoperator, 199
error
mean-square, 54
state-discrimination, 86
error correction
classical, 357
conditions for, 363
continuous, 368
for detected errors, 359
for monitored errors, 362
quantum, 353, 356
without measurement, 375
continuous, 376
error syndrome, 357
errors
bit-flip, 3, 354
classical, 353
detected, 359
continuously occurring, 361
irreversible
detected, 359
inherently, 359
phase-flip, 356
quantum, 353
detected, 359
monitored, 362
rate of, 374
reversible
detected, 359
estimate
best, 52
biased, 52
maximum-likelihood, 54
optimal, 68
Braunstein–Caves, 59, 62, 75
Cramér–Rao, 56
unbiased, 54, 73
estimation
Heisenberg limit for, 67, 397
maximum-likelihood, 95
parameter
adaptive, 76
phase, 65
adaptive, 80, 83, 397
phase difference, 68
quantum parameter, xi, 51
spatial displacement, 62
standard quantum limit for, 67, 68, 71, 83
time difference, 96, 260
estimator, 283f, 296f
robust, 340
evolution
completely positive, 119
diffusive, 161
discontinuous, 150
Heisenberg picture, 30, 141, 402
interaction frame, 99, 104, 116, 403
irreversible, 97
non-selective, 8, 25
continuous, 148, 149
non-unitary, 150, 273
reversible, 402
rotating frame, 404
Schrödinger picture, 30, 402
selective, 25
discontinuous, 148
unitary, 15, 149, 403
factor
photocurrent scaling, 189
quality
measurement, 232
resonator, 136, 243
feedback
for noise reduction, 230
Heisenberg picture, 217
more robust than feedforward, 236
negative, 225, 235
optical beam
with linear optics, 217, 220
with nonlinear optics, 231
with QND measurements, 231, 233
positive, 230, 255, 258
proportional, 301
quantum, 28, 81, 83, 216
all-optical, 266
coherent, 266, 267
globally optimal, 96
locally optimized, 96
semiclassical, 228
stability of, 223
usage of the term, 237
feedback control
deep quantum, 270
anti-decoherence, 241, 267
direct detection, 238
for atom lasers, 267
for cooling, 268, 330, 338
for Dicke-state preparation, 339
for gravitational-wave detection, 267
for linear systems
Markovian, 301, 306
optimal, 307
for noisy channels, 396
for rapid purification, 396
for spin-squeezing, 263, 339
Heisenberg picture, 243, 249
homodyne detection, 246
in cavity QED, 276
experimental, 271, 276
linear, 251
Markovian, 254, 256, 336
linear exponential quadratic Gaussian, 340
linear Gaussian
stability of, 298
linear quadratic Gaussian, 296, 328
asymptotic, 297
stability of, 298
Markovian, 240, 301, 335
for cooling, 302
for error correction, 362
limitations of, 337, 339
non-Markovian, 238
for error correction, 368
optimal, 215, 282, 311
for error correction, 368
with control constraints, 300
with time delay, 300
practicalities of, 242
quantum, xi, xiii, 216, 237
risk-sensitive, 339
semiclassical, 270
state-based, 269, 270, 283, 311
using QND measurements, 259
with inefficient detection, 243
with time delay, 243
feedforward
electro-optic, 390, 394
gain of, 236
less robust than feedback, 236
quantum, xi
fidelity, 40
error correction, 371, 378
teleportation, 346, 351, 353
field
classical, 106
coherent, 156
continuum, 15, 102, 141
driving, 43
electric, 46
in-loop, 226
input, 143, 181
magnetic, 263
mean, 217
microwave, 43, 139
output, 181
radiated, 15
single-mode, 16, 42, 83, 130, 391, 411
squeezed, 145
thermal, 145
two-mode, 69
vacuum, 143
white-noise, 145, 191, 249
field-programmable gate array, 85
filter
bandwidth of, 195
low-pass, 195
optical, 178
filtering
classical, xiii, 280
Kalman, 290, 293
linear low-pass, 369
quantum, xiii, 309
Wonham, 369
frequency
atomic, 46, 48
cavity resonance, 43
detuning, 43, 55, 106, 134
Josephson, 139
local oscillator, 83, 158, 166
mechanical, 136
microwave, 43
optical, 104
Rabi, 48, 106
single-photon, 43, 272
resonance, 178
sideband, 178
uncertainty in, 106
gain
feedback, 222, 223
optimal, 230, 302, 304
trade-off with bandwidth, 224
open-loop, 237
teleportation, 351
gate
C-NOT, 354
CS, 383
entangling, 382
Hadamard, 394
logic, 354
non-deterministic, 380
NS, 383
Toffoli, 375
group, 119, 319
Lie, 319
Pauli, 357
Haar measure, 40
Hamiltonian, 309, 402
dipole-coupling, 102
driving, 106, 109, 146
effective, 134, 136, 140
feedback, 242, 244, 249, 335
free, 404
interaction, 404
Jaynes–Cummings, 44
quadratic, 315
time-dependent, 403
Heisenberg cut, 15, 28, 97
Hilbert space, 398
infinite-dimensional, 19, 308, 398, 408
tensor-product, 405
three-dimensional, 12
two-dimensional, 16, 402
Holevo variance, 74
Hong–Ou–Mandel interference, 390
Husimi function, see Q function
identity
resolution of, 20
iff, 169
inequality
linear matrix, 294, 328, 334
information
classical, 52
Fisher, 54, 72, 74, 75
Shannon's theory of, 341
information processing
quantum, 341
NMR, 266
intensity
saturation, 261
interferometry, 68
adaptive, 76, 397
Mach–Zehnder, 68
multi-pass, 397
Ramsey, 47, 54, 56, 72, 133
single-photon, 397
jumps
classical, 426
quantum, xiii, 50, 120, 150, 204, 210, 273, 274, 362
knowledge
complete, 2, 8
incomplete, 2, 4, 8, 9, 405
increase of, 32, 36
maximal, 8, 9, 406
Lamb shift, 105
Landauer–Büttiker theory, 113
Langevin equations, 141, 281, 418
damped cavity, 188
damped electron oscillator, 303
quantum, 141, 144
linear, 315
non-Markovian, 245
Lie algebra, 383
limit
Doppler, 268
Heisenberg, 75, 264, 397
shot noise, 219
standard quantum, 67, 71, 75, 219, 331
ultimate quantum, 67, 74
lower bound
Braunstein–Caves, 59
Cramér–Rao, 56, 63
Helstrom, 88, 89, 96
Helstrom–Holevo, 52, 83
Ivanovic–Dieks–Peres, 91, 92
maps
classical
positive, 7
quantum
completely positive, see operations, 20, 119
stochastic, 38
master equation
Born–Markov, 99, 101
Brownian motion, 112, 121
caricature of, 123
high-temperature, 112
error correction, 376
feedback, 240, 247, 336
integro-differential, 101
Lindblad, 105, 119, 150, 309
Markovian, 101, 120
radiative damping, 102, 130
Redfield, 101
resonance fluorescence, 106, 128, 172
spin–boson, 109, 374
high-temperature, 111, 127
stochastic, 155, 161
invariance of, 170
non-Markovian, 239
time-dependent, 101, 110
matrices
controllability, 288, 319
correlation, 169
covariance, 284
conditioned, 292, 294
quantum, 313
unconditioned, 286, 294
diffusion, 284
quantum, 316
drift, 284
quantum, 316
Hurwitz, 285
positive definite, 285
positive semi-definite, 169, 285
properties of, 285
pseudoinverse of, 285
rank of, 285
spectral-norm bounded, 169
stable, 285
symplectic, 313
unravelling, 310
measurement strength
optimal, 330
SET, 374
measurements, 51
Heisenberg picture, 181
accidental, 359
adaptive, 76, 79, 175, 190, 391
experimental, 83
back-action-evading, 35, 38
Bell, 345, 395
canonical, 63, 65, 72, 397
classes of, 35, 41
closure of, 41
classical, 2
binary, 2, 37
ideal, 3
non-disturbing, 2, 46
petrol and match, 7
complementary, 26, 32
complete, 35, 37, 187, 391
constrained, 76
continuous-in-time, see monitoring
covariant, 60
efficient, 35
generalized, xii, 15, 97
Heisenberg picture, 29
incomplete, 37, 79, 392
inefficient, 32, 34, 35
minimally disturbing, 35, 40
non-projective, 15, 22
of an observable, 35, 38
orthodox, 41
position, 19, 23, 27, 32
projective, xii, 10, 35, 41
quantum, xi, xii
binary, 16, 25, 31, 45, 121
quantum-non-demolition, 39, 203, 229, 231,
259
repeated, 11, 22
sharp, 35, 37
simultaneous, 11, 14, 15, 23
single-qubit, 389, 397
Type I, 41
Type II, 41
unsharp, 37
von Neumann, 10, 35, 41
mesoscopic electronics, 113, 201
meter, 15, 53, 97
mode shape, 392
model reduction, xiv
modulator
acousto-optic, 84
electro-optic, 84, 221, 353
momentum kicks, 27, 32
monitoring, xiii, 97, 149
Heisenberg picture, 181
in mesoscopic electronics, 201
Moore–Penrose inverse, 321
noise
1/f, 254n2, 374
amplifier, 304
binary, 3
classical, 8
dark, 49, 194
electronic, 85, 194, 374
Gaussian, 6
Gaussian white, 418, 419
input field, 191
Johnson, 196, 303, 374
measurement, 2, 15, 18, 19, 24, 31, 288
quantum, 83, 321
non-white, 419, 422
point-process, 425
preparation, 405
process, 284, 353
pure, 289
quantum, 316
quantum, 9, 26, 32, 230, 245, 399
random telegraph, 374
reduction by feedback, 230
shot, 184, 185
sub-shot, 217
technical, 85
vacuum, 166, 218
non-contextuality, 12
nonlocality, 9
observability, 291
Lloyd's concept of, 324
potential, 323
observables, 10, 20, 398
apparatus, 27
quantum-non-demolition, 28, 39
quorum of, 95
simultaneously measurable, 11, 14
Ohmic contact, 115
operationalism, 9
operations, 20, 32
Kraus representation of, 21
trace-preserving, 22
operator algebra, 69
operator ordering
antinormal, 116, 415
normal, 116, 143, 415
symmetric, 314, 416
operators, 398
angular momentum, 69, 259
annihilation, 115, 411
fermionic, 113
canonically conjugate, 53
complementary, 26, 31
complex current, 310
creation, 115, 411
fermionic, 113
displacement, 31, 42, 60, 352, 414
fluctuation, 218
Fourier-transformed, 218
Hamiltonian, see Hamiltonian
Hermitian, 11, 398
input field, 143, 181
Lindblad, 119
linear, 315
measurement, 16, 19
unitary rearrangement of, 25
momentum, 399, 408
non-Hermitian, 410
normal, 11, 189
number, 42, 411
outcome, 27, 29
output field, 181
Pauli, 103, 208, 342
phase, 65
photocurrent, 184
photon flux, 182, 217
linearized, 218
position, 65, 399, 407
positive semi-definite, 401
probability, see probability operator
projection, 10, 401
rank-1, 10, 16, 401
real current, 321
unitary, 403
optical nonlinearity, 231, 234, 252
Kerr, 383
measurement-induced, 382
oscillators
anharmonic
coupled, 272, 275
quantum, 107
harmonic
amplitude of, 413
classical, 107, 302, 410
coupled, 274, 275
phase of, 413
quantum, 131, 330, 410
local, 80, 83
mode-matched, 189
optical parametric, 252, 327, 334
P function, 415
pacifiability, 299, 307, 332, 335
parametric down-conversion, 234, 327, 352
parametric driving, 252
threshold for, 253
partial trace, 406
perturbation theory
second-order, 134, 140
phase
harmonic oscillator, 65
quantum
absolute, 399
relative, 399
sideband, 84
phase shift
optical, 70
atom-induced, 261
phase-space
classical, 405
quantum, 312, 407
phonons, 411
photocurrent
direct detection, 155
linearized, 217
heterodyne, 168
homodyne, 161
photons, 16, 70, 77, 411
antibunching of, 174, 275
bunching of, 174, 275
detection of, 150, 154, 188
demolition, 15, 22, 34, 78
non-demolition, 45, 50
emission of, 154
loss of, 131
polarization of, 89
sources of single, 390
thermal, 46
photoreceiver, 196
Planck's constant, 309
plant (engineering), 279
Poisson bracket, 312
POM, see probability-operator-valued measure
POVM, see probability-operator-valued measure
preparation, 51, 401
cluster state, 389
non-uniqueness of, 402
off-line, 388
single-qubit, 397
principle
certainty equivalence, 296
separation, 282, 339
quantum, 311, 339
probability
forward, 5
subjective, 2
probability amplitude
classical, 57
quantum, see wavefunction
probability distributions
Gaussian, 6, 56, 61
combining, 6, 289
marginal, 416
ostensible, 163, 171
Poissonian, 412
quasi-, 414
probability operator, 19, 33
non-projective, 22
probability-operator-valued measure, 20
Gaussian, 23
process
diffusion, 425
innovation, 281
jump, 426
point, 151, 217, 426
random telegraph, 205
stationary, 156, 418
Wiener, 280, 420
complex, 167, 169, 310
derived from point process, 160
programming
semi-definite, xiv, 334
projection postulate, 10, 15, 19
purification, 126, 406, 407
purity, 401
Q function, 24, 190, 415
q-numbers, 398
quadratures
amplitude, 218
input field, 184
output field, 163, 166
phase, 218
QND measurement of, 229, 231
system, 159, 166, 188
variance of, 252
quantum computing, xi, 341
linear-optical, 380, 382, 390
nonlinear-optical, 379
one-way, 389
solid-state, 138, 372
universal, 382, 395
quantum dot
double, 136, 207
P in Si, 372
single, 113, 118, 201
quantum measurement problem, xiii, 97, 98, 123
quantum mechanics
interpretation of, 9
quantum optics, xiii, 107, 150, 154, 217
quantum point contact, 201
quantum steering, 126
quantum trajectories, xiii, 148, 151
linear, 162, 163, 168
non-Markovian, 195, 215, 247
used for simulations, 152
quantum watched-pot effect, 210
qubits, 103, 342
charge, 372
electronic, 136
entangled, 344
fungibility of, 342
photonic, 379
conversion of, 394
dual-rail, 382, 387
single-rail, 387, 390
spin, 342
superconducting, 138, 342
rate
damping
momentum, 112
oscillator, 136
decoherence, 138
dephasing, 111
error, 361, 370
injection, 117
radiative decay, 104, 108
tunnelling, 115, 118
relations
anticommutation, 113
Pauli, 103
commutation
bosonic, 102
canonical, 312, 409
free-field, 182, 219, 234
in-loop field, 227, 228
Pauli, 103
vacuum, 185
fluctuation–dissipation, 317
fluctuation–observation, 322, 333
gain–bandwidth, 224
input–output, 181
Itô–Stratonovich, 422
uncertainty, 399
angular momentum, 259
free-field, 219, 227, 234
Heisenberg, 14, 15, 399, 409
Schrödinger–Heisenberg, 313
time–energy, 95, 106, 115
representation
Bloch, 103, 173, 208, 346
Schwinger, 69
reservoir, see environment, 97
resistance
quantum of, 113
tunnel junction, 115
response function
linear, 222
robustness
to decoherence, 98, 125, 130, 132
to parameter uncertainty, 236, 339, 340
SDE, 418
analytically solved, 424
explicit, 239, 249, 421, 426
general, 426
implicit, 239, 249, 420, 426
Itô, 420, 422, 427
numerically solved, 423
quantum, xiii
Stratonovich, 419, 422, 427
semigroup
quantum dynamical, 119
sharpness, 74
single-electron transistor (SET), 374
spectrum
direct detection
linearized, 219
heterodyne, 168, 181
homodyne, 166, 179, 219, 253
in-loop, 225
in-loop QND, 229
Lorentzian, 205, 254
noise, 419
white, 193
non-Lorentzian, 141
out-of-loop, 225, 226
power, 168
Mollow, 178, 179, 180f, 181
probe absorption, 140
QPC current, 205, 213
quadrature, 219
spin, 16, 103, 127, 402
in bosonic environment, 109
spin-squeezing, 259
conditional, 262
Heisenberg-limited, 260, 264
measure of, 260
via feedback, 264
squashing, 228, 230
linewidth-narrowing by, 268
squeezing
free-field, 219, 234
in-loop, 217, 226, 230
integrated, 254
intracavity, 253
low-frequency, 254
via feedback, 234
experimental, 234
stability
for dynamical systems, 286
for quantum dynamical systems, 317
Nyquist criterion for, 223
of linear Gaussian control, 298
of linear quadratic Gaussian control, 298
stabilizability, 287
stabilizing Riccati solutions, 293, 297, 307, 325
state collapse, xii, 10, 26, 126
objective, xiii
state matrix, 9, 401
state reduction, see state collapse
state vector, 9
state vector (engineering), 279
states
Bell, 344, 394
classical, 1, 4, 34, 279
conditional, 4, 280, 308
consistent, 2, 50
Gaussian, 6, 284
posterior, 4, 81
prior, 3, 5
unconditional, 5, 8
unnormalized, 7
coherent, 24, 42, 83, 106, 109, 132, 138, 411
coherent spin, 260
combined (classical and quantum), 197
EPR, 348
ground, 410
minimum uncertainty, 409, 410, 413
nonclassical optical, 240, 252
number, 41, 132, 410
quantum, 9, 34, 308
conditional, 10, 15, 25, 148, 257, 269, 271
consistent, 9, 50
fiducial, 17, 52
Gaussian, 23, 252, 314, 409, 417
improper, 13, 407
magnetic, 261
minimum uncertainty, 23
mixed, 12, 401
pure, 398
reduced, 406
thermal, 46, 104, 107, 110
unconditional, 12, 25, 148
unnormalized, 33, 81, 153, 162, 163, 173, 406
Rydberg, 46
Schrödinger's cat, 132, 134, 241
semiclassical, 132
spin-squeezed, 260
squeezed, 63, 253, 414
two-mode, 349
Susskind–Glogower phase, 66
truncated phase, 393
vacuum, 104, 163, 411
statistical distance, 55
quantum, 57
statistics
nonclassical optical, 241
stationary, 418
sub-Poissonian, 241
subspace
quantum, 342
subsystem
quantum, 267
superconducting systems, 138
superoperator, 20
A, 240
D, 105
G, 152
H, 152
J, 20
Lindbladian, 120
Liouvillian, 120
superposition, 399
system (engineering), 279
systems
classical, 1, 278
linear, 283
multipartite, 404
quantum, 8, 308
linear, 251, 312
open, 97
super, 196
teleportation
classical, 347, 351
dual-rail, 394
gate, 385
non-deterministic, 387
quantum, 343, 385
Heisenberg picture, 349
continuous variable, 347
experimental, 347, 352
single-rail, 387, 394
theorem
Araki–Lieb, 406
Baker–Campbell–Hausdorff, 348, 377, 417
Bayes, see Bayesian inference, 4
Bell's, 9
Gelfand–Naimark–Segal, 21, 406
Gleason's, 12
polar decomposition, 40
Schrödinger–HJW, 126, 326
spectral, 10, 11
Wong–Zakai, 419
time
dead, 277
delay
between emission and detection, 150
electronic, 222
in feedback loop, 150, 177, 239, 278, 300, 304
tomography
quantum, 94
transform
Fourier, 408
of a correlation function, 419
of a Gaussian, 409
of exponential decay, 141
of field quadratures, 218
optical, 389
gauge, 170
Girsanov, 281
Laplace, 223, 308
unitary, 381
trapped ions, xiii, 268
tunnelling
coherent, 110, 136, 208
dot-to-dot, 207
electron, 114
Josephson, 139
source-to-drain, 201
unravellings, 158
continuous, 169
general, 309
double quantum dot, 210
optimal for feedback, 333
resonance fluorescence, 172
variables
binary, 37
classical, 1, 2, 5, 8, 86
quantum, 16, 25, 31, 42, 121
continuous
classical, 1, 6
quantum, 13, 19, 23, 27, 32, 123, 347, 407, 408
discrete, 1
hidden, 8, 215
system, 1
vector
Bloch, 103, 128
complex current, 170, 186, 310
configuration, 279
quantum, 308
vibrons, 411
voltage
bias, 115, 139
von Neumann chain, 15, 28, 97
W function, see Wigner function
wave-plate, 382
wavefunction
momentum representation, 408
position representation, 407
Wigner function, 252, 314, 416