Dark Sky, Dark Matter
J M Overduin
Universität Bonn, Bonn, Germany and
Waseda University, Tokyo, Japan
P S Wesson
University of Waterloo, Waterloo, Canada
Preface ix
1 The dark night sky 1
1.1 Olbers’ paradox 1
1.2 A short history of Olbers’ paradox 2
1.3 The paradox now: stars, galaxies and Universe 5
1.4 The resolution: age versus expansion 9
1.5 The data: optical and otherwise 10
1.6 Conclusion 13
2 The modern resolution and energy 16
2.1 Big-bang cosmology 16
2.2 The bolometric background 16
2.3 From time to redshift 20
2.4 Matter and energy 22
2.5 The expansion rate 24
2.6 The static analogue 27
2.7 A quantitative resolution 29
2.8 Light at the end of the Universe? 35
3 The modern resolution and spectra 41
3.1 The spectral background 41
3.2 From bolometric to spectral intensity 42
3.3 The delta-function spectrum 44
3.4 Gaussian spectra 50
3.5 Blackbody spectra 51
3.6 Normal and starburst galaxies 54
3.7 Back to Olbers 61
4 The dark matter 66
4.1 From light to dark matter 66
4.2 The four elements of modern cosmology 67
4.3 Baryons 68
4.4 Cold dark matter 73
4.5 Neutrinos 76
10 Conclusions 192
A Bolometric intensity integrals 196
A.1 Radiation-dominated models 196
A.2 Matter-dominated models 197
A.3 Vacuum-dominated models 199
B Dynamics with a decaying vacuum 202
B.1 Radiation-dominated regime 202
B.2 Matter-dominated regime 203
B.3 Vacuum-dominated regime 204
C Absorption by galactic hydrogen 206
Index 209
Preface
The darkness of the night sky awed our ancestors and fascinates modern
astronomers. It is a fundamental problem: why is the night sky dark, and just how
dark is it? This book examines these overlapping problems from the viewpoints
of both history and modern cosmology.
Olbers’ paradox is a vintage conundrum that was known to other thinkers
prior to its formulation by the German astronomer in 1823. Given that the
Universe is unbounded, governed by the standard laws of physics, and populated
by light sources of constant intensity, the simple cube law of volumes and
numbers implies that the sky should be ablaze with light. Obviously this is not so.
However, the paradox does not lie in nature but in our understanding of physics.
A Universe with a finite age, such as follows from big-bang theory, necessarily
has galaxies of finite age. There is therefore a kind of imaginary spherical surface
around us which depends on the age and the speed of light. That is, we can only
see some of the galaxies in the Universe, and this is the main reason why the night
sky is dark. Just how dark can be calculated using the astrophysics of galaxies
and their stars, and the dynamics of relativistic cosmology.
We know from the dynamics of individual galaxies and clusters of galaxies
that the majority of the matter which exerts gravitational forces is not detectable
by conventional telescopes. This dark matter could, in principle, have many
forms, and candidates include various types of elementary particles as well as
vacuum fluctuations, black holes and others. Most of these candidates are unstable
to decay and produce photons. So dark matter affects not only the dynamics
of the Universe but also the intensity of intergalactic radiation. Conversely, we
can use observations of background radiation to constrain the nature and density
of dark matter. Thus does Olbers’ problem gain new importance.
Modern cosmology is a cooperative endeavour, and we would like to
acknowledge some of the people who have helped to shape our ideas about
the Universe and what it may contain. It is a pleasure to thank H-J Fahr and
K-I Maeda for hospitality at the Institut für Astrophysik und Extraterrestrische
Forschung in Bonn and at Waseda University in Tokyo, where much of this
book was written with support from the Alexander von Humboldt Foundation
and the Japan Society for the Promotion of Science. Many of the results we
will present on astrophysics, general relativity and particle physics have been
derived in collaboration with colleagues over a number of years. For sharing their
expertise we are particularly indebted to S Bowyer, T Fukui, W Priester, S Seahra
and R Stabell. Finally, our motivation for carrying out these calculations is based
simply on a desire to understand the Universe. This motivation is infectious, and
in this regard we thank especially the late F Hoyle and his colleague D Clayton.
We hope, in turn, that this book will inspire and educate others who wonder about
the dark night sky.
1 The dark night sky

1.1 Olbers’ paradox

It is a fundamental observation, which almost anyone can make, that the night sky
is dark. But why?
It might be thought that, to a person standing on the side of the Earth that
faces away from the Sun, the sky must necessarily be dark. However, even on a
moonless night there is a faint but perceptible light from stars. Consider a situation
in which there are no stars in the line of sight; for example, the hypothetical case
of someone living on a planet at the edge of the Milky Way, who looks out into
intergalactic space. Even in this situation, the night sky would not be perfectly
dark because there are many other luminous galaxies in the Universe. It is at this
point that the fundamental nature of the problem of the dark night sky begins to
become apparent. For modern cosmology implies that the population of galaxies
is uniform in density (on average) and unbounded. Given these conditions,
and assuming that conventional geometry holds, the number of galaxies visible
increases as the cube of the distance. This means that along the line of sight, the
galaxies become increasingly crowded as the distance increases. In fact, under the
conditions formulated, any given line of sight must end on a remote galaxy. Thus,
the view in all directions must ultimately be blocked by overlapping galaxies;
and the light from these, propagating towards us from the distant reaches of the
Universe, should make the night sky bright.
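The shell-by-shell form of this argument can be checked numerically. The sketch below is an illustration, not from the text; all unit values are arbitrary. It sums the flux from concentric spherical shells of equal thickness around the observer, assuming a uniform number density of sources of identical luminosity. Each shell turns out to contribute the same flux, so the total grows without bound as shells are added:

```python
# Flux received from concentric shells of sources around an observer.
# Uniform number density n, identical luminosity L (arbitrary units).
import math

n = 1.0    # sources per unit volume (arbitrary)
L = 1.0    # luminosity per source (arbitrary)
dr = 1.0   # shell thickness

def shell_flux(r):
    """Flux received from the shell at radius r."""
    count = n * 4.0 * math.pi * r**2 * dr     # number of sources in the shell
    per_source = L / (4.0 * math.pi * r**2)   # inverse-square dimming
    return count * per_source                 # = n * L * dr, independent of r

def total_out_to(N):
    """Total flux from the first N shells: grows linearly with N."""
    return sum(shell_flux(r) for r in range(1, N + 1))
```

The r² growth in source numbers exactly cancels the 1/r² dimming, which is why, for ageless static sources, every line of sight ends on a bright surface.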
Clearly this does not happen in reality, and the reason why has been a subject
of controversy among astronomers for centuries. In 1823, Olbers published a
paper that drew attention to this problem [1], which has since come to be known
as Olbers’ paradox. However, the concept of a paradox within modern physics
implies a failure in terms of one or more of reasoning, theory or observational
data. Below, we will see how a scrutiny of all three of these has recently enabled
us to resolve Olbers’ paradox.
1.2 A short history of Olbers’ paradox
H W M Olbers was a German astronomer who was born in 1758 and died in
1840. During his lifetime he was mainly notable for his research on the solar
system. Thus, a comet that returned in 1815 (and again in 1887) was called
Olbers’ comet. He was also involved in the search for a hypothetical missing
planet in the gap between the orbits of Mars and Jupiter. An object (Ceres) was
discovered on 1 January 1801 by G Piazzi, and its orbit was worked out by the
great mathematician K F Gauss and found to lie in the expected region. However,
an unexpected second object (Pallas) was discovered in March of 1802 by Olbers
with an orbit at almost the same distance from the Sun. This was followed in later
years by the finding of yet more objects, including a second (Vesta) by Olbers in
March of 1807. In this way Olbers was associated with the discovery of what later
came to be known as the asteroid belt.
The work for which Olbers is mainly remembered today was first published
in English in 1826 under the title ‘On the Transparency of Space’ [2]. It was then
apparently forgotten for a good many years, only to be recalled in more recent
actually does not figure directly in the expression for the intensity of intergalactic
light in the case where the Universe is uniform; and even if this were not the case,
space could hardly be so far from Euclidean or flat as to offer a gravitational kind
of resolution of Olbers’ paradox. In the same vein, no compelling evidence has
emerged that the laws of physics are significantly different in remote parts of the
Universe from those nearby, so that way of solving the problem is hardly ever
mentioned now.
Thus, the most up-to-date version of Olbers’ paradox, slimmed down to
avoid mentioning what modern astrophysics tells us can be taken for granted, is
as follows. In a Universe consisting of galaxies that are of infinite age and static,
the accumulated light would be so intense as to make the night sky bright and not
dark as observed.
In this form, it is obvious that one or both of the remaining assumptions are
wrong. It is actually the case, as will be seen later, that both are wrong. The
galaxies have a finite age, which means that the amount of light which they have
emitted into space is limited. And the galaxies are receding from each other due to
cosmic expansion, which means that their light has been reduced in intensity. But
this is merely to scratch the surface of a solution which, to be understood in depth,
requires a critical look at several areas of modern astrophysics and cosmology. In
particular, it is superficial to know only that the two assumptions in this version
of the paradox are wrong. As in other scientific studies, this just leads to the
important question of quantification: which assumption is more wrong? Or more
precisely: what is the exact relative importance of the two assumptions for the
darkness of the night sky?
The person often associated with this question is E R Harrison. In 1964
he discussed the timescales associated with stars and the non-static nature of
the Universe, and concluded that the latter was not of major importance for the
resolution of Olbers’ paradox [7]. He published several more articles thereafter,
and in 1981 in a popular book he restated his view that, in a Universe consisting
of luminous sources, the fact that they have finite ages is far more important than
whether they are static or not [8]. But while Harrison’s view was supported by
a few notable cosmologists, it was not universally accepted. This was mainly
because, following Bondi’s earlier discussion, the motions of the galaxies had
somehow become accepted as the factor in resolving Olbers’ paradox. Also,
Harrison’s technical papers on the subject were not easy to follow, and this
delayed a proper evaluation of the age factor. Thus, while some headway was
made in the 1960s, 1970s and early 1980s towards an objective treatment of the
paradox, controversy and confusion continued to prevail. A survey of modern
astronomy textbooks carried out in 1987 showed how divided opinion was. About
30% agreed with Harrison in saying that the failure of the infinite-age assumption
was most important. About 50% attributed the resolution of the paradox to
the failure of both the infinite-age assumption and the static assumption,
without properly assessing which was the more important. And about 20% of
the textbooks surveyed mentioned only the failure of the assumption that galaxies
are static. This state of confusion prompted the appearance in 1987 of an article by
Wesson et al [9] that introduced a new method, to be discussed later, of tackling
the problem of the dark night sky. Using this, they gave an exact quantitative
assessment of the relative importance of the age and motion factors, and thereby
resolved Olbers’ paradox in a definitive fashion.
To complete this section, it is interesting to note that while Olbers’ paradox
has through history been a subject of mainly theoretical attention, in recent years
it has also entered an observational phase that is likely to become of increasing
importance in the future. Observational attempts to detect the faint intergalactic
light have in the past produced useful but non-definitive data. However, recent
advances in telescope technology have brought us to the threshold of detection
[10]. This prospect is very interesting, since actual measurement would provide
an opportunity to check theoretical estimates of the light’s intensity, and thereby
the underlying principles of astrophysics and cosmology.
1.3 The paradox now: stars, galaxies and Universe

We saw in the preceding section that Olbers’ paradox has evolved in content
and likely resolution since it was propounded by its author in 1823. In recent
years, attention has focused on two ways in which a bright night sky can be
avoided. First, the ages of galaxies may not be infinite as originally assumed
but finite. Second, galaxies may not be static but instead have large systematic
motions. Modern astrophysics tells us that both of these things are the case. But
to understand how they affect the darkness of the night sky we need to look more
closely at our current picture of the Universe.
Stars are the main radiation-producing bodies in the Universe. An average
star converts elements of low atomic number (primarily hydrogen) to elements
of higher atomic number by means of nuclear fusion reactions. The process
by which there is a gradual change in the chemical makeup of a star during its
life, from ‘light’ to ‘heavy’ elements, is called nucleosynthesis. Combining
theoretical data on nucleosynthesis with observational data on the element
abundances in a given star (gathered mainly from spectroscopy) allows its age
to be determined. A by-product of the nuclear reactions in a star is energy, which
is emitted largely in the form of optical radiation. The energy output of a typical
star is colossal by conventional standards due to its very large mass, but the rate of
energy production per unit mass is a parameter with a more reasonable value. As a
rough average value for the energy output per unit time per unit mass of luminous
matter in the Universe, we shall take 0.5 erg s⁻¹ g⁻¹. This is about one-quarter of
the value for the matter in the Sun, which reflects the fact that most stars (as seen
for instance on a Hertzsprung–Russell diagram of nearby stars) are intrinsically
less luminous than ours. As we will show later, it is the relative smallness of this
number that is one of the reasons the night sky appears dark.
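The adopted figure can be checked directly against the Sun. A minimal sketch, using standard solar values (not taken from the text) and the one-quarter relation stated above:

```python
# Energy output per unit mass for the Sun, in erg s^-1 g^-1.
L_sun = 3.8e33   # solar luminosity, erg/s (standard value)
M_sun = 2.0e33   # solar mass, g (standard value)

eps_sun = L_sun / M_sun   # ~1.9 erg s^-1 g^-1 for solar material
eps_avg = 0.5             # adopted cosmic average (from the text)

# The cosmic average is about one-quarter of the solar value,
# reflecting that most stars are intrinsically dimmer than the Sun.
ratio = eps_avg / eps_sun
```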
Galaxies are the main mass concentrations in the Universe, each consisting
primarily of a clump of numerous stars bound together by their own gravity. There
is a considerable range in the masses, luminosities and shapes of galaxies. The
shapes are used to classify galaxies into spirals, ellipticals and irregulars. The
masses and sizes are uncertain, because some galaxies may have halos of unseen
dark matter that are considerably larger and more massive than the parts of the
galaxies we can see. This is especially likely to be true of spirals like the Milky
Way. However, the study of the light due to galaxies involves primarily luminous
as opposed to dark matter, so we will defer discussion of dark matter to later
chapters. An average value for the density of luminous matter in the Universe
can be obtained if we assume that the galaxies are distributed uniformly in space.
Galaxies actually tend to clump together because of their mutual gravitational
interactions. But such clusters appear to have only limited sizes, with no sure
observational evidence for very large ones. So if we consider large distances
only, the distribution of galaxies can indeed be assumed to be uniform. Given
this, we can describe the distribution of luminous matter in the Universe in terms
of one parameter, namely the average density. Observational evidence indicates
that this quantity takes a value of about 4 × 10⁻³² g cm⁻³. Like the stellar energy
output per unit mass, this is a very small number by everyday standards, and is
one of the main reasons the night sky appears dark.
Cosmology is the study of the Universe in the large; and because it has more
observational support than its rivals, we will presume the validity of the big-bang
theory of cosmology. This theory is based primarily on two assumptions. The
first, which was discussed earlier, is that the matter distribution over sufficiently
large distances can be taken to be uniform. It should be noted that the word
‘uniform’ implies that there is no identifiable centre or boundary to the matter
distribution, so models of the Universe that emerge from this theory must in some
sense be unbounded or infinite in extent. The second assumption on which the
big-bang theory is based is that the Universe is expanding. This means that the
galaxies are receding from each other (remember: no special or central one), and
in particular they are receding from the Milky Way. We infer that this is happening
because the light we receive from remote galaxies is redshifted, and the most
natural way to account for this is in terms of a Doppler shift from a receding
source. Thus, just as the sound waves from the whistle of a receding train are
received with longer wavelengths, producing a lower pitch, so are the light waves
from the stars of a receding galaxy received with longer wavelengths, producing
a redshift. (Redshift in the big-bang theory is really more subtle than this, and
involves expansion as distinct from velocity, but the two words are commonly
taken to imply each other and this will be done here also.) Redshifts and recession
speeds increase with distance, and it can be shown that the precise relationship—
called Hubble’s law—is consistent with the assumption of uniformity. Insofar as
both of the prime assumptions on which the big-bang theory is based are derived
from observations, the theory is fairly well-founded.
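The redshift–distance relation can be made quantitative. For nearby galaxies the Doppler interpretation gives z ≈ v/c, and Hubble's law v = H₀d then links redshift to distance. A sketch under these assumptions (the value of H₀ below is a representative modern estimate, not taken from the text):

```python
# Hubble's law for nearby galaxies: recession speed v = H0 * d,
# with redshift z = v / c in the low-velocity Doppler approximation.
c = 3.0e5    # speed of light, km/s
H0 = 70.0    # Hubble constant, km/s per Mpc (assumed value)

def redshift(d_mpc):
    """Approximate redshift of a galaxy at distance d_mpc (in Mpc)."""
    v = H0 * d_mpc   # recession speed, km/s
    return v / c     # z = v/c, valid only for z << 1

# A galaxy 100 Mpc away recedes at ~7000 km/s, i.e. z ~ 0.023.
z_100 = redshift(100.0)
```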
If these assumptions are plugged into Einstein’s theory of general relativity,
which is the appropriate theory of gravity and accelerations for large distances
and masses, then the evolution of the Universe can be worked out in detail. There
are actually several different kinds of evolution allowed by the observational input
data, and these are often called standard cosmological models. For each of these,
the distances between galaxies change somewhat differently with time. But all of
the standard models start in a big bang or state of infinite density at time zero.
Note that the big bang is not localized at a special point in space, a concept
that would violate the principle that the standard models are uniform with no
preferred spot. Rather, the big bang is like an explosion that occurs everywhere at
the same time. According to the theory, the real Universe just after the big bang
must have been very hot, and we find evidence of this today in the cooled-down
radiation of the 3 K cosmic microwave background or CMB (which should not
be confused with galactic radiation). Galaxies are thought to have condensed out
of the cooling matter not long after the big bang. Thus, the time that has elapsed
since the big bang and the age of the galaxies are expected to be approximately
equal. Insofar as the big-bang theory implies that galaxies condensed out at one
fairly well-defined epoch, it agrees with the observation that many galaxies have
nearly the same age. The age of the galaxies inferred indirectly (by matching
their motions to the standard models) agrees reasonably well with the age of the
galaxies inferred directly (by dating their oldest stars using nucleosynthesis). A
typical nucleosynthesis value is 16 × 10⁹ years, or 16 Gyr [11]. This number
must clearly play a crucial role in the subject of Olbers’ paradox. But while it is
large, it will be seen later that it cannot offset the influence of the other two small
numbers introduced earlier.
The contents of the three preceding paragraphs may be summed up thus:
stars in galaxies produce light, but the galaxies have only been in existence for
a time of order 16 Gyr and are also expanding away from each other. How do
these last-mentioned factors, the finite age of the galaxies and the expansion of
the Universe, reduce what would otherwise be a blaze of intergalactic light?
The finite age of the galaxies implies directly that they have not had time to
populate the vast reaches of intergalactic space with enough photons to make it
bright. (We will assume there were negligible numbers of optical photons around
at the time galaxies and their stars formed, so the intensity of intergalactic light
started at essentially zero.) This is the simple way to understand this factor. But
there is another way which, while conceptually more difficult to appreciate, has
the advantage of allowing a geometrical argument against the paradox of the
bright night sky on a par with the one used originally by Olbers. The crux of
this argument is light-travel time. To appreciate the importance of this, consider
what happens when we look at a nearby galaxy, like the one in Andromeda. This
is ‘only’ about 2 million light-years away, but even so the finite speed of light
implies that it takes about 2 million years for photons from Andromeda to reach
us, which means that we see it as it was about 2 million years ago. For galaxies
further away, the light-travel time is even larger. Galaxies at very great distances
can be seen, in principle, as they were around the time of their formation about
16 Gyr ago. So, there is an imaginary spherical surface about us at origin, with
a radius of order 16 × 10⁹ light-years (or 16 Gly for short), where galaxies can
in principle be seen as they were at formation. (In practice it is very difficult to
observe such galaxies because they are very remote and thus appear very dim.
Also the cosmological distance associated with a light-travel time of 16 Gyr is
actually larger than 16 Gly, because the Universe has expanded in the meantime.
These points do not alter the argument in a qualitative way.) The light of our
night sky comes only from the limited number of galaxies within this imaginary
surface, because if we could peer further out we would only see the presumably
non-luminous material from which they formed.
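The radius of this imaginary surface follows from simple arithmetic: the galaxy age multiplied by the speed of light. A sketch of the light-travel-time estimate, ignoring (as noted above) the larger comoving distance introduced by expansion:

```python
# Light-travel-time "horizon" for galaxy light: R ~ c * t_gal.
c = 3.0e10          # speed of light, cm/s
yr = 3.156e7        # seconds per year
t_gal = 16e9 * yr   # galaxy age ~16 Gyr, in seconds

R_cm = c * t_gal               # horizon radius in cm (~1.5e28 cm)
R_gly = R_cm / (c * 1e9 * yr)  # same radius expressed in Gly

# Galaxies can in principle be seen only inside this ~16 Gly surface.
```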
Thus, Olbers’ paradox is avoided because, contrary to what he thought, we
do not see arbitrarily large numbers of sources as we look further away. Instead,
we effectively run out of galaxies to see at a distance of order 16 Gly. This
argument is logical, but conceptually difficult to accept for many people because
it seems to imply that we are necessarily talking about a Universe that is finite in
extent. However, this is not so. If we could by some miracle take an instantaneous
snapshot of all of the Universe as it is ‘now’, without taking into account the
lag associated with the light-travel time, we would see many more galaxies; and
depending on the kind of Universe we are living in, these could be arbitrarily
large in number and stretch to arbitrarily great distances. Thus, if we could see
all of the Universe as it is ‘now’, it might well be infinite in extent and contain an
arbitrarily large number of galaxies, and it would certainly be unbounded. (It is
possible that it could contain a large but finite number of galaxies, because space
according to general relativity can curve around and close up on itself, but such a
Universe would still have no boundaries.) However, we do not see to arbitrarily
great distances because light travels with finite speed, and this causes our view of
the Universe to be of things as they were in the past. Galaxies can only be seen
out to a distance corresponding to the time when they formed. We can sum up the
finite-age factor by saying that it directly limits the number of photons that have
been pumped into space by galaxies, or it creates an imaginary surface that limits
the portion of the Universe from which we receive photons emitted by galaxies.
On either interpretation, it is clear that intergalactic space and the night sky will
not be arbitrarily bright.
The expansion of the Universe implies two things that both reduce the
intensity of intergalactic light and are relatively easy to understand. Firstly, as the
galaxies recede from each other and the volume of intergalactic space increases,
the number of photons per unit volume and therefore the intensity of the light
decreases. Secondly, as the galaxies recede the photons they emit are redshifted,
so by Planck’s law their intrinsic energies decrease, as does the intensity of the
light. These things combine to reduce the brightness of intergalactic space and
the night sky.
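In terms of the cosmological scale factor a, these two effects combine compactly: photon number density falls as a⁻³ as the volume grows, and each photon's energy falls by a further factor of a through the redshift, so the energy density of intergalactic light drops as a⁻⁴. A sketch under these standard assumptions:

```python
# Dimming of a fixed bath of photons as the Universe expands by factor a.
def photon_energy_density(u0, a):
    """Energy density of free-streaming photons after expansion by a.

    u0 : energy density when the scale factor was 1
    a  : scale factor (a > 1 for an expanding Universe)
    """
    dilution = a**-3   # number density: same photons, larger volume
    redshift = a**-1   # each photon's wavelength stretched, energy reduced
    return u0 * dilution * redshift   # net a^-4 falloff

# Doubling all distances (a = 2) cuts the energy density by a factor of 16.
u = photon_energy_density(1.0, 2.0)
```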
The age and expansion factors described here have been much discussed.
The first has often been misunderstood because of the conceptual difficulty
involved in understanding the light-travel-time effect. The second, perhaps
because of this, has often been quoted in the literature as if it were the major or
only factor in avoiding Olbers’ paradox. But before this problem can be laid to
rest, a quantitative answer has to be given to the following question: Which of the
two factors, the finite lifetime of the galaxies or the expansion of the Universe, is
the more important in accounting for the dark night sky?
1.4 The resolution: age versus expansion

both the age factor and the expansion factor. If we thereafter formed the ratio of
intensities, with and without expansion, we would be able to state not only that it
is less than unity, but also its exact value. If the ratio was small compared to unity,
we would be able to conclude that expansion is the dominant factor. Whereas, if
the ratio was of order unity, we would be able to conclude that expansion is not
the dominant factor.
The procedure just outlined can be carried out by using models. If we were
sure of the parameters that describe the real Universe, it would only be necessary
to consider one model, and the calculation leading to the ratio of intensities with
and without expansion could be carried out by hand. Unfortunately, we are not
sure of the parameters that describe the real Universe, so it is necessary to consider
many models and leave the calculations of the ratios of intensities to a computer.
We will describe the detailed calculations—which have wide applicability—in the
next chapter. But they have a consensus, which is that the ratio of intensities with
and without expansion is approximately 0.5. In other words, the intensity of light
due to galaxies in an expanding Universe is reduced from that in an equivalent
static Universe by about a factor of two.
This conclusion effectively resolves Olbers’ paradox once and for all. We
previously saw that the intensity of intergalactic light in a static Universe,
where only the finite age of the galaxies is significant, is of the order 10⁻⁴ or
10⁻³ erg s⁻¹ cm⁻². We now see that the intensity in an expanding Universe
with equivalent properties is merely reduced by half. If the order of magnitude
of the intensity is determined by the finite age of the galaxies, and only modified
by a number of order unity by the expansion of the Universe, then clearly we
are justified in saying that the former factor is the more important. This agrees
with previous arguments by a few respected cosmologists (notably E R Harrison),
but it disagrees with statements in several textbooks. The method just discussed
leaves no doubt about the true situation, however. The reason why the night sky
is dark has mainly to do with the finite age of the galaxies, not the expansion of
the Universe.
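The numbers introduced earlier combine into a back-of-envelope estimate of this intensity. In a static Universe the bolometric intensity of galaxy light is roughly Q ~ ε ρ c t₀, with ε ≈ 0.5 erg s⁻¹ g⁻¹ the luminosity per unit mass, ρ ≈ 4 × 10⁻³² g cm⁻³ the luminous density and t₀ ≈ 16 Gyr the galaxy age. This combination is a dimensional sketch, not the detailed integral (which appears in the next chapter):

```python
# Back-of-envelope bolometric intensity of intergalactic light.
eps = 0.5             # energy output per unit mass, erg s^-1 g^-1
rho = 4e-32           # mean density of luminous matter, g cm^-3
c = 3.0e10            # speed of light, cm/s
t0 = 16e9 * 3.156e7   # galaxy age ~16 Gyr, in seconds

# Static Universe: sources shine for time t0 into space of density rho.
Q_static = eps * rho * c * t0   # ~3e-4 erg s^-1 cm^-2

# Expansion reduces this by only about a factor of two (see text),
# so the finite age, not the expansion, sets the order of magnitude.
Q_expanding = 0.5 * Q_static
```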
1.5 The data: optical and otherwise

Olbers’ paradox has traditionally been discussed in terms of light. But of course
this is only the optical band, the one visible to the human eye, out of an infinitely
wide electromagnetic spectrum. Depending on the waveband, absorption by the
Earth’s atmosphere makes data easy or difficult to obtain from ground-based
observations.
In recent times, we have been able to obtain better data by using rocket-
borne instruments and artificial satellites. However, technology progresses more
slowly than insight. Thus we are in the position that observations provide mainly
constraints rather than proofs of what we have discussed earlier. Specifically,
we have good constraints in the optical band on intergalactic light. (This is
Figure 1.2. The full spectrum of the diffuse background radiation reaching our galaxy
(figure courtesy of R C Henry [13]). This is a compilation of observational measurements
and upper limits in wavebands from the radio through the microwave, infrared, optical,
ultraviolet and x-ray to the γ-ray regions. Immense amounts of work, both
theoretical and experimental, have gone into understanding each waveband. We will
return to these in the chapters that follow, focusing on specific parts of the spectrum and
comparing them to the predicted background intensities from galaxies and other sources of
radiation.
large, ground-based telescopes. The idea behind such attempts is simple enough:
if a telescope is pointed towards an area of the sky where there are no nearby
sources, the light from the many remote galaxies in the field of view should
combine to make a considerable flux, even if individual galaxies cannot be
distinguished because their images are too faint. However, there is a major
problem to contend with: the light from remote galaxies is swamped by other
kinds of light that also enter the telescope. One of these is the light from the stars
of the Milky Way. This has to be discounted, as does light from other non-galactic
sources. The procedure to do this is complicated, but it is essential if one is to
isolate the light from external galaxies. What was probably the most thorough
ground-based search for the intergalactic light, both in terms of observations and
the correction procedure just mentioned, was made by R R Dube, W C Wickes and
D T Wilkinson, who used the Number 1 telescope at Kitt Peak in Arizona, USA
[14]. Unfortunately, after their observations had been corrected for light from
stars in the Milky Way and other places, they found that there was actually nothing
measurable left. (This means that the integrated light from many remote but
unresolved galaxies lying in the field of view of the telescope was too weak to be
identified by the detection equipment used.) Such a null result can still be useful,
though, because it provides an upper limit to the intensity of intergalactic light
that can be employed to constrain and check cosmological theory. Numerically,
the upper limit set by the three scientists is approximately 3 × 10⁻⁴ erg s⁻¹ cm⁻².
It is doubtful if this can be improved on, or a positive identification made, using
ground-based optical telescopes. Other techniques, however, provide more hope.
One of these is to use the Hubble telescope in space, the data from which can be
combined with other ground-based observations. This technique has been used by
R Bernstein and co-workers [10]. As mentioned earlier, this method may provide
the first concrete detection of the Olbers light in the optical band, but it is still in
development.
We can also make observations from space in the infrared band. It is not
difficult to understand the logic behind this. The intergalactic radiation field
consists not only of optical photons from galaxies that are relatively nearby
and at intermediate distances, but also includes photons from galaxies far away
that have been redshifted to considerably longer infrared wavelengths by the
expansion of the Universe. Just what part of the total energy of the field is
at infrared wavelengths is hard to say, because it depends on various poorly
known cosmological factors such as the formation epoch of galaxies. But it is
probably significant. (It can be remarked in passing that theoretical estimates
of the intensity of the radiation field due to galaxies are often bolometric ones,
integrated over all wavelengths; whereas observations are made at one or a few
wavelengths.) In addition to photons that have been redshifted into the infrared,
there could also be photons of this type that were re-emitted from dust associated
with young distant galaxies. Thus, while we must certainly receive a lot of
optical photons from galaxies that are not too far away, we may also receive many
infrared photons from galaxies that are remote. This situation, wherein the flux
1.6 Conclusion
The traditional paradox named after Olbers can be stated in modern language as
follows: ‘in a Universe consisting of galaxies that are of infinite age and static,
the accumulated light would be so intense as to make the night sky bright and
not dark as observed’. We now know that both of the assumptions remaining in
this formulation are wrong. However, the ways in which the age and expansion
factors lead to a resolution of the problem are different.
• The age of the galaxies is finite, so the amount of light they have pumped
into intergalactic space has been limited. This, in combination with the
relatively low rate of energy production and vast distances between galaxies,
has resulted in a low intensity of intergalactic light. An alternative, though
more subtle, way of regarding the age factor is as a geometrical limitation.
The fact that the galaxies formed a finite time ago means that we can only see
them out to a distance which is (roughly) of the order of their age multiplied
by the speed of light. This implies that there is an imaginary surface about us
which effectively delimits that part of the (unbounded) Universe from which
we presently receive galactic light. Since we receive light from only a finite
number of galaxies, the light they produce is limited in intensity.
• The Universe is expanding, which means that the galaxies are receding from
each other. This causes the volume of intergalactic space to increase, and
the energy density and intensity of intergalactic light to decrease. Also, the
light emitted by galaxies is redshifted by the cosmological version of the
Doppler effect. And since by Planck’s law redder photons have less energy,
the intensity of intergalactic light suffers a further decrease.
Of these two factors, the first is more important because it sets the order
of magnitude of intergalactic light, while the second only reduces it further by
about a half.
In later chapters, we will lay the calculational foundation for this conclusion,
and then proceed to see how modern data on the darkness of the night sky can be
applied to the problems of modern astrophysics. Of these problems, the largest
is that we do not see all of the matter in the Universe and do not know its
composition. Many of the prime dark-matter candidates, however, involve a slow
decay that feeds photons into intergalactic space. There is thus an intimate
connection between the dark sky and the dark matter.
References
[1] Olbers W 1823 Berliner Astronomisches Jahrbuch für das Jahr 1826 (Berlin: C F E
Späthen) p 110
[2] Olbers W 1826 Edinburgh New Phil. J. 1 141
[3] Bondi H 1952 Cosmology (Cambridge: Cambridge University Press)
[4] Jaki S L 1967 Am. J. Phys. 35 200
[5] Hoskin M 1997 Cambridge Illustrated History of Astronomy (Cambridge:
Cambridge University Press) pp 202–7
[6] Jaki S L 2001 The Paradox of Olbers’ Paradox 2nd edn (Pinckney, MI: Real View
Books) pp 1–94
[7] Harrison E R 1964 Nature 204 271
[8] Harrison E R 1981 Cosmology, the Science of the Universe (Cambridge: Cambridge
University Press)
[9] Wesson P S, Valle K and Stabell R 1987 Astrophys. J. 317 601
[10] Bernstein R A 1999 The Low Surface Brightness Universe (Astronomical Society of
the Pacific Conference Series, Volume 170) ed J I Davies, C Impey and S Phillipps
(San Francisco, CA: ASP) p 341
[11] Vandenberg D A, Bolte M and Stetson P B 1996 Ann. Rev. Astron. Astrophys. 34 461
[12] Wesson P S 1986 Space Sci. Rev. 44 169
[13] Henry R C 1999 Astrophys. J. 516 L49
[14] Dube R R, Wickes W C and Wilkinson D T 1979 Astrophys. J. 232 333
Chapter 2

The modern resolution and energy
Here k (= ±1 or 0) is the curvature constant and the cosmological scale factor
R = R(t) measures the change in distance between comoving objects due
to expansion. Equation (2.2) tells us how to relate distances (integrals over
dr, dθ, dφ) and times (integrals over dt) in a Universe whose geometry evolves
in a manner driven by its matter and energy content and described
mathematically by the function R(t).
Let us now consider the problem of adding up the light from all the galaxies
in the Universe in such a way as to arrive at their combined energy density as
received by us in the Milky Way. We begin by considering a single galaxy at
coordinate distance r whose luminosity, or rate of energy emission, is L(t).
When it reaches us, this energy has been spread over an area
$$A = \int \mathrm{d}A = \int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} [R_0 r\,\mathrm{d}\theta][R_0 r\sin\theta\,\mathrm{d}\phi] = 4\pi R_0^2 r^2 \qquad (2.3)$$
where we use the metric (2.2) to obtain the area element dA at the present time
t = t0 (figure 2.1). The subscript ‘0’ here and elsewhere denotes quantities taken
at this time, so R0 ≡ R(t0 ) is the present value of the cosmological scale factor.
The intensity, or energy flux per unit area, reaching us from this galaxy is
given by
$$\mathrm{d}Q_g = \left(\frac{R(t)}{R_0}\right)^{2}\frac{L(t)}{A} = \frac{R^2(t)\,L(t)}{4\pi R_0^4 r^2}. \qquad (2.4)$$
Here the subscript ‘g’ denotes a single galaxy, and the two factors of R(t)/R0 are
included to take expansion into account: this stretches the wavelength of the light
in space (reducing its energy), and also spaces the photons more widely apart in
time. These are sometimes known as Hubble’s ‘energy’ and ‘number’ effects [2].
Figure 2.1. Surface area element dA and volume dV of a thin spherical shell, containing
all the light sources at coordinate distance r .
Jg^μ = ng U^μ (2.5)

∇μ Jg^μ = 0 (2.6)
where ∇µ denotes the covariant derivative. Using the metric (2.2), one can put
this into the form
$$\frac{1}{R^3}\frac{\mathrm{d}}{\mathrm{d}t}\left(R^3 n_g\right) = 0 \qquad (2.7)$$

or

$$n_g = n\left(\frac{R}{R_0}\right)^{-3}. \qquad (2.8)$$
dV = 4πR²r²c dt (2.11)
and the latter may now be thought of as extending between look-back times t0 −t
and t0 − (t + dt), rather than distances r and r + dr .
The total energy received at the origin from the galaxies in the shell is just
the product of their individual intensities (2.4), their number per unit volume (2.8)
and the volume of the shell (2.11):
(We will use tildes throughout this book to denote dimensionless quantities taken
relative to their value at the present time t0 .) Integrating over all the spherical
shells between t0 and t0 − tf , where tf is the source formation time, we obtain the
result

$$Q = \int \mathrm{d}Q = c\int_{t_f}^{t_0} n(t)\,L(t)\,\tilde{R}(t)\,\mathrm{d}t. \qquad (2.14)$$
Equation (2.14) defines the bolometric intensity of the extragalactic background
light (EBL). This is the energy received by us (over all wavelengths of the
electromagnetic spectrum) per unit area, per unit time, from all the galaxies which
have been shining since time tf . In principle, if we let tf → 0, we will encompass
the entire history of the Universe since the big bang. Although this sometimes
provides a useful mathematical shortcut, we will see in later sections that it is
physically more realistic to cut the integral off at a finite formation time. The
quantity Q is a measure of the amount of light in the Universe, and Olbers’
‘paradox’ is merely another way of asking why it is low.
$$z \equiv \frac{\Delta\lambda}{\lambda} = \frac{R_0 - R(t)}{R(t)} = \tilde{R}^{-1} - 1. \qquad (2.15)$$
Differentiation gives

$$\mathrm{d}z = -\frac{R_0\,\mathrm{d}R}{R^2} = -\frac{R_0\dot{R}\,\mathrm{d}t}{R^2} \qquad (2.16)$$

where an overdot signifies the time derivative. Inverting, we obtain

$$\mathrm{d}t = -\frac{R^2\,\mathrm{d}z}{R_0\dot{R}} = -\frac{\mathrm{d}z}{(1+z)H(z)}. \qquad (2.17)$$
H ≡ Ṙ/R (2.18)
which is the expansion rate of the Universe. Its value at the present time is known
as Hubble’s constant (H0). The numerical size of this latter quantity continues
to be debated by observational cosmologists. For this reason it is usually written
in the form
H0 = 100h0 km s⁻¹ Mpc⁻¹ = 0.102h0 Gyr⁻¹. (2.19)

Here 1 Gyr ≡ 10⁹ yr and the uncertainties have been absorbed into a
dimensionless parameter h0 whose value is currently thought to lie in the range
0.6 ≲ h0 ≲ 0.9. We will have more to say about the observational status of h0 in
chapter 4.
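The unit conversion in (2.19) is simple to reproduce; a quick sketch (the megaparsec and gigayear values are standard astronomical constants, and the function name is ours):

```python
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
S_PER_GYR = 3.1557e16    # seconds in one gigayear

def H0_in_inverse_Gyr(h0: float) -> float:
    """Hubble constant H0 = 100 h0 km/s/Mpc expressed in Gyr^-1, as in (2.19)."""
    return 100.0 * h0 / KM_PER_MPC * S_PER_GYR

print(round(H0_in_inverse_Gyr(1.0), 3))        # 0.102, as quoted in (2.19)
print(round(1.0 / H0_in_inverse_Gyr(1.0), 2))  # Hubble time: about 9.78 h0^-1 Gyr
```

The reciprocal, 1/H0 ≈ 9.78 h0⁻¹ Gyr, sets the characteristic timescale for all the age estimates that follow.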
Putting (2.17) into the bolometric EBL intensity integral (2.14), and using
(2.15) to replace R̃ with (1 + z)−1 , we obtain
$$Q = c\int_0^{z_f} \frac{n(z)\,L(z)}{(1+z)^2\,H(z)}\,\mathrm{d}z. \qquad (2.20)$$
Here z f is the redshift of galaxy formation, and we have written the comoving
number density and luminosity as functions of redshift rather than time. One
may think of n(z) and L(z) as describing the physics of the sources, the two
factors of (1 + z) as representing the dilution and stretching of the light signals
and H (z) as governing the evolution of the background spacetime. In practical
terms, the transition from an integral over time (2.14) to one over redshift (2.20)
is immensely valuable and might be likened to arming a traffic officer with a radar
gun rather than a stopwatch.
For some problems, and for this chapter in particular, the physics of the
sources themselves are of secondary importance, and it is reasonable to take
L(z) = L 0 and n(z) = n 0 as constants over the range of redshifts of interest.
In this case the integral (2.20) reads simply
$$Q = \frac{c\,n_0 L_0}{H_0}\int_0^{z_f}\frac{\mathrm{d}z}{(1+z)^2\,\tilde{H}(z)} \qquad (2.21)$$
where H̃(z) ≡ H(z)/H0 (2.22). (This is merely Hubble's parameter, normalized
to its present value.) The form of the function H̃(z), which will be of central
importance throughout this book, is derived in the section that follows.
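To preview how (2.21) is evaluated in practice, here is a minimal numerical sketch. The choice H̃(z) = (1 + z)^(3/2) anticipates the matter-dominated (Einstein–de Sitter) expansion rate derived below and is an assumption at this point; for that choice the integral has the exact value (2/5)[1 − (1 + zf)^(−5/2)]:

```python
def ebl_integral(Htilde, zf: float, n: int = 100_000) -> float:
    """Midpoint-rule quadrature of the dimensionless integral in (2.21):
    int_0^zf dz / [(1 + z)^2 * Htilde(z)]."""
    h = zf / n
    return sum(h / ((1.0 + (i + 0.5) * h) ** 2 * Htilde((i + 0.5) * h))
               for i in range(n))

eds = lambda z: (1.0 + z) ** 1.5   # assumed Einstein-de Sitter expansion rate

print(round(ebl_integral(eds, 3.0), 4))   # exact value: (2/5)(1 - 4**-2.5) = 0.3875
```

The integral is of order unity for any reasonable expansion history, so the prefactor outside it carries all the dimensional information, as noted below.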
All the dimensional information in (2.21) is now contained in the constant
outside the integral. In the numerator of this constant we may identify the present
comoving luminosity density of the Universe,

$$\mathcal{L}_0 \equiv n_0 L_0. \qquad (2.23)$$
(This waveband, centred near 4400 Å, is where galaxies emit most of their light,
and is also close to the wavelength of peak sensitivity of human eyesight.) We
will use this number throughout our book. Recent data from the Sloan Digital
Sky Survey (SDSS) suggest a somewhat higher value with larger uncertainty,
(2.41 ± 0.39) × 10⁸ h0 L⊙ Mpc⁻³ [6]; while a preliminary result from the Two
Degree Field (2dF) team is slightly lower, ℒ0 = (1.82 ± 0.17) × 10⁸ h0 L⊙ Mpc⁻³
[7]. If the final result inferred from large-scale galaxy surveys of this kind proves
to be significantly different from that in (2.24), then our EBL intensities (which
are proportional to ℒ0) would go up or down accordingly.
Numerically, the constant factor outside the integral (2.21) sets the order of
magnitude of the integral itself, so it is of interest to see what value this takes.
Denoting it by Q ∗ we find using (2.24) that
There are two important things to note about this quantity. First, because the
factors of h 0 attached to both 0 and H0 cancel each other out, it is independent
of the uncertainty in Hubble’s constant. This is not always appreciated but
was first emphasized by Felten [8]. Second, the value of Q ∗ is very small
by everyday standards: more than a million times fainter than the bolometric
intensity produced by a 100 W bulb in an average-sized room (section 1.4). The
smallness of this number already contains the essence of the resolution of Olbers’
paradox. But to put the latter to rest in a definitive fashion, we need to evaluate
the full integral (2.20). This, in turn, requires the Hubble expansion rate H̃ (z),
which we proceed to derive next.
Here ρ is the density of the cosmic fluid and p its pressure. These two quantities,
in turn, are related by an equation of state
p = (γ − 1)ρc2 . (2.27)
Three specific equations of state are of particular relevance in cosmology and will
make regular appearances in the chapters that follow. They all have γ = constant,
as follows:
• radiation (γr = 4/3): pr = ρr c²/3
• dustlike matter (γm = 1): pm = 0   (2.28)
• vacuum energy (γv = 0): pv = −ρv c²
The first of these is a good approximation to the early Universe, when conditions
were so hot and dense that matter and radiation existed in nearly perfect
thermodynamic equilibrium (the radiation era). The second has usually been
used to model the present Universe, since we know that the energy density of
These expressions will frequently prove useful in later chapters. They are also
applicable to cases in which several components are present, but only if these do
not exchange energy at significant rates (i.e. relative to the expansion rate), so that
each is in effect conserved separately.
$$\rho_{\rm crit}(t) \equiv \frac{3H^2(t)}{8\pi G}. \qquad (2.35)$$
In particular the present value of this quantity is
$$\rho_{\rm crit,0} \equiv \frac{3H_0^2}{8\pi G} = (2.78\times 10^{11})\,h_0^2~M_\odot~{\rm Mpc^{-3}} = (1.88\times 10^{-29})\,h_0^2~{\rm g\,cm^{-3}}. \qquad (2.36)$$
This works out to the equivalent of between 4.0 protons per cubic metre (if
h0 = 0.6) and 9.1 protons per cubic metre (if h0 = 0.9). Using (2.34) and
(2.36) we can evaluate (2.33) at the present time as follows:
$$\frac{kc^2}{R_0^2} = H_0^2\left(\frac{\rho_{r,0}+\rho_{m,0}+\rho_\Lambda-\rho_{\rm crit,0}}{\rho_{\rm crit,0}}\right). \qquad (2.37)$$
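The proton-equivalent figures quoted above can be checked directly; a quick sketch (the proton mass is the standard value, not taken from the text):

```python
PROTON_MASS_G = 1.6726e-24   # proton mass in grams

def critical_density_protons_per_m3(h0: float) -> float:
    """Critical density (2.36) re-expressed as equivalent protons per cubic metre."""
    rho_g_per_cm3 = 1.88e-29 * h0 ** 2            # from (2.36)
    return rho_g_per_cm3 * 1.0e6 / PROTON_MASS_G  # cm^-3 -> m^-3, then / m_p

print(round(critical_density_protons_per_m3(0.6), 1))   # about 4.0
print(round(critical_density_protons_per_m3(0.9), 1))   # about 9.1
```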
The physical significance of ρcrit,0 is now apparent: its value determines the
spatial curvature of the Universe. If the sum of the densities ρr,0, ρm,0 and ρΛ is
exactly equal to ρcrit,0 then k = 0. The Universe in this case is flat or Euclidean.
Such a Universe is unbounded and infinite in extent. Alternatively, if the sum
Figure 2.2. The spatial geometry of the Universe (here represented as a two-dimensional
surface) is determined by its total density, expressed in units of the critical density ρcrit,0
by the parameter Ωtot,0. If Ωtot,0 = 1 then the Universe is flat (‘Euclidean’) and the
curvature parameter k can be set to zero in the metric (2.2). If Ωtot,0 > 1 then the Universe
is positively curved (spherical) and k can be set to +1. If Ωtot,0 < 1 then it is negatively
curved (hyperbolic) and k = −1. (Figure courtesy E C Eekels.)
of ρr,0, ρm,0 and ρΛ exceeds ρcrit,0, then k > 0 and the Universe is positively
curved. This is often referred to as a ‘k = +1 model’ because the magnitude
of the curvature constant k can be normalized to unity by choice of units for R0.
Spatial hypersurfaces in this model are spherical in shape, and the Universe is
unbounded, but closed and finite. Finally, if ρr,0 + ρm,0 + ρΛ < ρcrit,0, then the
Universe has negative curvature (k = −1) and hyperbolic spatial sections. It is
open, unbounded and infinite in extent. (Strictly speaking one can also obtain
flat and hyperbolic solutions which are finite by adopting a non-trivial topology
and suitably ‘identifying’ pairs of points; we do not pursue this here.) All three
possibilities are depicted in figure 2.2.
To simplify the notation we now rescale all our physical densities, defining
three dimensionless density parameters:
$$\Omega_{r,0} \equiv \frac{\rho_{r,0}}{\rho_{\rm crit,0}} \qquad \Omega_{m,0} \equiv \frac{\rho_{m,0}}{\rho_{\rm crit,0}} \qquad \Omega_{\Lambda,0} \equiv \frac{\rho_\Lambda}{\rho_{\rm crit,0}} = \frac{\Lambda c^2}{3H_0^2}. \qquad (2.38)$$
These parameters occupy central roles throughout the book, and we will usually
refer to them for brevity as ‘densities’ (or ‘present densities’ where this is not
clear from the context). Defining Ωtot,0 ≡ Ωr,0 + Ωm,0 + ΩΛ,0, we find from
(2.37) that the curvature constant is

$$k = \left(\frac{R_0 H_0}{c}\right)^{2}\left(\Omega_{\rm tot,0}-1\right). \qquad (2.39)$$

Thus flat models are just those with Ωtot,0 = 1, while closed and open models
have Ωtot,0 > 1 and Ωtot,0 < 1 respectively. Substituting (2.39) back into (2.33),
using the definitions (2.38) and recalling from (2.15) that z = R0/R − 1, we
obtain the expansion rate of the Universe:
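Combining (2.33) with the definitions (2.38) and the curvature relation just obtained gives H̃²(z) = Ωr,0(1 + z)⁴ + Ωm,0(1 + z)³ + ΩΛ,0 − (Ωtot,0 − 1)(1 + z)². In code this can be sketched as follows (the function name and parameter spellings are ours):

```python
from math import sqrt

def Htilde(z: float, omega_r: float, omega_m: float, omega_l: float) -> float:
    """Normalized expansion rate H(z)/H0, with the curvature term fixed
    by omega_tot = omega_r + omega_m + omega_l."""
    omega_tot = omega_r + omega_m + omega_l
    x = 1.0 + z
    return sqrt(omega_r * x ** 4 + omega_m * x ** 3 + omega_l
                - (omega_tot - 1.0) * x ** 2)

print(Htilde(0.0, 0.0, 0.3, 0.7))   # 1.0 in any model, since H(0) = H0
print(Htilde(3.0, 0.0, 1.0, 0.0))   # 8.0 = 4**1.5 in the matter-only flat case
```

Note that H̃(0) = 1 by construction, whatever the densities; the curvature term guarantees this.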
galaxy luminosity and comoving number density, as discussed in section 2.3. This
is adequate for our purposes, because we are not primarily concerned here with
the physics of the galaxies themselves. We wish to obtain a definitive answer to
the question posed in chapter 1: is it cosmic expansion, or the finite age of the
Universe, which is primarily responsible for the low value of Q?
The relative importance of these two factors continues to be a subject of
controversy and confusion (see [4] for a review). In particular there is a lingering
perception that general relativity ‘solves’ Olbers’ paradox chiefly because the
expansion of the Universe stretches and dims the light it contains.
There is a simple way to test this supposition using the formalism we have
already laid out, and that is to ‘turn off’ expansion by setting the scale factor of
the Universe equal to a constant value, R(t) = R0 . Then R̃(t) = 1 from (2.13),
and the bolometric intensity of the EBL is given by (2.14) as an integral over time:
$$Q_{\rm stat} = Q^{*}H_0\int_{t_f}^{t_0}\mathrm{d}t. \qquad (2.43)$$
where we have used the definition (2.22) of H̃ (z). In a static Universe, of course,
redshift does not carry its usual physical significance. But nothing prevents us
from retaining z as an integration variable. Substitution of (2.44) into (2.43) then
yields

$$Q_{\rm stat} = Q^{*}\int_0^{z_f}\frac{\mathrm{d}z}{(1+z)\,\tilde{H}(z)}. \qquad (2.45)$$
The two equations are almost identical, differing only by an extra factor of
(1 + z)⁻¹ attached to the integrand in the expanding case. Since this is less than
unity for all z, we expect Q to be less than Qstat, although the magnitude of the difference
will depend on the shape of H̃ (z). The latter must have been fairly smooth over
the lifetime of the galaxies in any plausible model, so that the difference between
the two integrals cannot amount to much more than a few orders of magnitude. It
follows immediately that the intensity of the EBL in expanding models (as in their
static analogues) must be determined primarily by the lifetime of the galaxies, so
that the effects of expansion are secondary. We proceed in the next section to
make this conclusion quantitative.
The ratio of EBL intensity in an expanding de Sitter model to that in the equivalent
static model is then
$$\frac{Q}{Q_{\rm stat}} = \begin{cases} 3/(4\ln 4) & (z_f = 3)\\ 0 & (z_f = \infty). \end{cases} \qquad (2.54)$$
The de Sitter Universe is older than other models, which means it has more time
to fill up with light, so intensities are higher. In fact, Q stat (which is proportional
to the lifetime of the galaxies) goes to infinity as z f → ∞, driving Q/Q stat to
zero in this limit. (It is thus possible to ‘recover Olbers’ paradox’ in the de Sitter
model, as noted by White and Scott [11].) Such a limit is, however, unphysical
as noted earlier. For realistic values of z f one obtains values of Q/Q stat which are
only slightly lower than those in the radiation and matter cases.
Finally, we consider the Milne model, which is empty of all forms of matter
and energy (r,0 = m,0 = ,0 = 0), making it an idealization but one which
has nevertheless often proved useful as a bridge between the special and general
theories of relativity. Bolometric EBL intensity in the expanding case is given by
(2.42) as
$$\frac{Q}{Q^{*}} = \int_1^{1+z_f}\frac{\mathrm{d}x}{x^3} = \begin{cases} 15/32 & (z_f = 3)\\ 1/2 & (z_f = \infty) \end{cases} \qquad (2.55)$$
which is identical to equation (2.47) for the static radiation model. The
corresponding static result is given by (2.45) as
$$\frac{Q_{\rm stat}}{Q^{*}} = \int_1^{1+z_f}\frac{\mathrm{d}x}{x^2} = \begin{cases} 3/4 & (z_f = 3)\\ 1 & (z_f = \infty) \end{cases} \qquad (2.56)$$
which is the same as equation (2.52) for the expanding de Sitter model. The ratio
of EBL intensity in an expanding Milne model to that in the equivalent static
model is then

$$\frac{Q}{Q_{\rm stat}} = \begin{cases} 5/8 & (z_f = 3)\\ 1/2 & (z_f = \infty). \end{cases} \qquad (2.57)$$
This again lies close to previous results. In fact, we have found in every case
(except the zf → ∞ limit of the de Sitter model) that the ratio of bolometric EBL
intensities with and without expansion lies in the range 0.4 ≲ Q/Qstat ≲ 0.7.
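The Milne-model numbers in (2.55)–(2.57) are easily confirmed numerically; a minimal sketch using midpoint quadrature (function names ours):

```python
def midpoint(f, a: float, b: float, n: int = 200_000) -> float:
    """Midpoint-rule approximation to the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

zf = 3.0
Q_expanding = midpoint(lambda x: x ** -3, 1.0, 1.0 + zf)   # (2.55): 15/32
Q_static = midpoint(lambda x: x ** -2, 1.0, 1.0 + zf)      # (2.56): 3/4
print(round(Q_expanding, 5), round(Q_static, 5))           # 0.46875 0.75
print(round(Q_expanding / Q_static, 5))                    # 0.625 = 5/8, as in (2.57)
```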
It may, however, be that models whose total density is neither critical nor
zero have different properties. To check this we expand our investigation to the
wider class of open and closed models. Equations (2.42) and (2.45) can be solved
analytically for these cases, if they are dominated by a single component. We
collect the solutions for convenience in appendix A, and plot them in figures 2.3–
2.5 for radiation-, matter- and vacuum-dominated models respectively. In each
Figure 2.3. Ratios Q/Q* (long-dashed lines), Qstat/Q* (short-dashed lines) and Q/Qstat
(unbroken lines) as a function of the radiation density Ωr,0. Bold lines are calculated for zf = 3
while light ones have zf = ∞.
Figure 2.4. Ratios Q/Q* (long-dashed lines), Qstat/Q* (short-dashed lines) and Q/Qstat
(unbroken lines) as a function of the matter density Ωm,0. Bold lines are calculated for zf = 3
while light ones have zf = ∞.
While such models are rarely considered, it is interesting to note that the same
pattern persists here. In light of (2.58), one can no longer integrate out to
arbitrarily high formation redshift zf. If one wants to integrate to at least zf, then
one is limited to vacuum densities ΩΛ,0 < (1 + zf)²/[(1 + zf)² − 1].
In the case zf = 3 (shown with heavy lines), this corresponds to an upper limit
of ΩΛ,0 < 16/15 (faint dotted line). More generally, for ΩΛ,0 > 1 the limiting
value of EBL intensity (shown with light lines) is reached as zf → zmax rather
than zf → ∞ for both expanding and static models. Over the entire parameter
space −0.5 ≤ ΩΛ,0 ≤ 1.5 (except in the immediate vicinity of ΩΛ,0 = 1),
figure 2.5 shows that 0.4 ≲ Q/Qstat ≲ 0.7 as before.
When more than one component of matter is present, analytic expressions
for the bolometric intensity can be found in only a few special cases, and the
ratios Q/Q ∗ and Q stat /Q ∗ must, in general, be computed numerically. We show
the results in figure 2.6 for the case which is of most physical interest: a Universe
containing both dustlike matter (Ωm,0, horizontal axis) and vacuum energy (ΩΛ,0,
Figure 2.5. Ratios Q/Q* (long-dashed lines), Qstat/Q* (short-dashed lines) and Q/Qstat
(unbroken lines) as a function of the vacuum energy density ΩΛ,0. Bold lines are calculated
for zf = 3 while light ones have zf = ∞ for ΩΛ,0 ≤ 1 and zf = zmax for ΩΛ,0 > 1.
The dotted vertical line marks the maximum value of ΩΛ,0 for which one can integrate to
zf = 3.
vertical axis), with r,0 = 0. This is a contour plot, with five bundles of equal-
EBL intensity contours for the expanding Universe (labelled Q/Q ∗ = 0.37, 0.45,
0.53, 0.61 and 0.69). The bold (unbroken) lines are calculated for z f = 5, while
medium (long-dashed) lines assume z f = 10 and light (short-dashed) lines have
z f = 50. Also shown is the boundary between big bang and bounce models
(bold line in top left corner), and the boundary between open and closed models
(diagonal dashed line). Similar plots could be produced for the Ωr,0–Ωm,0 and
ΩΛ,0–Ωr,0 planes.
Figure 2.6 shows that the bolometric intensity of the EBL is only modestly
sensitive to the cosmological parameters Ωm,0 and ΩΛ,0. Moving from the lower
right-hand corner of the phase space (Q/Q* = 0.37) to the upper left-hand one
(Q/Q* = 0.69) changes the value of this quantity by less than a factor of two.
Increasing the redshift of galaxy formation from z f = 5 to 10 has little effect, and
increasing it again to z f = 50 even less. This means that essentially all of the light
reaching us from outside the Milky Way comes from galaxies at z < 5, regardless
of the redshift at which these objects actually formed.
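The claim that essentially all of the extragalactic light originates at z < 5 can be verified with a short calculation. The sketch below evaluates the integral in (2.21) for an assumed flat model with Ωm,0 = 0.3 and ΩΛ,0 = 0.7 (illustrative values) and compares cut-offs at zf = 5 and zf = 50:

```python
from math import sqrt

def integrand(z: float) -> float:
    """Integrand of (2.21) for a flat model with Omega_m,0 = 0.3, Omega_L,0 = 0.7."""
    return 1.0 / ((1.0 + z) ** 2 * sqrt(0.3 * (1.0 + z) ** 3 + 0.7))

def ebl(zf: float, n: int = 200_000) -> float:
    """Midpoint-rule quadrature of the EBL integral out to formation redshift zf."""
    h = zf / n
    return sum(integrand((i + 0.5) * h) for i in range(n)) * h

fraction = ebl(5.0) / ebl(50.0)
print(round(fraction, 3))   # well above 0.95: the light is dominated by z < 5
```

The steep (1 + z)⁻² dilution, together with the growth of H̃(z), makes the integrand fall off so quickly that pushing zf higher adds almost nothing.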
While figure 2.6 confirms that the night sky is dark in any reasonable
cosmological model, figure 2.7 shows why. It is a contour plot of Q/Q stat , the
value of which varies so little across the phase space that we have had to restrict
the range of z f -values in order to keep the diagram from being too cluttered. The
bold (unbroken) lines are calculated for z f = 4.5, the medium (long-dashed) lines
for z f = 5, and light (short-dashed) lines for z f = 5.5. The spread in contour
values is extremely narrow, from Q/Qstat = 0.56 in the upper left-hand corner
to 0.64 in the lower right-hand one, a difference of less than 15%. Figure 2.7
confirms our previous analytical results and leaves no doubt about the resolution
of Olbers’ paradox: the brightness of the night sky is determined to order of
magnitude by the lifetime of the galaxies, and is reduced by no more than a factor
of 0.6 ± 0.1 due to the expansion of the Universe.
Figure 2.7. The ratio Q/Qstat of EBL intensity in an expanding Universe to that in a static
Universe with the same values of the matter-density parameter Ωm,0 and vacuum-density
parameter ΩΛ,0 (with the radiation-density parameter Ωr,0 set to zero). Unbroken lines
correspond to zf = 4.5, while long-dashed lines assume zf = 5 and short-dashed ones have
zf = 5.5.
these parameters are ones that we can, in principle, measure with our instruments.
However, it is occasionally useful to shift attention from redshift z back to time t
in order to ask questions which, while not directly connected with experiment, are
of conceptual interest. In this section we ask: do the foregoing results mean that
the brightness of the night sky is changing with time? If so, is it getting brighter
or darker? How quickly?
To investigate these questions one would like the EBL intensity expressed as
a function of time, as in (2.14). Evaluation of this integral requires a knowledge
of the scale factor R(t), which is not well constrained by observation. An
approximate expression can, however, be obtained if the Universe has a density
very close to the critical one, as suggested by observations of the power spectrum
of the CMB (see chapter 4). In this case k = 0 and (2.33) simplifies to
$$\left(\frac{\dot{R}}{R}\right)^{2} = \frac{8\pi G}{3}\left(\rho_r + \rho_m + \rho_\Lambda\right) \qquad (2.59)$$

where we have set ρ = ρr + ρm and used (2.34) for ρΛ. If we now assume that
only one of these three components is dominant at a given time, then we can make
time a feature only of the pure de Sitter model; or does it also arise in models
containing matter along with vacuum energy?
To answer this, we would like to find a simple expression for R(t) in models
with both dustlike matter and vacuum energy. An analytic solution does exist
for such models, if they are flat (,0 = 1 − m,0 ). Its usefulness goes well
beyond the particular problem at hand and, since we have found it derived only in
German [13], it is worth reproducing here. Taking ρm (R) and ρ from (2.32) and
using the definitions (2.38), we find that (2.59) leads to the following differential
equation in place of (2.60):

$$\left(\frac{\dot{R}}{R}\right)^{2} = H_0^2\left[\Omega_{m,0}\left(\frac{R}{R_0}\right)^{-3} + \Omega_{\Lambda,0}\right]. \qquad (2.64)$$
Integration, subject to the big-bang condition R̃ = 0 at t = 0, leads to the solution

$$\tilde{R}(t) = \left(\frac{\Omega_{m,0}}{1-\Omega_{m,0}}\right)^{1/3}\sinh^{2/3}\!\left(\tfrac{3}{2}\sqrt{1-\Omega_{m,0}}\,H_0 t\right) \qquad (2.68)$$
where we have replaced ΩΛ,0 with 1 − Ωm,0. This elegant formula has many
uses and deserves to be known more widely, given the importance of vacuum-
dominated models in modern cosmology. Differentiation with respect to time
gives the Hubble expansion rate:
$$\tilde{H}(t) = \sqrt{1-\Omega_{m,0}}\,\coth\!\left(\tfrac{3}{2}\sqrt{1-\Omega_{m,0}}\,H_0 t\right). \qquad (2.69)$$
This goes over to √ΩΛ,0 as t → ∞, a result which (as noted in section 2.5) holds
quite generally for models with Λ > 0. Alternatively, setting R̃ = (1 + z)⁻¹ in
(2.68) gives the age of the Universe at a redshift z:
2 −1 1 − m,0
t= sinh . (2.70)
3H0 1 − m,0 m,0 (1 + z)3
Figure 2.8. Plots of Q(t)/Q* in flat models containing both dustlike matter (Ωm,0) and
vacuum energy (ΩΛ,0). The unbroken line is the Einstein–de Sitter model, while the
short-dashed line is pure de Sitter, and long-dashed lines represent intermediate models.
The curves do not meet at t = t0 because Q(t0) differs from model to model, and
Q(t0) = Q* only for the pure de Sitter case.
In particular, this equation with z = 0 gives the present age of the Universe
(t0). Thus a model with (say) Ωm,0 = 0.3 and ΩΛ,0 = 0.7 has an age of
t0 = 0.96H0⁻¹ or, using (2.19), t0 = 9.5h0⁻¹ Gyr. Alternatively, in the limit
Ωm,0 → 1, equation (2.70) gives back the standard result for the age of an EdS
Universe, t0 = 2/(3H0) = 6.5h0⁻¹ Gyr.
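Equation (2.70) is straightforward to evaluate; a sketch reproducing the ages just quoted (with the Hubble time 1/H0 = 9.78 h0⁻¹ Gyr from (2.19); the function name is ours):

```python
from math import asinh, sqrt

HUBBLE_TIME_GYR = 9.78   # 1/H0 in units of h0^-1 Gyr, from (2.19)

def age_in_hubble_units(omega_m: float, z: float = 0.0) -> float:
    """Age of a flat matter + vacuum universe at redshift z, eq. (2.70), times H0."""
    ol = 1.0 - omega_m   # flatness: Omega_Lambda,0 = 1 - Omega_m,0
    return 2.0 / (3.0 * sqrt(ol)) * asinh(sqrt(ol / (omega_m * (1.0 + z) ** 3)))

t0 = age_in_hubble_units(0.3)
print(round(t0, 2))                    # 0.96, as in the text
print(round(t0 * HUBBLE_TIME_GYR, 1))  # roughly 9.4, close to the 9.5 quoted above
print(round(age_in_hubble_units(0.999999), 3))   # recovers the EdS limit 2/3
```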
Putting (2.68) into the bolometric intensity integral (2.14), holding L(t) =
L0 = constant as usual, and integrating over time, we obtain the plots of EBL
intensity Q(t)/Q* shown in figure 2.8. This diagram shows that the rapidly
brightening Universe is not a figment only of the pure de Sitter model (short-
dashed line). In a less vacuum-dominated model with Ωm,0 = 0.3 and ΩΛ,0 =
0.7, for instance, the ‘reading-room’ intensity of Q ∼ 1000 erg cm⁻² s⁻¹ is still
attained after t ∼ 290 Gyr.
In theory then, it might be thought that our remote descendants could live
under skies perpetually ablaze with the light of distant galaxies, in which the
rising and setting of their home suns would barely be noticed. That this cannot
happen in practice, as we have said, is due to the fact that galaxy luminosities
necessarily change with time as their brightest stars burn out. The finite lifetime
of the galaxies, in other words, is critical not only in the sense of a finite
past but also of a finite future. A proper assessment of this requires that we move
References
[1] Wesson P S 1999 Space–Time–Matter (Singapore: World Scientific)
[2] McVittie G C 1965 General Relativity and Cosmology (Urbana, IL: University of
Illinois Press) p 164
[3] Weinberg S 1972 Gravitation and Cosmology (New York: Wiley)
[4] Wesson P S, Valle K and Stabell R 1987 Astrophys. J. 317 601
[5] Fukugita M, Hogan C J and Peebles P J E 1998 Astrophys. J. 503 518
[6] Yasuda N et al 2001 Astron. J. 122 1104
[7] Norberg P et al 2002 Preprint astro-ph/0111011
[8] Felten J E 1966 Astrophys. J. 144 241
[9] Dicke R H 1970 Gravitation and the Universe (Philadelphia, PA: American
Philosophical Society) p 62
[10] Ellis G F R 1988 Class. Quantum Grav. 5 891
[11] White M and Scott D 1996 Astrophys. J. 459 415
[12] Alexander D R et al 1997 Astron. Astrophys. 317 90
[13] Blome H-J, Hoell J and Priester W 1997 Bergmann-Schaefer: Lehrbuch der
Experimentalphysik vol 8 (Berlin: de Gruyter) pp 311–427
Chapter 3

The modern resolution and spectra
We then return to (2.12), the bolometric intensity of the spherical shell of galaxies
depicted in figure 2.1. Replacing L(t) with dL λ in this equation gives the intensity
of light emitted between λ and λ + dλ:
This is the integrated light from many galaxies, which has been emitted at various
wavelengths and redshifted by various amounts, but which is all in the waveband
centred on λ0 when it arrives at us. We refer to this as the spectral intensity of
the EBL at λ0. Equations like (3.5) have been considered from the
theoretical side principally by McVittie and Wyatt [2], Whitrow and Yallop [3, 4]
and Wesson et al [5, 6].
We may convert this integral over time t to one over redshift z using (2.17)
as before. This gives
$$I_\lambda(\lambda_0) = \frac{c}{4\pi H_0}\int_0^{z_f}\frac{n(z)\,F[\lambda_0/(1+z),\,z]}{(1+z)^3\,\tilde{H}(z)}\,\mathrm{d}z \qquad (3.6)$$
which is the spectral analogue of (2.20). It may be checked using (3.1) that
bolometric intensity is just the integral of spectral intensity over all observed
wavelengths, Q = ∫0^∞ Iλ(λ0) dλ0. In subsequent chapters we will apply
equations (3.5) and (3.6) to calculate the intensity of the EBL, not only from
galaxies but from many other sources of radiation as well.
The static analogue (i.e. the equivalent spectral EBL intensity in a Universe
without expansion, but with the properties of galaxies unchanged) is also readily
obtained in exactly the same way as before (section 2.6). Setting R̃(t) = 1 in
(3.5), we obtain

$$I_{\lambda,\rm stat}(\lambda_0) = \frac{c}{4\pi}\int_{t_f}^{t_0} n(t)\,F(\lambda_0, t)\,\mathrm{d}t. \qquad (3.7)$$
Just as in the bolometric case, we may convert this to an integral over z if we
choose. The latter parameter no longer represents physical redshift (since this has
been eliminated by hypothesis), but is now merely an algebraic way of expressing
the age of the galaxies. This is convenient because it puts (3.7) into a form which
may be directly compared with its counterpart (3.6) in the expanding Universe:
$$I_{\lambda,\rm stat}(\lambda_0) = \frac{c}{4\pi H_0}\int_0^{z_f}\frac{n(z)\,F(\lambda_0, z)}{(1+z)\,\tilde{H}(z)}\,\mathrm{d}z. \qquad (3.8)$$
If the same values are adopted for H0 and z f , and the same functional forms
are used for n(z), F(λ, z) and H̃ (z), then equations (3.6) and (3.8) allow us
to compare model universes which are alike in every way, except that one is
expanding while the other stands still.
Some simplification of these expressions is obtained as before in situations
where the comoving source number density can be taken as constant, n(z) = n 0 .
However, it is not possible to go farther and pull all the dimensional content out
of these integrals, as was done in the bolometric case, until a specific form is
postulated for the source SED F(λ, z).
In the remainder of chapter 3 we take up this problem, modelling galaxy
spectra with several different SEDs and using (3.6) to calculate the resulting
optical EBL intensity. Our goals in this exercise are threefold. First, we wish
to build up experience with the simpler kinds of SEDs. These will provide a
check of our main results in this chapter, and allow us to model a variety of less
conventional radiating sources in later chapters. Second, we would like to get
some idea of upper and lower limits on the spectral intensity of the EBL itself, and
compare them to the observational data. Third, we return to our original question
and divide Iλ (λ0 ) by its static analogue Iλ,stat (λ0 ), as given by (3.8), in order to
obtain a quantitative estimate of the importance of expansion in the spectral EBL
and the resolution of Olbers’ paradox.
3.3 The delta-function spectrum

The simplest possible SED for a galaxy is a δ-function at some peak wavelength λ_p:

   F(λ, z) = F_p(z) δ(λ/λ_p − 1).    (3.9)

The function F_p(z) is obtained in terms of the total source luminosity L(z) by
normalizing over all observed wavelengths:

   L(z) ≡ ∫_0^∞ F(λ, z) dλ = F_p(z)λ_p.    (3.10)
Figure 3.1. Comoving luminosity density log ℒ (h₀ L_⊙ Mpc⁻³) versus redshift z, showing the 'strong', 'moderate' and 'weak' evolution fits to the S97 and F98 data.
Substituting the galaxy SED (3.9) into the spectral intensity integral (3.6)
leads to

   Iλ(λ₀) = (c/4πH₀λ_p) ∫_0^{z_f} ℒ(z)/[(1 + z)^3 H̃(z)] δ(λ₀/(λ_p(1 + z)) − 1) dz    (3.11)

where we have introduced a new shorthand for the comoving luminosity density
of sources:

   ℒ(z) ≡ n(z)L(z).    (3.12)
For galaxies at redshift z = 0 this takes the value ℒ₀ as given by (2.24). Numerous
studies have shown that the product of n(z) and L(z) is approximately conserved
with redshift, even when the two quantities themselves appear to be evolving
markedly. So it would be reasonable to take ℒ(z) = ℒ₀ = constant. However, the
latest analyses have been able to benefit from new observational work at deeper
redshifts, and a new consensus is emerging that ℒ(z) does rise slowly but steadily
with z, peaking in the range 2 ≲ z ≲ 3, and falling away sharply thereafter [7].
This would be consistent with a picture in which the first generation of massive
galaxy formation occurred near z ∼ 3, being followed at lower redshifts by
galaxies whose evolution proceeded more passively.
Figure 3.1 shows the value of ℒ₀ from (2.24) at z = 0 (as taken from
Fukugita et al [8]), together with the extrapolation of ℒ(z) to five higher redshifts
(z = 0.35, 0.75, 1.5, 2.5 and 3.5) as inferred from analysis of photometric
galaxy redshifts in the Hubble Deep Field (HDF) by Sawicki et al [9]. (Most
of the galaxies in this extremely deep sample are too faint for their Doppler
or spectroscopic redshifts to be identified.) Let us define a relative comoving
luminosity density ℒ̃(z) by

   ℒ̃(z) ≡ ℒ(z)/ℒ₀    (3.13)

and fit this to the data with a cubic [log ℒ̃(z) = αz + βz² + γz³]. The resulting
least-squares best fit is shown in figure 3.1 together with plausible upper and lower
limits. We will refer to these cases as the 'moderate', 'strong' and 'weak' galaxy
evolution scenarios respectively.
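The fitting step can be sketched in a few lines. The redshifts below are those of the HDF-based estimates, but the relative luminosity densities are made-up stand-ins for the values read off figure 3.1; only the fitting recipe itself (a least-squares cubic in log ℒ̃ with no constant term, so that ℒ̃(0) = 1 automatically) is taken from the text.

```python
import numpy as np

# Redshifts of the local (z = 0) point plus the five HDF-based estimates.
z = np.array([0.0, 0.35, 0.75, 1.5, 2.5, 3.5])
# Hypothetical relative luminosity densities L(z)/L0; the real values
# must be read from the published data, not from this sketch.
L_rel = np.array([1.0, 1.6, 2.5, 3.5, 4.0, 2.5])

# Fit log10(L_rel) = alpha*z + beta*z**2 + gamma*z**3.  Omitting the
# constant term guarantees L_tilde(0) = 1 by construction.
y = np.log10(L_rel)
A = np.vstack([z, z**2, z**3]).T
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
alpha, beta, gamma = coeffs

def L_tilde(z):
    """Relative comoving luminosity density from the cubic fit."""
    return 10.0 ** (alpha * z + beta * z**2 + gamma * z**3)
```

The 'strong' and 'weak' scenarios would correspond to refitting against the upper- and lower-limit curves in the same way.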
Inserting (3.13) into (3.11) makes the latter read:

   Iλ(λ₀) = I_δ ∫_0^{z_f} ℒ̃(z)/[(1 + z)^3 H̃(z)] δ(λ₀/(λ_p(1 + z)) − 1) dz.    (3.14)
The dimensional content of this integral has been concentrated into a prefactor Iδ ,
defined by
   I_δ = cℒ₀/(4πH₀λ_p) = 4.4 × 10−9 erg s−1 cm−2 Å−1 ster−1 (λ_p/4400 Å)−1.    (3.15)
This constant shares two important properties of its bolometric counterpart Q ∗
(section 2.3). First, it is explicitly independent of the uncertainty h 0 in Hubble’s
constant. Secondly, it is low by terrestrial standards. For example, it is well
below the intensity of the faint glow known as zodiacal light, which is caused
by the scattering of sunlight by dust in the plane of the Solar System. This is
important, since the value of Iδ sets the scale of the integral (3.14). Indeed,
existing observational bounds on Iλ (λ0 ) at λ0 ≈ 4400 Å are of the same
order as I_δ. Toller, for example, set an upper limit of Iλ(4400 Å) < 4.5 ×
10−9 erg s−1 cm−2 Å−1 ster−1 using data from the Pioneer 10 photopolarimeter
[10].
Dividing Iδ of (3.15) by the photon energy E 0 = hc/λ0 (where hc =
1.986 × 10−8 erg Å) puts the EBL intensity integral (3.14) into new units,
sometimes referred to as continuum units (CUs):
   I_δ = I_δ(λ₀) = ℒ₀λ₀/(4πhH₀λ_p) = 970 CUs (λ₀/λ_p)    (3.16)
where 1 CU ≡ 1 photon s−1 cm−2 Å−1 ster−1. While both kinds of units
(CUs and erg s−1 cm−2 Å−1 ster−1) are in common use for reporting spectral
intensity at near-optical wavelengths, CUs appear most frequently. They are also
preferable from a theoretical point of view, because they most faithfully reflect
the energy content of a spectrum (as emphasized by Henry [11]). A third type
of intensity unit, the S10 (loosely, the equivalent of one tenth-magnitude star per
square degree) is also occasionally encountered, but will be avoided in this book
as it is wavelength-dependent and involves other subtleties which differ between
workers.
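The conversion between the two common unit systems is just a division by the photon energy E₀ = hc/λ₀, as described above. A minimal sketch, using only the constants quoted in the text:

```python
hc = 1.986e-8  # Planck constant times speed of light, in erg Angstrom

def erg_to_CU(I_erg, lam0):
    """Convert a spectral intensity from erg s^-1 cm^-2 A^-1 ster^-1
    to CUs (photons s^-1 cm^-2 A^-1 ster^-1) by dividing out the photon
    energy E0 = hc/lam0 at observed wavelength lam0 (in Angstroms)."""
    return I_erg / (hc / lam0)

# The prefactor (3.15) at lam0 = lam_p = 4400 A recovers the ~970 CUs
# quoted in (3.16):
I_delta_CU = erg_to_CU(4.4e-9, 4400.0)
```

Running this gives roughly 975 CUs, matching the quoted value of 970 CUs to the precision of the rounded inputs.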
If we let the redshift of formation z_f → ∞ for simplicity, then
equation (3.14) reduces to

   Iλ(λ₀) = I_δ (λ₀/λ_p)^−2 ℒ̃(λ₀/λ_p − 1)/H̃(λ₀/λ_p − 1)    (if λ₀ ≥ λ_p)
   Iλ(λ₀) = 0    (if λ₀ < λ_p).    (3.17)
The comoving luminosity density ℒ̃(λ₀/λ_p − 1) which appears here is given
by the fit (3.13) to the HDF data. The Hubble parameter is given by (2.40) as
H̃(λ₀/λ_p − 1) = [Ω_m,0(λ₀/λ_p)^3 + Ω_Λ,0 − (Ω_m,0 + Ω_Λ,0 − 1)(λ₀/λ_p)^2]^{1/2} for a
Universe containing dustlike matter and vacuum energy with density parameters
Ω_m,0 and Ω_Λ,0 respectively.
‘Turning off’ the luminosity density evolution (so that ℒ̃ = 1 = constant),
one can obtain three trivial special cases with

   Iλ(λ₀) = I_δ × (λ₀/λ_p)^{−7/2}  (Einstein–de Sitter)
            I_δ × (λ₀/λ_p)^{−2}    (de Sitter)    (3.18)
            I_δ × (λ₀/λ_p)^{−3}    (Milne).

This is taken at λ₀ ≥ λ_p, where (Ω_m,0, Ω_Λ,0) = (1, 0), (0, 1) and (0, 0)
respectively for the three models cited (section 2.7). The first of these is
the ‘7/2-law’ which appears frequently in the particle-physics literature as an
approximation to the spectrum of EBL contributions from decaying particles. But
the second (de Sitter) probably provides a better approximation, given current
thinking regarding the values of Ω_m,0 and Ω_Λ,0.
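The three power laws follow directly from (3.17) with ℒ̃ = 1, since the δ-function fixes 1 + z = λ₀/λ_p and H̃ takes a simple closed form in each model. A quick numerical check, writing x = λ₀/λ_p:

```python
import math

def H_tilde(z, Om, OL):
    """Dimensionless expansion rate H(z)/H0 from (2.40), for a Universe
    of dustlike matter (Om) plus vacuum energy (OL)."""
    return math.sqrt(Om * (1 + z)**3 + OL - (Om + OL - 1) * (1 + z)**2)

def I_over_Idelta(x, Om, OL):
    """Relative intensity (3.17) with no evolution (L_tilde = 1) and
    z_f -> infinity, as a function of x = lam0/lam_p >= 1."""
    return x**-2 / H_tilde(x - 1.0, Om, OL)
```

Evaluating at, say, x = 3 reproduces x^(−7/2) for (Ω_m,0, Ω_Λ,0) = (1, 0), x^(−2) for (0, 1) and x^(−3) for (0, 0).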
To evaluate the spectral EBL intensity (3.14) and other quantities in a general
situation, it will be helpful to define a suite of cosmological test models which
span the widest range possible in the parameter space defined by Ω_m,0 and Ω_Λ,0
(table 3.1). Detailed discussion of these is left to chapter 4, but we can summarize
the main rationale for each briefly as follows. The Einstein–de Sitter (EdS)
model has long been favoured on grounds of simplicity and was alternatively
known for some time as the ‘standard cold dark matter’ or SCDM model. It
has come under increasing pressure, however, as evidence mounts for levels
of Ω_m,0 ≲ 0.5 and, most recently, from observations of Type Ia supernovae
(SNIa) which indicate that Ω_Λ,0 > Ω_m,0. The open cold dark matter (OCDM)
model is more consistent with data on Ω_m,0 and holds appeal for those who have
been reluctant to accept the possibility of a non-zero vacuum energy. It faces
the considerable challenge, however, of explaining new data on the spectrum
of CMB fluctuations, which imply that Ω_m,0 + Ω_Λ,0 ≈ 1. The Λ + cold dark
matter (ΛCDM) model has rapidly become the new standard in cosmology
because it agrees best with both the SNIa and CMB observations. However, this
model suffers from a ‘coincidence problem’, in that Ω_m(t) and Ω_Λ(t) evolve
so differently with time that the probability of finding ourselves at a moment
in cosmic history when they are even of the same order of magnitude appears
unrealistically small. This is addressed to some extent in the last model, where
we push Ω_m,0 and Ω_Λ,0 to their lowest and highest limits, respectively. In the case
of Ω_m,0 these limits are set by big-bang nucleosynthesis, which requires a density
of at least Ω_m,0 ≈ 0.03 in baryons (hence the Λ + baryonic dark matter or
ΛBDM model). Upper limits on Ω_Λ,0 come from various arguments, such as the
observed frequency of gravitational lenses and the requirement that the Universe
began in a big-bang singularity. Within the context of isotropic and homogeneous
cosmology, these four models cover the full range of what would be considered
plausible by most workers.
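In code, the test-model suite is just a table of (Ω_m,0, Ω_Λ,0) pairs feeding the expansion rate (2.40). Table 3.1 itself is not reproduced in this excerpt, so the specific numbers below (apart from the EdS pair and the ΛBDM baryon density, which are stated in the text) are representative assumptions rather than the book's exact entries:

```python
# Representative density parameters (Omega_m0, Omega_L0) for the four
# test models.  EdS = (1, 0) and Omega_m0 ~ 0.03 for LBDM are given in
# the text; the OCDM, LCDM and LBDM vacuum values are assumptions.
models = {
    "EdS":  (1.0, 0.0),    # flat, matter-dominated (SCDM)
    "OCDM": (0.3, 0.0),    # open, low-density
    "LCDM": (0.3, 0.7),    # flat, vacuum + cold dark matter
    "LBDM": (0.03, 1.0),   # baryons only, strongly vacuum-dominated
}

def H_tilde(z, Om, OL):
    """Dimensionless expansion rate (2.40) for dust plus vacuum energy."""
    return (Om * (1 + z)**3 + OL - (Om + OL - 1) * (1 + z)**2) ** 0.5
```

Note that H̃(0) = 1 in every model by construction, whatever the curvature, since the (Ω_m,0 + Ω_Λ,0 − 1) term cancels at z = 0.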
Figure 3.2 shows the solution of the full integral (3.14) for all four test
models. The short-wavelength cut-off in these plots is an artefact of the
δ-function SED, but the behaviour of Iλ(λ₀) at wavelengths above λ_p = 4400 Å
is quite revealing, even in a model as simple as this one. In the EdS case (a),
the rapid fall-off in intensity with λ0 indicates that nearby (low-redshift) galaxies
dominate. There is a secondary hump at λ0 ≈ 10 000 Å, which is an ‘echo’ of the
peak in galaxy formation, redshifted into the near infrared. This hump becomes
successively enlarged relative to the optical peak at 4400 Å as the value of Ω_m,0
drops relative to Ω_Λ,0. Eventually one has the situation in the de Sitter-like model
(d), where the galaxy-formation peak entirely dominates the observed EBL signal,
despite the fact that it comes from distant galaxies at z ≈ 3. This is because a
large Ω_Λ,0-term (especially one which is large relative to Ω_m,0) inflates comoving
volume at high redshifts. Since the comoving number density of galaxies is fixed
by the fit to ℒ̃(z) (figure 3.1), the number of galaxies at these redshifts has no
choice but to go up, pushing up the infrared part of the spectrum. Although the
δ-function spectrum is an unrealistic one, we will see in subsequent sections that
this trend persists in more sophisticated models, providing a clear link between
observations of the EBL and the cosmological parameters Ω_m,0 and Ω_Λ,0.
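The closed form (3.17) makes this trend easy to explore numerically. The sketch below combines it with the expansion rate (2.40) and a toy evolving luminosity density whose coefficients are illustrative only (they peak near z ≈ 2.3, but are not the actual fit of figure 3.1):

```python
import math

def H_tilde(z, Om, OL):
    # Dimensionless expansion rate (2.40) for dust + vacuum energy.
    return math.sqrt(Om * (1 + z)**3 + OL - (Om + OL - 1) * (1 + z)**2)

def L_tilde(z, a=0.6, b=-0.13):
    """Toy relative luminosity density peaking near z ~ 2.3; the
    coefficients are made up for illustration."""
    return 10.0 ** (a * z + b * z * z)

def I_over_Idelta(lam0, lam_p=4400.0, Om=0.3, OL=0.7):
    """Delta-function spectrum (3.17) relative to I_delta, z_f -> infinity.
    The delta-function ties each observed wavelength to one redshift,
    1 + z = lam0/lam_p, so no integration is needed."""
    if lam0 < lam_p:
        return 0.0
    x = lam0 / lam_p
    z = x - 1.0
    return x**-2 * L_tilde(z) / H_tilde(z, Om, OL)
```

Comparing models at a fixed NIR wavelength shows the effect discussed above: lowering Ω_m,0 relative to Ω_Λ,0 boosts the contribution from high-redshift galaxies.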
Figure 3.2 is plotted over a broad range of wavelengths from the near
ultraviolet (NUV; 2000–4000 Å) to the near infrared (NIR; 8000–40 000 Å). We
have included a number of reported observational constraints on EBL intensity.
These require a bit of comment as regards their origin. Upper limits (full symbols
Figure 3.2. The spectral EBL intensity of galaxies whose radiation is modelled by
δ-functions at a rest frame wavelength of 4400 Å, calculated for four different cosmological
models: (a) EdS, (b) OCDM, (c) ΛCDM and (d) ΛBDM (table 3.1). Also shown are
observational upper limits (full symbols and bold lines) and reported detections (empty
symbols) over the waveband 2000–40 000 Å.
and bold lines) have come from analyses of OAO-2 satellite data (LW76 [12]),
ground-based telescopes (SS78 [13], D79 [14], BK86 [15]), the aforementioned
Pioneer 10 instrument (T83 [10]), sounding rockets (J84 [16], T88 [17]), the
Hopkins UVX experiment aboard the Space Shuttle (M90 [18]) and—most
recently, in the near infrared—the DIRBE instrument on the COBE satellite
(H98 [19]). The past two or three years have also seen the first widely-accepted
detections of the EBL (figure 3.2, open symbols). In the NIR these have come
from continued analysis of DIRBE data in the K-band (22 000 Å) and L-band
(35 000 Å; WR00 [20]), as well as the J-band (12 500 Å; C01 [21]). Reported
detections in the optical using a combination of Hubble Space Telescope (HST)
and Las Campanas telescope observations (B98 [22, 23]) are still preliminary but
very important, and we have included them as well.
Figure 3.2 shows that EBL intensities based on the simple δ-function
spectrum are in rough agreement with the data. Predicted intensities come in at
or just below the optical limits in the low-Ω_Λ,0 cases (a, b) and remain consistent
with the infrared limits even in the high-Ω_Λ,0 cases (c, d). Vacuum-dominated
models with even higher ratios of Ω_Λ,0 to Ω_m,0 would, however, run afoul of
DIRBE limits in the J-band. We will find a similar trend in models with more
realistic galaxy spectra.
3.4 Gaussian spectra

A more flexible way to model the spectra of galaxies is with a Gaussian SED:

   F(λ, z) = [L(z)/(√(2π) σ_λ)] exp{−(1/2)[(λ − λ_p)/σ_λ]^2}    (3.19)

where λ_p is the wavelength at which the galaxy emits most of its light and σ_λ is
the width of the spectral peak. We take λ_p = 4400 Å as before, and note that
integration over λ confirms that L(z) = ∫_0^∞ F(λ, z) dλ as required. Once again
we can make the simplifying assumption that L(z) = L₀ = constant; or we can
use the empirical fit ℒ̃(z) ≡ n(z)L(z)/ℒ₀ to the HDF data of Sawicki et al [9].
Taking the latter course and substituting (3.19) into (3.6), we obtain

   Iλ(λ₀) = I_g ∫_0^{z_f} ℒ̃(z)/[(1 + z)^3 H̃(z)] exp{−(1/2)[(λ₀/(1 + z) − λ_p)/σ_λ]^2} dz.    (3.20)
The dimensional content of this integral has been pulled into a prefactor I_g =
I_g(λ₀), defined by

   I_g = ℒ₀λ₀/(√(32π³) hH₀σ_λ) = 390 CUs (λ₀/σ_λ).    (3.21)

Here we have divided (3.20) by the photon energy E₀ = hc/λ₀ to put the result
into CUs, as before.
Results are shown in figure 3.3, where we have taken λp = 4400 Å,
σλ = 1000 Å and z f = 6. Aside from the fact that the short-wavelength
cut-off has disappeared, the situation is qualitatively similar to that obtained
using a δ-function approximation. (This similarity becomes formally exact as σλ
approaches zero.) One sees, as before, that the expected EBL signal is brightest at
optical wavelengths in an EdS Universe (a), but that the infrared hump due to the
redshifted peak of galaxy formation begins to dominate for higher-Ω_Λ,0 models
Figure 3.3. The spectral EBL intensity of galaxies whose spectra have been represented
by Gaussian distributions with rest-frame peak wavelength 4400 Å and standard deviation
1000 Å, calculated for the (a) EdS, (b) OCDM, (c) ΛCDM and (d) ΛBDM cosmologies
and compared with observational upper limits (full symbols and bold lines) and reported
detections (empty symbols).
(b) and (c), becoming overwhelming in the de Sitter-like model (d). Overall, the
best agreement between calculated and observed EBL levels occurs in the ΛCDM
model (c). The matter-dominated EdS (a) and OCDM (b) models appear to
contain too little light (requiring one to postulate an additional source of optical or
near-optical background radiation besides that from galaxies), while the ΛBDM
model (d) comes uncomfortably close to containing too much light. This is an
interesting situation and one which motivates us to reconsider the problem with
more realistic models for the galaxy SED.
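Unlike the δ-function case, the Gaussian integral (3.20) has no closed form and must be evaluated numerically. A minimal sketch using the trapezoidal rule (with an assumed no-evolution default for ℒ̃ and illustrative ΛCDM-like parameters):

```python
import math

def H_tilde(z, Om=0.3, OL=0.7):
    # Dimensionless expansion rate (2.40); density parameters assumed.
    return math.sqrt(Om * (1 + z)**3 + OL - (Om + OL - 1) * (1 + z)**2)

def I_over_Ig(lam0, lam_p=4400.0, sigma=1000.0, zf=6.0, n=2000,
              L_tilde=lambda z: 1.0):
    """Trapezoidal evaluation of the Gaussian-SED integral (3.20), in
    units of the prefactor I_g.  L_tilde defaults to no evolution."""
    dz = zf / n
    total = 0.0
    for i in range(n + 1):
        z = i * dz
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        arg = (lam0 / (1 + z) - lam_p) / sigma
        total += w * math.exp(-0.5 * arg * arg) * L_tilde(z) / ((1 + z)**3 * H_tilde(z))
    return total * dz
```

Because the Gaussian has soft wings, the hard cut-off at λ₀ = λ_p of the δ-function model disappears: the intensity below 4400 Å is small but non-zero, as in figure 3.3.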
3.5 Blackbody spectra

The next step in realism is to model each galaxy as a blackbody, characterized by
a temperature T(z) and a normalization C(z):

   F(λ, z) = (2πhc²/σ_SB) C(z)λ^−5 / {exp[hc/kT(z)λ] − 1}.    (3.22)

Here σ_SB ≡ 2π⁵k⁴/15c²h³ = 5.67 × 10−5 erg cm−2 s−1 K−4 is the Stefan–
Boltzmann constant.
Boltzmann constant. The function F is normally regarded as an increasing
function of redshift (at least out to the redshift of galaxy formation). This can, in
principle, be accommodated by allowing C(z) or T (z) to increase with z in (3.22).
The former choice would correspond to a situation in which galaxy luminosity
decreases with time while its spectrum remains unchanged, as might happen if
stars were simply to die. The second choice corresponds to a situation in which
galaxy luminosity decreases with time as its spectrum becomes redder, as may
happen when its stellar population ages. The latter scenario is more realistic and
will be adopted here. The luminosity L(z) is found by integrating F(λ, z) over
all wavelengths:
   L(z) = (2πhc²/σ_SB) C(z) ∫_0^∞ λ^−5 dλ/{exp[hc/kT(z)λ] − 1} = C(z)[T(z)]^4    (3.23)
so that the unknown function C(z) must satisfy C(z) = L(z)/[T (z)]4 . If we
require that Stefan’s law (L ∝ T 4 ) hold at each z, then
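Whatever condition fixes C(z), the normalization (3.23) itself can be verified numerically: with the substitution x = hc/(kTλ) the wavelength integral reduces to the standard ∫x³/(eˣ − 1)dx = π⁴/15, and the prefactors conspire to give exactly C T⁴.

```python
import math

h = 6.626e-27    # Planck constant, erg s
c = 2.998e10     # speed of light, cm s^-1
k = 1.381e-16    # Boltzmann constant, erg K^-1
sigma_SB = 2 * math.pi**5 * k**4 / (15 * c**2 * h**3)  # ~5.67e-5 cgs

def blackbody_luminosity(T, C=1.0, nx=20000, xmax=50.0):
    """Evaluate L of (3.23) numerically, substituting x = hc/(k T lam)
    in the wavelength integral; Stefan's law then demands L = C T^4."""
    dx = xmax / nx
    # integral of x^3/(e^x - 1) from 0 to xmax (tail beyond 50 is negligible)
    integral = sum((i * dx)**3 / math.expm1(i * dx) * dx
                   for i in range(1, nx + 1))
    return (2 * math.pi * h * c**2 / sigma_SB) * C * (k * T / (h * c))**4 * integral
```

The result agrees with C T⁴ to the accuracy of the quadrature, confirming that C(z) = L(z)/[T(z)]⁴ is the correct normalization.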
Figure 3.4. The spectral EBL intensity of galaxies, modelled as blackbodies whose
characteristic temperatures are such that their luminosities L ∝ T^4 combine to produce
the observed comoving luminosity density ℒ(z) of the Universe. Results are shown
for the (a) EdS, (b) OCDM, (c) ΛCDM and (d) ΛBDM cosmologies. Also shown are
observational upper limits (full symbols and bold lines) and reported detections (open
symbols).
intensities and the observational detections is found for the Ω_Λ,0-dominated
models (c) and (d). The fact that the EBL is now spread across a broader spectrum
has pulled down its peak intensity slightly, so that the ΛBDM model (d) no longer
threatens to violate observational limits and, in fact, fits them rather nicely. The
zero-Ω_Λ,0 models (a) and (b) would again appear to require some additional
source of background radiation (beyond that produced by galaxies) if they are
to contain enough light to make up the levels of EBL intensity that have been
reported.
3.6 Normal and starburst galaxies
Figure 3.5. Typical galaxy SEDs for (a) normal and (b) starburst type galaxies with and
without extinction by dust. These figures are adapted from figures 9 and 10 of Devriendt et
al [32], where, however, the plotted quantity is log[λF(λ)], not F(λ) as shown here. For
definiteness we have normalized (over 100 − 3 × 10^4 Å) such that L_n = 1 × 10^10 h₀^−2 L_⊙
and L_s = 2 × 10^11 h₀^−2 L_⊙ with h₀ = 0.75. (These values are consistent with what we
will later call ‘model 0’ for a comoving galaxy number density of n₀ = 0.010 h₀^3 Mpc^−3.)
down as far as the NUV (λ0 ∼ 2000 Å), we require SEDs which are good to
λ ∼ 300 Å in the galaxy rest-frame. For this purpose we will make use of
theoretical galaxy-evolution models, which have advanced to the point where they
cover the entire spectrum from the far ultraviolet to radio wavelengths. This broad
range of wavelengths involves diverse physical processes such as star formation,
chemical evolution and (of special importance here) dust absorption of ultraviolet
light and re-emission in the infrared. Typical normal and starburst galaxy SEDs
based on such models are now available down to ∼100 Å [32]. These functions,
displayed in figure 3.5, will constitute our normal and starburst galaxy SEDs,
Fn (λ) and Fs (λ).
Figure 3.5 shows the expected increase in Fn (λ) with λ at NUV wavelengths
(∼2000 Å) for normal galaxies, as well as the corresponding decrease for
starbursts. What is most striking about both templates, however, is their overall
multi-peaked structure. These objects are far from pure blackbodies, and the
primary reason for this is dust. This effectively removes light from the shortest-
wavelength peaks (which are due mostly to star formation), and transfers it to the
longer-wavelength ones. The dotted lines in figure 3.5 show what the SEDs would
look like if this dust reprocessing were ignored. The main difference between
normal and starburst types lies in the relative importance of this process. Normal
galaxies emit as little as 30% of their bolometric intensity in the infrared, while
the equivalent fraction for the largest starburst galaxies can reach 99%. (The
latter are, in fact, usually detected because of their infrared emission, despite the
fact that they contain mostly hot, blue stars.) In the full model of Devriendt et
al [32], these variations are incorporated by modifying input parameters such as
star formation timescale and gas density, leading to spectra which are broadly
similar in shape to those in figure 3.5 (though they differ in normalization and
‘tilt’ toward longer wavelengths). The results, it should be emphasized, have
been successfully matched to a wide range of real galaxy spectra. Here we will
be content with the single pair of prototypical SEDs shown in figure 3.5.
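The dust-reprocessing bookkeeping described above (normal galaxies radiating as little as 30% of their bolometric output in the infrared, the largest starbursts up to 99%) amounts to comparing integrals of a tabulated SED above and below a cut-off wavelength. The sketch below uses a synthetic two-component SED for illustration; the actual templates are those of Devriendt et al [32], not this toy:

```python
import numpy as np

def trapz(y, x):
    """Plain trapezoidal integral (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def infrared_fraction(lam, F, lam_cut=3.0e4):
    """Fraction of bolometric output emitted longward of lam_cut (in
    Angstroms) for a tabulated SED F(lam).  For templates like those in
    figure 3.5, normal galaxies give ~0.3; extreme starbursts ~0.99."""
    mask = lam >= lam_cut
    return trapz(F[mask], lam[mask]) / trapz(F, lam)

# Synthetic two-component SED (illustrative only):
lam = np.logspace(2.0, 6.0, 4000)                      # 100 A to 1e6 A
stellar = np.exp(-0.5 * ((lam - 4400.0) / 1500.0)**2)  # star-forming peak
dust = 5.0 * np.exp(-0.5 * ((np.log10(lam) - 5.0) / 0.2)**2)  # ~10 um bump
frac = infrared_fraction(lam, stellar + dust)
```

With the dust component included the infrared fraction is close to unity, mimicking a starburst; dropping it leaves essentially nothing beyond 3 μm, mimicking an unreddened stellar population.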
We now proceed to calculate the EBL intensity using F_n(λ) and F_s(λ), with
the characteristic luminosities of these two types found as usual by normalization,
∫_0^∞ F_n(λ) dλ = L_n and ∫_0^∞ F_s(λ) dλ = L_s. Let us assume that the comoving
luminosity density of the Universe at any redshift z is a combination of normal
and starburst components:

   ℒ(z) = n(z)[(1 − f(z))L_n + f(z)L_s] = n_n(z)L_n + n_s(z)L_s    (3.29)
   n_n(z) ≡ [1 − f(z)]n(z)        n_s(z) ≡ f(z)n(z).    (3.30)
In other words, we will account for evolution in ℒ(z) solely in terms of the
changing starburst fraction f(z), and a single comoving number density n(z)
as before. L_n and L_s are awkward to work with for dimensional reasons, and
we will find it more convenient to specify the SED instead by two dimensionless
adjustable parameters, the local starburst fraction f₀ and luminosity ratio ℓ₀:

   f₀ ≡ f(0)        ℓ₀ ≡ L_s/L_n.    (3.31)
Observations indicate that f₀ ≈ 0.05 in the local population [31] and Devriendt
et al were able to fit their templates to a range of normal and starburst galaxies
with 40 ≲ ℓ₀ ≲ 890 [32]. We will allow these two parameters to vary in the
ranges 0.01 ≤ f₀ ≤ 0.1 and 10 ≤ ℓ₀ ≤ 1000. This, in combination with our
‘strong’ and ‘weak’ limits on comoving luminosity-density evolution, gives us the
flexibility to obtain upper and lower bounds on EBL intensity.
The functions n(z) and f(z) can now be fixed by equating ℒ(z) as defined
by (3.29) to the comoving luminosity-density curves inferred from HDF data
(figure 3.1), and requiring that f → 1 at peak luminosity (i.e. assuming
that the galaxy population is entirely starburst-dominated at the redshift z p of
peak luminosity). These conditions are not difficult to set up. One finds that
modest number-density evolution is required in general, if f (z) is not to over or
undershoot unity at z p . We follow [33] and parametrize this with the function
n(z) = n 0 (1 + z)η for z z p . Here η can be termed the merger parameter
since a value of η > 0 would imply that the comoving number density of galaxies
decreases with time.
Here 𝒩(z) ≡ [1/ℓ₀ + (1 − 1/ℓ₀) f₀]ℒ̃(z) and η = ln[𝒩(z_p)]/ln(1 + z_p). The
evolution of f (z), n n (z) and n s (z) is plotted in figure 3.6 for five models: a best-
fit model 0, corresponding to the moderate evolution curve in figure 3.1 with
f 0 = 0.05 and 0 = 20, and four other models chosen to produce the widest
possible spread in EBL intensities across the optical band. Models 1 and 2 are
the most starburst-dominated, with initial starburst fraction and luminosity ratio
at their upper limits ( f 0 = 0.1 and 0 = 1000). Models 3 and 4 are the least
starburst-dominated, with the same quantities at their lower limits ( f 0 = 0.01 and
0 = 10). Luminosity density evolution is set to ‘weak’ in the odd-numbered
models 1 and 3, and ‘strong’ in the even-numbered models 2 and 4. (In principle
one could identify four other ‘extreme’ combinations, such as maximum f 0 with
minimum 0 , but these will be intermediate to models 1–4.) We find merger
parameters η between +0.4, 0.5 in the strong-evolution models 2 and 4, and
−0.5, −0.4 in the weak-evolution models 1 and 3, while η = 0 for model 0.
These are well within the usual range [34].
The information contained in figure 3.6 can be summarized in words as
follows: starburst galaxies formed near z f ∼ 4 and increased in comoving number
density until z p ∼ 2.5 (the redshift of peak comoving luminosity density in
figure 3.1). They then gave way to a steadily growing population of fainter normal
galaxies which began to dominate at 1 ≲ z ≲ 2 (depending on the model)
and now make up 90–99% of the total galaxy population at z = 0. This is a
reasonable scenario and agrees with others that have been constructed to explain
the observed faint blue excess in galaxy number counts [35].
We are now in a position to compute the total spectral EBL intensity, which
is the sum of normal and starburst components:

   Iλ(λ₀) = Iλ^n(λ₀) + Iλ^s(λ₀).    (3.33)

These components are found as usual by substituting the SEDs (F_n, F_s) and
comoving number densities (3.30) into (3.6). This leads to:

   Iλ^n(λ₀) = I_ns ∫_0^{z_f} ñ(z)[1 − f(z)] F_n(λ₀/(1 + z)) dz/[(1 + z)^3 H̃(z)]
   Iλ^s(λ₀) = I_ns ∫_0^{z_f} ñ(z) f(z) F_s(λ₀/(1 + z)) dz/[(1 + z)^3 H̃(z)].    (3.34)
Figure 3.6. Evolution of (a) starburst fraction f(z) and (b) comoving normal and
starburst galaxy number densities n_n(z) and n_s(z), where total comoving luminosity
density ℒ(z) = n_n(z)L_n + n_s(z)L_s is matched to the ‘moderate’, ‘weak’ and ‘strong’
evolution curves in figure 3.1. Model 0 lies midway between models 1–4, which have
maximum and minimum possible values of the two adjustable parameters f₀ ≡ f(0) and
ℓ₀ ≡ L_s/L_n.
Here ñ(z) ≡ n(z)/n₀ is the relative comoving number density and the
dimensional content of both integrals has been pulled into the prefactor

   I_ns = I_ns(λ₀) = ℒ₀λ₀/(4πhH₀ Å) = 970 CUs (λ₀/Å).    (3.35)

This is explicitly independent of h₀, as before, and it is important to realize how
this comes about. There is an implicit factor of L₀ contained in the galaxy SEDs
F_n(λ) and F_s(λ) via their normalizations (i.e. via the fact that we require the
galaxy population to make up the observed comoving luminosity density of the
Universe). To see this, note that equation (3.29) reads ℒ₀ = n₀L_n[1 + (ℓ₀ − 1) f₀]
at z = 0. Since ℒ₀ ≡ n₀L₀, it follows that L_n = L₀/[1 + (ℓ₀ − 1) f₀] and
L_s = L₀ℓ₀/[1 + (ℓ₀ − 1) f₀]. If the factor of L₀ is divided out of these expressions
when normalizing the functions F_n and F_s, it can be put directly into the integrals
(3.34), forming the quantity ℒ₀ in (3.35) and cancelling out the factor of h₀ in H₀
as required.
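The z = 0 algebra in the preceding paragraph can be checked in a few lines, using the illustrative model-0 values f₀ = 0.05 and ℓ₀ = 20:

```python
def component_luminosities(L0, f0, ell0):
    """Invert (3.29) at z = 0: with L0 = L_n [1 + (ell0 - 1) f0] and
    ell0 = L_s/L_n, return the normal and starburst luminosities."""
    Ln = L0 / (1.0 + (ell0 - 1.0) * f0)
    Ls = ell0 * Ln
    return Ln, Ls

# The mixture (1 - f0) L_n + f0 L_s must reproduce the mean luminosity L0:
Ln, Ls = component_luminosities(L0=1.0, f0=0.05, ell0=20.0)
```

Working in units of L₀ (as here) is precisely what moves the dimensional factor into ℒ₀ and makes the prefactor (3.35) independent of h₀.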
The spectral intensity (3.33) is plotted in figure 3.7, where we have set
z f = 6 as usual. (Results are insensitive to this choice, increasing by less than
5% as one moves from z_f = 3 to z_f = 6, with no further increase for z_f ≥ 6
at three-figure precision.) These plots show that the most starburst-dominated
models (1 and 2) produce the bluest EBL spectra, as might be expected. For these
two models, EBL contributions from normal galaxies remain well below those
from starbursts at all wavelengths, so that the bump in the observed spectrum at
Figure 3.7. The spectral EBL intensity of a combined population of normal and starburst
galaxies, with SEDs as shown in figure 3.5. The evolving number densities are such as to
reproduce the total comoving luminosity density seen in the HDF (figure 3.1). Results are
shown for the (a) EdS, (b) OCDM, (c) ΛCDM and (d) ΛBDM cosmologies. Also shown
are observational upper limits (full symbols and bold lines) and reported detections (open
symbols).
There appears to be a significant gap between theory and observation in all but the most
vacuum-dominated cosmology, ΛBDM (d). This is so even for the models with
the strongest luminosity density evolution (models 2 and 4). In the case of the
EdS cosmology (a), this gap is nearly an order of magnitude, as found by Yoshii
and Takahara [29]. Similar conclusions have been reached more recently from
an analysis of Subaru Deep Field data by Totani et al [36], who suggest that the
shortfall could be made up by a very diffuse, previously undetected component of
background radiation not associated with galaxies. However, other workers have
argued that existing galaxy populations are enough to explain the data, if different
assumptions are made about their SEDs [37] or if allowance is made for faint low-
surface-brightness galaxies below the detection limit of existing surveys [23].
In this connection it may be worthwhile to remember that there are preliminary
indications from the Sloan Digital Sky Survey that the traditional value of ℒ₀
(as we have used it here) may go up by as much as 40% when corrected for a
downward bias which is inherent in the way that galaxy magnitudes have been
measured at high redshift [38].
Figure 3.8. The ratio Iλ /Iλ,stat of spectral EBL intensity in expanding models to that in
equivalent static models, for the (a) EdS, (b) OCDM, (c) ΛCDM and (d) ΛBDM models.
The fact that this ratio lies between 0.3 and 0.6 in the B-band (4000–5000 Å) tells us
that expansion reduces the intensity of the night sky at optical wavelengths by a factor of
between two and three.
First, the overall level of the ratio across the spectrum is about 0.6, consistent with bolometric expectations (chapter 2).
Second, the diagonal, bottom-left to top-right orientation arises largely because
Iλ (λ0 ) drops off at short wavelengths, while Iλ,stat (λ0 ) does so at long ones. The
reason why Iλ (λ0 ) drops off at short wavelengths is that ultraviolet light reaches
us only from the nearest galaxies; anything from more distant ones is redshifted
into the optical. The reason why Iλ,stat (λ0 ) drops off at long wavelengths is
because it is a weighted mixture of the galaxy SEDs, and drops off at exactly the
same place that they do: λ₀ ∼ 3 × 10^4 Å. In fact, the weighting is heavily tilted
toward the dominant starburst component, so that the two sharp bends apparent in
figure 3.8 are essentially (inverted) reflections of features in Fs (λ0 ); namely, the
small bump at λ0 ∼ 4000 Å and the shoulder at λ0 ∼ 11 000 Å (figure 3.5).
Finally, the numbers: figure 3.8 shows that the ratio Iλ/Iλ,stat is
remarkably consistent across the B-band (4000–5000 Å) in all four cosmological
models, varying from a high of 0.46 ± 0.10 in the EdS model to a low of 0.39 ± 0.08
in the ΛBDM model. These numbers should be compared with the bolometric
result of Q/Q_stat ≈ 0.6 ± 0.1 from chapter 2. They tell us that expansion does
play a greater role in determining B-band EBL intensity than it does across the
spectrum as a whole—but not by much. If its effects were removed, the night sky
at optical wavelengths would be anywhere from twice as bright (in the EdS model)
to three times brighter (in the ΛBDM model). These results depend modestly on
the makeup of the evolving galaxy population and figure 3.8 shows that Iλ /Iλ,stat
in every case is highest for the weak-evolution model 1, and lowest for the strong-
evolution model 4. This is as we would expect, based on our discussion at
the beginning of this chapter: models with the strongest evolution effectively
‘concentrate’ their light production over the shortest possible interval in time,
so that the importance of the lifetime factor drops relative to that of expansion.
Our numerical results, however, prove that this effect cannot qualitatively alter the
resolution of Olbers’ paradox. Whether expansion reduces the intensity by a factor
of two or three, its order of magnitude is still set by the lifetime of the galaxies.
There is one factor which we have not considered in this chapter, and that is
the possibility of absorption by intergalactic dust and neutral hydrogen, both of
which are strongly absorbing at ultraviolet wavelengths. The effect of this would
primarily be to remove ultraviolet light from high-redshift galaxies and transfer
it into the infrared—light that would otherwise be redshifted into the optical and
contribute to the EBL. (Dust extinction thus acts in the opposite sense to enhanced
galaxy evolution, which effectively injects ultraviolet light at high redshifts.) The
EBL intensity Iλ (λ0 ) would therefore drop, and one could expect reductions over
the B-band in particular. The size of this effect is difficult to assess because
we have limited data on the character and distribution of dust beyond our own
galaxy. We will find indications in chapter 7, however, that the reduction could
be significant at the shortest wavelengths considered here (λ_0 ≈ 2000 Å) for the
most extreme dust models. This would further widen the gap between observed
and predicted EBL intensities noted at the end of section 3.6.
Absorption would play far less of a role in the equivalent static models,
where there is no redshift. (Ultraviolet light would still be absorbed, but the
effect would not carry over into the optical.) Therefore, the ratio I_λ/I_λ,stat would
be expected to drop in nearly direct proportion to the drop in I_λ. In this sense
Olbers may actually have had part of the solution after all—not (as he thought)
because intervening matter ‘blocks’ the light from distant sources but because
it transfers it out of the optical. The importance of this effect, which would be
somewhere below that of expansion, is, however, a separate issue from the one
we have concerned ourselves with in this chapter. What we have shown is that the
optical sky, like the bolometric one, is dark at night mainly because it has not had
time to fill up with light from distant galaxies. Cosmic expansion darkens it further
by a factor which depends on background cosmology and galaxy evolution, but
which lies between two and three in any case.
At least some of the dark matter, such as that contained in planets and ‘failed stars’
too dim to see, must be composed of ordinary atoms and molecules. The same
applies to dark gas and dust (although these can sometimes be seen in absorption,
if not emission). Such contributions are termed baryonic dark matter (BDM),
which combined with luminous matter gives the total baryonic matter
density Ω_bar ≡ Ω_lum + Ω_bdm. If our understanding of big-bang theory and the
formation of the light elements is correct, then we will see that Ω_bar cannot
represent more than 5% of the critical density.
What, then, makes up the bulk of the Universe? Besides the dark baryons, it
now appears that three other varieties of dark matter play a role. The first of these
is cold dark matter (CDM), the existence of which has been inferred from the
behaviour of visible matter on scales larger than the Solar System (e.g. galaxies
and clusters of galaxies). CDM is thought to consist of particles (sometimes
referred to as ‘exotic’ dark-matter particles) whose interactions with ordinary
matter are so weak that they are seen primarily via their gravitational influence.
While they have not been detected (and are indeed hard to detect by definition),
such particles are predicted by plausible extensions of the standard model. The
overall CDM density Ω_cdm is believed by many cosmologists to exceed that of the
baryons (Ω_bar) by at least an order of magnitude.
Another piece of the puzzle is provided by neutrinos, particles whose
existence is unquestioned but whose collective density (Ω_ν) depends on their
rest mass, which is not yet known. If neutrinos are massless, or nearly so, then
they remain relativistic throughout the history of the Universe and behave for
dynamical purposes like photons. In this case neutrino contributions combine
with those of photons (Ω_γ) to give the present radiation density as Ω_r,0 =
Ω_ν + Ω_γ. This is known to be very small. If, however, neutrinos are sufficiently
massive, then they are no longer relativistic on average, and belong together with
baryonic and cold dark matter under the category of pressureless matter, with
present density Ω_m,0 = Ω_bar + Ω_cdm + Ω_ν. These neutrinos could play a significant
dynamical role, especially in the formation of large-scale structure in the early
Universe, where they are sometimes known as hot dark matter (HDM). Recent
experimental evidence suggests that neutrinos do contribute to Ω_m,0 but at levels
below those of the baryons.
Influential only over the largest scales—those comparable to the
cosmological horizon itself—is the last component of the unseen Universe:
vacuum energy. Its many alternative names (the zero-point field, quintessence,
dark energy and the cosmological constant Λ) betray the fact that there is, at
present, no consensus as to where vacuum energy originates, or how to calculate
its energy density (Ω_Λ) from first principles. Existing theoretical estimates of
this latter quantity range over some 120 orders of magnitude, prompting many
cosmologists until very recently to disregard it altogether. Observations of distant
supernovae and CMB fluctuations, however, increasingly imply that vacuum
energy is not only real but that its present energy density (Ω_Λ,0) exceeds that
of all other forms of matter (Ω_m,0) and radiation (Ω_r,0) put together.
If this account is correct, then the real Universe hardly resembles the one
we see at night. It is composed, to a first approximation, of invisible vacuum
energy whose physical origin remains obscure. A significant minority of its total
energy density is found in CDM particles, whose ‘exotic’ nature is also not yet
understood. Close inspection is needed to make out the further contribution of
neutrinos, although this too is non-zero. And baryons, the stuff of which we
are made, are little more than a cosmic afterthought. This picture, if confirmed,
constitutes a revolution of Copernican proportions, for it is not only our location
in space which turns out to be undistinguished, but our very makeup. The ‘four
elements’ of modern cosmology are shown schematically in figure 4.1.
4.3 Baryons
Let us now go over the evidence for these four species of dark matter more
carefully, beginning with the baryons. The total present density of luminous
baryonic matter can be inferred from the observed luminosity density of the
Universe, if various reasonable assumptions are made about the fraction of
galaxies of different morphological type, their ratios of disc-type to bulge-type
stars, and so on. A recent and thorough such estimate is that of Fukugita et al [4]:
Figure 4.1. Top: the ‘four elements’ of ancient cosmology. These are widely attributed to
the Greek philosopher Empedocles, who introduced them as follows: ‘Hear first the roots
of all things: bright Zeus (fire), life-giving Hera (air), Aidoneus (earth) and Nestis (water),
who moistens the springs of mortals with her tears’ (Fragments, c. 450 BC). Bottom:
the modern counterparts. (Figure adapted from a 1519 edition of Aristotle’s De caelo
by Johann Eck.)
There are signs, however, that we are still some way from ‘precision’ values
with uncertainties of less than 10%. A recalibrated LMC Cepheid period–
luminosity relation based on a much larger sample (from the OGLE microlensing
survey) leads to considerably higher values, namely h_0 = 0.85 ± 0.05 [7].
A new, purely geometric technique, based on the use of long-baseline radio
interferometry to measure the transverse velocity of water masers [8], also implies
that the traditional calibration is off, raising all Cepheid-based estimates by
12 ± 9% [9]. This would boost the HKP value to h_0 = 0.81 ± 0.09. There
is some independent support for such a recalibration in recent observations of
eclipsing binaries [10] and ‘red-clump stars’ in the LMC [11], although this—
like most conclusions involving the value of h_0—has been disputed [12]. If the
LMC-based numbers do go up, then one would be faced with explaining the lower
h_0 values from the absolute methods. Here the choice of cosmological model
Values at the edges of this range can discriminate powerfully between different
cosmological models. To a large extent this is a function of their ages, which can
be computed by integrating (2.44) or (in the case of flat models) directly from
(2.70). Alternatively, one can integrate the Friedmann–Lemaître equation (2.40)
numerically backward in time. Since this equation defines the expansion rate
H ≡ Ṙ/R, its integral gives the scale factor R(t). We show the results in
figure 4.2 for our four standard cosmological test models (EdS, OCDM, ΛCDM
and ΛBDM). These are seen to have ages of 7h_0^−1, 8h_0^−1, 10h_0^−1 and 17h_0^−1 Gyr
respectively. Abundances of radioactive thorium and uranium in metal-poor halo
stars imply a Milky Way age of 16 ± 5 Gyr [13], setting a firm lower limit of 11 Gyr
on the age of the Universe. If h_0 lies at the upper end of its allowed range
(h_0 = 0.9), then the EdS and OCDM models would be ruled out on the basis
that they are not old enough to contain these stars (this is known as the age crisis
in low-Ω_Λ,0 models). With h_0 at the bottom of the range (h_0 = 0.6), however,
only EdS comes close to being excluded. The EdS model thus defines one edge
of the spectrum of observationally viable models.
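The model ages quoted above can be checked numerically. The sketch below integrates t_0 = (1/H_0) ∫_0^1 da/[a E(a)] by the midpoint rule; the density parameters assigned to the four test models, (Ω_m,0, Ω_Λ,0) = (1, 0), (0.3, 0), (0.3, 0.7) and (0.03, 0.97), are assumptions for this illustration rather than a verbatim reproduction of table 3.1.

```python
import math

def age_h0inv_gyr(omega_m, omega_l, n=200_000):
    """Age of the Universe in units of h0^-1 Gyr, from
    t0 = (1/H0) * integral_0^1 da / (a * E(a)), where
    E(a) = sqrt(omega_m/a^3 + omega_k/a^2 + omega_l) and
    omega_k = 1 - omega_m - omega_l (radiation neglected).
    Midpoint rule; the integrand behaves like sqrt(a) near a = 0."""
    omega_k = 1.0 - omega_m - omega_l
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * h
        e = math.sqrt(omega_m / a**3 + omega_k / a**2 + omega_l)
        total += h / (a * e)
    return 9.78 * total  # Hubble time 1/H0 = 9.78 h0^-1 Gyr

for name, om, ol in [("EdS", 1.0, 0.0), ("OCDM", 0.3, 0.0),
                     ("LCDM", 0.3, 0.7), ("LBDM", 0.03, 0.97)]:
    print(f"{name}: t0 = {age_h0inv_gyr(om, ol):.1f} h0^-1 Gyr")
```

For EdS this recovers the analytic result t_0 = (2/3)H_0^−1 ≈ 6.5h_0^−1 Gyr; the other assumed models come out near 8, 9 and 16 h_0^−1 Gyr, in reasonable agreement with the rounded figures quoted in the text.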
The ΛBDM model faces the opposite problem: figure 4.2 gives its age as
17h_0^−1 Gyr, or as high as 28 Gyr (if h_0 = 0.6). The latter number, in particular,
is well beyond the age of anything seen in our Galaxy. Of course, upper limits
on age are not as easy to set as lower ones. But following Copernican reasoning,
we do not expect to live in a galaxy which is unusually young. The age of a
‘typical’ galaxy can be estimated by recalling from chapter 3 that most galaxies
appear to have formed in the redshift range 2 ≲ z_f ≲ 4. Since R/R_0 = (1 + z)^−1
from (2.15), this corresponds to a range of scale factors 0.2 ≲ R/R_0 ≲ 0.33.
For the ΛBDM model, figure 4.2 shows that R(t)/R_0 does not reach these values
until (5 ± 2)h_0^−1 Gyr after the big bang. Thus galaxies would have an age of
about (12 ± 2)h_0^−1 Gyr in the ΛBDM model, and not more than 21 Gyr in any
case. This is close to upper limits which have been set on the age of the Universe
in models of this type, t_0 < 24 ± 2 Gyr [14]. The ΛBDM model, or something
close to it, probably defines a position opposite that of EdS on the spectrum of
observationally viable models.
For the other three models, figure 4.2 shows that galaxy formation must be
accomplished within less than 2 Gyr after the big bang. The reason this is able
to occur so quickly is that these models all contain significant amounts of CDM,
which decouples from the primordial plasma before the baryons and prepares
Figure 4.2. Evolution of the cosmological scale factor R̃(t) ≡ R(t)/R0 as a function of
time for the cosmological test models introduced in table 3.1. Triangles indicate the range
2 ≲ z_f ≲ 4 where the bulk of galaxy formation may have taken place.
potential wells for the baryons to fall into. This, as we will see, is one of the main
motivations for CDM.
Returning now to the density of luminous matter, we find with our values for
h_0 that equation (4.1) gives
This is the basis for our statement (section 4.1) that the visible Universe makes up
less than 1% of the critical density.
It is, however, conceivable that most of the baryons are not visible. How
significant could such dark baryons be? The theory of primordial big-bang
nucleosynthesis provides us with an independent method for determining the
density of total baryonic matter in the Universe, based on the assumption that
the light elements we see today were forged in the furnace of the hot big bang.
Results using different light elements are roughly consistent, which is impressive
in itself. The primordial abundances of ⁴He (by mass) and ⁷Li (relative to
H) imply a baryon density of Ω_bar = (0.011 ± 0.005)h_0^−2 [15]. By contrast,
new measurements based exclusively on the primordial D/H abundance give a
higher value with lower uncertainty: Ω_bar = (0.019 ± 0.002)h_0^−2 [16]. Since
it appears premature at present to exclude either of these results, we choose an
intermediate value of Ω_bar = (0.016 ± 0.005)h_0^−2. Combining this with our range
of h_0 values gives a total baryonic density Ω_bar ≈ 0.02–0.04 (equation (4.4)).
This agrees very well with an entirely independent estimate of Ω_bar obtained by
adding up individual mass contributions from all known repositories of baryonic
matter via their estimated mass-to-light ratios [4]. It also provides the rationale
for our choice of Ω_m,0 = 0.03 in the baryonic dark-matter model (ΛBDM). If
Ω_tot,0 is close to unity, then it follows from equation (4.4) that all the atoms and
molecules in existence combine to make up less than 5% of the Universe by mass.
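The few-per-cent figure is simple error propagation: divide the nucleosynthesis value of Ω_bar h_0^2 by the square of the allowed Hubble parameter. A minimal sketch, assuming the range h_0 = 0.6–0.9 quoted as equation (4.2):

```python
def omega_bar_range(obh2=0.016, err=0.005, h0_lo=0.6, h0_hi=0.9):
    """Allowed range of Omega_bar given Omega_bar * h0^2 = obh2 +/- err.
    The lower bound pairs the low obh2 value with the high h0,
    and vice versa for the upper bound."""
    return (obh2 - err) / h0_hi**2, (obh2 + err) / h0_lo**2

lo, hi = omega_bar_range()
central_lo = 0.016 / 0.9**2   # central value alone, h0 = 0.9
central_hi = 0.016 / 0.6**2   # central value alone, h0 = 0.6
print(f"Omega_bar = {lo:.3f}-{hi:.3f} (central value: {central_lo:.3f}-{central_hi:.3f})")
```

The central value alone gives Ω_bar ≈ 0.02–0.04; the quoted errors widen this somewhat, but the conclusion that baryons contribute only a few per cent of the critical density is robust.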
The vast majority of these baryons, moreover, are invisible. The baryonic
dark-matter fraction f_bdm ≡ Ω_bdm/Ω_bar = 1 − Ω_lum/Ω_bar is
Here we have used equations (4.1) and (4.2), together with the range of values
for Ω_bar h_0^2 from nucleosynthesis. Where could these dark baryons be? One
possibility is that they are smoothly distributed in a gaseous intergalactic medium,
which would have to be strongly ionized in order to explain why it has not left a
more obvious absorption signature in the light from distant quasars. Observations
using O VI absorption lines as a tracer of ionization suggest that the contribution of
such material to Ω_bar is at least 0.003h_0^−1 [17], comparable to Ω_lum. Simulations
are able to reproduce many observed features of the ‘forest’ of Lyman-α (Lyα)
absorbers with as much as 80–90% of the baryons in this form [18].
Dark baryonic matter could also be bound up in clumps of matter such as
substellar objects (jupiters, brown dwarfs) or stellar remnants (white, red and
black dwarfs, neutron stars, black holes). Substellar objects are not likely to make
a large contribution, given their small masses. Black holes are limited in the
opposite sense: they cannot be more massive than about 10^5 M_⊙ since this would
lead to dramatic tidal disruptions and lensing effects which are not seen [19]. The
baryonic dark-matter clumps of most interest are therefore ones whose mass is
within a few orders of magnitude of M_⊙. Gravitational microlensing constraints
based on quasar variability do not seriously limit the cosmological density of
such objects at present, setting an upper bound of 0.1 (well above Ω_bar) on their
combined contributions to Ω_m,0 in an EdS Universe [20].
The existence of dark massive compact halo objects (MACHOs) within
our own galactic halo has been confirmed by the MACHO microlensing survey of
LMC stars [21]. The inferred lensing masses lie in the range (0.15–0.9)M_⊙ and
would account for between 8% and 50% of the high rotation velocities seen in the
outer parts of the Milky Way. (This depends on the choice of halo model and is an
extrapolation from the 15 or so events actually seen.) The identity of these objects
has been hotly debated, with some authors linking them to faint, fast-moving
objects apparently detected in the HDF [22]. It is unlikely that they could be
traditional white dwarfs, since these are formed from massive progenitors whose
metal-rich ejecta we do not see [23]. Ancient, low-mass (≲ 0.6M_⊙) red dwarfs
which have cooled into invisibility are one possibility. Existing bounds on such
objects [24] depend on many extrapolations from known populations. Degenerate
‘beige dwarfs’, which might be able to form above the hydrogen-burning mass
limit of 0.08M_⊙ [25], have also been suggested.
to that of the Universe as a whole. If the latter is flat, then its M/L ratio is just
the ratio of the critical density to its luminosity density. This is (M/L)_crit,0 =
ρ_crit,0/ℒ_0 = (1040 ± 230)M_⊙/L_⊙, where we have used (2.24) for ℒ_0, (2.36) for
ρ_crit,0 and (4.2) for h_0. The corresponding value for the Milky Way is (M/L)_mw =
(21 ± 7)M_⊙/L_⊙, since the latter’s luminosity is L_mw = (2.3 ± 0.6) × 10^10 L_⊙
(in the B-band) and its total dynamical mass (including that of any unseen halo
component) is M_mw = (4.9 ± 1.1) × 10^11 M_⊙ inside 50 kpc from the motions
of galactic satellites [27]. The ratio of (M/L)_mw to (M/L)_crit,0 is thus less than
3%, and even if we multiply this by a factor of a few (to account for possible halo
mass outside 50 kpc), it is clear that galaxies like our own cannot make up more
than 10% of the critical density.
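The arithmetic behind these percentages is worth making explicit; the sketch below simply replays the numbers quoted in the paragraph (central values only, uncertainties dropped):

```python
# Milky Way mass-to-light ratio versus the critical value quoted in the text.
M_mw = 4.9e11        # dynamical mass inside 50 kpc, in solar masses
L_mw = 2.3e10        # B-band luminosity, in solar luminosities
ML_CRIT = 1040.0     # (M/L)_crit,0 in solar units, as quoted above

ml_mw = M_mw / L_mw          # ~21 solar units
fraction = ml_mw / ML_CRIT   # ~0.02: the Milky Way's M/L is ~2% of critical
print(f"(M/L)_mw = {ml_mw:.0f}, i.e. {fraction:.1%} of (M/L)_crit,0")
```

Even a factor-of-a-few correction for halo mass outside 50 kpc leaves this below 10%, which is the statement in the text.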
Most of the mass of the Universe, in other words, is spread over scales larger
than galaxies, and it is here that the arguments for CDM take on the most force.
The M/L-ratio method is, in fact, the most straightforward: one measures M/L
for a chosen region, corrects for the corresponding value in the ‘field’ and divides
by (M/L)_crit,0 to obtain Ω_m,0. Much, however, depends on the choice of region.
A widely respected application of this approach is that of the CNOC team [28],
which uses rich clusters of galaxies. These systems sample large volumes of
the early Universe, have dynamical masses which can be measured by three
independent methods (the virial theorem, x-ray gas temperatures and gravitational
lensing) and are subject to reasonably well-understood evolutionary effects. They
are found to have M/L ∼ 200M_⊙/L_⊙ on average, giving Ω_m,0 = 0.19 ± 0.06
when Ω_Λ,0 = 0 [28]. This result scales as (1 − 0.4Ω_Λ,0) [29], so that Ω_m,0 drops
to 0.11 ± 0.04 in a model with Ω_Λ,0 = 1.
The weak link in this chain of inference is that rich clusters may not be
characteristic of the Universe as a whole. Only about 10% of galaxies are found
in such systems. If individual galaxies (like the Milky Way, with M/L ≈
21M_⊙/L_⊙) are substituted for clusters, then the inferred value of Ω_m,0 drops by a
factor of ten, approaching Ω_bar and removing the need for CDM. A recent effort to
address the impact of scale on M/L arguments concludes that Ω_m,0 = 0.16 ± 0.05
(in flat models), when regions of all scales are considered from individual galaxies
to superclusters [30].
Another line of argument uses the ratio of baryonic to total gravitating mass
in clusters, or cluster baryon fraction (M_bar/M_tot). Baryonic matter is defined as
the sum of visible galaxies and hot cluster gas (the mass of which can be inferred
from the x-ray temperature). Total cluster mass is measured by one or all of the
three methods listed earlier (virial theorem, x-ray temperature or gravitational
lensing). At sufficiently large radii, the cluster may be taken as representative
of the Universe as a whole, so that Ω_m,0 = Ω_bar/(M_bar/M_tot), where Ω_bar is
fixed by big-bang nucleosynthesis (section 4.3). Applied to various clusters, this
procedure leads to Ω_m,0 = 0.3 ± 0.1 [31]. This result is probably an upper limit,
partly because baryon enrichment is more likely to take place inside the cluster
than outside it, and partly because dark baryonic matter (such as MACHOs) is not taken
into account; this would raise M_bar and lower Ω_m,0.
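The cluster baryon fraction argument is a one-line rescaling of Ω_bar. A sketch with illustrative inputs (the baryon fraction of ~0.1 and the mid-range Ω_bar ≈ 0.03 below are assumptions for the example, not values taken from [31]):

```python
def omega_m_from_clusters(omega_bar, baryon_fraction):
    """If clusters fairly sample the cosmic mix of baryonic and dark matter,
    Omega_m,0 = Omega_bar / (M_bar / M_tot)."""
    return omega_bar / baryon_fraction

# Illustrative inputs: Omega_bar ~ 0.03 and a cluster baryon fraction of ~10%
print(omega_m_from_clusters(0.03, 0.10))  # of order 0.3, as quoted in the text
```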
A final, recent entry into the list of purely empirical methods uses the
separation of radio galaxy lobes as standard rulers in a variation on the classical
angular size–distance test in cosmology. The widths, propagation velocities and
magnetic field strengths of these lobes have been calibrated for 14 radio galaxies
with the aid of long-baseline radio interferometry. This leads to the constraints
Ω_m,0 < 0.10 (for Ω_Λ,0 = 0) and Ω_m,0 = 0.10^{+0.25}_{−0.10} for flat models with
Ω_Λ,0 = 1 − Ω_m,0 [32].
Let us consider, next, the measurements of m,0 based on the idea that large-
scale structure formed via gravitational instability from a Gaussian spectrum of
primordial density fluctuations (GI theory for short). These are somewhat circular,
in the sense that such a process could not have taken place as it did unless Ω_m,0
is considerably larger than Ω_bar. But inasmuch as GI theory is the only structure-
formation theory we have which is both fully worked out and in good agreement
with observation (with some potential difficulties on small scales [33, 34]), this
way of measuring Ω_m,0 should be taken seriously.
According to GI theory, formation of large-scale structure is more or less
complete by z ≈ Ω_m,0^−1 − 2 [35]. Therefore, one way to constrain Ω_m,0 is to look
for number density evolution in large-scale structures such as galaxy clusters.
In a low-matter-density Universe, this would be relatively constant out to at least
z ∼ 1, whereas in a high-matter-density Universe one would expect the abundance
of clusters to drop rapidly with z because they are still in the process of forming.
The fact that massive clusters are seen at redshifts as high as z = 0.83 leads to
limits Ω_m,0 = 0.17^{+0.14}_{−0.09} for Ω_Λ,0 = 0 models and Ω_m,0 = 0.22^{+0.13}_{−0.07} for flat
ones [36].
Studies of the Fourier power spectra P(k) of the distributions of galaxies
or other structures can be used in a similar way. In GI theory, structures of a given
mass form by the collapse of large volumes in a low-matter-density Universe or
smaller volumes in a high-matter-density Universe. Thus Ω_m,0 can be constrained
by changes in P(k) between one redshift and another. Comparison of the mass
power spectrum of Lyα absorbers at z ≈ 2.5 with that of local galaxy clusters at
z = 0 has led to an estimate of Ω_m,0 = 0.46^{+0.12}_{−0.10} for Ω_Λ,0 = 0 models [37].
This result goes as approximately (1 − 0.4Ω_Λ,0), so that the central value of Ω_m,0
drops to 0.34 in a flat model and 0.28 if Ω_Λ,0 = 1. One can also constrain
Ω_m,0 from the local galaxy power spectrum alone, although this involves some
assumptions about the extent to which ‘light traces mass’ (i.e. to which visible
galaxies trace the underlying density field). Early results from the 2dF survey
give Ω_m,0 = (0.20 ± 0.03)h_0^−1 for flat models [38], or Ω_m,0 = 0.27 ± 0.07 with
our range of h_0 values as given by (4.2).
A final group of measurements, and one which has traditionally yielded the
highest estimates of Ω_m,0, comes from the analysis of galaxy peculiar velocities.
These are produced by the gravitational potential of locally over- or under-
dense regions relative to the mean matter density. The power spectra of the
velocity and density distributions can be related to each other in the context of
GI theory in a way which depends explicitly on m,0 . Tests of this kind probe
relatively small volumes and are hence insensitive to Ω_Λ,0, but they can depend
significantly on h_0 as well as the spectral index n of the density distribution.
In [39], where the latter is normalized to CMB fluctuations, results take the
form Ω_m,0 h_0^{1.3} n^2 ≈ 0.33 ± 0.07 or (taking n = 1 and using our values of h_0)
Ω_m,0 ≈ 0.48 ± 0.15.
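Several of the constraints above are quoted as degenerate combinations with h_0 and must be rescaled before comparison. The sketch below applies a central h_0 = 0.75, the midpoint of the range adopted earlier (an assumption for this example; the quoted error bars also fold in the width of the h_0 range):

```python
def rescale_constraint(product, h0, power):
    """Convert a constraint of the form Omega_m,0 * h0**power = product
    into Omega_m,0 at a chosen value of h0."""
    return product / h0**power

H0 = 0.75  # midpoint of the adopted range h0 = 0.6-0.9

# 2dF galaxy power spectrum: Omega_m,0 * h0 = 0.20
print(round(rescale_constraint(0.20, H0, 1.0), 2))  # 0.27
# Peculiar velocities (with n = 1): Omega_m,0 * h0^1.3 = 0.33
print(round(rescale_constraint(0.33, H0, 1.3), 2))  # 0.48
```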
In summarizing these results, one is struck by the fact that arguments based
on gravitational instability (GI) theory favour values of Ω_m,0 ≈ 0.2 and higher,
whereas purely empirical arguments require Ω_m,0 ≈ 0.4 and lower. The latter
are, in fact, compatible in some cases with values of Ω_m,0 as low as Ω_bar, raising
the possibility that CDM might not, in fact, be necessary. The results from GI-
based arguments, however, cannot be stretched this far. What is sometimes done
is to ‘go down the middle’ and blend the results of both kinds of arguments into
a single bound of the form Ω_m,0 ≈ 0.3 ± 0.1. Any such bound with Ω_m,0 > 0.05
constitutes a proof of the existence of CDM, since Ω_bar ≲ 0.04 from (4.4).
(Massive neutrinos do not affect this argument, as we will indicate in section 4.5.)
A more conservative interpretation of the data, bearing in mind the full range of
Ω_m,0 values implied above (Ω_bar ≲ Ω_m,0 ≲ 0.6), is

0 ≲ Ω_cdm ≲ 0.6.    (4.6)
Values of Ω_cdm at the bottom of this range, however, carry with them
the (uncomfortable) implication that the conventional picture of structure
formation via gravitational instability is incomplete. Conversely, if our current
understanding of structure formation is correct, then CDM must exist and
Ω_cdm > 0.
The question, of course, becomes moot if CDM is discovered in the
laboratory. From a large field of theoretical particle candidates, two have emerged
as frontrunners: the axion and the supersymmetric weakly-interacting massive
particle (WIMP). The plausibility of both candidates rests on three properties.
Either one, if it exists, is:
(1) weakly interacting (i.e. ‘noticed’ by ordinary matter primarily via its
gravitational influence);
(2) cold (i.e. non-relativistic in the early Universe, when structures begin to
form); and
(3) expected on theoretical grounds to have a collective density within a few
orders of magnitude of the critical one.
We will return to these particles in chapters 6 and 8 respectively.
4.5 Neutrinos
Since neutrinos indisputably exist in great numbers (their number density n_ν is
3/11 times that of the CMB photons, or 112 cm^−3 per species [40]), they have
been leading dark-matter candidates for longer than either the axion or the WIMP.
They gained prominence in 1980 when teams in the USA and the Soviet Union
both reported evidence of non-zero neutrino rest masses. While these claims did
not stand up, a new round of experiments now indicates once again that m_ν (and
hence Ω_ν) > 0.
Dividing n_ν m_ν by the critical density (2.36), one obtains

Ω_ν = (Σ m_ν c^2 / 93.8 eV) h_0^−2    (4.7)

where the sum is over the three neutrino species. Here, we follow convention
and specify particle masses in units of eV/c^2, where 1 eV/c^2 = 1.602 ×
10^−12 erg/c^2 = 1.783 × 10^−33 g. The calculations in this section are strictly
valid only for m_ν c^2 ≲ 1 MeV. More massive neutrinos with m_ν c^2 ∼ 1 GeV were
once considered as CDM candidates but are no longer viable, since experiments
at the LEP collider rule out additional neutrino species with masses up to at least
half of that of the Z^0 (m_Z0 c^2 = 91 GeV).
Current laboratory upper bounds on neutrino rest masses are m_νe c^2 < 3 eV,
m_νµ c^2 < 0.19 MeV and m_ντ c^2 < 18 MeV, so it would appear feasible in principle
for these particles to close the Universe. In fact m_νµ and m_ντ are limited far more
stringently by (4.7) than by laboratory bounds. Perhaps the best-known theory
along these lines is that of Sciama [41], who postulated a population of 29 eV
τ neutrinos which, if they decayed into lighter neutrinos on suitable timescales,
could be shown to solve a number of longstanding astrophysical puzzles related to
interstellar and intergalactic ionization. Equation (4.7) shows that such neutrinos
would also account for much of the dark matter, contributing a minimum collective
density of Ω_ν ≳ 0.38 (assuming as usual that h_0 ≲ 0.9). We will consider the
decaying-neutrino hypothesis in more detail in chapter 7.
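Equation (4.7) turns such checks into one-liners. The helper below uses the 93.8 eV coefficient from the text and takes h_0 at its upper limit of 0.9; it reproduces the Sciama figure above and the Super-Kamiokande lower bound discussed later in this section:

```python
def omega_nu(sum_mass_ev, h0):
    """Collective neutrino density from equation (4.7):
    Omega_nu = (sum of m_nu c^2 over species / 93.8 eV) / h0^2."""
    return sum_mass_ev / 93.8 / h0**2

# Sciama's 29 eV tau neutrino, with h0 at its upper limit of 0.9:
print(round(omega_nu(29.0, 0.9), 2))    # 0.38, the minimum quoted in the text
# A single hierarchical mass m c^2 = 0.02 eV (Super-Kamiokande lower limit):
print(round(omega_nu(0.02, 0.9), 4))    # 0.0003
```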
Strong upper limits can be set on Ω_ν within the context of the gravitational
instability picture. Neutrinos constitute hot dark matter (i.e. they are relativistic
when they decouple from the primordial fireball) and are therefore able to stream
freely out of density perturbations in the early Universe, erasing them before they
have a chance to grow. Good agreement with observations of large-scale structure
can be achieved in models with Ω_ν as high as 0.2, but only if Ω_bar + Ω_cdm = 0.8
and h_0 = 0.5 [42]. Statistical exploration of a larger set of (flat) models leads
to an upper limit on m_ν which is roughly proportional to the CDM density Ω_cdm
and peaks (when Ω_cdm = 0.6) at m_ν c^2 ≲ 5.5 eV [43]. Since 0 ≲ Ω_cdm ≲ 0.6
from (4.6), this implies

m_ν c^2 ≲ (9.2 eV) Ω_cdm.    (4.8)
If h_0 ≳ 0.6 and Ω_cdm ≲ 0.6, and if structure grows via gravitational instability as
is generally assumed, then equations (4.7) and (4.8) together allow a collective
neutrino density as high as Ω_ν = 0.16. Thus neutrino contributions could,
in principle, be as much as four to ten times as high as those from baryons,
equation (4.4). This is perfectly consistent with the neglect of relativistic matter
during the matter-dominated era (section 2.5). Neutrinos, while relativistic
at decoupling, lose energy and become non-relativistic on timescales t_nr ≈
190 000 yr (m_ν c^2/eV)^−2 [44]. This is well before the present epoch for neutrinos
which are massive enough to be of interest.
New lower limits on Ω_ν have been reported in recent years from atmospheric
(Super-Kamiokande [45]), Solar (SAGE [46], Homestake [47], GALLEX [48]),
and accelerator-based (LSND [49]) neutrino experiments. In each case it
appears that two species are converting into each other in a process known
as neutrino oscillation, which can only take place if both have non-zero rest
masses. The strongest evidence comes from Super-Kamiokande, which has
reported oscillations between τ and µ neutrinos with 5 × 10^−4 eV^2 < Δm^2_τµ c^4 <
6 × 10^−3 eV^2, where Δm^2_τµ ≡ |m^2_ντ − m^2_νµ| [45]. Oscillation experiments measure
the square of the mass difference between two species, and cannot fix the mass
of any one species unless they are combined with other evidence such as that
from neutrinoless double-beta decay [50]. Nevertheless, if neutrino masses
are hierarchical, like those of other fermions, then m_ντ ≫ m_νµ and the Super-
Kamiokande results imply that m_ντ c^2 > 0.02 eV. In this case it follows from (4.7)
that Ω_ν ≳ 0.0003 (with h_0 ≲ 0.9 as usual). If, instead, neutrino masses are nearly
degenerate, then Ω_ν could, in principle, be much larger than this but will still lie
below the upper bound imposed by structure formation. Putting (4.8) into (4.7),
we conclude that

0.0003 ≲ Ω_ν < 0.3 Ω_cdm.    (4.9)
With all these reasons to take this term seriously, why have many
cosmologists since Einstein set Λ = 0? Mathematical simplicity is one
explanation. Another is the smallness of most effects associated with the Λ-term.
Einstein himself set Λ = 0 in 1931 ‘for reasons of logical economy’, because
he saw no hope of measuring this quantity experimentally at the time. He is
often quoted as adding that its introduction in 1917 was the biggest blunder of his
life. This comment (which does not appear in Einstein’s writings but was rather
attributed to him by Gamow [53]) is sometimes interpreted as a rejection of the
very idea of a cosmological constant. It more likely represents Einstein’s rueful
recognition that, by invoking the Λ-term solely to obtain a static solution of the
field equations, he had narrowly missed what would surely have been one of the
greatest triumphs of his life: the prediction of cosmic expansion.
The relation between Λ and the energy density of the vacuum has led to a
quandary in more recent times: the fact that ρ_Λ as estimated in the context of
quantum field theories such as quantum chromodynamics (QCD), electroweak
(EW) and grand unified theories (GUTs) implies impossibly large values of
Ω_Λ,0 (table 4.1). These theories have been very successful in the microscopic
realm. Here, however, they are in gross disagreement with the observed facts
of the macroscopic world, which tell us that Ω_Λ,0 cannot be much larger than
order unity. This cosmological-constant problem has been reviewed by several
workers but there is no consensus on how to solve it [54]. It is undoubtedly
another reason why many cosmologists have preferred to set = 0, rather than
deal with a parameter whose microphysical origins are still unclear.
Setting to zero, however, is not really an appropriate response because
observations indicate that ,0 , while nowhere near the size suggested by
table 4.1, is nevertheless greater than zero. The cosmological constant problem
has therefore become more baffling, in that an explanation of this parameter must
apparently contain a cancellation mechanism which is not only good to some 44
(or 122) decimal places, but which begins to fail at precisely the 45th (or 123rd).
One suggestion for understanding the possible nature of such a cancellation
has been to treat the vacuum energy field literally as an Olbers-type summation
of contributions from different places in the Universe [55]. It can then be
handled with the same formalism that we have developed in chapters 2 and 3
for background radiation. This has the virtue of framing the problem in concrete
terms, and raises some interesting possibilities, but does not in itself explain why
the energy density inherent in such a field does not gravitate in the conventional
way [56]. Another idea is that theoretical expectations for the value of Λ
might refer only to the latter’s primordial or ‘bare’ value, which could have
been progressively ‘screened’ over time. The cosmological constant would thus
become a variable cosmological term (see [57] for a review). In such a scenario
the present ‘low’ value of Ω_Λ,0 would simply reflect the fact that the Universe is
old. In general, however, this means modifying Einstein’s field equations (2.1)
and/or introducing new forms of matter such as scalar fields. We will look at this
suggestion in more detail in chapter 5.
A third possibility occurs in higher-dimensional gravity, where the
cosmological constant can arise as an artefact of dimensional reduction (i.e.
in extracting the appropriate four-dimensional limit from the theory). In such
theories it is possible that the ‘effective’ Λ_4 could be small while its N-dimensional
analogue Λ_N is large [58]. We will consider some aspects of
higher-dimensional gravity in chapter 9. Some workers, finally, have argued
that a Universe in which Λ was too large might be incapable of giving rise to
intelligent observers. That is, the fact of our own existence might already ‘require’
Ω_Λ,0 ∼ 1 [59]. This is an application of the anthropic principle whose status,
however, remains unclear.
Let us pass now to what is known about the value of Ω_Λ,0 from cosmology.
It is widely believed that the Universe originated in a big-bang singularity rather
than passing through a ‘big bounce’ at the beginning of the current expansionary
phase. By differentiating the Friedmann–Lemaître equation (2.40) and setting
both the expansion rate and its time derivative to zero, one finds that this implies
an upper limit (sometimes called the Einstein limit Ω_Λ,E) on Ω_Λ,0 as a function
of Ω_m,0. Models with Ω_Λ,0 = Ω_Λ,E, known as Eddington–Lemaître models, are
asymptotic to Einstein’s original static model in the infinite past. The quantity
Ω_Λ,E can be expressed as follows for cases with Ω_m,0 < 0.5 [60]:

Ω_Λ,E = 1 − Ω_m,0 + (3/2) Ω_m,0^{2/3} { [1 − Ω_m,0 + (1 − 2Ω_m,0)^{1/2}]^{1/3}
        + [1 − Ω_m,0 − (1 − 2Ω_m,0)^{1/2}]^{1/3} }.    (4.10)

For Ω_m,0 = 0.3 the requirement that Ω_Λ,0 < Ω_Λ,E implies Ω_Λ,0 < 1.71, a limit
that tightens to Ω_Λ,0 < 1.16 for Ω_m,0 = 0.03.
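The two limits just quoted can be checked by evaluating (4.10) directly; a minimal sketch (the function name is ours):

```python
# Einstein limit on the vacuum density parameter, equation (4.10),
# valid for Omega_m,0 < 0.5: a direct transcription offered as a check.
def omega_lambda_einstein(om):
    """Upper limit Omega_Lambda,E as a function of Omega_m,0 (< 0.5)."""
    s = (1.0 - 2.0 * om) ** 0.5     # real and <= 1 - om for om < 0.5
    return (1.0 - om + 1.5 * om ** (2.0 / 3.0)
            * ((1.0 - om + s) ** (1.0 / 3.0) + (1.0 - om - s) ** (1.0 / 3.0)))

print(round(omega_lambda_einstein(0.30), 2))  # 1.71
print(round(omega_lambda_einstein(0.03), 2))  # 1.16
```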
A slightly stronger constraint can be formulated (for closed models) in
terms of the antipodal redshift (z_a). The antipodes are defined as the set of
points located on the ‘other side of the Universe’ at χ = π, where dχ (the
radial coordinate distance element) is given by dχ = (1 − kr²)^{−1/2} dr from
the metric (2.2). Using (2.10) and (2.18) this can be put into the form dχ =
−(c/H0R0) dz/H̃(z), which can be integrated from z = 0 to z_a with the help of
(2.39) and (2.40). Gravitational lensing of sources beyond the antipodes cannot
give rise to normal (multiple) images [61], so the redshift z_a of the antipodes
must exceed that of the most distant normally-lensed object, currently a galaxy at
z = 4.92 [62]. Requiring that z_a > 4.92 leads to the upper bound Ω_Λ,0 < 1.57
if Ω_m,0 = 0.3. This tightens to Ω_Λ,0 < 1.15 for ΛBDM-type models with
Ω_m,0 = 0.03.
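The integration described above is easy to carry out numerically; a minimal sketch, assuming the closed-model expansion rate H̃(z) of (2.40) and simple Simpson quadrature (function names are ours):

```python
import math

def chi_antipodes(z, om, ol, n=2000):
    """chi(z) = sqrt(om + ol - 1) * integral_0^z dz'/H~(z') for a closed model,
    with H~(z) = sqrt(om(1+z)^3 + ol + (1-om-ol)(1+z)^2) assumed from (2.40).
    Simpson's rule with n (even) subintervals."""
    f = lambda zz: 1.0 / math.sqrt(om*(1+zz)**3 + ol + (1-om-ol)*(1+zz)**2)
    h = z / n
    s = f(0.0) + f(z)
    s += 4.0 * sum(f((2*i - 1) * h) for i in range(1, n//2 + 1))
    s += 2.0 * sum(f(2*i * h) for i in range(1, n//2))
    return math.sqrt(om + ol - 1.0) * s * h / 3.0

# With Omega_m,0 = 0.3 the antipodes sit at z = 4.92 (chi = pi) just as
# Omega_Lambda,0 reaches ~1.57, reproducing the bound in the text:
print(chi_antipodes(4.92, 0.3, 1.57) / math.pi)  # close to 1
print(chi_antipodes(4.92, 0.3, 1.20) / math.pi)  # well below 1
```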
Gravitational lensing also leads to a different and considerably more
stringent upper limit (for all models) based on lensing statistics. The increase in
path length to a given redshift in vacuum-dominated models (relative to, say, the
EdS model) means that there are more sources to be lensed, and presumably more
lensed objects to be seen. The observed frequency of lensed quasars, however,
is rather modest, leading to the bound Ω_Λ,0 < 0.66 for flat models [63]. Dust
could hide distant sources [64]. However, radio lenses should be far less affected
and these give only slightly weaker constraints: Ω_Λ,0 < 0.73 (for flat models)
or Ω_Λ,0 ≲ 0.4 + 1.5 Ω_m,0 (for curved ones) [65]. Uncertainties arise in lens
modelling and evolution as well as source redshifts and survey completeness. A
recent conservative limit from radio lenses is Ω_Λ,0 < 0.95 for flat models [66].
Tentative lower limits have been set on Ω_Λ,0 using faint galaxy number
counts. This method is based on a similar premise to that of lensing statistics:
the enhanced comoving volume at large redshifts in vacuum-dominated models
should lead to greater (projected) galaxy number densities at faint magnitudes. In
practice, it has proven difficult to disentangle this effect from galaxy luminosity
evolution. Early claims of a best fit at Ω_Λ,0 ≈ 0.9 [67] have been disputed on the
basis that the steep increase seen in numbers of blue galaxies is not matched in
the K-band, where luminosity evolution should be less important [68]. Attempts
to account for evolution in a comprehensive way have subsequently led to a lower
limit of Ω_Λ,0 > 0.53 [69] and, most recently, a reasonable fit (for flat models)
with a vacuum density parameter of Ω_Λ,0 = 0.8 [70].
Other evidence for a significant Ω_Λ,0 term has come from numerical
simulations of large-scale structure formation. Figure 4.3 shows the evolution
of massive structures between z = 3 and z = 0 in simulations by the VIRGO
Consortium [71]. Of the two models shown, ΛCDM (top row) is qualitatively
closer to the observed distribution of galaxies in the real Universe than EdS
(‘SCDM’, bottom row). The improvement is especially marked at higher redshifts
(left-hand panels). Power spectrum analysis, however, reveals that the match is
not particularly good in either case [71]. This may be a reflection of bias (i.e. of
a systematic discrepancy between the distributions of mass and light). Different
combinations of Ω_m,0 and Ω_Λ,0 might also provide better fits. Simulations of
closed ΛBDM-type models would be of particular interest [72, 73].
The first measurements to put both lower and upper bounds on Ω_Λ,0 have
come from Type Ia supernovae (SNIa). These objects have luminosities which
are both high and consistent, and they are thought not to evolve significantly
with redshift. All of these properties make them ideal for use in the classical
magnitude–redshift relation. Two independent groups (HZT [74] and SCP [75])
have reported a systematic dimming of SNIa at z ≈ 0.5 by about 0.25 magnitudes

Figure 4.3. Numerical simulations of structure formation. In the top row is the ΛCDM
model with Ω_m,0 = 0.3, Ω_Λ,0 = 0.7 and h0 = 0.7. The bottom row shows the EdS
(‘SCDM’) model with Ω_m,0 = 1, Ω_Λ,0 = 0 and h0 = 0.5. The panel size is comoving
with the Hubble expansion, and time runs from left (z = 3) to right (z = 0). (Images
courtesy of J Colberg and the VIRGO Consortium.)

relative to that expected in an EdS model, suggesting that space at these redshifts
is ‘stretched’ by a significant vacuum term. The 2σ confidence intervals from
both studies may be summarized as 0.8 Ω_m,0 − 0.6 Ω_Λ,0 = −0.2 ± 0.3, or

Ω_Λ,0 = (4/3) Ω_m,0 + 1/3 ± 1/2.    (4.11)

From this result it follows that Ω_Λ,0 > 0 for any Ω_m,0 > 1/8, and that Ω_Λ,0 > 0.2
if Ω_m,0 = 0.3. These numbers are in disagreement with both the EdS and OCDM
models. To extract limits on Ω_Λ,0 alone, we recall that Ω_m,0 ≳ Ω_bar ≈ 0.02
(section 4.3) and Ω_m,0 ≲ 0.6 (section 4.4). If we combine these conservative
bounds with a single limit, Ω_m,0 = 0.31 ± 0.29, then (4.11) gives
The Universe is therefore spatially flat or very close to it. To extract a value
for Ω_Λ,0 alone, we can do as in the SNIa case and substitute our matter-density
bounds (Ω_m,0 = 0.31 ± 0.29) into (4.13) to obtain
This is consistent with (4.12), but has error bars which have been reduced
by 50% and are now due almost entirely to the uncertainty in Ω_m,0. This
measurement is impervious to most of the uncertainties of the earlier ones,
because it leapfrogs ‘local’ systems whose interpretation is complex (supernovae,
galaxies, and quasars), going directly back to the radiation-dominated era when
physics was simpler. Equation (4.14) is sufficient to establish that Ω_Λ,0 > 0.4,
and hence that vacuum energy not only exists, but may very well dominate the
energy density of the Universe.
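The limits drawn from the supernova relation (4.11) above are simple arithmetic, and can be checked directly; a minimal sketch (function name ours):

```python
def omega_lambda_snia(om):
    """2-sigma band for Omega_Lambda,0 from (4.11): (4/3)Om + 1/3 +/- 1/2.
    Returns (lower, upper)."""
    central = 4.0 * om / 3.0 + 1.0 / 3.0
    return central - 0.5, central + 0.5

lo, hi = omega_lambda_snia(0.3)
print(lo)   # ~ 0.233, confirming Omega_Lambda,0 > 0.2 at Omega_m,0 = 0.3
# The lower edge crosses zero at Omega_m,0 = 1/8, the threshold in the text:
print(omega_lambda_snia(1.0 / 8.0)[0])   # ~ 0
```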
The CMB power spectrum favours vacuum-dominated models but is not yet
resolved with sufficient precision to discriminate (on its own) between models
which have exactly the critical density (like ΛCDM) and those which are close
to the critical density (like ΛBDM). As it stands, the location of the first peak
in these data actually hints at ‘marginally closed’ models, although the implied
departure from flatness is not statistically significant and could also be explained
in other ways [85].
Much attention is focused on the second- and higher-order peaks of the
spectrum, which contain valuable clues about the matter component. Odd-
numbered peaks are produced by regions of the primordial plasma which have
been maximally compressed by infalling material, and even ones correspond to
maximally rarefied regions which have rebounded due to photon pressure. A high
Figure 4.4. Observational constraints on the values of Ω_m,0 and Ω_Λ,0 from supernovae
data (SNIa) and the power spectrum of CMB fluctuations (BOOMERANG, MAXIMA).
Shown are 68%, 95% and 99.7% confidence intervals inferred both separately and jointly
from the data. The dashed line indicates spatially flat models with k = 0; models on the
upper right are closed while those on the lower left are open. (Reprinted from [84] by
permission of A Jaffe and P L Richards.)
Figure 4.5. The evolution of Ω_m(t) and Ω_Λ(t) in vacuum-dominated models. The
left-hand panel (a) shows a single model (ΛCDM) over 20 powers of time in either
direction (after a similar plot against scale factor R in [54]). Plotted this way, we are
seen to live at a very special time (marked ‘Now’). Standard nucleosynthesis (t_nuc ∼ 1 s)
and matter–radiation decoupling times (t_dec ∼ 10¹¹ s) are included for comparison. The
right-hand panel (b) shows both the ΛCDM and ΛBDM models on a linear rather than
logarithmic scale, for the first 100h0⁻¹ Gyr after the big bang (i.e. the lifetime of the stars
and galaxies).
R̃(t) and H̃(t). Results for the ΛCDM model are illustrated in figure 4.5(a). At
early times, the cosmological term is insignificant (Ω_Λ → 0 and Ω_m → 1), while
at late ones it entirely dominates and the Universe is effectively empty (Ω_Λ → 1
and Ω_m → 0).
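The crossover between these two regimes is easy to trace explicitly; a minimal sketch for a flat ΛCDM model, assuming Ω_m,0 = 0.3 and Ω_Λ,0 = 0.7 and neglecting radiation:

```python
# Evolution of the matter and vacuum density parameters with redshift
# in a flat LambdaCDM model (a sketch; radiation is neglected).
def density_parameters(z, om0=0.3, ol0=0.7):
    e2 = om0 * (1 + z)**3 + ol0              # H~^2(z) for a flat model
    return om0 * (1 + z)**3 / e2, ol0 / e2   # (Omega_m(z), Omega_Lambda(z))

print(density_parameters(1000.0))   # matter-dominated: ~ (1, 0)
print(density_parameters(0.0))      # today: (0.3, 0.7)
# Matter-vacuum equality sits at z = (ol0/om0)^(1/3) - 1, i.e. very recently:
print((0.7 / 0.3)**(1.0 / 3.0) - 1.0)   # ~ 0.33
```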
What is remarkable in this figure is the location of the present (marked
‘Now’) in relation to the values of Ω_m and Ω_Λ. We have apparently arrived
on the scene at the precise moment when these two parameters are in the midst
of switching places. (We have not considered the radiation density Ω_r here, but
similar considerations apply to it.) This has come to be known as the coincidence
problem, and Carroll [54] has aptly described such a Universe as ‘preposterous’,
writing: ‘This scenario staggers under the burden of its unnaturalness, but
nevertheless crosses the finish line well ahead of any of its competitors by
agreeing so well with the data’. Cosmology may indeed be moving toward a
position like that of particle physics, in which there is a standard model which
accurately accounts for all observed phenomena but which appears to be founded
on a series of finely-tuned parameter values which leave one with the distinct
impression that the underlying reality has not yet been grasped.
Figure 4.5(b) is a close-up view of figure 4.5(a), with one difference: it is
plotted on a linear scale in time for the first 100h0⁻¹ Gyr after the big bang, rather
than a logarithmic scale over 10^±20 h0⁻¹ Gyr. The rationale for this is simple:
100 Gyr is approximately the lifespan of the galaxies (as determined by their
main-sequence stellar populations). One would not, after all, expect observers
to appear on the scene long after all the galaxies had disappeared or, for that
matter, in the early stages of the expanding fireball. Seen from the perspective
of figure 4.5(b), the coincidence, while still striking, is perhaps no longer so
preposterous. However, the ΛCDM model still appears fine-tuned, in that ‘Now’
follows rather quickly on the heels of the epoch of matter–vacuum equality. In the
ΛBDM model, Ω_m,0 and Ω_Λ,0 are closer to the cosmological time-averages of
Ω_m(t) and Ω_Λ(t), namely zero and one respectively. In such a picture it might be
easier to believe that we have not arrived on the scene at a special time but merely
a late one. Whether or not this is a helpful way to approach the coincidence
problem is, to a certain extent, a matter of taste.
To summarize the contents of this chapter: we have learned that what can
be seen with our telescopes constitutes no more than 1% of the density of the
Universe. The rest is dark. A small portion (no more than 5%) of this dark
matter is made up of ordinary baryons. Many observational arguments hint at
the existence of a second, more exotic species known as cold dark matter (though
they do not quite establish its existence unless they are combined with ideas about
the formation of large-scale structure). Experiments also imply the existence of
a third dark-matter species, the massive neutrino, although its role appears to
be more limited. Finally, all these components are dwarfed in importance by a
newcomer whose physical origin remains shrouded in obscurity: the energy of
the vacuum.
In the chapters that follow, we will explore the leading contenders for the
dark matter in more depth: vacuum energy, axions, neutrinos, supersymmetric
weakly-interacting massive particles (WIMPs) and objects like black holes.
Our focus will be on what can be learned about each one from its possible
contributions to the extragalactic background light at all wavelengths, from the
radio region to the γ -ray. In the spirit of Olbers’ paradox and what we have done
so far, our main question for each candidate will thus be: Just how dark is it?
References
[1] Kapteyn J C 1922 Astrophys. J. 55 302
[2] Oort J H 1932 Bull. Astron. Inst. Neth. 6 249
[3] Zwicky F 1937 Astrophys. J. 86 217
[4] Fukugita M, Hogan C J and Peebles P J E 1998 Astrophys. J. 503 518
[5] Freedman W L et al 2001 Astrophys. J. 553 47
[6] Peacock J A 1999 Cosmological Physics (Cambridge: Cambridge University Press)
pp 141–5
[7] Willick J A and Batra P 2001 Astrophys. J. 548 564
[8] Herrnstein J R et al 1999 Nature 400 539
[9] Maoz E et al 1999 Nature 401 351
[10] Guinan E F et al 1998 Astrophys. J. 509 L21
The vacuum

The variable cosmological ‘constant’
specialize to the case of vacuum decay into radiation. Our objective is to assess
the impact of such a process on both the bolometric and spectral EBL intensity,
using the same formalism that was applied in chapters 2 and 3 to the light from
ordinary galaxies.
∂_µ Λ = (8πG/c⁴) ∇^ν T_µν.    (5.1)

Assuming that matter and energy (as contained in T_µν) are conserved, it follows
that ∂_µ Λ = 0 and, hence, that Λ = constant.
In variable-Λ theories, one must therefore do one of three things: abandon
matter–energy conservation, modify general relativity or stretch the definition of
what is conserved. The first of these routes was explored in 1933 by Bronstein [1],
who sought to connect energy non-conservation with the observed fact that the
Universe is expanding rather than contracting. (Einstein’s equations actually
allow both types of solutions, and a convincing explanation for this cosmological
arrow of time has yet to be given.) Bronstein was executed in Stalin’s Soviet
Union a few years later and his work is not widely known [2].
Today, few physicists would be willing to sacrifice energy conservation
outright. Some, however, would be willing to modify general relativity, or to
consider new forms of matter and energy. Historically, these two approaches have
sometimes been seen as distinct, with one being a change to the ‘geometry of
nature’ while the other is concerned with the material content of the Universe.
The modern tendency, however, is to regard them as equivalent. This viewpoint
is best personified by Einstein, who in 1936 compared the left-hand (geometrical)
and right-hand (matter) sides of his field equations to ‘fine marble’ and ‘low-
grade wooden’ wings of the same house [3]. In a more complete theory, he
argued, matter fields of all kinds would be seen to be just as geometrical as the
gravitational one.
Let us see how this works in one of the oldest and simplest variable-Λ
theories: a modification of general relativity in which the metric tensor g_µν is
supplemented by a scalar field ϕ whose coupling to matter is determined by a
parameter ω. The idea for such a theory goes back to Jordan in 1949 [4], Fierz
in 1956 [5] and Brans and Dicke in 1961 [6]. In those days, new scalar fields
were not introduced into theoretical physics as routinely as they are today, and
all these authors sought to associate ϕ with a known quantity. Various lines of
argument (notably Mach’s principle) pointed to an identification with Newton’s
gravitational ‘constant’ such that G ∼ 1/ϕ. By 1968 it was appreciated that Λ
and ω too would depend on ϕ in general [7]. The original Brans–Dicke theory
(with Λ = 0) has thus been extended to generalized scalar–tensor theories in
which Λ = Λ(ϕ) [8]; Λ = Λ(ϕ), ω = ω(ϕ) [9]; and Λ = Λ(ϕ, ψ), ω = ω(ϕ),
where ψ ≡ ∂^µϕ ∂_µϕ [10]. Let us consider the last and most general of these cases,
for which the field equations are found to read:
ℛ_µν − (1/2) ℛ g_µν + (1/ϕ)[∇_µ(∂_ν ϕ) − □ϕ g_µν]
    + [ω(ϕ)/ϕ²][∂_µϕ ∂_νϕ − (1/2) ψ g_µν]
    − Λ(ϕ, ψ) g_µν + 2 [∂Λ(ϕ, ψ)/∂ψ] ∂_µϕ ∂_νϕ = −(8π/ϕc⁴) T_µν.    (5.2)
Now energy conservation (∇^ν T_µν = 0) no longer requires Λ = constant. In fact,
it is generally incompatible with constant Λ, unless an extra condition is imposed
on the terms inside the curly brackets in (5.3). (This cannot come from the wave
equation for ϕ, which merely confirms that the terms inside the curly brackets sum
to zero, in agreement with energy conservation.) Similar conclusions hold for
other scalar–tensor theories in which ϕ is no longer associated with G. Examples
include models with non-minimal couplings between ϕ and the curvature scalar
ℛ [11], conformal rescalings of the metric tensor by functions of ϕ [12] and non-zero
potentials V(ϕ) for the scalar field [13–15]. (Theories of this last kind are
now known as quintessence scenarios [16].) For all such cases, the cosmological
‘constant’ becomes a dynamical variable.
In the modern approach to variable-Λ cosmology, which goes back to
Zeldovich in 1968 [17], all extra terms of the kind just described (including
Λ) are moved to the right-hand side of the field equations (5.2), leaving only
the Einstein tensor (ℛ_µν − (1/2)ℛ g_µν) to make up the ‘geometrical’ left-hand side.
The cosmological term, along with scalar (or other) additional fields, are thus
effectively reinterpreted as new kinds of matter. The field equations (5.2) may
then be written

ℛ_µν − (1/2) ℛ g_µν = −(8π/ϕc⁴) T^eff_µν + Λ(ϕ) g_µν.    (5.4)
ordinary matter plus whatever scalar (or other) fields have been added to the
theory. For generalized scalar–tensor theories as described above, this could be
written as T^eff_µν ≡ T_µν + T^ϕ_µν, where T_µν refers to ordinary matter and T^ϕ_µν to the
scalar field. For the case with Λ = Λ(ϕ) and ω = ω(ϕ), for instance, the latter
would be defined by (5.2) as

T^ϕ_µν ≡ (ϕc⁴/8π) { (1/ϕ)[∇_µ(∂_ν ϕ) − □ϕ g_µν] + [ω(ϕ)/ϕ²][∂_µϕ ∂_νϕ − (1/2) ψ g_µν] }.    (5.5)
The covariant derivative of the field equations (5.4) with the Bianchi identities
now gives

0 = ∇^ν [ (8π/ϕc⁴) T^eff_µν − Λ(ϕ) g_µν ].    (5.6)
Equation (5.6) carries the same physical content as (5.3), but is more general in
form and can readily be extended to other theories. Physically, it says that energy
is conserved in variable-Λ cosmology, where ‘energy’ is now understood to refer
to the energy of ordinary matter along with that in any additional fields which may
be present, and along with that in the vacuum, as represented by Λ. In general,
the latter parameter can vary as it likes, so long as the conservation equation (5.6)
is satisfied.
It is natural to wonder whether the evolution of Λ in these theories actually
helps with the cosmological ‘constant’ problem, in the sense that Λ drops
from large primordial values to ones like those seen today without fine-tuning.
Behaviour of this kind was noted at least as early as 1977 [8] in the context of
models with Λ = Λ(ϕ) and ω = constant, which have solutions for ϕ(t) such that
Λ ∝ t⁻². In precursors to the modern quintessence scenarios, Barr [13] found
models in which Λ ∝ t^−ℓ at late times, while Peebles and Ratra [14] discussed a
theory in which Λ ∝ R^−m at early ones (here ℓ and m are powers). There is now
a rich literature on Λ-decay laws of this kind (see [18] for a review). Their appeal
is easy to understand and can be illustrated with a simple dimensional argument
for the case in which Λ ∝ R⁻² [19]. Since Λ already has dimensions of L⁻², the
proportionality factor in this case is a pure number (α, say) which is presumably
of order unity. Taking α ∼ 1 and identifying R with a suitable length scale in
cosmology (namely the Hubble distance c/H0), one finds that Λ0 ∼ H0²/c². The
present vacuum density parameter is then Ω_Λ,0 ≡ Λ0c²/3H0² ∼ 1/3, close to the
values implied by current supernovae data (section 4.6). A natural choice for the
primordial value of Λ, moreover, would have R ∼ ℓ_Pl, so that Λ_Pl ∼ α ℓ_Pl⁻². This
leads to a ratio Λ_Pl/Λ0 ∼ (c/H0 ℓ_Pl)² ∼ 10¹²², which may be compared with the
values in table 4.1.
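The numbers in this dimensional argument are easy to check; a minimal sketch, assuming h0 = 0.75 and the standard values of c and the Planck length (neither is taken from the text):

```python
import math

# Dimensional estimate Lambda_0 ~ H0^2/c^2 and the ratio Lambda_Pl/Lambda_0.
c = 2.998e8                    # m s^-1
Mpc = 3.086e22                 # m
H0 = 0.75 * 1.0e5 / Mpc        # s^-1, assuming h0 = 0.75
l_pl = 1.616e-35               # m, Planck length (assumed standard value)

lambda_0 = H0**2 / c**2        # m^-2: the R ~ c/H0 estimate with alpha ~ 1
omega_lambda_0 = lambda_0 * c**2 / (3 * H0**2)
print(omega_lambda_0)          # 1/3 by construction

ratio = (c / (H0 * l_pl))**2   # Lambda_Pl / Lambda_0
print(math.log10(ratio))       # ~ 122, as in the text
```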
While this would seem to be a promising approach, two cautions must be
kept in mind. The first is theoretical. Insofar as the mechanisms discussed so
far are entirely classical, they do not address the underlying problem. For this,
one would also need to explain why net contributions to Λ from the quantum
vacuum do not remain at the primordial level or how they are suppressed with
time. Polyakov [20] and Adler [21] in 1982 were the first to speculate explicitly
that such a suppression might come about if the ‘bare’ cosmological term implied
by quantum field theory were progressively screened by an ‘induced’ counterterm
of opposite sign, driving the effective value of Λ(t) toward zero at late times.
Many theoretical adjustment mechanisms have now been identified as potential
sources of such a screening effect, beginning with a 1983 suggestion by Dolgov
[22] based on non-minimally-coupled scalar fields. Subsequent proposals have
involved scalar fields [23–25], fields of higher spin [26–28], quantum effects
during inflation [29–31] and other phenomena [32–34]. In most of these cases,
no analytic expression is found for Λ in terms of time or other cosmological
parameters; the intent is merely to demonstrate that decay (and preferably near-
cancellation) of the cosmological term is possible in principle. None of these
mechanisms has been widely accepted as successful to date. In fact, there is a
general argument due to Weinberg to the effect that a successful mechanism based
on scalar fields would necessarily be so finely tuned as to be just as mysterious
as the original problem [35]. Similar concerns have been raised in the case of
vector and tensor-based proposals [36]. Nevertheless, the idea of the adjustment
mechanism remains feasible in principle and continues to attract more attention
than any other approach to the cosmological-constant problem.
The second caution is empirical. Observational data place increasingly
strong restrictions on the way in which Λ can vary with time. Among the
most important are early-time bounds on the vacuum energy density ρ_Λ c² =
Λc⁴/8πG. The success of standard primordial nucleosynthesis theory implies
that ρ_Λ was smaller than ρ_r and ρ_m during the radiation-dominated era, and large-scale
structure formation could not have proceeded in the conventional way unless
ρ_Λ < ρ_m during the early matter-dominated era. Since ρ_r ∝ R⁻⁴ and ρ_m ∝ R⁻³
from (2.32), these requirements mean in practice that the vacuum energy density
must climb less steeply than R⁻³ in the past direction, if it is comparable to that
of matter or radiation at present [37, 38]. The variable-Λ term must also satisfy
late-time bounds like those which have been placed on the cosmological constant
(section 4.6). Tests of this kind have been carried out using data on the age of
the Universe [39, 40], structure formation [41–43], galaxy number counts [44],
the CMB power spectrum [45, 46], gravitational lensing statistics [46–48] and
Type Ia supernovae [46, 49]. Some of these tests are less restrictive in the case
of a variable Λ-term than they are for Λ = constant, and this can open up new
regions of parameter space. Observation may even be compatible with some non-singular
models whose expansion originates in a hot, dense ‘big bounce’ rather
than a big bang [50], a possibility which can be ruled out on quite general grounds
if Λ = constant.
A third group of limits comes from asking what the vacuum decays into.
In quintessence theories, vacuum energy is transferred to the kinetic energy of
a scalar field as it ‘rolls’ down a gradient toward the minimum of its potential.
This may have observable consequences if the scalar field is coupled strongly to
0 = ∇^ν (T^eff_µν − ρ_Λ c² g_µν).    (5.8)

Here ρ_Λ c² ≡ Λc⁴/8πG from (2.34), and we have put back G in place of 1/ϕ.
Equations (5.7) and (5.8) have the same form as their counterparts (2.1) and
(2.29) in standard cosmology, the key difference being that the cosmological term
has migrated to the right-hand side and is no longer necessarily constant. Its
dynamical evolution is now governed by the conservation equations (5.8), which
require only that any change in ρ_Λ c² g_µν be offset by an equal and opposite change
in the energy–momentum tensor T^eff_µν.

T^eff_µν = (ρ_eff + p_eff/c²) U_µ U_ν + p_eff g_µν.    (5.9)
Comparison of equations (5.8) and (5.9) shows that the conserved quantity in
(5.8) must then also have the form of a perfect-fluid energy–momentum tensor,
with density and pressure given by
The conservation law (5.8) may then be simplified at once by analogy with
equation (2.29):

(1/R³) d/dt [R³(ρ_eff c² + p_eff)] = d/dt (p_eff − ρ_Λ c²).    (5.11)

This reduces to the standard result (2.30) for the case of a constant cosmological
term, ρ_Λ = constant.
In this chapter, we will allow the cosmological term to contain both a
constant part and a time-varying part, so that
Let us assume, in addition, that the perfect fluid described by T^eff_µν consists of a
(1/R⁴) d/dt (R⁴ ρ_r) + (1/R³) d/dt (R³ ρ_m) + dρ_v/dt = 0.    (5.14)
From this equation it is clear that one (or both) of the radiation and matter
densities can no longer obey the usual relations ρ_r ∝ R⁻⁴ and ρ_m ∝ R⁻³ in
It remains to solve equations (5.15), (5.16), (5.18) and (5.19) for the four
dynamical quantities R, ρ_m, ρ_r and ρ_v in terms of the constants ρ_m,0, ρ_r,0, x_r and
x_m. We relegate the details of this exercise to appendix B and summarize the
results here. The scale factor is found as

R(t) ∝ (t/t0)^{1/2(1−x_r)}              (t < t_eq)
       [𝒮_m(t)/𝒮_m(t0)]^{2/3}          (t ≥ t_eq).    (5.20)

The vacuum density is given by

ρ_v(t) = [α x_r/(1 − x_r)²] t⁻²         (t < t_eq)
         [x_m/(1 − x_m)] ρ_r(t)         (t ≥ t_eq).    (5.21)
Figure 5.1. The densities of decaying vacuum energy (ρ_v), radiation (ρ_r) and matter (ρ_m)
as functions of time. The left-hand panel (a) shows the effects of changing the values of x_r
and x_m, assuming a model with Ω_m,0 = 0.3 (similar to ΛCDM). The right-hand panel (b)
shows the effects of changing the cosmological model, assuming x_r = x_m = 10⁻⁴. The
vertical lines indicate the epochs when the densities of matter and radiation are equal (t_eq).
All curves assume h0 = 0.75.
The epoch of matter–radiation equality plays a crucial role in this chapter because
it is at about this time that the Universe became transparent to radiation (the two
events are not simultaneous but the difference between them is minor for our
purposes). Decay photons created before teq would simply have been thermalized
by the primordial plasma and eventually re-emitted as part of the CMB. It is the
decay photons emitted after this time which can contribute to the EBL, and whose
contributions we wish to calculate. The quantity teq is thus analogous to the galaxy
formation time tf in previous chapters.
The densities ρ_m(t), ρ_r(t) and ρ_v(t) are plotted as functions of time in
figure 5.1. The left-hand panel (a) shows the effects of varying the parameters
x_r and x_m within a given cosmological model (here, ΛCDM). Raising the value
of x_m (i.e. moving from bold to light curves) leads to a proportionate increase
in ρ_v and a modest drop in ρ_r. It also flattens the slope of both components.
This change in slope (relative to that of the matter component) pushes the epoch
of equality back toward the big bang (vertical lines). Such an effect could,
in principle, allow more time for structure to form during the early matter-dominated
era [37], although the ‘compression’ of the radiation-dominated era
rapidly becomes unrealistic for values of x_m close to 1/4. Thus figure 5.1(a) shows
that the value of t_eq is reduced by a factor of over 100 in going from a model with
x_m = 10⁻⁴ to one with x_m = 0.07. In the limit x_m → 1/4, the duration of the
radiation-dominated era in fact dwindles to nothing, as remarked earlier and as
shown explicitly by equations (5.26).
Figure 5.1(b) shows the effects of changes in cosmological model for fixed
values of x_r and x_m (here both set to 10⁻⁴). Moving from the matter-filled EdS
model toward vacuum-dominated ones such as ΛCDM and ΛBDM does three
things. The first is to increase the age (t0) of the Universe. This increases the
density of radiation at any given time, since the latter is fixed at present and climbs
at the same rate in the past direction. Based on our experience with the galactic
EBL in previous chapters, we may expect that this should lead to significantly
higher levels of background radiation when integrated over time. However, there
is a second effect in the present theory which acts in the opposite direction:
smaller values of Ω_m,0 imply higher values of t_eq as well as t0, thus delaying
the onset of the matter-dominated era (vertical lines). As we will see, these
two changes all but cancel each other out as far as vacuum decay contributions
to the background are concerned. The third important consequence of vacuum-dominated
cosmologies is ‘late-time inflation’, the sharp increase in the expansion
rate at recent times (figure 4.2). This translates in figure 5.1(b) into a noticeable
‘droop’ in the densities of matter and radiation at the right-hand edge of the figure
for the ΛBDM model in particular.
These regions are introduced for convenience, and are not physically significant
since the vacuum decays uniformly throughout space. We therefore expect that
the parameter V0 will not appear in our final results.
The next step is to identify the 'source luminosity'. There are at least two
ways to approach this question [56]. One could simply regard the source region as
a ball of physical volume V(t) = R̃³(t)V_0 filled with fluctuating vacuum energy.
As the density of this energy drops by −dρ_v during time dt, the ball loses energy
at a rate −dρ_v/dt. If some fraction β of this energy flux goes into photons, then
L_th/L_rel = (4/3)(1 − x_m).    (5.35)
This takes numerical values between 4/3 (in the limit x_m → 0 where standard
cosmology is recovered) and 1 (in the opposite limit where x_m takes its maximum
theoretical value of 1/4). There is thus little difference between the two scenarios
in practice, at least where this model of vacuum decay is concerned. We will
proceed using the relativistic definition (5.31), which gives lower intensities and
hence more conservative limits on the theory. At the end of the chapter it will
be a small matter to obtain the corresponding intensity for the thermodynamical
case (5.28) by multiplying through by (4/3)(1 − x_m).
We now turn to the question of the branching ratio β, or fraction of vacuum
decay energy which goes into photons as opposed to other forms of radiation such
as massless neutrinos. This is model-dependent in general. If the vacuum-decay
radiation reaches equilibrium with that already present, however, then we may
reasonably set this equal to the ratio of photon-to-total radiation energy densities
in the CMB:

β = Ω_γ/Ω_r,0.    (5.36)

The density parameter Ω_γ of CMB photons is given in terms of their blackbody
temperature T by Stefan's law. Using the COBE value T_cmb = 2.728 K [53], we
get

Ω_γ = 4σ_SB T⁴/(c³ρ_crit,0) = 2.48 × 10⁻⁵ h_0⁻².    (5.37)
The total radiation density Ω_r,0 = Ω_γ + Ω_ν is harder to determine, since there
is little prospect of detecting the neutrino component directly. What is done in
standard cosmology is to calculate the size of neutrino contributions to Ω_r,0 under
the assumption of entropy conservation. With three massless neutrino species,
this leads to

Ω_r,0 = Ω_γ [1 + 3 × (7/8)(T_ν/T)⁴]    (5.38)

where T_ν is the blackbody temperature of the relic neutrinos and the factor
of 7/8 arises from the fact that these particles obey Fermi rather than Bose–
Einstein statistics [58]. During the early stages of the radiation-dominated era,
neutrinos were in thermal equilibrium with photons so that T_ν = T. They
dropped out of equilibrium, however, when the temperature of the expanding
fireball dropped below kT ∼ 1 MeV (the energy scale of weak interactions).
Shortly thereafter, when the temperature dropped to kT ∼ m_e c² = 0.5 MeV,
electrons and positrons began to annihilate, transferring their entropy to the
remaining photons in the plasma. This raised the photon temperature by a factor
of (1 + 2 × 7/8)^(1/3) = (11/4)^(1/3) relative to that of the neutrinos. In standard cosmology, the
ratio T_ν/T has remained at (4/11)^(1/3) down to the present day, so that (5.38)
gives

Ω_r,0 = 1.68 Ω_γ = 4.17 × 10⁻⁵ h_0⁻²    (5.39)

and hence, from (5.36),

β = Ω_γ/Ω_r,0 = 0.595.    (5.40)
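The 1.68 factor in (5.39), and the corresponding branching ratio, can be checked numerically from the entropy argument above. This is a sketch of ours, assuming only three massless neutrino species and T_ν/T = (4/11)^(1/3):

```python
# Sanity check of the standard radiation density (5.38)-(5.39), assuming
# three massless neutrino species and T_nu/T = (4/11)^(1/3).
t_ratio = (4.0 / 11.0) ** (1.0 / 3.0)              # relic neutrino/photon temperature
factor = 1.0 + 3.0 * (7.0 / 8.0) * t_ratio ** 4    # Omega_r,0 / Omega_gamma
beta = 1.0 / factor                                # photon fraction, from (5.36)

print(f"Omega_r,0 / Omega_gamma = {factor:.3f}")   # 1.681
print(f"beta = {beta:.3f}")                        # 0.595
```

The result reproduces both the 1.68 enhancement of the radiation density and the standard value β = 0.595 used below.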
We will take these as our 'standard values' of Ω_r,0 and β in what follows. They
are conservative ones, in the sense that most alternative lines of argument would
imply higher values of β. Birkel and Sarkar [38], for instance, have argued that
vacuum decay into radiation with x_r = constant would be easier to reconcile
with processes such as electron–positron annihilation if the vacuum coupled
to photons but not neutrinos. This would complicate the theory, breaking the
radiation density ρ_r in (5.14) into a photon part ρ_γ and a neutrino part with
different dependencies on R. One need not solve this equation, however, in
order to appreciate the main impact of such a modification. Decay into photons
alone would pump entropy into the photon component relative to the neutrino
component in an effectively ongoing version of the electron–positron annihilation
argument outlined above. The neutrino temperature T_ν (and density ρ_ν) would
continue to be driven down relative to T (and ρ_γ) throughout the radiation-
dominated era and into the matter-dominated one. In the limit T_ν/T → 0 one
sees from (5.36) and (5.38) that such a scenario would lead to

β = 1        Ω_r,0 = Ω_γ = 2.48 × 10⁻⁵ h_0⁻².    (5.41)
In other words, the present energy density of radiation would be lower, but it
would effectively all be in the form of photons. Insofar as the decrease in Ω_r,0
is precisely offset by the increase in β, these changes cancel each other out. The
drop in Ω_r,0, however, has an added consequence which is not cancelled: it pushes
t_eq farther into the past, increasing the length of time over which the decaying
vacuum has been contributing to the background. This raises the latter's intensity,
particularly at longer wavelengths. The effect can be significant, and we will
return to this possibility at the end of the chapter. For the most part, however, we
will stay with the values of Ω_r,0 and β given by equations (5.39) and (5.40).
Armed with a definition for vacuum luminosity, equation (5.31), and a value
for β, equation (5.40), we are in a position to calculate the luminosity of the
vacuum. Noting that V̇ = 3(R/R_0)³(Ṙ/R)V_0 and substituting equations (5.20),
(5.21) and (5.33) into (5.31), we find that

L_v(t) = ℒ_v,0 V_0 × { (t/t_0)^(−(5−8x_m)/3)
                       [cosh(t/τ_0)/cosh(t_0/τ_0)] [sinh(t/τ_0)/sinh(t_0/τ_0)]^(−(5−8x_m)/3).    (5.42)
The first of these solutions corresponds to models with Ω_m,0 = 1 while the second
holds for the general case (0 < Ω_m,0 < 1). Both results reduce at the present time
t = t_0 to

L_v,0 = ℒ_v,0 V_0    (5.43)

where ℒ_v,0 is the comoving luminosity density of the vacuum
In principle, then, the comoving luminosity density of the decaying vacuum can
reach levels as high as 10 or even 50 times that of galaxies, as given by (2.24).
Raising the value of the branching ratio β to 1 instead of 0.595 does not affect
these results, since this must be accompanied by a proportionate drop in the value
of Ω_r,0, as argued earlier. The numbers in (5.45) do go up if one replaces the
relativistic definition (5.31) of vacuum luminosity with the thermodynamical one
(5.28), but the change is modest, raising ℒ_v,0 by no more than a factor of 1.2 (for
x_m = 0.07). The primary reason for the high luminosity of the decaying vacuum
lies in the fact that it converts nearly 60% of its energy density into photons. By
comparison, less than 1% of the rest energy of ordinary luminous matter has gone
into photons so far in the history of the Universe.
The first of these integrals corresponds to models with Ω_m,0 = 1 while the second
holds for the general case (0 < Ω_m,0 < 1). The latter may be simplified
where T(t) is the blackbody temperature. The function C(t) is found as usual by
normalization, equation (3.1). Changing integration variables from λ to ν = c/λ
for convenience, we find

L_v(t) = (C(t)/c⁴) ∫₀^∞ ν³ dν / {exp[hν/kT(t)] − 1} = (C(t)/c⁴) [h/kT(t)]⁻⁴ Γ(4) ζ(4).    (5.50)

Inserting our result (5.42) for L_v(t) and using the facts that Γ(4) = 3! = 6 and
ζ(4) = π⁴/90, we then obtain for C(t):
C(t) = (15 ℒ_v,0 V_0/π⁴) [hc/kT(t)]⁴ × { (t/t_0)^(−(5−8x_m)/3)
                                         [cosh(t/τ_0)/cosh(t_0/τ_0)] [sinh(t/τ_0)/sinh(t_0/τ_0)]^(−(5−8x_m)/3).    (5.51)
Here the upper expression refers as usual to the EdS case (Ω_m,0 = 1), while
the lower applies to the general case (0 < Ω_m,0 < 1). The temperature T can
be determined by assuming thermal equilibrium between the vacuum-decay
photons and those already present. Stefan's law then relates T(t) to the radiation
energy density ρ_r(t)c² as follows:

ρ_r(t)c² = (4σ_SB/c) [T(t)]⁴.    (5.52)
Substituting equation (5.22) into this expression and expanding the Stefan–
Boltzmann constant, we find that

hc/kT(t) = λ_v × { (t/t_0)^(2(1−x_m)/3)                          (Ω_m,0 = 1)
                   [sinh(t/τ_0)/sinh(t_0/τ_0)]^(2(1−x_m)/3)      (0 < Ω_m,0 < 1)    (5.53)

where the constant λ_v is given by

λ_v ≡ [8π⁵hc/(15 ρ_r,0 c²)]^(1/4) = 0.46 cm × [Ω_r,0 h_0²/(4.17 × 10⁻⁵)]^(−1/4).    (5.54)
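The numerical value in (5.54) can be verified directly; the sketch below uses standard cgs constants (assumed values of ours, not taken from the text), and the h_0-dependence cancels because ρ_crit,0 ∝ h_0²:

```python
import math

# Numerical check of the decay-radiation wavelength scale lambda_v in (5.54).
h = 6.626e-27            # Planck constant, erg s
c = 2.998e10             # speed of light, cm/s
rho_crit = 1.879e-29     # critical density, g cm^-3 (times h0^2)
omega_r = 4.17e-5        # radiation density parameter (times h0^-2)
rho_r_c2 = omega_r * rho_crit * c ** 2    # radiation energy density, erg cm^-3

lam_v = (8 * math.pi ** 5 * h * c / (15 * rho_r_c2)) ** 0.25
print(f"lambda_v = {lam_v:.2f} cm")   # 0.46 cm
```

This confirms that the characteristic wavelength of the decay radiation lies in the microwave region, close to the CMB peak.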
This value of λ_v tells us that the peak of the observed spectrum of decay radiation
lies in the microwave region as expected, near that of the CMB (λ_cmb = 0.11 cm).
Putting (5.53) back into (5.51), we obtain

C(t) = (15 λ_v⁴ ℒ_v,0 V_0/π⁴) × { t/t_0
                                  [cosh(t/τ_0)/cosh(t_0/τ_0)] [sinh(t/τ_0)/sinh(t_0/τ_0)].    (5.55)

These two expressions refer to models with Ω_m,0 = 1 and 0 < Ω_m,0 < 1
respectively. With C(t) thus defined, the SED (5.49) of vacuum decay is
completely specified.
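The fact that the x_m-dependence drops out of the exponent in (5.55) can be confirmed with a small exact-arithmetic check (ours, not from the text): the fourth power of the time-dependence in (5.53) combines with that of L_v(t) in (5.42) to leave a single power of t.

```python
from fractions import Fraction

# Exponent bookkeeping behind (5.55): both exponents are linear in x_m,
# so checking two values of x_m verifies the identity.
def total_exponent(x_m):
    exp_t4 = 4 * Fraction(2, 3) * (1 - x_m)      # from [hc/kT(t)]^4, via (5.53)
    exp_lv = -Fraction(1, 3) * (5 - 8 * x_m)     # from L_v(t), via (5.42)
    return exp_t4 + exp_lv

print(total_exponent(Fraction(7, 100)))   # 1, for any x_m
```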
Here we have used integration variables τ ≡ t/t_0 in the first case (for models with
Ω_m,0 = 1) and τ ≡ t/τ_0 in the second (for models with 0 < Ω_m,0 < 1). The
dimensional content of both integrals is contained in the prefactor I_v(λ_0), which
is given by

I_v(λ_0) ≡ ℒ_v,0 λ_v⁴/(2π⁵ h H_0 λ_0⁴) = 15 500 CUs × [x_m/(1 − x_m)] (λ_v/λ_0)⁴.    (5.57)
We have divided through by photon energy hc/λ0 so as to express the results in
continuum units (CUs) as usual, where 1 CU ≡ 1 photon s−1 cm−2 Å−1 ster−1 .
We will use CUs throughout this book, for the sake of uniformity as well as
the fact that these units carry several advantages from the theoretical point of
view (section 3.3). The reader who consults the literature, however, will soon
find that each part of the electromagnetic spectrum has its own ‘dialect’ of
preferred units. In the microwave region it is most common to find background
intensities given in terms of the quantity ν Iν , which is the integral of flux per unit
frequency over frequency, and is usually expressed in units of nW m−2 ster−1 =
10−6 erg s−1 cm−2 ster−1 . To translate a given value of ν Iν (in these units) into
CUs, one need only multiply by a factor of 10−6 /(hc) = 50.34 erg−1 Å−1 . The
Jansky (Jy) is also often encountered, with 1 Jy = 10−23 erg s−1 cm−2 Hz−1 .
To convert a given value of ν Iν from Jy ster−1 into CUs, one multiplies by
10−23 / hλ = (1509 Hz erg−1)/λ with λ in Å.
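The conversion factors quoted above can be wrapped up as a small sketch (the function names are ours, not from any standard library):

```python
# Unit conversions into CUs (1 CU = 1 photon s^-1 cm^-2 A^-1 ster^-1),
# following the factors quoted in the text.
H_PLANCK = 6.626e-27      # erg s
C_LIGHT = 2.998e18        # Angstrom/s, so H_PLANCK*C_LIGHT is hc in erg Angstrom

def nu_inu_nw_to_cus(nu_inu):
    """nu*I_nu in nW m^-2 ster^-1 -> CUs (wavelength-independent)."""
    return nu_inu * 1e-6 / (H_PLANCK * C_LIGHT)

def nu_inu_jy_to_cus(nu_inu, lam_angstrom):
    """nu*I_nu in Jy ster^-1 -> CUs at wavelength lam (Angstroms)."""
    return nu_inu * 1e-23 / (H_PLANCK * lam_angstrom)

print(f"{nu_inu_nw_to_cus(1.0):.2f}")   # 50.34, as quoted
```

For example, a microwave background level of 1000 nW m⁻² ster⁻¹ corresponds to about 5 × 10⁴ CUs.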
Equation (5.56) gives the combined intensity of decay photons which have
been emitted at many wavelengths and redshifted by various amounts, but reach
us in a waveband centred on λ_0. The arbitrary volume V_0 has dropped out of the
integral as expected, and this result is also independent of the uncertainty h_0 in
Hubble's constant, since there is a factor of h_0 in both ℒ_v,0 and H_0. Results are
plotted in figure 5.2 over the waveband 0.01–1 cm, together with observational
detections of the background in this part of the spectrum. The most celebrated
of these is the COBE detection of the CMB [53] which we have shown as a
[Figure 5.2: two panels (a) and (b) plotting I_λ (CUs) against wavelength for values of x_m between 3 × 10⁻⁵ and 0.25, with observational data labelled F96, F98, H98 and L00.]
Figure 5.2. The spectral intensity of background radiation due to the decaying vacuum
for various values of x_m, compared with observational data in the microwave region (bold
unbroken line) and far infrared (bold dotted line and points). For each value of x_m there are
three curves representing cosmologies with Ω_m,0 = 1 (bold lines), Ω_m,0 = 0.3 (medium
lines) and Ω_m,0 = 0.03 (light lines). The left-hand panel (a) assumes L = L_rel and
β = 0.595, while the right-hand panel (b) assumes L = L_th and β = 1.
Here, (a) and (b) refer to the scenarios represented by figures 5.2(a) and (b),
with the corresponding values of Ω_r,0 as defined by equations (5.39) and (5.41)
respectively. We have assumed that h_0 ≥ 0.6 as usual. It is clear from the
limits (5.58) that a decaying vacuum, at least of the kind we have considered
in this chapter, does not contribute significantly to the density of dark matter.
It should be recalled, however, that there are good reasons from quantum
theory for expecting some kind of instability for the vacuum in a Universe which
progressively cools. (Equivalently, there are good reasons for believing that the
cosmological 'constant' is not.) Our conclusion is that if the vacuum decays, it
either does so very slowly or in a manner that does not upset the isotropy of the
cosmic microwave background.
References
[1] Bronstein M 1933 Physikalische Zeitschrift der Sowjetunion 3 73
[2] Kragh H 1996 Cosmology and Controversy (Princeton, NJ: Princeton University
Press) p 36
[48] Bloomfield Torres L F and Waga I 1996 Mon. Not. R. Astron. Soc. 279 712
[49] Podariu S and Ratra B 2000 Astrophys. J. 532 109
[50] Overduin J M 1999 Astrophys. J. 517 L1
[51] Pavón D 1991 Phys. Rev. D 43 375
[52] Lima J A S 1996 Phys. Rev. D 54 2571
[53] Fixsen D J et al 1996 Astrophys. J. 473 576
[54] Hu W and Silk J 1993 Phys. Rev. D 48 485
[55] Pollock M D 1980 Mon. Not. R. Astron. Soc. 193 825
[56] Overduin J M, Wesson P S and Bowyer S 1993 Astrophys. J. 404 1
[57] Misner C W, Thorne K S and Wheeler J A 1973 Gravitation (San Francisco, CA:
Freeman) p 859
[58] Peebles P J E 1993 Principles of Physical Cosmology (Princeton, NJ: Princeton
University Press) p 164
[59] Fixsen D J et al 1998 Astrophys. J. 508 128
[60] Hauser M G et al 1998 Astrophys. J. 508 25
[61] Lagache G et al 2000 Astron. Astrophys. 354 247
Chapter 6
Axions
Figure 6.1. The Feynman diagram corresponding to the decay of the axion (a) into two
photons (γ ) with coupling strength gaγ γ .
by enhancing their conversion into photons with strong magnetic fields, as shown
by Sikivie in 1983 [13]. This is now the basis for ongoing axion cavity detector
experiments in Japan [14] and the USA [15]. It has also been suggested that
the Universe itself might behave like a giant axion cavity detector, since it is
threaded by intergalactic magnetic fields. These could stimulate the reverse
process, converting photons into a different species of ‘ultralight’ axion and
possibly explaining the dimming of high-redshift supernovae without the need
for large amounts of vacuum energy [16].
Promising as they are, we will not consider low-mass axions (sometimes
known as ‘invisible axions’) further in this chapter. This is because they decay
too slowly to have a noticeable impact on the EBL. Axions decay into photon
pairs (a → γ + γ ) via a loop diagram, as illustrated in figure 6.1. The decay
lifetime of this process is [4]

τ_a = (6.8 × 10²⁴ s) ζ⁻² m_1⁻⁵.    (6.4)

Here m_1 ≡ m_a c²/(1 eV) is the axion rest energy in units of eV and ζ is a constant
which is proportional to the coupling strength gaγ γ [17]. For our purposes, it
is sufficient to treat ζ as a free parameter which depends on the details of the
axion theory chosen. Its value has been normalized in equation (6.4) so that
ζ = 1 in the simplest grand unified theories (GUTs) of strong and electroweak
interactions. This could drop to ζ = 0.07 in other theories, however [18],
amounting to a strong suppression of the two-photon decay channel. In principle ζ
could even vanish altogether, corresponding to a radiatively stable axion, although
this would require an unlikely cancellation of terms. We will consider values
in the range 0.07 ≤ ζ ≤ 1 in what follows. For these values of ζ, and with
m_1 ≲ 6 × 10⁻³ as given by (6.3), equation (6.4) shows that axions decay on
timescales τ_a ≳ 9 × 10³⁵ s. This is so much longer than the age of the Universe
that such particles would truly be invisible.
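The timescale just quoted follows from a lifetime of the form τ_a = 6.8 × 10²⁴ ζ⁻² m_1⁻⁵ s; the prefactor here is our inference from the numbers quoted above, so treat it as an assumption in this sketch:

```python
# Two-photon decay lifetime of a low-mass axion, compared with the age of
# the Universe (~4e17 s). The 6.8e24 s prefactor is an assumed value.
def tau_axion(m1, zeta):
    """Decay lifetime in seconds for rest energy m1 eV and coupling zeta."""
    return 6.8e24 / (zeta ** 2 * m1 ** 5)

tau = tau_axion(6e-3, 1.0)
print(f"tau_a = {tau:.1e} s")   # ~9e35 s for m1 = 6e-3, zeta = 1
```

Even for the maximal coupling ζ = 1, the lifetime exceeds the age of the Universe by some eighteen orders of magnitude, which is why such axions leave no mark on the EBL.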
We therefore shift our attention to axions with rest energies above m_a c² ∼
3 × 10⁻² eV. Turner showed in 1987 [19] that the vast majority of these
would have arisen via thermal mechanisms such as Primakoff scattering and
photoproduction processes in the early Universe. Application of the Boltzmann
equation gives their present comoving number density as n_a = (830/g_*F) cm⁻³,
where g_*F counts the number of relativistic degrees of freedom left in the plasma
at the time when axions 'froze out' [4]. Ressell [17] has estimated the latter
quantity as g_*F = 15. The present density parameter Ω_a = n_a m_a/ρ_crit,0 of
thermal axions is thus

Ω_a = 5.2 × 10⁻³ h_0⁻² m_1.    (6.5)
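The coefficient in (6.5) can be recovered from the quoted number density; the constants below are standard assumed values, not taken from the text:

```python
# Rough check of the thermal axion density parameter (6.5), using
# n_a = 830/g_*F cm^-3 with g_*F = 15 (Ressell).
EV_TO_G = 1.783e-33        # 1 eV/c^2 in grams
RHO_CRIT = 1.879e-29       # critical density, g cm^-3 (times h0^2)

def omega_axion(m1, h0):
    """Density parameter of thermal axions with rest energy m1 eV."""
    n_a = 830.0 / 15.0                      # comoving number density, cm^-3
    return n_a * m1 * EV_TO_G / (RHO_CRIT * h0 ** 2)

print(f"{omega_axion(1.0, 1.0):.2e}")   # ~5.2e-3, matching (6.5)
```

For the multi-eV window discussed below (m_1 = 5, h_0 = 0.75, say) this gives Ω_a ≈ 0.05, inside the 0.03–0.15 range quoted later in the section.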
Whether or not this is significant depends on the axion rest mass. The neutrino
burst from SN1987a now imposes a lower limit on m a rather than an upper limit as
before, because axions in this range are massive enough to interact with nucleons
in the supernova core and can no longer stream out freely (this is referred to as
the ‘trapped regime’). Only axions at the lower end of the mass-range are able
to carry away enough energy to interfere with the duration of the neutrino burst,
leading to the limit [20]
m_a c² ≳ 2.2 eV.    (6.6)
Laboratory upper limits on m a come from the fact that axions cannot carry away
too much ‘missing energy’ in processes such as the decay of the kaon [21].
Astrophysical constraints (similar to those mentioned already) are considerably
stronger. These now depend critically on whether axions couple only to hadrons
at tree-level, or to leptons as well. The former kind are known as KSVZ axions
after Kim [22] and Shifman, Vainshtein and Zakharov [23]; while the latter take
the name DFSZ axions after Zhitnitsky [24] and Dine, Fischler and Srednicki
[25]. The extra lepton coupling of DFSZ axions allows them to carry so much
energy out of the cores of red-giant stars that helium ignition is disrupted unless
m_a c² ≲ 9 × 10⁻³ eV [26]. This closes the high-mass window on DFSZ axions,
which can consequently exist only with the ‘invisible’ masses discussed earlier.
For hadronic (KSVZ) axions, the duration of the helium-burning phase in red
giants imposes a weaker bound [27]:
This translates into an upper limit m_a c² ≲ 10 eV for the simplest axion models
with ζ ≳ 0.07. And irrespective of the value of ζ, it has been argued that axions
with m_a c² ≳ 10 eV can be ruled out because they would interact strongly enough
with baryons to produce a detectable signal in existing Čerenkov detectors [28].
In the hadronic case, then, there remains a window of opportunity for multi-
eV axions with 2 ≲ m_1 ≲ 10. Equation (6.5) shows that these particles
would contribute a total density of about 0.03 ≲ Ω_a ≲ 0.15, where we take
0.6 ≤ h_0 ≤ 0.9 as usual. Axions of this kind would not be able to provide the
required density of dark matter in the ΛCDM model (Ω_m,0 = 0.3). They would,
however, suffice in low-density models midway between ΛCDM and ΛBDM
(section 3.3). Since such models are compatible with current observational data
(chapter 4), it is worth proceeding to see whether multi-eV axions can be further
constrained by their contributions to the EBL.
Using this together with (6.5) for Ω_a, and setting Ω_bar ≈ 0.016 h_0⁻² from
section 4.3, we find from (6.11) that

M_tot = { 9 × 10¹¹ M_⊙ h_0⁻³   (m_1 = 3)
          1 × 10¹² M_⊙ h_0⁻³   (m_1 = 5)    (6.13)
          2 × 10¹² M_⊙ h_0⁻³   (m_1 = 8).
Let us compare these numbers with recent dynamical data on the mass of the
Milky Way using the motions of galactic satellites. These assume a Jaffe profile
[30] for the halo density:
ρ_tot(r) = v_c² r_j² / [4πG r²(r + r_j)²]    (6.14)

where v_c is the circular velocity, r_j the Jaffe radius and r the radial distance
from the centre of the Galaxy. The data imply that v_c = 220 ± 30 km s⁻¹ and
r_j = 180 ± 60 kpc [31]. Integrating over r from zero to infinity gives

M_tot = v_c² r_j/G = (2 ± 1) × 10¹² M_⊙.    (6.15)
G
This is consistent with (6.13) for a wide range of values of m 1 and h 0 . So axions
of this type could, in principle, make up all the dark matter which is required on
galactic scales.
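The central value in (6.15) is easy to reproduce numerically; constants below are standard cgs values (assumed, not from the text):

```python
# Total Milky Way mass from the Jaffe profile, M_tot = v_c^2 r_j / G (6.15).
G = 6.674e-8               # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33           # solar mass, g
KPC = 3.086e21             # kiloparsec, cm

v_c = 220e5                # circular velocity, cm/s
r_j = 180 * KPC            # Jaffe radius, cm
m_tot = v_c ** 2 * r_j / G / M_SUN
print(f"M_tot = {m_tot:.1e} M_sun")   # ~2e12, as in (6.15)
```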
Putting (6.11) into (6.9) we obtain for the mass of the axion halos:

M_h = Ω_a ρ_crit,0/n_0.    (6.16)

This could also have been derived as the mass of a region of space of comoving
volume V_0 = n_0⁻¹ filled with homogeneously-distributed axions of mean density
ρ_a = Ω_a ρ_crit,0. (This is the approach that we adopted in defining vacuum regions
in chapter 5.)
To obtain the halo luminosity, we add up the rest energies of all the decaying
axions in the halo and divide by the decay lifetime:

L_h = M_h c²/τ_a    (6.17)
    = { 9 × 10⁹ L_⊙ h_0⁻³ ζ²    (m_1 = 3)
        2 × 10¹¹ L_⊙ h_0⁻³ ζ²   (m_1 = 5)
        3 × 10¹² L_⊙ h_0⁻³ ζ²   (m_1 = 8).
6.4 Intensity
Substituting the halo comoving number density n_0 and luminosity L_h into
equation (2.20), we find for the bolometric intensity of the decaying axions:

Q = Q_a ∫₀^z_f dz / [(1 + z)² H̃(z)].    (6.19)

Here the dimensional content of the integral is contained in the prefactor Q_a, which
takes the following numerical values:

Q_a = Ω_a ρ_crit,0 c³/(H_0 τ_a) = (1.2 × 10⁻⁷ erg s⁻¹ cm⁻²) h_0⁻¹ ζ² m_1⁶    (6.20)
    = { (9 × 10⁻⁵ erg s⁻¹ cm⁻²) h_0⁻¹ ζ²   (m_1 = 3)
        (2 × 10⁻³ erg s⁻¹ cm⁻²) h_0⁻¹ ζ²   (m_1 = 5)
        (3 × 10⁻² erg s⁻¹ cm⁻²) h_0⁻¹ ζ²   (m_1 = 8).
There are three things to note about this quantity. First, it is comparable in
magnitude to the observed EBL due to galaxies, Q_* ≈ 3 × 10⁻⁴ erg s⁻¹ cm⁻²
(chapter 2). Second, unlike Q_* for galaxies or Q_v for decaying vacuum energy,
Q_a depends explicitly on the uncertainty h_0 in Hubble's constant. Physically, this
reflects the fact that the axion density ρ_a = Ω_a ρ_crit,0 in the numerator of (6.20)
comes to us from the Boltzmann equation and is independent of h_0, whereas the
density of luminous matter such as that in galaxies is inferred from its luminosity
density ℒ_0 (which is proportional to h_0, thus cancelling the h_0-dependence in
H_0). The third thing to note about Q_a is that it is independent of n_0. This is
because the collective contribution of decaying axions to the diffuse background
is determined by their mean density Ω_a and does not depend on how they are
distributed in space.
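The dimensionless integral in (6.19) is straightforward to evaluate numerically for a spatially flat model; this is a sketch of ours using the midpoint rule:

```python
# Numerical evaluation of the dimensionless integral in (6.19) for a flat
# model: I = int_0^zf dz / [(1+z)^2 sqrt(Om*(1+z)^3 + 1 - Om)].
def q_integral(omega_m, z_f=30.0, n=30000):
    total = 0.0
    dz = z_f / n
    for i in range(n):
        z = (i + 0.5) * dz                      # midpoint rule
        h_tilde = (omega_m * (1 + z) ** 3 + 1 - omega_m) ** 0.5
        total += dz / ((1 + z) ** 2 * h_tilde)
    return total

# EdS check: the integrand reduces to (1+z)^(-7/2), whose integral has the
# closed form (2/5)[1 - (1+zf)^(-5/2)] ~ 0.4.
print(f"{q_integral(1.0):.4f}")   # 0.3999
```

Low-density models give a somewhat larger integral, consistent with the statement below that a drop in Ω_m,0 lengthens the time over which axions contribute to the background.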
To evaluate (6.19) we need to specify the cosmological model. Let us
assume that the Universe is spatially flat, as is increasingly suggested by the data
(chapter 4). Hubble's parameter (2.40) then reduces to

H̃(z) = [Ω_m,0 (1 + z)³ + 1 − Ω_m,0]^(1/2)    (6.21)

where Ω_m,0 = Ω_a + Ω_bar. Putting this into (6.19) along with (6.20) for Q_a, we
obtain the plots of Q(m_1) shown in figure 6.2 for ζ = 1. The three bold lines in
this plot show the range of intensities obtained by varying h_0 and Ω_bar h_0² within
the ranges 0.6 ≤ h_0 ≤ 0.9 and 0.011 ≤ Ω_bar h_0² ≤ 0.021 respectively. We have
set z_f = 30, since axions were presumably decaying long before they became
bound to galaxies. (Results are insensitive to this choice, rising by less than 2%
as z_f → 1000 and dropping by less than 1% for z_f = 6.) The axion-decay
background is faintest for the largest values of h_0, as expected from the fact that
Q_a ∝ h_0⁻¹. This is partly offset, however, by the fact that larger values of h_0 also
lead to a drop in Ω_m,0, extending the age of the Universe and hence the length
[Figure 6.2: Q (erg s⁻¹ cm⁻²) versus m_a c² (eV), with bold curves for (Ω_bar h_0², h_0) = (0.011, 0.6), (0.016, 0.75) and (0.021, 0.9), an EdS curve, and the level Q_*.]

Figure 6.2. The bolometric intensity Q of the background radiation from axions as a
function of their rest mass m_a. The faint dash-dotted line shows the equivalent intensity
in an EdS model (in which axions alone cannot provide the required CDM). The dotted
horizontal line indicates the approximate bolometric intensity (Q_*) of the observed EBL.
of time over which axions have been contributing to the background. (Smaller
values of Ω_bar raise the intensity slightly for the same reason.) Overall, figure 6.2
confirms that axions with ζ = 1 and m_a c² ≳ 3.5 eV produce a background
brighter than that from the galaxies themselves.
Since 2 ≲ m_1 ≲ 10, the value of this parameter tells us that we will be most
interested in the infrared and optical bands (roughly 4000–40 000 Å). We can
For the standard deviation of the curve, we can use the velocity dispersion
v_c of the bound axions [32]. This is 220 km s⁻¹ for the Milky Way, giving
σ_λ ≈ (40 Å)/m_1, where we have used σ_λ = 2(v_c/c)λ_a (section 3.4). For axions
bound in galaxy clusters, v_c rises to as much as 1300 km s⁻¹ [17], implying that
σ_λ ≈ (220 Å)/m_1. For convenience we parametrize σ_λ in terms of a dimensionless
quantity σ_50 ≡ σ_λ/(50 Å/m_1) so that
With the SED F(λ) thus specified along with Hubble's parameter (6.21), the
spectral intensity of the background radiation produced by axion decays is given
by (3.6) as

I_λ(λ_0) = I_a ∫₀^z_f exp{−(1/2)[(λ_0/(1 + z) − λ_a)/σ_λ]²} / {(1 + z)³ [Ω_m,0(1 + z)³ + 1 − Ω_m,0]^(1/2)} dz.    (6.25)
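An integral of this shape is easy to sketch numerically. The values of λ_a, σ_λ and Ω_m,0 below are placeholders of ours, chosen only to illustrate the behaviour:

```python
import math

# Illustrative (not canonical) evaluation of the spectral integral (6.25),
# in units of the prefactor I_a. lambda_a and sigma are in Angstroms.
def i_lambda(lam0, lam_a=3100.0, sigma=15.0, omega_m=0.3, z_f=30.0, n=20000):
    total = 0.0
    dz = z_f / n
    for i in range(n):
        z = (i + 0.5) * dz                      # midpoint rule
        h_tilde = (omega_m * (1 + z) ** 3 + 1 - omega_m) ** 0.5
        gauss = math.exp(-0.5 * ((lam0 / (1 + z) - lam_a) / sigma) ** 2)
        total += gauss * dz / ((1 + z) ** 3 * h_tilde)
    return total

# Decay photons are only redshifted, never blueshifted, so essentially
# nothing appears blueward of lambda_a:
print(i_lambda(2000.0) < i_lambda(4000.0))   # True
```

The sharp blue cutoff at λ_a and the long redshifted tail are exactly the features visible in figure 6.3.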
[Figure 6.3: I_λ (CUs) versus λ_0 (Å) for m_a c² = 3, 5 and 8 eV, with observational data labelled LW76, SS78, D79, T83, J84, BK86, T88, M90, B98, H98, WR00 and C01.]
Figure 6.3. The spectral intensity I_λ of the background radiation from decaying axions
as a function of observed wavelength λ_0. The four curves for each value of m_a (labelled)
correspond to upper, median and lower limits on h_0 and Ω_bar, together with the equivalent
intensity for the EdS model (as in figure 6.2). Also shown are observational upper limits
(full symbols and bold lines) and reported detections (empty symbols) over this waveband,
including HST/Las Campanas telescope observations (B98 [41]) and the DIRBE instrument
aboard the COBE satellite (H98 [42], WR00 [43], C01 [44]).
Figure 6.3 shows that 8 eV axions with ζ = 1 would produce a hundred
times more background light at ∼3000 Å than is actually seen. The background
from 5 eV axions would exceed observed levels by a factor of ten at ∼5000 Å.
(This, we can note in passing, would colour the night sky green.) Only axions
with m_a c² ≲ 3 eV are compatible with observation if ζ = 1. These results
are significantly higher than those obtained assuming an EdS cosmology [17, 45],
especially at wavelengths longward of the peak. This simply reflects the fact
that the background in a low-Ω_m,0, high-Ω_Λ,0 Universe like that considered here
receives many more contributions from sources at high redshift.
To obtain more detailed constraints, we can instruct a computer to evaluate
the integral (6.25) at more finely-spaced intervals in m_a. Since I_λ ∝ ζ², the
value of ζ required to reduce the minimum predicted axion intensity I_th(m_a, λ_0)
below a given observational upper limit I_obs(λ_0) in figure 6.3 is

ζ(m_a, λ_0) ≤ [I_obs(λ_0)/I_th(m_a, λ_0)]^(1/2).    (6.27)
[Figure 6.4: upper limit on m_a c² (eV) versus ζ, for Ω_m,0 = Ω_a + Ω_bar (axions plus baryons only) and for Ω_m,0 = 1 (EdS).]
Figure 6.4. The upper limits on the value of m a c2 as a function of the coupling strength ζ
(or vice versa). These are derived by requiring that the minimum predicted axion intensity
(as plotted in figure 6.3) be less than or equal to the observational upper limits on the
intensity of the EBL.
The upper limit on ζ (for each value of m a ) is then the smallest such value of
ζ (m a , λ0 ); i.e. that which brings Ith (m a , λ0 ) down to Iobs (λ0 ) or less for each
wavelength λ0 . From this procedure we obtain a function which can be regarded
as an upper limit on the axion rest mass m a as a function of ζ (or vice versa).
Results are plotted in figure 6.4 (bold full line). This curve tells us that even in
models where the axion–photon coupling is strongly suppressed and ζ = 0.07,
the axion cannot be more massive than
As expected, these limits are stronger than those which would be obtained in an
EdS model (faint dotted line in figure 6.4). This is a small effect, however, because
the strongest constraints tend to come from the region near the peak wavelength
(λ_a), whereas the difference between matter- and vacuum-dominated models is
most pronounced at wavelengths longward of the peak, where the majority of the
radiation originates at high redshift. Figure 6.4 shows that cosmology has the
most effect over the range 0.1 ≲ ζ ≲ 0.4, where upper limits on m_a c² are weakened
by about 10% in the EdS model relative to one in which all the CDM is provided
by axions of the kind we have discussed here.
Combining equations (6.6) and (6.29) we conclude that axions in the
simplest models are confined to a slender range of viable rest masses:
References
[1] Peccei R and Quinn H 1977 Phys. Rev. Lett. 38 1440
[2] Weinberg S 1978 Phys. Rev. Lett. 40 223
[3] Wilczek F 1978 Phys. Rev. Lett. 40 279
[4] Kolb E W and Turner M S 1990 The Early Universe (Reading, MA: Addison-Wesley)
pp 401–27
[5] Preskill J, Wise M and Wilczek F 1983 Phys. Lett. B 120 127
[6] Abbott L and Sikivie P 1983 Phys. Lett. B 120 133
[7] Dine M and Fischler W 1983 Phys. Lett. B 120 137
[8] Davis R 1986 Phys. Lett. B 180 255
[9] Sikivie P 2000 Beyond the Desert 1999 ed H V Klapdor-Kleingrothaus and
I V Krivosheina (Oxford: Institute of Physics Press) p 547
[10] Battye R A and Shellard E P S 2000 Beyond the Desert 1999 ed H V Klapdor-
Kleingrothaus and I V Krivosheina (Oxford: Institute of Physics Press) p 565
[11] Janka H-T et al 1996 Phys. Rev. Lett. 76 2621
[12] Keil W et al 1997 Phys. Rev. D 56 2419
[13] Sikivie P 1983 Phys. Rev. Lett. 51 1415
[14] Yamamoto K et al 2000 Dark Matter in Astro- and Particle Physics ed H V Klapdor-
Kleingrothaus (Heidelberg: Springer) p 638
[15] Asztalos S J and Kinion D 2000 Dark Matter in Astro- and Particle Physics ed
H V Klapdor-Kleingrothaus (Heidelberg: Springer) p 630
[16] Csáki C, Kaloper N and Terning J 2002 Phys. Rev. Lett. 88 161302
[17] Ressell M T 1991 Phys. Rev. D 44 3001
[18] Kaplan D B 1985 Nucl. Phys. B 260 215
[19] Turner M S 1987 Phys. Rev. Lett. 59 2489
[20] Turner M S 1988 Phys. Rev. Lett. 60 1797
[21] Kim J-E 1987 Phys. Rep. 150 1
[22] Kim J E 1979 Phys. Rev. Lett. 43 103
[23] Shifman M A, Vainshtein A I and Zakharov V I 1980 Nucl. Phys. B 166 493
[24] Zhitnitsky A R 1980 Sov. J. Nucl. Phys. 31 260
[25] Dine M, Fischler W and Srednicki M 1981 Phys. Lett. B 104 199
[26] Raffelt G and Weiss A 1995 Phys. Rev. D 51 1495
[27] Raffelt G G and Dearborn D S P 1987 Phys. Rev. D 36 2211
[28] Engel J, Seckel D and Hayes A C 1990 Phys. Rev. Lett. 65 960
[29] Peebles P J E 1993 Principles of Physical Cosmology (Princeton, NJ: Princeton
University Press) p 122
[30] Jaffe W 1983 Mon. Not. R. Astron. Soc. 202 995
[31] Kochanek C S 1996 Astrophys. J. 457 228
[32] Kephart T W and Weiler T J 1987 Phys. Rev. Lett. 58 171
[33] Lillie C F and Witt A N 1976 Astrophys. J. 208 64
[34] Spinrad H and Stone R P S 1978 Astrophys. J. 226 609
[35] Dube R R, Wickes W C and Wilkinson D T 1979 Astrophys. J. 232 333
[36] Boughn S P and Kuhn J R 1986 Astrophys. J. 309 33
[37] Toller G N 1983 Astrophys. J. 266 L79
[38] Jakobsen P et al 1984 Astron. Astrophys. 139 481
[39] Tennyson P D et al 1988 Astrophys. J. 330 435
[40] Murthy J et al 1990 Astron. Astrophys. 231 187
[41] Bernstein R A 1999 The Low Surface Brightness Universe (Astronomical Society of
the Pacific Conference Series, Volume 170) ed J I Davies, C Impey and S Phillipps
(San Francisco, CA: ASP) p 341
[42] Hauser M G et al 1998 Astrophys. J. 508 25
[43] Wright E L and Reese E D 2000 Astrophys. J. 545 43
Chapter 7

Neutrinos
The τ neutrino decays into a µ neutrino plus a photon (figure 7.1). Assuming that
m_ντ ≫ m_νµ, conservation of energy and momentum require this photon to have
an energy E_γ = (1/2) m_ντ c² = 14.4 ± 0.5 eV.
The concreteness of this proposal has made it eminently testable. Some of
the strongest bounds come from searches for the line emission near 14 eV that
would be expected from concentrations of decaying dark matter in clusters of
galaxies. No such signal has been seen in the direction of the galaxy cluster
surrounding the quasar 3C 263 [11] or in the direction of the rich cluster Abell 665
which was observed using the Hopkins Ultraviolet Telescope in 1991 [12]. It
may be, however, that absorption plays a stronger role than expected along the
lines of sight to these clusters or that most of their dark matter is in another
form [4, 13]. A potentially more robust test of the decaying-neutrino hypothesis
comes from the diffuse background light. This has been looked at in a number of
studies [14–20]. The task is a challenging one for several reasons. Photons whose
energies lie near 14 eV are strongly absorbed by both dust and neutral hydrogen,
and the distribution of these quantities in intergalactic space is uncertain. It is also
notoriously difficult, perhaps more so in this part of the spectrum than any other,
to distinguish between those parts of the background which are truly extragalactic
and those which are due to a complex mixture of competing foreground signals
[21, 22].
Here we reconsider the problem with the help of the formalism developed
in preceding chapters, adapting it to allow for absorption by dust and neutral
hydrogen. Despite all the uncertainties, we will come to a firm conclusion:
existing observations are sufficient to rule out the decaying-neutrino hypothesis,
unless either the rest mass or the decay lifetime of the neutrino lies outside the
bounds specified by the theory.
$$\lambda_\nu = \frac{hc}{E_\gamma} = 860 \pm 30\ \text{Å}. \qquad (7.2)$$
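As a quick numerical check of equation (7.2), the decay wavelength follows from the conversion constant hc ≈ 12398 eV Å, a standard value not quoted in the text:

```python
# Quick check of equation (7.2).  The conversion constant h*c = 12398.4 eV*Angstrom
# is a standard value, not quoted in the text.
HC_EV_ANGSTROM = 12398.4

def decay_wavelength(E_gamma_eV):
    """Wavelength in Angstroms of a photon of energy E_gamma (in eV)."""
    return HC_EV_ANGSTROM / E_gamma_eV

lam    = decay_wavelength(14.4)         # central value
lam_lo = decay_wavelength(14.4 + 0.5)   # higher energy -> shorter wavelength
lam_hi = decay_wavelength(14.4 - 0.5)
print(f"lambda_nu = {lam:.0f} A (range {lam_lo:.0f}-{lam_hi:.0f} A)")
# -> lambda_nu = 861 A (range 832-892 A), consistent with 860 +/- 30 A
```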
This lies in the extreme ultraviolet (EUV) portion of the spectrum, although
the redshifted tail of the observed photon spectrum will stretch across the far
ultraviolet (FUV) and near ultraviolet (NUV) bands as well. [For definiteness,
7.2 Bound neutrinos
Figure 7.1. Feynman diagrams corresponding to the decay of a massive neutrino (ν1 ) into
a second, lighter neutrino species (ν2 ) together with a photon (γ ). The process is mediated
by charged leptons (ℓ) and the W boson (W).
where L h , the halo luminosity, has yet to be determined. For the standard
deviation σλ we can follow the same procedure as in the preceding chapter and
use the velocity dispersion of the bound neutrinos, giving σλ = 2λν vc /c. Let us
parametrize this for convenience using the range of uncertainty given by (7.2) for
the value of λν , so that σ30 ≡ σλ /(30 Å).
The halo luminosity is just the ratio of the number of decaying neutrinos
(Nτ ) to their decay lifetime (τν ), multiplied by the energy of each decay photon
(E γ ). Because the latter is just above the hydrogen-ionizing energy of 13.6 eV,
we also need to multiply the result by an efficiency factor ε (between zero and
one), to reflect the fact that some of the decay photons are absorbed by neutral
hydrogen in their host galaxy before they can leave the halo and contribute to its
luminosity. Altogether, then:
$$L_h = \frac{\epsilon N_\tau E_\gamma}{\tau_\nu} = \frac{\epsilon M_h c^2}{2\tau_\nu}. \qquad (7.4)$$
Here we have expressed $N_\tau = M_h/m_{\nu_\tau}$ as the number of neutrinos of rest mass
$m_{\nu_\tau} = 2E_\gamma/c^2$ in a halo of mass $M_h$.
To calculate the mass of the halo, let us follow reasoning similar to that
adopted for axion halos in section 6.3 and assume that the ratio of baryonic-to-
total mass in the halo is comparable to the ratio of baryonic-to-total matter density
in the Universe at large:
Here we have made the economical assumption that there are no other
contributions to the matter density, apart from those of baryons and massive
neutrinos. It follows from equation (7.5) that
$$M_h = M_{\rm tot}\left(1 + \frac{\Omega_{\rm bar}}{\Omega_\nu}\right)^{-1}. \qquad (7.6)$$
We take $M_{\rm tot} = (2 \pm 1) \times 10^{12}\, M_\odot$ following equation (6.15). For $\Omega_{\rm bar}$ we use the
value $(0.016 \pm 0.005)h_0^{-2}$ quoted in section 4.3. And to calculate $\Omega_\nu$ we put the
neutrino rest mass $m_{\nu_\tau}$ into equation (4.7), giving
$$\Omega_\nu = (0.31 \pm 0.01)h_0^{-2}. \qquad (7.7)$$
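The arithmetic of equations (7.6) and (7.7) can be sketched as follows; the $h_0^2$ factors in the density parameters cancel in the ratio, so no value of $h_0$ is needed:

```python
# The arithmetic of equations (7.6)-(7.7).  The h0^2 factors in the density
# parameters cancel in the ratio, so no value of h0 is needed.
Omega_bar_h2 = 0.016     # Omega_bar * h0^2, section 4.3
Omega_nu_h2  = 0.31      # Omega_nu * h0^2, equation (7.7)
M_tot = 2.0e12           # total halo mass in solar masses, equation (6.15)

M_h = M_tot / (1.0 + Omega_bar_h2 / Omega_nu_h2)    # equation (7.6)
print(f"M_h = {M_h:.2e} M_sun")    # -> 1.90e+12: the halo is ~95% neutrinos
```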
As pointed out by Sciama [9], massive neutrinos are thus consistent with a critical-
density EdS Universe ($\Omega_{m,0} = 1$) if
This is just below the range of values which many workers now consider
observationally viable for Hubble’s constant (section 4.3). But it is a striking fact
that the same neutrino rest mass which resolves several unrelated astrophysical
problems also implies a reasonable expansion rate in the simplest cosmological
model. In the interests of testing the decaying-neutrino hypothesis in a
self-consistent way, we will follow Sciama in adopting the narrow range of
values (7.10) for this chapter only.
7.3 Luminosity
To evaluate the halo luminosity (7.4), it remains to find the fraction ε of decay
photons which escape from the halo. A derivation of ε is given in appendix C,
based on the hydrogen photoionization cross section with respect to 14 eV
photons, and taking into account the distribution of decaying neutrinos relative
to that of neutral hydrogen in the galactic disc. Here we show that the results can
with
$$n_\nu(r,\theta) \propto \left[1 + (r/r_0)^2 \sin^2\theta + (r/h)^2 \cos^2\theta\right]^{-2}$$
where
$$M_\nu \equiv 16\pi n\, m_{\nu_\tau} r_0^3 = 9.8 \times 10^{11}\, M_\odot$$
$$x_{\rm max}(r_h, \theta) = \frac{r_h/r_0}{\sqrt{\sin^2\theta + (r_0/h)^2 \cos^2\theta}}.$$
For x > x_max , we assume that the halo density drops off exponentially and can
be ignored. Using (7.12) it can be shown that halos whose masses Mh are given
by (7.8) have scale radii rh = (70 ± 25) kpc. This is consistent with evidence
from the motion of galactic satellites [25].
We now put ourselves in the position of a decay photon released at cylindrical
coordinates (yν , z ν ) inside the halo, as shown in figure 7.2(a). It may be seen that
the disc of the Galaxy presents an approximately elliptical figure with maximum
angular width 2α and angular height 2β, where
$$\alpha = \tan^{-1}\sqrt{\frac{r_d^2 - d^2}{(y_\nu - d)^2 + z_\nu^2}} \qquad
\beta = \frac{1}{2}\left[\tan^{-1}\!\left(\frac{y_\nu + r_d}{z_\nu}\right) - \tan^{-1}\!\left(\frac{y_\nu - r_d}{z_\nu}\right)\right]. \qquad (7.13)$$
Figure 7.2. Left-hand side (a): absorption of decay photons inside the halo. For neutrinos
decaying at (yν , z ν ), the probability that decay photons will be absorbed inside the halo
(radius rh ) is essentially the same as the probability that they will strike the galactic disc
(radius rd , shaded). Right-hand side (b): an ellipse of semi-major axis a, semi-minor axis
b and radial arm r (φ) subtends corresponding angles of α, β and θ(φ) respectively.
Here $d = \left[(y_\nu^2 + z_\nu^2 + r_d^2) - \sqrt{(y_\nu^2 + z_\nu^2 + r_d^2)^2 - 4y_\nu^2 r_d^2}\,\right]/2y_\nu$, $y_\nu = r\sin\theta$,
$z_\nu = r\cos\theta$ and $r = r_0 x$. In spherical coordinates centred on the photon, the
solid angle subtended by an ellipse is
$$\Omega_e = \int_0^{2\pi} d\phi \int_0^{\theta(\phi)} \sin\theta\, d\theta = \int_0^{2\pi} \left[1 - \cos\theta(\phi)\right] d\phi. \qquad (7.14)$$
Here θ (φ) is the angle subtended by a radial arm of the ellipse as depicted in
figure 7.2(b). The cosine of this angle can be expressed in terms of α and β by
$$\cos\theta(\phi) = \left[1 + \frac{\tan^2\alpha\,\tan^2\beta}{\tan^2\alpha\,\sin^2\phi + \tan^2\beta\,\cos^2\phi}\right]^{-1/2}. \qquad (7.15)$$
The single-point probability that a photon released at (x, θ ) will escape from the
halo is then
$$\epsilon_e = 1 - \frac{\Omega_e(\alpha, \beta)}{4\pi}. \qquad (7.16)$$
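Equations (7.14)-(7.16) are easy to check numerically. A minimal sketch, using the fact that for α = β the ellipse degenerates to a circle whose solid angle is known in closed form:

```python
import math

def cos_theta(phi, alpha, beta):
    """Equation (7.15): cosine of the angle subtended by a radial arm."""
    t2a, t2b = math.tan(alpha) ** 2, math.tan(beta) ** 2
    return (1.0 + t2a * t2b / (t2a * math.sin(phi) ** 2 + t2b * math.cos(phi) ** 2)) ** -0.5

def omega_ellipse(alpha, beta, n=2000):
    """Equation (7.14): solid angle of the ellipse, by midpoint-rule integration."""
    dphi = 2.0 * math.pi / n
    return sum((1.0 - cos_theta((i + 0.5) * dphi, alpha, beta)) for i in range(n)) * dphi

def escape_probability(alpha, beta):
    """Equation (7.16): single-point escape probability."""
    return 1.0 - omega_ellipse(alpha, beta) / (4.0 * math.pi)

# Sanity check: for alpha = beta the ellipse is a circle of angular radius
# alpha, whose solid angle is exactly 2*pi*(1 - cos(alpha)).
a = math.radians(30.0)
print(omega_ellipse(a, a), 2.0 * math.pi * (1.0 - math.cos(a)))
print(escape_probability(a, a))    # -> ~0.933
```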
For a given halo size r_h and disc size r_d , we obtain a good approximation to ε
by averaging ε_e over all locations (x, θ) in the halo and weighting by the
neutrino number density n_ν . Trial and error shows that, for the range of halo
sizes considered here, a disc radius of r_d = 36 kpc leads to ε values which are
within 1% of those found in appendix C. The latter read:
$$\epsilon = \left\{ \begin{array}{ll} 0.63 & (r_h = 45\ \mathrm{kpc}) \\ 0.77 & (r_h = 70\ \mathrm{kpc}) \\ 0.84 & (r_h = 95\ \mathrm{kpc}). \end{array} \right. \qquad (7.17)$$
As expected, the escape fraction of decay photons goes up as the scale size of the
halo increases relative to that of the disc. For r_h ≫ r_d one gets ε → 1, while a
small halo with r_h ≲ r_d leads to ε ≈ 0.5.
With the decay lifetime τ_ν , halo mass M_h and efficiency factor ε all known,
equation (7.4) gives for the luminosity of the halo:
$$L_h = (6.5 \times 10^{42}\ \text{erg s}^{-1})\, f_h f_\tau^{-1}. \qquad (7.18)$$
Here we have introduced two dimensionless constants f h and f τ in order to
parametrize the uncertainties in Mh and τν . For the ranges of values given earlier,
these take the values f h = 1.0 ± 0.6 and f τ = 1.0 ± 0.5 respectively. Setting
$f_h = f_\tau = 1$ gives a halo luminosity of about $2 \times 10^9\, L_\odot$, or less than 5% of the
optical luminosity of the Milky Way ($L_0 = 2 \times 10^{10} h_0^{-2}\, L_\odot$), with $h_0$ as specified
by equation (7.10).
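A rough check of equations (7.4) and (7.18) can be sketched as follows. The decay lifetime τ_ν ≈ 2 × 10²³ s used here is Sciama's nominal value; it is not quoted in this excerpt and is an assumption of the sketch, as are the standard values of the physical constants:

```python
# Rough check of equations (7.4) and (7.18), using the escape fraction
# epsilon = 0.77 appropriate to r_h = 70 kpc (equation 7.17).  The decay
# lifetime tau_nu ~ 2e23 s is Sciama's nominal value; it is not quoted in
# this excerpt and is an ASSUMPTION of this sketch.
M_SUN_G   = 1.989e33     # solar mass in grams (standard value)
L_SUN_ERG = 3.85e33      # solar luminosity in erg/s (standard value)
C_CM_S    = 2.998e10     # speed of light in cm/s

eps    = 0.77                 # escape fraction, equation (7.17)
M_h    = 1.9e12 * M_SUN_G     # halo mass from equation (7.6), in grams
tau_nu = 2.0e23               # ASSUMED decay lifetime, in seconds

L_h = eps * M_h * C_CM_S**2 / (2.0 * tau_nu)   # equation (7.4)
print(f"L_h = {L_h:.1e} erg/s = {L_h / L_SUN_ERG:.1e} L_sun")
# -> L_h = 6.5e+42 erg/s = 1.7e+09 L_sun, matching (7.18)
```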
The combined bolometric intensity of decay photons from all the bound
neutrinos out to a redshift z f is given by (2.21) as usual:
$$Q_{\rm bound} = Q_h \int_0^{z_f} \frac{dz}{(1+z)^2 \tilde{H}(z)} \qquad (7.19)$$
where
$$Q_h \equiv \frac{c n_0 L_h}{H_0} = (2.0 \times 10^{-5}\ \text{erg s}^{-1}\,\text{cm}^{-2})\, h_0^2 f_h f_\tau^{-1}.$$
The h 0 -dependence in this quantity comes from the fact that we have so far
considered only neutrinos in galaxy halos, whose number density $n_0$ goes as
$h_0^3$. Since we follow Sciama in adopting the EdS cosmology in this chapter, the
Hubble expansion rate (2.40) is
$$\tilde{H}(z) = (1+z)^{3/2}. \qquad (7.20)$$
Putting this into (7.19), we find
$$Q_{\rm bound} \approx \tfrac{2}{5} Q_h = (8.2 \times 10^{-6}\ \text{erg s}^{-1}\,\text{cm}^{-2})\, h_0^2 f_h f_\tau^{-1}. \qquad (7.21)$$
(The approximation is good to better than 1% if $z_f \gtrsim 8$.) Here we have neglected
absorption between the galaxies, an issue we will return to shortly. Despite
their size, dark-matter halos in the decaying-neutrino hypothesis are not very
bright. Their combined intensity is about 1% of that of the EBL due to galaxies,
Q ∗ ≈ 3 × 10−4 erg s−1 cm−2 (section 2.3). The main reason for this is the
long neutrino decay lifetime, five orders of magnitude longer than the age of the
galaxies. The ultraviolet decay photons have not had time to contribute as much
to the EBL as their counterparts in the optical part of the spectrum.
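The redshift integral in (7.19) with the EdS expansion rate (7.20) can be evaluated directly; a minimal sketch:

```python
# The redshift integral in (7.19) for the EdS expansion rate (7.20), where
# (1+z)^2 * H(z) = (1+z)^(7/2); its z_f -> infinity limit is exactly 2/5.
def q_integral(z_f, n=100000):
    dz = z_f / n
    return sum(dz / (1.0 + (i + 0.5) * dz) ** 3.5 for i in range(n))

Q_h = 2.0e-5    # erg s^-1 cm^-2 (times h0^2 f_h f_tau^-1), from below (7.19)
print(f"{q_integral(8.0) / 0.4:.3f}")    # -> 0.996: within 1% of the limit by z_f = 8
print(f"{0.4 * Q_h:.1e}")                # -> 8.0e-06, as in equation (7.21)
```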
amounts to less than 6% of the total neutrino density, equation (7.7). Therefore,
as expected for hot dark-matter particles, the bulk of the EBL contributions in
the decaying-neutrino scenario come from neutrinos which are distributed on
larger scales. We will refer to these collectively as free-streaming neutrinos,
though some of them may actually be associated with larger scale systems such
as clusters of galaxies. (The distinction is not critical for our purposes, since
we are concerned with the summed contributions of all such neutrinos to the
diffuse background.) Their cosmological density is found using (7.7) as $\Omega_{\nu,\rm free} =
\Omega_\nu - \Omega_{\nu,\rm bound} = 0.30 h_0^{-2} f_f$, where the dimensionless constant $f_f$ parametrizes
the uncertainties in this quantity and takes the value $f_f = 1.00 \pm 0.05$.
To identify ‘sources’ of radiation in this section we follow the same
procedure as with vacuum regions (section 5.4) and axions (section 6.3) and
divide up the Universe into regions of comoving volume $V_0 = n_0^{-1}$. The mass
of each region is
$$M_f = \Omega_{\nu,\rm free}\, \rho_{\rm crit,0} V_0 = \Omega_{\nu,\rm free}\, \rho_{\rm crit,0}/n_0. \qquad (7.22)$$
The luminosity of these sources has the same form as equation (7.4) except that
we put M_h → M_f and drop the efficiency factor ε , since the density of intergalactic
hydrogen is too low to absorb a significant fraction of the decay photons within
each region. Thus,
$$L_f = \frac{\Omega_{\nu,\rm free}\, \rho_{\rm crit,0}\, c^2}{2 n_0 \tau_\nu}. \qquad (7.23)$$
With the values already mentioned for $\Omega_{\nu,\rm free}$ and $\tau_\nu$, and with $\rho_{\rm crit,0}$ and $n_0$ given
by (2.36) and (6.12) respectively, equation (7.23) implies a comoving luminosity
density due to free-streaming neutrinos of
$$\mathcal{L}_f = n_0 L_f = (1.2 \times 10^{-32}\ \text{erg s}^{-1}\,\text{cm}^{-3})\, f_f f_\tau^{-1}. \qquad (7.24)$$
This is high: $0.5 h_0^{-1}$ times the luminosity density of the Universe, as given by
equation (2.24). Based on this, we may anticipate strong constraints on the
decaying-neutrino hypothesis from observation. Similar conclusions follow from
the bolometric intensity due to free-streaming neutrinos. Replacing L h with L f in
(7.19), we find
$$Q_{\rm free} = \frac{2 c n_0 L_f}{5 H_0} = (1.2 \times 10^{-4}\ \text{erg s}^{-1}\,\text{cm}^{-2})\, h_0^{-1} f_f f_\tau^{-1}. \qquad (7.25)$$
This is of the same order of magnitude as $Q_*$ and goes as $h_0^{-1}$ rather than $h_0^2$.
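The luminosity density (7.24) can be verified without knowing n_0, which cancels in the product n_0 L_f. A sketch, assuming the standard critical density ρ_crit,0 = 1.88 × 10⁻²⁹ h_0² g cm⁻³ and Sciama's nominal lifetime τ_ν ≈ 2 × 10²³ s (neither value is quoted in this excerpt):

```python
# Check of equation (7.24).  The galaxy number density n_0 cancels in the
# product n_0 * L_f, so the comoving luminosity density of free-streaming
# neutrinos depends only on Omega_nu,free, rho_crit,0 and tau_nu.  The
# critical density below is the standard value and tau_nu ~ 2e23 s is
# Sciama's nominal lifetime; neither number is quoted in this excerpt.
RHO_CRIT_H2 = 1.88e-29       # rho_crit,0 / h0^2 in g/cm^3 (standard value)
C_CM_S      = 2.998e10       # speed of light in cm/s
Omega_nu_free_h2 = 0.30      # Omega_nu,free * h0^2; the h0 factors cancel
tau_nu = 2.0e23              # ASSUMED decay lifetime in seconds

lum_density = Omega_nu_free_h2 * RHO_CRIT_H2 * C_CM_S**2 / (2.0 * tau_nu)
print(f"{lum_density:.1e} erg s^-1 cm^-3")    # -> 1.3e-32, vs 1.2e-32 in (7.24)
```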
Taking into account the uncertainties in h 0 , fh , f f and f τ , the bolometric intensity
of bound and free-streaming neutrinos together is
$$Q = Q_{\rm bound} + Q_{\rm free} = (0.33 \pm 0.17)\, Q_*. \qquad (7.26)$$
In principle, then, these particles are capable of producing a background as bright
as that from the galaxies themselves, equation (2.49). The vast majority of their
light comes from free-streaming neutrinos. These are more numerous than their
halo-bound counterparts, and are not appreciably affected by absorption at source.
7.5 Intergalactic absorption
Here we have used (7.20) for H̃ (z). The numerical value of the prefactor Iν is
given with the help of (7.18) for bound neutrinos and (7.23) for free-streaming
ones as follows:
$$I_\nu = \frac{c n_0}{\sqrt{32\pi^3}\, H_0 \sigma_\lambda} \times \left\{ \begin{array}{l} L_h \\ L_f \end{array} \right. = \left\{ \begin{array}{ll} (940\ \mathrm{CUs})\, h_0^2 f_h f_\tau^{-1} \sigma_{30}^{-1} (\lambda_0/\lambda_\nu) & \mathrm{(bound)} \\ (5280\ \mathrm{CUs})\, h_0^{-1} f_f f_\tau^{-1} \sigma_{30}^{-1} (\lambda_0/\lambda_\nu) & \mathrm{(free)}. \end{array} \right. \qquad (7.28)$$
Now, however, we must take into account the fact that decay photons (from both
bound and free-streaming neutrinos) encounter significant amounts of absorbing
material as they travel through the intergalactic medium (IGM) on the way to
our detectors. The wavelength of neutrino decay photons, λν = 860 ± 30 Å, is
just shortward of the Lyman limit at 912 Å, which means that these photons are
absorbed almost as strongly as they can be by neutral hydrogen (this is, in fact, one
of the prime motivations of the theory). It is also very close to the waveband of
peak extinction by dust. The simplest way to handle both these types of absorption
is to include an opacity term τ (λ0 , z) inside the argument of the exponential, so
that (7.27) is modified to read
$$I_\lambda(\lambda_0) = I_\nu \int_0^{z_f} (1+z)^{-9/2} \exp\left\{-\frac{1}{2}\left[\frac{\lambda_0/(1+z) - \lambda_\nu}{\sigma_\lambda}\right]^2 - \tau(\lambda_0, z)\right\} dz. \qquad (7.29)$$
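The shape of this integral is easy to explore with absorption switched off (τ = 0); a minimal sketch using a simple midpoint sum:

```python
import math

# A sketch of the no-absorption line profile: equation (7.29) with tau = 0,
# evaluated by a simple midpoint sum.  Intensities are in units of I_nu.
lam_nu, sigma_lam = 860.0, 30.0      # line centre and width, in Angstroms

def intensity(lam0, z_f=8.0, n=4000):
    dz = z_f / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        arg = (lam0 / (1.0 + z) - lam_nu) / sigma_lam
        total += (1.0 + z) ** -4.5 * math.exp(-0.5 * arg * arg) * dz
    return total

lams = range(860, 2001, 10)
vals = [intensity(l) for l in lams]
peak = max(zip(vals, lams))[1]
print(peak)    # the profile peaks just longward of lam_nu, near ~900 A
```

The line is redshifted into a broad tail, so the observed profile peaks slightly longward of λν and falls off toward longer wavelengths.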
The optical depth $\tau(\lambda_0, z)$ can be broken into separate terms corresponding to
hydrogen gas and dust along the line of sight:
$$\tau(\lambda_0, z) = \tau_{\rm gas}(\lambda_0, z) + \tau_{\rm dust}(\lambda_0, z). \qquad (7.30)$$
Our best information about both of these quantities comes from high-redshift
quasar spectra. The fact that these can be seen at all already puts limits on the
degree of attenuation due to intervening matter. And the shape of the observed
spectra can give us clues about the effects of the absorbing medium in specific
wavebands.
Here ξ(λ) is the extinction of light by dust at wavelength λ relative to that in the
B-band (4400 Å). If $\tau_{\rm FP}(z) = $ constant and $\xi(\lambda) \propto \lambda^{-1}$, then $\tau_{\rm dust}$ is proportional
to $\lambda_0^{-1}[(1+z)^3 - 1]$ or $\lambda_0^{-1}[(1+z)^{2.5} - 1]$, depending on cosmology [27, 29].
In the more general treatment of Fall and Pei [34], τFP (z) is parametrized as a
function of redshift so that
where $\tau_{\rm FP}(0)$ and δ are adjustable parameters. Assuming an EdS cosmology
($\Omega_{m,0} = 1$), the observational data are consistent with lower limits of $\tau_{\rm FP}(0) =
0.005$, δ = 0.275 (model A); best-fit values of $\tau_{\rm FP}(0) = 0.016$, δ = 1.240 (model
B); or upper limits of $\tau_{\rm FP}(0) = 0.050$, δ = 2.063 (model C). We will use all three
models in what follows.
To calculate the extinction ξ(λ) in the 300–2000 Å range, we use numerical
Mie scattering routines in conjunction with various dust populations. In
performing these calculations, we tacitly assume that intergalactic and interstellar
dust are similar in nature, a reasonable assumption that is, of course, difficult
to test directly. Many workers have constructed dust-grain models that reproduce
the average extinction curve for the diffuse interstellar medium (DISM) at λ >
912 Å [35] but there have been fewer studies at shorter wavelengths. One such
study is that of Martin and Rouleau [36], who extended earlier silicate/graphite
synthetic extinction curves due to Draine and Lee [37] assuming:
Figure 7.3. The FUV extinction (relative to that in the B-band) produced by five different
dust-grain populations. Standard grains (population 1) produce the least extinction,
while PAH-like carbon nanoparticles (population 3) produce the most extinction near the
decaying neutrino line at 860 Å (vertical line).
Extinction in the vicinity of 860 Å is also weak (with a peak value of τmax ∼
1.0 × 10−21 cm2 H−1 at 770 Å), so that this model is able to ‘hide’ very little of
the light from decaying neutrinos. Insofar as it almost certainly underestimates
the true extent of extinction by dust, this grain model provides a good lower limit
on absorption in the context of the decaying-neutrino hypothesis.
The silicate component of our population 2 grain model is modified along
the lines of the ‘fluffy silicate’ model which has been suggested as a resolution of
the heavy-element abundance crisis in the DISM [41]. We replace the standard
silicates of population 1 by silicate grains with a 45% void fraction, assuming a
silicon abundance of 32.5×10−6/H [20]. We also decrease the size of the graphite
grains (a = 50–250 Å) and reduce the carbon depletion to 60% to provide a better
match to the DISM curve. This mixture provides a better match to the interstellar
data at optical wavelengths, and also shows significantly more FUV extinction
than population 1, with a peak of τmax ∼ 2.0 × 10−21 cm2 H−1 at 750 Å. Results
are shown in figure 7.3 as a short-dashed line.
For population 3, we retain the standard silicates of population 1 but
modify the graphite component as an approximation to the polycyclic aromatic
hydrocarbon (PAH) nanostructures which have recently been proposed as carriers
of the 2175 Å absorption bump [42]. PAH nanostructures are thought to consist
of stacks of molecules such as coronene (C24 H12 ), circumcoronene (C54 H18 ) and
larger species in various states of edge hydrogenation. They have been linked
to the 3.4 µm absorption feature in the DISM [43] as well as the extended
red emission in nebular environments [44]. With sizes in the range 7–30 Å,
these structures are much smaller than the canonical graphite grains. Their
dielectric functions, however, go over to that of graphite in the high-frequency
limit [42]. So as an approximation to these particles, we use spherical graphite
grains with extremely small radii (3–150 Å). This greatly increases extinction near
the neutrino-decay peak, giving τmax ∼ 3.9 × 10−21 cm2 H−1 at 730 Å. Results
are shown in figure 7.3 as a dotted line.
Our population 4 grain model, finally, combines the distinctive features of
populations 2 and 3. It has a graphite component made up of nanoparticles (as
in population 3) and a fluffy silicate component with a 45% porosity (like that
of population 2). Results (plotted as a long-dashed line in figure 7.3) are close to
those obtained with population 3. This is because extinction in the FUV waveband
is dominated by small-particle contributions, so that silicates (whatever their void
fraction) are of secondary importance. Absolute extinction rises slightly, with a
peak of τmax ∼ 4.1 × 10−21 cm2 H−1 at 720 Å. The quantity which is of most
interest to us, extinction relative to that in the B-band, drops slightly longward of
800 Å. Therefore it is the population 3 grains which provide us with the highest
value of ξ(λ0 ) near 860 Å, and hence a conservative upper limit on dust absorption
in the context of the decaying-neutrino scenario. Neither the population 3 nor the
population 4 grains fit the average DISM curve as well as those of population 2,
because the Mie scattering formalism cannot accurately reproduce the behaviour
of nanoparticles near the 2175 Å resonance. Their high levels of extinction in the
FUV region, however, suit these grain models for our purpose, which is to set the
strongest possible limits on the decaying-neutrino hypothesis.
We are now ready to specify the total optical depth (7.30) and hence to evaluate
the intensity integral (7.29). We will do this using three combinations of the
dust models just described, with a view to establishing lower and upper bounds
on the EBL intensity predicted by the theory. A minimum-absorption model is
obtained by combining Fall and Pei’s model A with the extinction curve of the
population 1 (standard) dust grains. At the other end of the spectrum, model C
of Fall and Pei together with the population 3 (nanoparticle) grains provides the
most conservative maximum-absorption model (for $\lambda_0 \gtrsim 800$ Å). Finally, as an
intermediate model, we combine model B of Fall and Pei with the ‘middle-of-the-
road’ extinction curve labelled as population 0 in figure 7.3.
The resulting predictions for the spectral intensity of the FUV background
due to decaying neutrinos are plotted in figure 7.4 (light lines) and compared
with observational limits (bold lines and points). The curves in the bottom half
of this figure refer to EBL contributions from bound neutrinos only, while those
Figure 7.4. The spectral intensity Iλ of background radiation from decaying neutrinos as
a function of observed wavelength λ0 (light curves), plotted together with observational
upper limits on EBL intensity in the far ultraviolet (points and bold curves). The bottom
four theoretical curves refer to bound neutrinos only, while the top four refer to bound
and free-streaming neutrinos together. The minimum predicted signals consistent with
the theory are combined with the highest possible extinction in the intergalactic medium
and vice versa. The faint dotted lines show the signal that would be seen (in the
maximum-intensity case) if there were no intergalactic extinction at all.
in the top half correspond to contributions from both bound and free-streaming
neutrinos together.
We begin our discussion with the bound neutrinos. The key results are the
three widely-spaced curves in the lower half of the figure, with peak intensities
of about 6, 20 and 80 CUs at λ0 ≈ 900 Å. These are obtained by letting h 0 and
f h take their minimum, nominal and maximum values respectively in (7.29), with
the reverse order applying to f τ . Simultaneously we have adopted the maximum,
intermediate and minimum-absorption models for intergalactic dust, as described
earlier. Thus the highest-intensity model is paired with the lowest possible dust
extinction and vice versa. (The faint dotted line appended to the highest-intensity
curve is included for comparison purposes and shows the effects of neglecting
absorption by dust and gas altogether.) These curves should be seen as extreme
upper and lower bounds on the theoretical intensity of EBL contributions from
decaying neutrinos in galaxy halos.
7.6 The ultraviolet background
more points from the analysis of OAO-2 satellite data by Lillie and Witt ([47];
labelled ‘LW76’ in figure 7.4) which we have already encountered in chapter 3.
Nearby is an upper limit from the Russian Prognoz satellite by Zvereva et al
([48]; ‘Z82’). Considerably stronger broadband limits have come from rocket
experiments by Paresce et al ([49]; ‘P79’), Anderson et al ([50]; ‘A79’) and
Feldman et al ([51]; ‘Fe81’), as well as an analysis of data from the Solrad-11
spacecraft by Weller ([52]; ‘We83’).
A number of studies have proceeded by establishing a correlation between
background intensity and the column density of neutral hydrogen inside the
Milky Way, and then extrapolating this out to zero column density to obtain the
presumed extragalactic component. Martin et al [53] applied this method to data
taken by the Berkeley UVX experiment, setting an upper limit of 110 CUs on
the intensity of any unidentified EBL contributions over 1400–1900 Å (‘Ma91’).
The correlation method is subject to uncertainties involving the true extent of
scattering by dust, as well as absorption by ionized and molecular hydrogen at
high galactic latitudes. Henry [22] and Henry and Murthy [55] approach these
issues differently and raise the upper limit on background intensity to 400 CUs
over 1216–3200 Å. A good indication of the complexity of the problem is found
at 1500 Å, where Fix et al [56] used data from the DE-1 satellite to identify an
isotropic background flux of 530 ± 80 CUs (‘Fi89’), the highest value reported
so far. The same data were subsequently reanalysed by Wright [54] who found a
much lower best-fit value of 45 CUs, with a conservative upper limit of 500 CUs
(‘Wr92’). The former would rule out the decaying-neutrino hypothesis, while the
latter does not constrain it at all. A third treatment of the same data has led to an
intermediate result of 300 ± 80 CUs ([57]; ‘WP94’).
Limits on the FUV background shortward of Lyα have been even more
controversial. Several studies have been based on data from the Voyager 2
ultraviolet spectrograph, beginning with that of Holberg [58], who obtained limits
between 100 and 200 CUs over 500–1100 Å (labelled ‘H86’ in figure 7.4). A
reanalysis of the data over 912–1100 Å by Murthy et al [59] led to similar
numbers. In a subsequent reanalysis, however, Murthy et al [60] tightened this
bound to 30 CUs over the same waveband (‘Mu99’). The statistical validity of
these results has been vigorously debated [61, 62], with a second group asserting
that the original data do not justify a limit smaller than 570 CUs (‘E00’). Of
these Voyager-based limits, the strongest (‘Mu99’) is wholly incompatible with
the decaying-neutrino hypothesis, while the weakest (‘E00’) puts only a modest
constraint on the theory. Two new experiments have yielded results midway
between these extremes: the DUVE orbital spectrometer [63] and the EURD
spectrograph aboard the Spanish MINISAT 01 [64]. Upper limits on continuum
emission from the former instrument are 310 CUs over 980–1020 Å and 440 CUs
over 1030–1060 Å (‘K98’), while the latter has produced upper bounds of
280 CUs at 920 Å and 450 CUs at 1000 Å (‘E01’).
What do these observational data imply for the decaying-neutrino
hypothesis? Longward of Lyα, figure 7.4 shows that they span very nearly the
wavelengths in the range 900–1200 Å, predicted intensities are either comparable
to or higher than those actually seen. Thus, while there is now good experimental
evidence that some of the dark matter is provided by massive neutrinos, the light
of the night sky tells us that these particles cannot have the rest masses and decay
lifetimes attributed to them in the decaying-neutrino hypothesis.
References
[1] Cowsik R 1977 Phys. Rev. Lett. 39 784
[2] de Rujula A and Glashow S L 1980 Phys. Rev. Lett. 45 942
[3] Pal P B and Wolfenstein L 1982 Phys. Rev. D 25 766
[4] Bowyer S et al 1995 Phys. Rev. D 52 3214
[5] Melott A L and Sciama D W 1981 Phys. Rev. Lett. 46 1369
[6] Sciama D W and Melott A L 1982 Phys. Rev. D 25 2214
[7] Melott A L, McKay D W and Ralston J P 1988 Astrophys. J. 324 L43
[8] Sciama D W 1990 Astrophys. J. 364 549
[9] Sciama D W 1993 Modern Cosmology and the Dark Matter Problem (Cambridge:
Cambridge University Press)
[10] Sciama D W 1997 Mon. Not. R. Astron. Soc. 289 945
[11] Fabian A, Naylor T and Sciama D 1991 Mon. Not. R. Astron. Soc. 249 21
[12] Davidsen A F et al 1991 Nature 351 128
[13] Sciama D W 1993 Pub. Astron. Soc. Pac. 105 102
[14] Stecker F W 1980 Phys. Rev. Lett. 45 1460
[15] Kimble R, Bowyer S and Jakobsen P 1981 Phys. Rev. Lett. 46 80
[16] Sciama D W 1991 The Early Observable Universe from Diffuse Backgrounds ed
B Rocca-Volmerange, J M Deharveng and J Van Tran Thanh (Paris: Edition
Frontières) p 127
[17] Overduin J M, Wesson P S and Bowyer S 1993 Astrophys. J. 404 460
[18] Dodelson S and Jubas J M 1994 Mon. Not. R. Astron. Soc. 266 886
[19] Overduin J M and Wesson P S 1997 Astrophys. J. 483 77
[20] Overduin J M, Seahra S S, Duley W W and Wesson P S 1999 Astron. Astrophys. 349
317
[21] Bowyer S 1991 Ann. Rev. Astron. Astrophys. 29 59
[22] Henry R C 1991 Ann. Rev. Astron. Astrophys. 29 89
[23] Salucci P and Sciama D W 1990 Mon. Not. R. Astron. Soc. 244 9P
[24] Gates E I, Gyuk G and Turner M S 1995 Astrophys. J. 449 L123
[25] Kochanek C S 1996 Astrophys. J. 457 228
[26] Zuo L and Phinney E S 1993 Astrophys. J. 418 28
[27] Ostriker J P and Cowie L L 1981 Astrophys. J. 243 L127
[28] Wright E L 1981 Astrophys. J. 250 1
[29] Ostriker J P and Heisler J 1984 Astrophys. J. 278 1
[30] Wright E L 1986 Astrophys. J. 311 156
[31] Wright E L and Malkan M A 1987 Bull. Am. Astron. Soc. 19 699
[32] Heisler J and Ostriker J P 1988 Astrophys. J. 332 543
[33] Wright E L 1990 Astrophys. J. 353 413
[34] Fall S M and Pei Y C 1993 Astrophys. J. 402 479
[35] Mathis J S 1990 Ann. Rev. Astron. Astrophys. 28 37
[36] Martin P G and Rouleau F 1991 Extreme Ultraviolet Astronomy ed R F Malina and
S Bowyer (New York: Pergamon) p 341
[37] Draine B T and Lee H M 1984 Astrophys. J. 285 89
[38] Meyer J-P 1979 Les Elements et leurs Isotopes dans l'Univers (Université de Liège)
p 153
[39] Snow T P and Witt A N 1996 Astrophys. J. 468 L65
[40] Draine B T 1995 http://www.astro.princeton.edu/∼draine/dust/
[41] Mathis J S 1996 Astrophys. J. 472 643
[42] Duley W W and Seahra S 1998 Astrophys. J. 507 874
[43] Duley W W and Seahra S S 1999 Astrophys. J. 522 L129
[44] Seahra S S and Duley W W 1999 Astrophys. J. 520 719
[45] Martin C and Bowyer S 1989 Astrophys. J. 338 677
[46] Sasseen T P, Lampton M, Bowyer S and Wu X 1995 Astrophys. J. 447 630
[47] Lillie C F and Witt A N 1976 Astrophys. J. 208 64
[48] Zvereva A M et al 1982 Astron. Astrophys. 116 312
[49] Paresce F, Margon B, Bowyer S and Lampton M 1979 Astrophys. J. 230 304
[50] Anderson R C et al 1979 Astrophys. J. 234 415
[51] Feldman P D, Brune W H and Henry R C 1981 Astrophys. J. 249 L51
[52] Weller C S 1983 Astrophys. J. 268 899
[53] Martin C, Hurwitz M and Bowyer S 1991 Astrophys. J. 379 549
[54] Wright E L 1992 Astrophys. J. 391 34
[55] Henry R C and Murthy J 1993 Astrophys. J. 418 L17
[56] Fix J D, Craven J D and Frank L A 1989 Astrophys. J. 345 203
[57] Witt A N and Petersohn J K 1994 The First Symposium on the Infrared Cirrus
and Diffuse Interstellar Clouds (Astronomical Society of the Pacific Conference
Series, Volume 58) ed R M Cutri and W B Latter (San Francisco, CA: ASP) p 91
[58] Holberg J B 1986 Astrophys. J. 311 969
[59] Murthy J, Henry R C and Holberg J B 1991 Astrophys. J. 383 198
[60] Murthy J et al 1999 Astrophys. J. 522 904
[61] Edelstein J, Bowyer S and Lampton M 2000 Astrophys. J. 539 187
[62] Murthy J et al 2001 Astrophys. J. 557 L47
[63] Korpela E J, Bowyer S and Edelstein J 1998 Astrophys. J. 495 317
[64] Edelstein J et al 2001 Astrophys. Space Sci. 276 177
[65] Bowyer S et al 1999 Astrophys. J. 526 10
Chapter 8
Supersymmetric weakly interacting particles
There are at least three ways in which SUSY WIMPs could contribute to
the extragalactic background light (EBL). The first is by pair annihilation to
photons. This occurs in even the simplest or minimal SUSY model (MSSM) but
it is a very slow process because it takes place via intermediate loops of charged
particles such as leptons and quarks and their antiparticles. Processes involving
intermediate bound states contribute as well, but these too proceed via loop
diagrams and are equally suppressed. (Each vertex in such a diagram weakens
the interaction by one factor of the coupling parameter.) The reason for the
stability of the LSP in the MSSM is an additional new symmetry of nature,
known as R-parity, which is necessary (among other things) to protect the proton
from decaying via intermediate SUSY states. There are also non-minimal SUSY
theories which do not conserve R-parity (and in which the proton can decay).
In these theories, LSPs can contribute to the EBL in two more ways: by direct
decay into photons via loop diagrams; and also indirectly via tree-level decays
to secondary particles which then scatter off pre-existing background photons to
produce a signal.
In this chapter, we will estimate the strength of EBL contributions from
all three kinds of processes (annihilations, one-loop and tree-level decays). The
strongest signals come from decaying LSPs in non-minimal SUSY theories, and
we will find that these particles are strongly constrained by data on background
radiation in the x-ray and γ -ray bands. While our results depend on a number of
theoretical parameters, they imply in general that the LSP, if it decays, must do so
on timescales which are considerably longer than the lifetime of the Universe.
8.2 Neutralinos
The first step is to choose an LSP from the lineup of SUSY candidate particles.
Early workers variously identified this as the photino γ̃ [2], the gravitino g̃ [3],
the sneutrino ν̃ [7] or the selectron ẽ [8]. (SUSY superpartners are denoted by a
tilde and take the same names as their standard-model counterparts, with a prefix
‘s’ for superpartners of fermions and a suffix ‘ino’ for those of bosons.) Many of
these contenders were eliminated by Ellis et al, who showed in 1984 that the LSP
is, in fact, most likely to be a neutralino χ̃, a linear superposition of the photino
γ̃ , the zino Z̃ and two neutral higgsinos h̃ 01 and h̃ 02 [9]. (These are the SUSY
spin- 12 counterparts of the photon, Z 0 and Higgs bosons respectively.) There
are four neutralinos, each a mass eigenstate made up of (in general) different
amounts of photino, zino, etc, although in special cases a neutralino could be
‘mostly photino’, say, or ‘pure zino’. The LSP is, by definition, the lightest
such eigenstate. Accelerator searches place a lower limit on its rest energy which
currently stands at m χ̃ c2 > 46 GeV [12].
In minimal SUSY, the density of neutralinos drops only by way of the (slow)
pair-annihilation process, and it is quite possible for these particles to ‘overclose’
the Universe if their rest energy is too high. This does not change the geometry of
the Universe but rather speeds up its expansion rate, whose square is proportional
to the total matter density from equation (2.33). In such a situation, the Universe
would have reached its present size in too short a time. Lower bounds on the
age of the Universe thus impose an upper bound on the neutralino rest energy
which has been set at m_χ̃ c^2 ≲ 3200 GeV [13]. Detailed exploration of the
parameter space of minimal SUSY theory shows that this upper limit tightens
in most cases to m_χ̃ c^2 ≲ 600 GeV [14]. Much recent work is focused on a
slimmed-down version of the MSSM known as the constrained minimal SUSY
model (CMSSM), in which all existing experimental bounds and cosmological
requirements are comfortably met by neutralinos with rest energies in the range
90 GeV ≲ m_χ̃ c^2 ≲ 400 GeV [15].
Even in its constrained minimal version, SUSY theory contains at least
five adjustable input parameters, making the neutralino a considerably harder
proposition to test than the axion or the massive neutrino. Fortunately, there
are several other ways (besides accelerator searches) to look for these particles.
Because their rest energies are above the temperature at which they decoupled
from the primordial fireball, WIMPs have non-relativistic velocities and ought
to be found predominantly in gravitational potential wells like those of our
own galaxy. They will occasionally scatter against target nuclei in terrestrial
laboratories as the Earth follows the Sun around the Milky Way. Efforts to
detect such scattering events are hampered by both low event rates and the
relatively small amounts of energy deposited. But they could be given away by
the directional dependence of nuclear recoils, or by the seasonal modulation of
the event rate due to the Earth’s motion around the Sun. Both effects are now the
basis for direct detection experiments around the world [16]. Great excitement
followed the announcement by the DAMA team in 2000 of a WIMP signature
with rest energy m_χ̃ c^2 = 59^{+17}_{−14} GeV [17]. Two other groups (CDMS [18] and
EDELWEISS [19]) have, however, been unable to reproduce this result; and it
is generally regarded as a false start but one which is stimulating a great deal of
productive follow-up work.
A second, indirect search strategy is to look for annihilation byproducts
from neutralinos which have collected inside massive bodies. Most attention has
been directed at the possibility of detecting antiprotons from the galactic halo [20]
or neutrinos from the Sun [21] or Earth [22]. The heat generated in the cores of gas
giants like Jupiter or Uranus has also been considered as a potential annihilation
signature [23]. The main challenge in each case lies in separating the signal
from the background noise. Neutrinos from WIMP annihilations in the Sun or
Earth provide perhaps the best chance for a detection because backgrounds for
these cases are small and relatively well understood. In the case of the Earth,
one looks for a flux of neutrino-induced muons which can be distinguished from
the atmospheric background by the fact that they are travelling straight up. The
AMANDA team, whose detectors are buried deep in Antarctic ice, is one of
several which have reported preliminary results in experiments of this kind [24].
After various filters were applied to the first year’s worth of raw data, 15 candidate
events remained. This was consistent with the expected statistical background of
17 events, leaving only an upper limit on the size of a possible WIMP population
inside the Earth at present.

Figure 8.1. Some Feynman diagrams corresponding to the annihilation of two neutralinos
(χ̃), producing a pair of photons (γ). The process is mediated by fermions (f) and their
supersymmetric counterparts, the sfermions (f̃).
of dark matter, has received far less attention. The first to apply the problem to SUSY
WIMPs were Cabibbo et al [2], who assumed a WIMP rest energy (10–30 eV)
which we now know is far too low. Like the decaying neutrino (chapter 7), this
would have produced a background in the ultraviolet. It is excluded, however,
by an argument (sometimes known as the Lee–Weinberg bound) which restricts
WIMPs to rest energies above 2 GeV [6]. EBL contributions from SUSY
WIMPs in this range were first estimated by Silk and Srednicki [20]. Their
conclusion, and those of most workers who have followed them [39–41], is that
neutralino annihilations would be responsible for no more than a small fraction
of the observed γ -ray background. Here we review this argument, reversing our
usual procedure and attempting to set a reasonably conservative upper limit on
neutralino contributions to the EBL.
As usual, we take for our sources of background radiation the galactic dark-
matter halos with comoving number density n 0 . Each neutralino pair annihilates
into two monochromatic photons of energy E γ ≈ m χ̃ c2 [29]. We model this with
the Gaussian SED (3.19):

F(λ) = [L_h,ann / (√(2π) σ_λ)] exp[−(1/2)((λ − λ_ann)/σ_λ)^2].   (8.1)
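The normalization of this SED is easy to verify numerically. The following minimal Python sketch confirms that F(λ) integrates over wavelength to the total luminosity; the peak wavelength and linewidth used below are purely illustrative placeholders, not fitted values from the text.

```python
import math

def gaussian_sed(lam, L_tot, lam_peak, sigma_lam):
    """Gaussian SED of equation (8.1): integrates over wavelength to L_tot."""
    return (L_tot / (math.sqrt(2.0 * math.pi) * sigma_lam)) * \
           math.exp(-0.5 * ((lam - lam_peak) / sigma_lam) ** 2)

# Trapezoidal check over +/- 10 sigma that F(lambda) integrates back to L_tot = 1.
lam_peak, sigma = 2.5e-6, 2.5e-7   # Angstroms; illustrative values only
lams = [lam_peak + (i - 500) * sigma / 50.0 for i in range(1001)]
vals = [gaussian_sed(l, 1.0, lam_peak, sigma) for l in lams]
total = sum(0.5 * (vals[i] + vals[i + 1]) * (lams[i + 1] - lams[i])
            for i in range(1000))
print(round(total, 3))   # ~1.0
```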
Using the values for ρ⊙, r⊙ and a specified before and setting r_h = 250 kpc to
obtain an upper limit, we find that the integral of n_χ̃^2 over the halo volume is
at most (5 × 10^65 cm^−3) m_10^−2. Putting this result
together with the cross section (8.5) into (8.3), we obtain:
Inserting Giudice and Griest's [43] lower limit on the sfermion mass m̃ (as
empirically fit before), we find that (8.9) gives an upper limit on halo luminosity
of L_h,ann ≲ (5 × 10^35 erg s^−1) f_χ m_10^−0.2. Higher estimates can be found in the
literature [48] but these assume a singular halo whose density drops off as only
ρ_χ̃(r) ∝ r^−1.8 and extends out to a very large halo radius r_h = 4.2 h_0^−1 Mpc. For
a standard isothermal distribution of the form (8.6), our results confirm that halo
luminosity due to neutralino annihilations alone is very low, amounting to less
than 10−8 times the total bolometric luminosity of the Milky Way.
The combined bolometric intensity of neutralino annihilations between
redshift z f and the present is given by substituting the comoving number density
n 0 and luminosity L h,ann into equation (2.20) to give
Q = Q_χ̃,ann ∫_0^{z_f} dz / {(1 + z)^2 [Ω_m,0 (1 + z)^3 + (1 − Ω_m,0)]^{1/2}}   (8.10)
where Q_χ̃,ann = c n_0 L_h,ann f_c / H_0 and we have assumed spatial flatness. With
values for all these parameters as specified earlier, we find
Q = (1 × 10^−12 erg s^−1 cm^−2) h_0^2 f_χ m_10^−0.2   (Ω_m,0 = 0.1h_0)
  = (3 × 10^−12 erg s^−1 cm^−2) h_0 f_χ m_10^−0.2   (Ω_m,0 = 0.3)   (8.11)
  = (1 × 10^−11 erg s^−1 cm^−2) h_0 f_χ m_10^−0.2   (Ω_m,0 = 1).
Here we have set z_f = 30 (larger values do not substantially increase the value
of Q) and used values of f_c = 1, 4h_0^−1 and 20h_0^−1 respectively. The effects of
the cosmological enhancement factor f_c are partially offset in (8.10) by the fact
that a Universe with higher matter density Ω_m,0 is younger, and hence contains
less background light in general. Even the highest value of Q given in (8.11) is
[Figure 8.2 plots I_λ (CUs) against λ_0 (Å), with curves for m_χ̃ c^2 = 10, 30, 100,
300 and 1000 GeV and data from N80, SAS-2 (TF82), COMPTEL (K96) and EGRET (S98).]
Figure 8.2. The spectral intensity of the diffuse γ-ray background due to neutralino
annihilations (lower left), compared with observational limits from high-altitude balloon
experiments (N80), as well as from the SAS-2 spacecraft and the COMPTEL and EGRET
instruments aboard the Compton Gamma-Ray Observatory. The three plotted curves for
each value of m_χ̃ c^2 depend on the total density of neutralinos: galaxy halos only
(Ω_m,0 = 0.1h_0; bold lines), ΛCDM model (Ω_m,0 = 0.3; medium lines) or EdS model
(Ω_m,0 = 1; light lines).
Figure 8.3. Feynman diagrams corresponding to one-loop decays of the neutralino (χ̃)
into a neutrino (here ν_τ) and a photon (γ). The process can be mediated by the W boson
and a τ lepton, or by the τ and its supersymmetric counterpart (τ̃).
Here we have divided through by the photon energy hc/λ0 to put results into
continuum units or CUs as usual (section 3.3). Equation (8.12) gives the
combined intensity of radiation from neutralino annihilations, emitted at various
wavelengths and redshifted by various amounts but observed at wavelength λ0 .
Results are plotted in figure 8.2 together with observational constraints. We defer
discussion of this plot (and the data) to section 8.7, where they are compared with
the other WIMP results for this chapter.
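The redshift integral in equation (8.10) is straightforward to evaluate numerically. The sketch below (midpoint rule) computes the dimensionless factor for the three matter densities used in (8.11); the value h_0 = 0.75 assumed for the low-density case matches the choice quoted later in this chapter.

```python
import math

def q_integral(omega_m, z_f=30.0, n=30000):
    """Dimensionless redshift integral in equation (8.10), midpoint rule:
    integrand = 1 / ((1+z)^2 * sqrt(Om*(1+z)^3 + 1 - Om))."""
    dz = z_f / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        total += dz / ((1.0 + z) ** 2 *
                       math.sqrt(omega_m * (1.0 + z) ** 3 + 1.0 - omega_m))
    return total

# In the EdS case the integral is analytic: (2/5)(1 - (1 + z_f)**-2.5) ~ 0.400
for om in (0.075, 0.3, 1.0):   # Omega_m,0 = 0.1*h0 (h0 = 0.75), 0.3 and 1
    print(om, round(q_integral(om), 3))
```

As the text notes, the lower-density models give a somewhat larger integral, partially offsetting the smaller enhancement factor f_c.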
χ̃ → ν + γ . (8.14)
Feynman diagrams for this process are shown in figure 8.3. Because these decays
occur via loop diagrams, they are again subdominant. We consider theories in
which R-parity breaking occurs spontaneously. This introduces a scalar sneutrino
with a non-zero vacuum expectation value vR ≡ ν̃τ R , as discussed by Masiero
and Valle [50]. Neutralino decays into photons could be detectable if m χ̃ and vR
are large [51].
The photons produced in this way are again monochromatic, with E_γ = (1/2) m_χ̃ c^2.
In fact the SED here is the same as (8.1) except that the peak wavelength
is doubled, λ_loop = 2hc/m_χ̃ c^2 = (2.5 × 10^−6 Å) m_10^−1. The only parameter that
needs to be recalculated is the halo luminosity L h . For one-loop neutralino decays
of lifetime τχ̃ , this takes the form:
L_h,loop = N_χ̃ b_γ E_γ / τ_χ̃ = b_γ M_h c^2 / (2τ_χ̃).   (8.15)
where the new parameter f R ≡ vR /(100 GeV). The requirement that SUSY
WIMPs do not carry too much energy out of stellar cores implies that f R is of
order ten or more [50]. We take f R > 1 as a lower limit.
For halo mass we adopt M_h = (2 ± 1) × 10^12 M_⊙ as usual, with r_h =
(170 ± 80) kpc from the discussion following (8.7). As in the previous section,
we parametrize our lack of certainty about the distribution of neutralinos on
larger scales with the cosmological enhancement factor f c . Collecting these
results together and expressing the decay lifetime in dimensionless form as
f τ ≡ τχ̃ /(1 Gyr), we obtain for the luminosity of one-loop neutralino decays
in the halo:
L_h,loop = (6 × 10^40 erg s^−1) m_10^2 f_R^2 f_τ^−1.   (8.17)

With m_10 ∼ f_R ∼ f_τ ∼ 1, equation (8.17) gives L_h,loop ∼ 2 × 10^7 L_⊙. This is
considerably brighter than the halo luminosity due to neutralino annihilations in
minimal SUSY models but still amounts to less than 10−3 times the bolometric
luminosity of the Milky Way.
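Both statements are quick to check arithmetically; the solar and Milky Way bolometric luminosities below are standard assumed values, not figures taken from this chapter.

```python
L_SUN = 3.85e33              # erg/s, solar bolometric luminosity (standard value)
L_MW = 2e10 * L_SUN          # erg/s, assumed Milky Way bolometric luminosity

L_loop = 6e40                # erg/s, equation (8.17) with m10 = fR = ftau = 1
print(L_loop / L_SUN < 3e7)  # True: of order the quoted 2e7 solar luminosities
print(L_loop / L_MW < 1e-3)  # True: under 1e-3 of the Milky Way
```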
The combined bolometric intensity is found as in the previous section, but
with L h,ann in (8.10) replaced by L h,loop so that
Q = (1 × 10^−7 erg s^−1 cm^−2) h_0^2 m_10^2 f_R^2 f_τ^−1   (Ω_m,0 = 0.1h_0)
  = (4 × 10^−7 erg s^−1 cm^−2) h_0 m_10^2 f_R^2 f_τ^−1   (Ω_m,0 = 0.3)   (8.18)
  = (2 × 10^−6 erg s^−1 cm^−2) h_0 m_10^2 f_R^2 f_τ^−1   (Ω_m,0 = 1).
This is again small. However, we see that massive (m_10 ≳ 10) neutralinos which
provide close to the critical density (Ω_m,0 ∼ 1) and decay on timescales of order
1 Gyr or less (f_τ ≲ 1) could, in principle, rival the intensity of the conventional
EBL.
To obtain more quantitative constraints, we turn to spectral intensity. This
is given by equation (8.12) as before, except that the dimensional prefactor Iχ̃ ,ann
must be replaced by
I_χ̃,loop = n_0 L_h,loop f_c λ_0 / (√(32π^3) h H_0 σ_λ)
         = (30 CUs) h_0^2 m_10^3 f_R^2 f_τ^−1 f_c σ_9^−1 (λ_0 / 10^−7 Å).   (8.19)
Results are plotted in figure 8.4 for neutralino rest energies 1 ≲ m_10 ≲ 100.
While their bolometric intensity is low, these particles are capable of significant
EBL contributions in narrow portions of the γ -ray background. To keep the
[Figure 8.4 plots I_λ (CUs) against λ_0 (Å), with curves for m_χ̃ c^2 = 10, 30, 100,
300 and 1000 GeV and data from N80, SAS-2 (TF82), COMPTEL (K96) and EGRET (S98).]
Figure 8.4. The spectral intensity of the diffuse γ -ray background due to neutralino
one-loop decays (lower left), compared with observational upper limits from high-altitude
balloon experiments (filled dots), SAS-2, EGRET and COMPTEL. The three plotted
curves for each value of m_χ̃ c^2 correspond to models with Ω_m,0 = 0.1h_0 (heavy lines),
Ω_m,0 = 0.3 (medium lines) and Ω_m,0 = 1 (light lines). For clarity we have assumed
decay lifetimes in each case such that the highest theoretical intensities lie just under the
observational constraints.
diagram from becoming too cluttered, we have assumed values of f τ such that
the highest predicted intensity in each case stays just below the EGRET limits.
Numerically, this corresponds to lower bounds on the decay lifetime τ_χ̃ between
100 Gyr (for m_χ̃ c^2 = 10 GeV) and 10^5 Gyr (for m_χ̃ c^2 = 300 GeV). For
rest energies at the upper end of this range, these limits are probably optimistic
because the decay photons are energetic enough to undergo pair production on
cosmic microwave background (CMB) photons. Some of the decay photons,
in other words, do not reach us from cosmological distances but are instead
reprocessed into lower-energy radiation along the way. As we show in the next
section, however, stronger limits arise from a different process in any case. We
defer further discussion of figure 8.4 (including the observational data and the
limits on neutralino lifetime that follow from them) to section 8.7.
Figure 8.5. The Feynman diagrams corresponding to tree-level decays of the neutralino
(χ̃) into a neutrino (ν) and a lepton–antilepton pair (here, the electron and positron). The
process can be mediated by the W or Z boson.
where λ_γ = hc/E_max = (0.34 Å) m_10^−2 (1 + z) and L_h,tree is the halo luminosity
due to tree-level decays.
In the case of more massive neutralinos with m_10 ≳ 10, the situation is
complicated by the fact that outgoing photons become energetic enough to initiate
pair production via γ + γ_cmb → e^+ + e^−. This injects new electrons into the ICS
process, resulting in electromagnetic cascades. For particles which decay at high
redshifts (z ≳ 100), other processes such as photon–photon scattering must also
be taken into account [55]. Cascades on non-CMB background photons may also
be important [56]. A full treatment of these effects requires detailed numerical
analysis as carried out for example in [57]. Here we simplify the problem by
assuming that the LSP is stable enough to survive into the late matter-dominated
(or vacuum-dominated) era. The primary effect of cascades is to steepen the decay
spectrum at high energies, so that [54]
N_casc(E) ∝ E^−3/2   (E ≤ E_x)
          ∝ E^−2    (E_x < E ≤ E_c)   (8.23)
          = 0       (E > E_c)

where

E_x = (1/3)(E_0/m_e c^2)^2 E_cmb (1 + z)^−1   and   E_c = E_0 (1 + z)^−1.
Here E_0 is a minimum absorption energy. We adopt the numerical expressions
E_x = (1.8 × 10^3 GeV)(1 + z)^−1 and E_c = (4.5 × 10^4 GeV)(1 + z)^−1
after Protheroe et al [58]. Employing the relation F(λ) dλ = E N(E) dE and
normalizing as before, we find:
F_casc(λ) = [L_h,tree / (2 + ln(λ_x/λ_c))] × √λ_x λ^−3/2   (λ ≥ λ_x)
          = [L_h,tree / (2 + ln(λ_x/λ_c))] × λ^−1         (λ_x > λ ≥ λ_c)   (8.24)
          = 0                                             (λ < λ_c)
L_h,tree = N_χ̃ b_e E_e / τ_χ̃ = 2b_e M_h c^2 / (3τ_χ̃).   (8.25)
Here b_e is now the branching ratio for all processes of the form χ̃ → e + all, and
E_e = (2/3) m_χ̃ c^2 is the total energy lost to the electrons. We assume that all of this
eventually finds its way into the EBL. Berezinsky et al [51] supply the following
branching ratio:
b_e ≈ 10^−6 f_χ f_R^2 m_10^2.   (8.26)
Here f_χ parametrizes the composition of the neutralino, taking the value 0.4
for the pure higgsino case. With the halo mass specified by (6.15) and f_τ ≡
τ_χ̃/(1 Gyr) as usual, we obtain:
This is approximately four orders of magnitude higher than the halo luminosity
due to one-loop decays, and provides for the first time the possibility of significant
EBL contributions. With all adjustable parameters taking values of order unity, we
find that L_h,tree ∼ 2 × 10^10 L_⊙, which is comparable to the bolometric luminosity
of the Milky Way.
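This estimate can be reproduced directly from (8.25) and (8.26); the physical constants below (solar mass and luminosity, speed of light, Gyr in seconds) are standard values assumed for the check.

```python
M_SUN = 1.989e33          # g
C = 2.998e10              # cm/s
GYR = 3.156e16            # s

M_h = 2e12 * M_SUN        # standard halo mass adopted in the text
b_e = 1e-6                # equation (8.26) with f_chi = f_R = m10 = 1
tau = 1.0 * GYR           # decay lifetime with f_tau = 1

L_tree = 2.0 * b_e * M_h * C ** 2 / (3.0 * tau)   # equation (8.25)
print(L_tree / 3.85e33)   # ~2e10 solar luminosities, as quoted
```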
The combined bolometric intensity of all neutralino halos is computed as in
the previous two sections. Replacing L h,loop in (8.10) with L h,tree leads to the
values
Q = (2 × 10^−4 erg s^−1 cm^−2) h_0^2 m_10^2 f_χ f_R^2 f_τ^−1   (Ω_m,0 = 0.1h_0)
  = (5 × 10^−4 erg s^−1 cm^−2) h_0 m_10^2 f_χ f_R^2 f_τ^−1   (Ω_m,0 = 0.3)   (8.28)
  = (2 × 10^−3 erg s^−1 cm^−2) h_0 m_10^2 f_χ f_R^2 f_τ^−1   (Ω_m,0 = 1).
These are of the same order as or higher than the bolometric intensity of the EBL
from ordinary galaxies, equation (2.25).
To obtain the spectral intensity, we substitute the SEDs Fics (λ) and Fcasc (λ)
into equation (3.6). The results can be written:
I_λ(λ_0) = I_χ̃,tree ∫_0^{z_f} Θ(z) dz / {(1 + z)[Ω_m,0 (1 + z)^3 + (1 − Ω_m,0)]^{1/2}}   (8.29)
where the quantities I_χ̃,tree and Θ(z) are defined as follows. For neutralino rest
energies m_10 ≲ 10 (ICS):

I_χ̃,tree = [n_0 L_h,tree f_c / (8π h H_0 m_10)] (0.34 Å/λ_0)^{1/2}
         = (300 CUs) h_0^2 m_10 f_χ f_R^2 f_τ^−1 f_c (λ_0/Å)^{−1/2}   (8.30)

Θ(z) = 1   [λ_0 ≥ λ_γ(1 + z)]
     = 0   [λ_0 < λ_γ(1 + z)].
[Figure 8.6 plots I_λ (CUs) against λ_0, with curves for m_χ̃ c^2 = 10, 30 and 100 GeV,
a single curve for all m_χ̃ c^2 > 100 GeV, and data from SAS-2 (TF82), COMPTEL (K96),
EGRET (S98) and Gruber's fits (G92, G99).]
Figure 8.6. The spectral intensity of the diffuse γ -ray and x-ray backgrounds due
to neutralino tree-level decays, compared with observational upper limits from SAS-2,
EGRET and COMPTEL in the γ -ray region, and from Gruber’s fits to the experimental
data in the x-ray region (G92,G99). The three plotted curves for each value of m χ̃ c2
correspond to models with m,0 = 0.1h 0 (bold lines), m,0 = 0.3 (medium lines) and
m,0 = 1 (light lines). For clarity we have assumed decay lifetimes in each case such that
the highest theoretical intensities lie just under the observational constraints.
Conversely, for m_10 ≳ 10 (cascades):

I_χ̃,tree = [n_0 L_h,tree f_c / (4π h H_0 [2 + ln(λ_x/λ_c)])] (7 × 10^−9 Å/λ_0)^{1/2}
         = (0.02 CUs) h_0^2 m_10^2 f_χ f_R^2 f_τ^−1 f_c (λ_0/Å)^{−1/2}   (8.31)

Θ(z) = 1                  [λ_0 ≥ λ_x(1 + z)]
     = λ_0/[λ_x(1 + z)]   [λ_x(1 + z) > λ_0 ≥ λ_c(1 + z)]
     = 0                  [λ_0 < λ_c(1 + z)].
Numerical integration of equation (8.29) leads to the plots shown in figure 8.6.
Cascades (like the pair annihilations we have considered already) dominate the
γ -ray part of the spectrum. The ICS process, however, is most important at
lower energies, in the x-ray region. We discuss the observational limits and the
constraints that can be drawn from them in more detail in section 8.7.
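The integration behind figure 8.6 is easy to reproduce. The sketch below codes the redshift integral (8.29) with the cascade cut-off function; it assumes the dimensional prefactor has already been evaluated (e.g. from (8.31)) and uses the z = 0 break wavelengths computed earlier.

```python
import math

LAM_X0, LAM_C0 = 6.9e-9, 2.8e-10   # z = 0 break wavelengths, Angstroms

def theta_casc(lam0, z):
    """Cut-off function for the cascade spectrum, cf equation (8.31)."""
    if lam0 < LAM_C0 * (1.0 + z):
        return 0.0
    if lam0 < LAM_X0 * (1.0 + z):
        return lam0 / (LAM_X0 * (1.0 + z))
    return 1.0

def i_lambda(lam0, prefac, omega_m, z_f=30.0, n=20000):
    """Spectral intensity of equation (8.29), midpoint rule; units of prefac."""
    dz = z_f / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        total += (theta_casc(lam0, z) * dz /
                  ((1.0 + z) * math.sqrt(omega_m * (1.0 + z) ** 3
                                         + 1.0 - omega_m)))
    return prefac * total

# Far above all cut-offs the integral approaches its analytic EdS value,
# (2/3) * (1 - (1 + z_f)**-1.5) ~ 0.663:
print(round(i_lambda(1e-3, 1.0, 1.0), 3))
```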
8.6 Gravitinos
Gravitinos (g̃) are the SUSY spin-3/2 counterparts of gravitons. Although often
discussed along with neutralinos, they are not favoured as dark-matter candidates,
at least in the simplest SUSY theories. The reason for this, often called the
gravitino problem [5], boils down to the fact that they interact too weakly, not
only with other particles but with themselves as well. Hence they annihilate too
slowly and survive long enough to ‘overclose’ the Universe unless some other
way is found to reduce their numbers. Decays are one possibility, but not if the
gravitino is a stable LSP. Gravitino decay products must also not be allowed
to interfere with processes such as primordial nucleosynthesis [4]. Inflation,
followed by a judicious period of reheating, can thin out their numbers to almost
any desired level. But the reheat temperature T_R must satisfy kT_R ≲ 10^12 GeV
or gravitinos will once again become too numerous [9]. Related arguments
based on entropy production, primordial nucleosynthesis and the CMB power
spectrum force this number down to kT_R ≲ (10^9–10^10) GeV [59] or even
kT_R ≲ (10^6–10^9) GeV [60]. These temperatures are incompatible with the
generation of baryon asymmetry in the Universe, a process which is usually
taken to require kT_R ∼ 10^14 GeV or higher [61].
Recent developments are, however, beginning to loosen the baryogenesis
requirement [62], and there are alternative models in which baryon asymmetry
is generated at energies as low as ∼10 TeV [63] or even 10 MeV–1 GeV [64].
With this in mind we include a brief look at gravitinos here. There are two
possibilities: (1) If the gravitino is not the LSP, then it decays early in the
history of the Universe, well before the onset of the matter-dominated era. In
the model of Dimopoulos et al [65], for example, the gravitino decays both
radiatively and hadronically and is, in fact, ‘long-lived for its mass’ with a lifetime
of τ_g̃ = (2–9) × 10^5 s. Particles of this kind have important consequences
for nucleosynthesis, and might affect the shape of the CMB, if τ_g̃ were to exceed
∼10^7 s. However, they are irrelevant as far as the EBL is concerned. We therefore
restrict our attention to the case (2), in which the gravitino is the LSP. Moreover,
in light of the results we have already obtained for the neutralino, we disregard
annihilations and consider only models in which the LSP can decay.
The decay mode depends on the specific mechanism of R-parity violation.
We follow Berezinsky [66] and concentrate on dominant tree-level processes. In
particular we consider the decay:
g̃ → e+ + all (8.32)
followed by ICS off the CMB, as in the previous section on neutralinos. The
spectrum of photons produced by this process is identical to that in section 8.5,
except that the monoenergetic electrons have energy E_e = (1/2) m_g̃ c^2 = (5 GeV) m_10
[66], where m_g̃ c^2 is the rest energy of the gravitino and m_10 ≡ m_g̃ c^2/(10 GeV)
as before. This, in turn, implies that E_max = (81 keV) m_10^2 (1 + z)^−1 and
λ_γ = hc/E_max = (0.15 Å) m_10^−2 (1 + z). The values of λ_x and λ_c are unchanged.
The SED comprises equations (8.22) for ICS and (8.24) for cascades, as
before. Only the halo luminosity needs to be recalculated. This is similar to
equation (8.25) for neutralinos, except that the factor of 2/3 becomes 1/2, and the
branching ratio can be estimated at [66]
b_e ∼ (α/π)^2 = 5 × 10^−6.   (8.33)
Using our standard value for the halo mass Mh , and parametrizing the gravitino
decay lifetime by f_τ ≡ τ_g̃/(1 Gyr) as before, we obtain the following halo
luminosity due to gravitino decays:
This is higher than the luminosity due to neutralino decays and exceeds the
luminosity of the Milky Way by several times if f τ ∼ 1.
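The ‘several times’ figure can be checked by repeating the calculation of section 8.5 with the two modifications just described (the factor 2/3 → 1/2 in (8.25), and the branching ratio (8.33)); the solar mass, solar luminosity and Milky Way luminosity used below are standard assumed values.

```python
M_SUN = 1.989e33          # g
C = 2.998e10              # cm/s
GYR = 3.156e16            # s
L_SUN = 3.85e33           # erg/s

M_h = 2e12 * M_SUN        # standard halo mass
b_e = 5e-6                # branching ratio from equation (8.33)
tau = 1.0 * GYR           # gravitino decay lifetime with f_tau = 1

# Halo luminosity: equation (8.25) with the factor 2/3 replaced by 1/2
L_grav = b_e * M_h * C ** 2 / (2.0 * tau)
L_mw = 2e10 * L_SUN       # assumed Milky Way bolometric luminosity
print(round(L_grav / L_mw, 1))   # a few times the Milky Way
```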
The bolometric intensity of all gravitino halos is computed exactly as before.
Replacing L h,tree in (8.10) with L h,grav , we find:
Q = (7 × 10^−4 erg s^−1 cm^−2) h_0^2 f_τ^−1   (Ω_m,0 = 0.1h_0)
  = (2 × 10^−3 erg s^−1 cm^−2) h_0 f_τ^−1   (Ω_m,0 = 0.3)   (8.35)
  = (8 × 10^−3 erg s^−1 cm^−2) h_0 f_τ^−1   (Ω_m,0 = 1).
It is clear that gravitinos must decay on timescales longer than the lifetime of the
Universe (f_τ ≳ 16), or they would produce a background brighter than that of the
galaxies.
The spectral intensity is the same as before, equation (8.29), but with the
new numbers for λγ and L h . This results in
I_λ(λ_0) = I_g̃ ∫_0^{z_f} Θ(z) dz / {(1 + z)[Ω_m,0 (1 + z)^3 + (1 − Ω_m,0)]^{1/2}}   (8.36)
where the prefactor I_g̃ is defined as follows. For m_10 ≲ 10 (ICS):
I_g̃ = [n_0 L_h,grav f_c / (8π h H_0 m_10)] (0.15 Å/λ_0)^{1/2}
    = (800 CUs) h_0^2 m_10^−1 f_τ^−1 f_c (λ_0/Å)^{−1/2}.   (8.37)
Conversely, for m_10 ≳ 10 (cascades):

I_g̃ = [n_0 L_h,grav f_c / (4π h H_0 [2 + ln(λ_x/λ_c)])] (7 × 10^−9 Å/λ_0)^{1/2}
    = (0.06 CUs) h_0^2 f_τ^−1 f_c (λ_0/Å)^{−1/2}.   (8.38)
[Figure 8.7 plots I_λ (CUs) against λ_0, with curves for m_g̃ c^2 = 10, 30 and 100 GeV,
a single curve for all m_g̃ c^2 > 100 GeV, and data from SAS-2 (TF82), COMPTEL (K96),
EGRET (S98) and Gruber's fits (G92, G99).]
Figure 8.7. The spectral intensity of the diffuse γ -ray and x-ray backgrounds due
to gravitino tree-level decays, compared with observational upper limits. These come
from SAS-2, COMPTEL and EGRET in the γ -ray region, and from Gruber’s fits to the
experimental data in the x-ray region. The three plotted curves for each value of m_g̃ c^2
correspond to models with Ω_m,0 = 0.1h_0 (bold lines), Ω_m,0 = 0.3 (medium lines) and
Ω_m,0 = 1 (light lines). For clarity we have assumed decay lifetimes in each case such that
the highest theoretical intensities lie just under the observational constraints.
The function Θ(z) has the same form as in equations (8.30) and (8.31) and does
not need to be redefined (requiring only the new value for the cut-off wavelength
λ_γ). Because the branching ratio b_e in (8.33) is independent of the gravitino
rest mass, m_10 appears in these results only through λ_γ. Thus the ICS part of
the spectrum goes as m_10^−1 while the cascade part does not depend on m_10 at
all. As with neutralinos, cascades dominate the γ -ray part of the spectrum and
the ICS process is most important in the x-ray region. Numerical integration of
equation (8.36) leads to the results plotted in figure 8.7. We proceed to discuss
these next, beginning with the observational data.
variable AGN whose relativistic jets are pointed in nearly our direction [74].
Finally, we have made use of some observations in the very high-energy
(VHE) γ-ray region (30 GeV–30 TeV or 4 × 10^−10–4 × 10^−7 Å). Because the
extragalactic component of the background has not yet been measured beyond
120 GeV, we have fallen back on measurements of total γ -ray flux, obtained by
Nishimura et al in 1980 [75] using a series of high-altitude balloon experiments.
These are shown as filled dots (‘N80’) in figures 8.2 and 8.4. They constitute a
robust upper limit on EBL flux, since the majority of the observed signals are due
to cosmic-ray interactions in the atmosphere of the Earth.
Some comments are in order here about notation and units in this part of
the spectrum. For experimental reasons, measurements are often expressed in
terms of the integral flux I_E(>E_0), or number of particles with energies above
E_0. This presents no difficulties in the high-energy γ-ray region, where the
differential spectrum is well approximated by a single power-law component,
I_E(E_0) = I_*(E_0/E_*)^−α. The conversion to integral form is then given by
I_E(>E_0) = ∫_{E_0}^∞ I_E(E) dE = [E_* I_* / (α − 1)] (E_0/E_*)^{1−α}.   (8.39)
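This conversion can be sketched and cross-checked against direct numerical integration of the power-law differential spectrum; the parameter values below are illustrative only (the index matches the SAS-2 value quoted next, the normalizations do not).

```python
def integral_flux(E0, I_star, E_star, alpha):
    """Integral flux I_E(>E0) of equation (8.39) for a power-law
    differential spectrum I_E(E) = I_star * (E / E_star)**(-alpha), alpha > 1."""
    return (E_star * I_star / (alpha - 1.0)) * (E0 / E_star) ** (1.0 - alpha)

# Cross-check against a direct midpoint-rule integration:
alpha, I_star, E_star, E0 = 2.35, 1.0, 1.0, 2.0
n, E_max = 200000, 2000.0
dE = (E_max - E0) / n
num = sum(I_star * ((E0 + (i + 0.5) * dE) / E_star) ** -alpha * dE
          for i in range(n))
print(abs(integral_flux(E0, I_star, E_star, alpha) - num) < 1e-3)   # True
```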
The spectrum in either case can be specified by its index α, together with the
values of E_* and I_* (or E_0 and I_E(>E_0) in the integral case). Thus the final SAS-2
results were reported as α = 2.35^{+0.4}_{−0.3} with
I_E(>E_0) = (5.5 ± 1.3) × 10^−5 s^−1 cm^−2 ster^−1
Figure 8.8. The lower limits on WIMP decay lifetime derived from observations of
the x-ray and γ -ray backgrounds. Neutralino bounds are shown for both one-loop
decays (dotted lines) and tree-level decays (dashed lines). For gravitinos we show only
the tree-level constraints (unbroken lines). For each process there are three curves
corresponding to models with Ω_m,0 = 1 (light lines), Ω_m,0 = 0.3 (medium lines) and
Ω_m,0 = 0.1h_0 (bold lines).
Following the same procedure here as we did for axions in chapter 6, we can
repeat this calculation over more finely-spaced intervals in neutralino rest mass,
obtaining a lower limit on decay lifetime τχ̃ as a function of m χ̃ . Results are
shown in figure 8.8 (dotted lines). The lower limits obtained in this way range
from 4 Gyr for the lightest neutralinos (assumed to be confined to galaxy halos
with a total matter density of Ω_m,0 = 0.1h_0) to 70 000 Gyr for the heaviest (if
these provide enough CDM to put Ω_m,0 = 1). A typical intermediate limit (for
m_χ̃ c^2 = 100 GeV and Ω_m,0 = 0.3) is τ_χ̃ > 2000 Gyr.
Figure 8.6 is a plot of EBL flux from indirect neutralino decays via the tree-
level, ICS and cascade processes described in section 8.5. These provide us with
our strongest constraints on non-minimal SUSY WIMPs. We have set h_0 = 0.75,
z_f = 30 and f_χ = f_R = 1, and assumed values of f_τ such that the highest
predicted intensities lie just under observational limits, as before. Contributions
from neutralinos at the light end of the mass range are constrained by x-ray data,
while those at the heavy end are checked by the EGRET measurements. Both
the shape and absolute intensity of the ICS spectra depend on the neutralino rest
References
[1] Jungman G, Kamionkowski M and Griest K 1996 Phys. Rep. 267 195
[2] Cabibbo N, Farrar G R and Maiani L 1981 Phys. Lett. B 105 155
[3] Pagels H and Primack J R 1982 Phys. Rev. Lett. 48 223
[4] Weinberg S 1982 Phys. Rev. Lett. 48 1303
[5] Ellis J, Linde A D and Nanopoulos D V 1982 Phys. Lett. B 118 59
[6] Goldberg H 1983 Phys. Rev. Lett. 50 1419
[7] Ibáñez L E 1984 Phys. Lett. B 137 160
[8] Sciama D W 1984 Phys. Lett. B 137 169
[9] Ellis J et al 1984 Nucl. Phys. B 238 453
[10] Steigman G and Turner M S 1985 Nucl. Phys. B 253 375
[11] Ellis J and Olive K A 2001 Phys. Lett. B 514 114
[12] Ellis J, Falk T, Ganis G and Olive K A 2000 Phys. Rev. D 62 075010
[13] Griest K, Kamionkowski M and Turner M S 1990 Phys. Rev. D 41 3565
[14] Ellis J, Falk T and Olive K A 1998 Phys. Lett. B 444 367
[15] Roszkowski L, de Austri R R and Nihei T 2001 J. High Energy Phys. 08 024
[16] Baudis L and Klapdor-Kleingrothaus H V 2000 Beyond the Desert 1999 ed
H V Klapdor-Kleingrothaus and I V Krivosheina (Oxford: Institute of Physics
Press) p 881
[17] Belli P et al 2000 Beyond the Desert 1999 ed H V Klapdor-Kleingrothaus and
I V Krivosheina (Oxford: Institute of Physics Press) p 869
[18] Abusaidi R et al 2000 Phys. Rev. Lett. 84 5699
[19] Benoit A et al 2001 Phys. Lett. B 513 15
[20] Silk J and Srednicki M 1984 Phys. Rev. Lett. 53 624
[21] Silk J, Olive K and Srednicki M 1985 Phys. Rev. Lett. 55 257
[22] Freese K 1986 Phys. Lett. B 167 295
[23] Krauss L M, Srednicki M and Wilczek F 1986 Phys. Rev. D 33 2079
[24] Bai X et al 2001 Dark Matter in Astro- and Particle Physics ed H V Klapdor-
Kleingrothaus (Heidelberg: Springer) p 699
[25] Silk J and Bloemen H 1987 Astrophys. J. 313 L47
[26] Stecker F W 1988 Phys. Lett. B 201 529
[27] Rudaz S 1989 Phys. Rev. D 39 3549
[28] Stecker F W and Tylka A J 1989 Astrophys. J. 343
[29] Bouquet A, Salati P and Silk J 1989 Phys. Rev. D 40 3168
[30] Bergström L 1989 Nucl. Phys. B 325 647
[31] Freese K and Silk J 1989 Phys. Rev. D 40 3828
[32] Berezinsky V, Bottino A and Mignola G 1994 Phys. Lett. B 325 136
[33] Gondolo P and Silk J 1999 Phys. Rev. Lett. 83 1719
[34] Urban M et al 1992 Phys. Lett. B 293 149
[35] Chardonnet P et al 1995 Astrophys. J. 454 774
[36] Lake G 1990 Nature 346 39
[37] Gondolo P 1994 Nucl. Phys. B (Proc. Suppl.) 35 148
[38] Baltz E A et al 1999 Phys. Rev. D 61 023514
[39] Cline D B and Gao Y-T 1990 Astron. Astrophys. 231 L23
[40] Gao Y-T, Stecker F W and Cline D B 1991 Astron. Astrophys. 249 1
[41] Overduin J M and Wesson P S 1997 Astrophys. J. 480 470
[42] Berezinsky V S, Bottino A and de Alfaro V 1992 Phys. Lett. B 274 122
Black holes

Initial mass distribution
Our sources of background radiation in this chapter are the PBHs themselves.
We will take these to be distributed homogeneously throughout space. This is
not necessarily realistic since their low velocities mean that PBHs will tend to
collect inside the potential wells of galaxies and galaxy clusters. Clustering is
not of great concern to us here, however, for the same reason already discussed
in connection with axions and WIMPs (i.e. we are interested primarily in the
combined contributions of all PBHs to the diffuse background). A complication
is introduced, however, by the fact that PBHs cover such a wide range of masses
and luminosities that we can no longer treat all sources in the same way (as we
did for the galaxy halos in previous chapters). Instead we must define quantities
like number density and energy spectrum as functions of PBH mass as well as
time, and integrate our final results over both parameters.
The first step is to identify the distribution of PBH masses at the time when
they formed. There is little prospect of probing the time before nucleosynthesis
experimentally, so any theory of PBH formation is necessarily speculative to
some degree. However, the scenario requiring the least extrapolation from known
physics would be one in which PBHs arose via the gravitational collapse of
Here M_i is the initial mass of the PBH, M_f is the mass lying inside the particle
horizon (or causally connected Universe) at PBH formation time, and the remaining
factor is a proportionality constant.
An investigation of PBH formation under these conditions was carried out in
1975 by Carr [4], who showed that the process is favoured over an extended range
of masses only if n = 2/3. Proceeding on this assumption, he found that the initial
mass distribution of PBHs formed with masses between Mi and Mi + dMi per
unit comoving volume is
−β
Mi
n(Mi ) dMi = ρf Mf−2 ζ dMi (9.2)
Mf
where ρf is the mean density at formation time. The parameters β and ζ are
formally given by 2(2γ − 1)/γ and exp[−(γ − 1)2 /2 2 ] respectively, where
γ is the equation-of-state parameter in (2.27). However, in the interests of lifting
every possible restriction on conditions prevailing in the early Universe, we follow
Carr [5] in treating β and ζ as free parameters, not necessarily related to γ and
. Insofar as the early Universe was governed by the equation of state (2.27), β
will take values between 2 (dustlike or ‘soft’) and 3 (stiff or ‘hard’), with β = 52
corresponding to the most natural situation (i.e. γ = 43 as for a non-interacting
relativistic gas). But we will allow β to take values up to 4, corresponding to
‘superhard’ early conditions. The parameter ζ can be understood physically as
the fraction of the Universe which goes into PBHs of mass Mf at time tf . It is a
measure of the initial inhomogeneity of the Universe.
The fact that equation (9.2) has no exponential cut-off at high mass is
important because it allows us (at least in principle) to obtain a substantial
cosmological density of PBHs. Since 2 ≤ β ≤ 4, however, the power-law
distribution is dominated by PBHs of low mass. This is the primary reason why
PBHs turn out to be so tightly constrained by data on background radiation. It is
the low-mass PBHs whose contributions to the EBL via Hawking evaporation are
the strongest.
Much subsequent effort has gone into the identification of alternative
formation mechanisms which could give rise to a more favourable distribution of
PBH masses (i.e. one peaked at sufficiently high mass to provide the requisite
CDM density without the unwanted background radiation from the low-mass
tail). We pause here to survey some of these developments. Perhaps the least
speculative possibility is that PBHs arise from a post-inflationary spectrum of
density fluctuations which is not perfectly scale-invariant but has a characteristic
length of some kind [6]. The parameter ζ in (9.2) would then depend explicitly on
the inflationary potential (or analogous quantities). This kind of dependence has
been discussed, for example, in the context of two-stage inflation [7], extended
inflation [8], chaotic inflation [9], ‘plateau’ inflation [10], hybrid inflation [11]
and inflation via isocurvature fluctuations [12].
A narrow spectrum of masses might also be expected if PBHs formed during
a spontaneous phase transition rather than arising from primordial fluctuations.
The quark–hadron transition [13], grand unified symmetry-breaking transition
[14] and Weinberg–Salam phase transition [15] have all been considered in this
regard. The initial mass distribution in each case would be peaked near the
horizon mass Mf at transition time. The quark–hadron transition has attracted
particular attention because PBH formation at this time would be enhanced by
a temporary softening of the equation of state; and because Mf for this case is coincidentally close to M⊙, so that PBHs might be responsible for MACHO
observations of microlensing in the halo [16]. Cosmic string loops have also
been explored as possible seeds for PBHs with a peaked mass spectrum [17, 18].
And considerable interest has recently been generated by the discovery that PBHs
could provide a physical realization of the theoretical phenomenon known as
critical collapse [19]. If this is so, then initial PBH masses would no longer
necessarily be clustered near Mf .
While any of these proposals can, in principle, concentrate the PBH
population within a narrow mass range, all of them face the same problem of fine-
tuning if they are to produce the desired present-day density of PBHs. In the case
of inflationary mechanisms it is the form of the potential which must be adjusted.
In others it is the bubble nucleation rate, the string mass per unit length or the
fraction of the Universe going into PBHs at formation time. The degree of fine-
tuning required is typically of order one part in 109 . Thus, while modifications
of the initial mass distribution can weaken the ‘standard’ constraints on PBH
properties (which we derive later), they do not as yet have a compelling physical
basis. Similar comments apply to PBH-based explanations for specific classes of
observational phenomena. It has been suggested, for instance, that PBHs with the
right mass could be responsible for certain kinds of γ -ray bursts [20–22] or for
long-term quasar variability via microlensing [23, 24]. Other connections have
been drawn to diffuse γ -ray emission from the galactic halo [25, 26] as well as
the MACHO events mentioned earlier [27, 28]. These suggestions are intriguing,
but experimental confirmation of any one (or more) of them would raise almost
as many questions as it answers.
9.3 Evolution and number density

In order to obtain the comoving number density of PBHs from their initial mass distribution, we use the fact that PBHs evaporate at a rate which is inversely proportional to the square of their mass:

dM/dt = −α/M².    (9.3)
This applies to uncharged, non-rotating black holes, which is a reasonable
approximation in the case of PBHs since these objects discharge quickly relative
to their lifetimes [29] and also give up angular momentum by preferentially
emitting particles with spin [30]. The parameter α depends in general on the
PBH mass M and its behaviour was worked out in detail by Page in 1976
[31]. The PBHs which are of most importance for our purposes are those with 4.5 × 10¹⁴ g ≲ M ≲ 9.4 × 10¹⁶ g. Black holes in this range are light enough (and therefore ‘hot’ enough) to emit massless particles (including photons) as well as ultrarelativistic electrons and positrons. The corresponding value of α is

α = 6.9 × 10²⁵ g³ s⁻¹.    (9.4)
For M > 9.4 × 10¹⁶ g, the value of α drops to 3.8 × 10²⁵ g³ s⁻¹ because the larger black hole is ‘cooler’ and no longer able to emit electrons and positrons.
EBL contributions from PBHs of this mass are of lesser importance because of
the shape of the mass distribution.
As the PBH mass drops below 4.5 × 10¹⁴ g, its energy kT climbs above the
rest energies of successively heavier particles, beginning with muons and pions.
As each mass threshold is passed, the PBH is able to emit more particles and the
value of α increases further. At temperatures above the quark–hadron transition
(kT ≈ 200 MeV), MacGibbon and Webber have shown that relativistic quark and
gluon jets are likely to be emitted rather than massive elementary particles [32].
These jets subsequently fragment into stable particles, and the photons produced
in this way are actually more important (at these energies) than the primary photon
flux. The precise behaviour of α in this regime depends on one’s choice of particle
physics. A plot of α(M) for the standard model is found in the review by Halzen
et al [33], who also note that α climbs to 7.8 × 10²⁶ g³ s⁻¹ at kT = 100 GeV, and
that its value would be at least three times higher in supersymmetric extensions
of the standard model where there are many more particle states to be emitted.
As we will shortly see, however, EBL contributions from PBHs at these
temperatures are suppressed by the fact that the latter have already evaporated
away all their mass and vanished. If we assume for the moment that PBH
evolution is adequately described by (9.3) with α = constant as given by (9.4),
then integration gives
M(t) = (Mi³ − 3αt)^(1/3).    (9.5)

The lifetime tpbh of a PBH is found by setting M(tpbh) = 0, giving tpbh = Mi³/3α. Therefore the initial mass of a PBH which is just disappearing today (tpbh = t0) is given by

M∗ = (3αt0)^(1/3).    (9.6)
Taking t0 = 16 Gyr and using (9.4) for α, we find that M∗ = 4.7 × 10¹⁴ g. A numerical analysis allowing for changes in the value of α over the full range of PBH masses with 0.06 ≤ Ωm,0 ≤ 1 and 0.4 ≤ h0 ≤ 1 leads to a somewhat larger result [33]:

M∗ = (5.7 ± 1.4) × 10¹⁴ g.    (9.7)
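The value of M∗ is easily cross-checked. A minimal sketch, assuming the constants quoted in the text (α = 6.9 × 10²⁵ g³ s⁻¹ for the relevant mass range, t0 = 16 Gyr):

```python
# Cross-check of eqs (9.5)-(9.7); alpha and t0 are the values quoted in the text.
GYR_S = 3.156e16            # seconds per Gyr
YR_S = 3.156e7              # seconds per year
ALPHA = 6.9e25              # g^3 s^-1, valid for 4.5e14 g <~ M <~ 9.4e16 g
T0 = 16 * GYR_S             # age of the Universe assumed in the text

M_star = (3 * ALPHA * T0) ** (1.0 / 3.0)    # eq (9.6): initial mass evaporating today

def lifetime_years(M_i):
    """PBH lifetime t_pbh = M_i^3 / (3 alpha), eq (9.5), converted to years."""
    return M_i**3 / (3 * ALPHA) / YR_S

# M_star comes out near 4.7e14 g, and the lifetime of an M_star hole is t0
# by construction; a 1e15 g hole would survive roughly ten times longer.
```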
The comoving number density of PBHs with masses between M and M + dM at any later time t then follows from (9.2) and (9.5):

n(M, t) dM = 𝒩 (M/M∗)² [(M/M∗)³ + t/t0]^−(β+2)/3 d(M/M∗)    (9.9)

where we have used (9.6) to replace M∗³ with 3αt0. Here the parameter 𝒩 is formally given in terms of the various parameters at PBH formation time by 𝒩 = (ζρf/Mf)(Mf/M∗)^(β−1) and has the dimensions of a number density. As we will see, it corresponds roughly to the comoving number density of PBHs of mass M∗. Following Page and Hawking [34], we allow 𝒩 to move up or down as required by observational constraints. The theory to this point is thus specified by two adjustable input parameters: the PBH normalization 𝒩 and the equation-of-state parameter β. In terms of the dimensionless variables μ ≡ M/M∗ and τ ≡ t/t0, equation (9.9) reads

n(μ, τ) dμ = 𝒩 μ² (μ³ + τ)^−(β+2)/3 dμ.    (9.10)
9.4 Cosmological density

To obtain the present mass density of PBHs with mass ratios between μ and μ + dμ, we multiply equation (9.10) by the PBH mass M = M∗μ and put τ = 1 so that

ρpbh(μ, 1) dμ = M∗𝒩 μ^(1−β) (1 + μ⁻³)^−(β+2)/3 dμ.    (9.11)
Integrating this expression over all mass ratios μ gives the combined density of PBHs at the present time:

ρpbh = kβ M∗𝒩    kβ ≡ Γ(a)Γ(b)/[3Γ(a + b)]    (9.13)

where Γ(x) is the gamma function, a ≡ (β − 2)/3 and b ≡ 4/3. Allowing β to take values from 2 through 5/2 (the most natural situation) and up to 4, we obtain:

kβ = ∞       (β = 2)
     1.87    (β = 2.5)
     0.88    (β = 3)
     0.56    (β = 3.5)
     0.40    (β = 4).    (9.14)
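The entries in (9.14) follow directly from the gamma-function expression for kβ. A short numerical cross-check (illustrative only):

```python
from math import gamma, inf

# Reproduce the k_beta values of eq (9.14) from
# k_beta = Gamma(a) Gamma(b) / (3 Gamma(a+b)), a = (beta-2)/3, b = 4/3.
def k_beta(beta):
    if beta <= 2:
        return inf          # Gamma(0) diverges: the mass integral is infinite
    a, b = (beta - 2) / 3.0, 4.0 / 3.0
    return gamma(a) * gamma(b) / (3 * gamma(a + b))

values = {b: round(k_beta(b), 2) for b in (2.5, 3.0, 3.5, 4.0)}
```

The divergence at β = 2 reflects the fact that the mass integral no longer converges for a dustlike equation of state, as discussed in the text.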
The combined mass density of all PBHs in the Universe is thus of order ρpbh ≈ M∗𝒩. That is, 𝒩 can be thought of as the present number density of PBHs, if the latter are imagined to consist predominantly of objects with M ≈ M∗. Equation (9.13) can be recast as a relation between 𝒩 and the PBH density parameter Ωpbh = ρpbh/ρcrit,0:

𝒩 = Ωpbh ρcrit,0/(kβ M∗).    (9.15)

If 𝒩 is restricted to values of order 10⁴ pc⁻³ or less, as found by Page and Hawking [34] and as we will confirm under the simplest assumptions, then (9.7), (9.14) and (9.15) together imply that Ωpbh is, at most, of order ∼10⁻⁸ h0⁻². If this upper limit holds, then there is little hope for PBHs to make up the dark matter.
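The bookkeeping behind (9.15) can be sketched numerically. In the sketch below the Page–Hawking value 𝒩 ≈ 10⁴ pc⁻³ is used purely as an input, with the other constants as quoted in the text:

```python
# Sketch of eq (9.15): convert a comoving PBH number density N (objects of
# mass ~M*) into the density parameter Omega_pbh * h0^2.
PC_CM = 3.0857e18                   # cm per parsec
RHO_CRIT = 1.88e-29                 # critical density / h0^2, in g cm^-3
M_STAR = 5.7e14                     # g, from eq (9.7)
K_BETA = 1.87                       # beta = 2.5 entry of eq (9.14)

def omega_pbh_h2(N_per_pc3):
    """Omega_pbh * h0^2 implied by a normalization N in pc^-3 (eq 9.15)."""
    n_cgs = N_per_pc3 / PC_CM**3    # pc^-3 -> cm^-3
    return K_BETA * M_STAR * n_cgs / RHO_CRIT

# The Page-Hawking bound N < 1e4 pc^-3 translates into
# Omega_pbh h0^2 of order 2e-8 -- far below the dark-matter density.
omega_limit = omega_pbh_h2(1.0e4)
```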
Equation (9.14) shows that one way to boost their importance would be to
assume a soft equation of state at formation time (i.e. a value of β close to 2 as
for dustlike matter, rather than 2.5 as for radiation). Physically this is related to
the fact that low-pressure matter offers little resistance to gravitational collapse.
Such a softening has, in fact, been shown to occur during the quark–hadron
transition [16], leading to increases in Ωpbh for PBHs which form at that time
(subject to the fine-tuning problem noted in section 9.2). For PBHs which arise
from primordial density fluctuations, however, conditions of this kind are unlikely
to hold throughout the formation epoch. In the limit β → 2 equation (9.2) breaks
down in any case because it becomes possible for PBHs to form on scales smaller
than the horizon [4].
9.5 Spectral energy distribution

The rate at which photons of energy between E and E + dE are emitted per mode by a black hole of temperature T is

dṄ = Γs {2πℏ[exp(E/kT) − 1]}⁻¹ dE    (9.16)

where Γs is the absorption coefficient: the probability that the same photon would be absorbed by the black hole if incident upon it in this state. The function dṄ is related to the spectral energy distribution (SED) of the black hole by dṄ = F(λ, μ) dλ/E, since we have defined F(λ, μ) dλ as the energy emitted between wavelengths λ and λ + dλ. Here we anticipate the fact that F will depend explicitly on the PBH mass as well as wavelength. The PBH SED thus satisfies

F(λ, μ) dλ = Γs E dE/{2πℏ[exp(E/kT) − 1]}.    (9.17)
The absorption coefficient Γs depends in general on the mass M of the black hole as well as the energy E and spin s (axial angular momentum) of the emitted particles. Its form was first calculated by Page [31]. At high energies, and in the vicinity of the peak of the emitted spectrum, a good approximation is given by [36]

Γs ∝ M²E².    (9.18)

This approximation breaks down at low energies, where it gives rise to errors of order 50% for (GME/ℏc³) ∼ 0.05 [37] or (with E = 2πℏc/λ and M ∼ M∗) for λ ∼ 10⁻³ Å. This is adequate for our purposes, as we will find that the strongest constraints on PBHs come from those with masses M ∼ M∗ at wavelengths λ ∼ 10⁻⁴ Å.
Putting (9.18) into (9.17) and making the change of variable to wavelength λ = hc/E, we obtain the SED

F(λ, μ) dλ = C μ² λ⁻⁵ dλ/[exp(hc/kTλ) − 1]    (9.19)
where C is a proportionality constant. This has the same form as the blackbody
spectrum, equation (3.22). One should, however, keep in mind that we have
made three simplifying assumptions in arriving at this equation. First, we have
neglected the black-hole charge and spin (as justified in section 9.3). Second, we
have used an approximation for the absorption coefficient Γs. And third, we have
treated all the emitted photons as if they are in the same quantum state whereas, in
fact, the emission rate (9.16) applies separately to the ℓ = s (= 1), ℓ = s + 1 and ℓ = s + 2 modes. There are thus actually three distinct quasi-blackbody photon
spectra with different characteristic temperatures for any single PBH. However,
Page [31] has demonstrated that the ℓ = s mode is overwhelmingly dominant, with the ℓ = s + 1 and ℓ = s + 2 modes contributing less than 1% and 0.01% of the
total photon flux respectively. Equation (9.19) is thus a reasonable approximation
to the SED of the PBH as a whole.
To fix the value of C we use the fact that the total flux of photons (in all modes) radiated by a black hole of mass M is given by [31]

Ṅ = ∫ dṄ = ∫₀^∞ F(λ, μ) dλ/(hc/λ) = 5.97 × 10³⁴ s⁻¹ (M/1 g)⁻¹.    (9.20)
Inserting (9.19) and recalling that M = M∗μ, we find that

C ∫₀^∞ λ⁻⁴ dλ/[exp(hc/kTλ) − 1] = (5.97 × 10³⁴ g s⁻¹) hc/(M∗μ³).    (9.21)
The definite integral on the left-hand side of this equation can be solved by switching variables to ν = c/λ:

∫₀^∞ (ν² dν/c³)/[exp(hν/kT) − 1] = Γ(3)ζ(3)(hc/kT)⁻³    (9.22)
where Γ(n) and ζ(n) are the gamma function and Riemann zeta function respectively. We then apply the fact that the temperature T of an uncharged, non-rotating black hole is given by

T = ℏc³/(8πkGM).    (9.23)
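The Bose-type integral in (9.22) is easy to verify by direct quadrature. The midpoint scheme below is an illustration, not taken from the text:

```python
import math

# Numerical sanity check of the dimensionless part of eq (9.22):
# Int_0^inf x^2 dx / (e^x - 1) = Gamma(3) * zeta(3) = 2 * 1.202... ~= 2.404.
def bose_integral(power=2, x_max=60.0, n=200000):
    dx = x_max / n
    total = 0.0
    for i in range(1, n + 1):
        x = (i - 0.5) * dx          # midpoint rule; integrand -> 0 at both ends
        total += x**power / math.expm1(x) * dx
    return total

val = bose_integral()
```

(`expm1` avoids loss of precision near x = 0, where the integrand behaves like x.)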
Putting (9.22) and (9.23) into (9.21) and rearranging terms leads to

C = (5.97 × 10³⁴ g s⁻¹)(4π)⁶ hG³M∗²/[c⁵Γ(3)ζ(3)].    (9.24)

Using Γ(3) = 2! = 2 and ζ(3) = Σ_{k=1}^∞ k⁻³ = 1.202 along with (9.7) for M∗, we find

C = (270 ± 120) erg Å⁴ s⁻¹.    (9.25)
We can also use the definition (9.23) to define a useful new quantity:

λpbh ≡ hc/kT = (4π)² GM∗/c² = (6.6 ± 1.6) × 10⁻⁴ Å.    (9.26)
The size of this characteristic wavelength tells us that we will be concerned primarily with the high-energy γ-ray portion of the spectrum. In terms of C and λpbh the SED (9.19) now reads

F(λ, μ) = C μ² λ⁻⁵/[exp(μλpbh/λ) − 1].    (9.27)
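Equation (9.26) can be checked directly with standard cgs constants; a short sketch:

```python
import math

# Check of eq (9.26): lambda_pbh = (4 pi)^2 G M* / c^2, in cgs units.
G = 6.674e-8        # cm^3 g^-1 s^-2
C_LIGHT = 2.998e10  # cm s^-1
M_STAR = 5.7e14     # g, from eq (9.7)

lam_cm = (4 * math.pi)**2 * G * M_STAR / C_LIGHT**2   # characteristic wavelength, cm
lam_angstrom = lam_cm * 1.0e8                         # 1 cm = 1e8 Angstrom

# Comes out near 6.7e-4 Angstrom, matching (6.6 +/- 1.6) x 10^-4 in the text.
```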
9.6 Luminosity
To compute the PBH luminosity we employ equation (3.1), integrating the SED F(λ, μ) over all wavelengths to obtain:

L(μ) = C μ² ∫₀^∞ λ⁻⁵ dλ/[exp(μλpbh/λ) − 1].    (9.28)

This definite integral is solved as before by switching variables to frequency, with the result that

L(μ) = C μ² (μλpbh)⁻⁴ Γ(4)ζ(4).    (9.29)
Using equations (9.7), (9.24) and (9.26) along with the values Γ(4) = 3! = 6 and ζ(4) = π⁴/90, we can put this into the form

L(μ) = Lpbh μ⁻²    (9.30)

where

Lpbh = (5.97 × 10³⁴ g s⁻¹) π²hc³/[480 ζ(3) GM∗²] = (1.0 ± 0.4) × 10¹⁶ erg s⁻¹.

In solar units this reads

L/L⊙ = 1.7 × 10⁻⁵⁵ (M/M⊙)⁻².    (9.31)
This expression is not strictly valid for PBHs of masses near M⊙, having been derived for those with M ∼ M∗ ∼ 10¹⁵ g. For more massive PBHs, the
luminosity is if anything lower. (In fact, a black hole of Solar mass would be
colder than the CMB and would absorb radiation faster than it could emit it.) So,
Hawking evaporation or not, most black holes are indeed very black.
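The closing remark can be made quantitative with the temperature formula (9.23); a sketch in cgs units (the constants are standard values, not taken from the text):

```python
import math

# Hawking temperature T = hbar c^3 / (8 pi k G M), eq (9.23), in cgs units.
HBAR = 1.0546e-27   # erg s
C = 2.998e10        # cm s^-1
K_B = 1.3807e-16    # erg K^-1
G = 6.674e-8        # cm^3 g^-1 s^-2
M_SUN = 1.989e33    # g

def hawking_T(M_grams):
    return HBAR * C**3 / (8 * math.pi * K_B * G * M_grams)

T_star = hawking_T(5.7e14)   # ~2e11 K (kT ~ 20 MeV): hot enough for e+, e-, photons
T_sun = hawking_T(M_SUN)     # ~6e-8 K: far below the 2.7 K CMB, so it absorbs
```

A solar-mass hole is some eighteen orders of magnitude colder than an M∗ hole, which is the sense in which ordinary black holes are "very black".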
9.7 Bolometric intensity

The bolometric intensity of the background radiation due to evaporating PBHs is found from the integral (2.14) as usual. This time, however, number density n(t) is to be replaced by n(μ, τ) dμ, the number density of PBHs with mass ratios between μ and μ + dμ at time t = t0τ. Similarly, L(μ) takes the place of L(t). We then integrate over PBH mass μ as well as time τ in order to obtain the total intensity:

Q = ct0 𝒩 Lpbh ∫_τf^1 R̃(τ) dτ ∫_μc(τ)^∞ dμ/(μ³ + τ)^ε.    (9.32)

Here ε ≡ (β + 2)/3 and μc(τ) is a minimum cut-off mass, equal to the mass of the lightest PBH which has not yet evaporated at time τ. This arises because the initial PBH mass distribution (9.2) must, in general, have a non-zero minimum Mmin in order to avoid divergences at low mass. As soon as the lightest PBHs have formed and started to evaporate, however, the cut-off begins to drop from its initial value of μc(0) = Mmin/M∗. If Mmin is of the order of the Planck mass as suggested by Barrow et al [38], then one finds that μc(τ) drops to zero well before the end of the radiation-dominated era. Since we are concerned with times later than this, we can safely set μc(τ) = 0 in what follows.
Using (9.15) for 𝒩, we therefore rewrite equation (9.32) as

Q = Qpbh Ωpbh ∫_τf^1 R̃(τ) dτ ∫_0^∞ dμ/(μ³ + τ)^ε    (9.33)

where

Qpbh = ct0 ρcrit,0 Lpbh/(kβ M∗).    (9.34)
In this form, equation (9.33) can be used to put a rough upper limit on Ωpbh from the bolometric intensity of the background light [39]. Let us assume that the Universe is flat, as suggested by most observations (chapter 4). Then its age t0 can be obtained from equation (2.70) as

t0 = 2t̃0/(3H0).    (9.35)

Here t̃0 ≡ t̃m(0) where t̃m(z) is the dimensionless function

t̃m(z) ≡ (1 − Ωm,0)^(−1/2) sinh⁻¹ √[(1 − Ωm,0)/(Ωm,0(1 + z)³)].    (9.36)
We are now ready to evaluate equation (9.33). To begin with we note that the integral over mass has an analytic solution:

∫_0^∞ dμ/(μ³ + τ)^ε = kε τ^(1/3−ε)    kε ≡ Γ(1/3)Γ(ε − 1/3)/[3Γ(ε)].    (9.38)
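The analytic solution (9.38) can be verified against direct quadrature. In the sketch below, ε = 1.5 and τ = 0.5 are arbitrary test points:

```python
from math import gamma

# Spot-check of eq (9.38):
# Int_0^inf dmu / (mu^3 + tau)^eps = k_eps * tau^(1/3 - eps),
# with k_eps = Gamma(1/3) Gamma(eps - 1/3) / (3 Gamma(eps)).
def k_eps(eps):
    return gamma(1.0 / 3.0) * gamma(eps - 1.0 / 3.0) / (3 * gamma(eps))

def mass_integral(eps, tau, mu_max=200.0, n=400000):
    """Midpoint-rule quadrature; the integrand falls off as mu^(-3 eps)."""
    dmu = mu_max / n
    return sum((((i - 0.5) * dmu)**3 + tau)**-eps * dmu for i in range(1, n + 1))

eps, tau = 1.5, 0.5
analytic = k_eps(eps) * tau**(1.0 / 3.0 - eps)
numeric = mass_integral(eps, tau)
```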
The parameter τf is simply obtained for the EdS case by inverting (9.39) to give
τf = (1 + z f )−3/2 . The subscript ‘f’ (by which we usually mean ‘formation’) is
here a misnomer since we do not integrate back to PBH formation time, which
occurred in the early stages of the radiation-dominated era. Rather we integrate
back to the redshift at which processes like pair production become significant
enough to render the Universe approximately opaque to the (primarily γ -ray)
photons from PBH evaporation. Following Kribs et al [37] this is z f ≈ 700.
Using this value of z f and substituting equations (9.37) and (9.38) into (9.40),
we find that the bolometric intensity of background radiation due to evaporating
PBHs in an EdS Universe is
Q = h0 Ωpbh ×
    0                          (β = 2)
    2.3 ± 1.4 erg s⁻¹ cm⁻²    (β = 2.5)
    6.6 ± 4.2 erg s⁻¹ cm⁻²    (β = 3)    (9.41)
    17 ± 10 erg s⁻¹ cm⁻²      (β = 3.5)
    45 ± 28 erg s⁻¹ cm⁻²      (β = 4).
This vanishes for β = 2 because kβ → ∞ in this limit, as discussed in section 9.4.
The case β = 4 (i.e. ε = 2) is evaluated with the help of L’Hôpital’s rule, which gives lim_{ε→2} (1 − τf^(2−ε))/(2 − ε) = −ln τf.
All the values of Q in (9.41) are far higher than the actual bolometric EBL intensity in an EdS Universe, which is (2/5)Q∗ = 1.0 × 10⁻⁴ erg s⁻¹ cm⁻² from (2.49). Moreover this background is already well accounted for by the integrated light from galaxies. A firm upper bound on Ωpbh (for the most natural situation with β = 2.5) is therefore

Ωpbh < 4.4 × 10⁻⁵ h0⁻¹.    (9.42)
For harder initial equations of state (β > 2.5) the PBH density would have to
be even lower. PBHs in the simplest formation scenario are thus eliminated as
important dark-matter candidates, even without reference to the γ -ray spectrum.
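The resulting bound can be sketched as follows; the central values of (9.41) are used here without their uncertainties:

```python
# Sketch of the bolometric argument: the predicted Q = h0 * Omega_pbh * q_beta
# must not exceed the EdS EBL intensity of about 1e-4 erg s^-1 cm^-2.
Q_EBL = 1.0e-4                                        # erg s^-1 cm^-2
q_beta = {2.5: 2.3, 3.0: 6.6, 3.5: 17.0, 4.0: 45.0}   # central values of eq (9.41)

def omega_pbh_bound(beta, h0=0.75):
    """Upper limit on Omega_pbh from bolometric intensity alone."""
    return Q_EBL / (h0 * q_beta[beta])

# beta = 2.5 gives Omega_pbh below ~6e-5; harder equations of state give less.
soft_bound = omega_pbh_bound(2.5)
hard_bound = omega_pbh_bound(4.0)
```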
Figure 9.1. The bolometric intensity due to evaporating primordial black holes as a function of their collective density Ωpbh and the equation-of-state parameter β. We have assumed that Ωm,0 = Ωbar + Ωpbh with Ωbar = 0.016h0⁻², h0 = 0.75 and ΩΛ,0 = 1 − Ωm,0. The horizontal dotted line indicates the approximate bolometric intensity (Q∗) of the observed EBL.
This conclusion is based on equation (9.30) for PBH luminosity, which is valid
over the most important part of the integral. Corrections for higher and lower
masses will not alter the bound (9.42) by more than an order of magnitude.
The results in (9.41) assume an EdS cosmology. For more general flat models containing both matter and vacuum energy, integrated background intensity would go up, because the Universe is older, and down, because Q ∝ Ωpbh. The latter effect is the stronger one, so that our constraints on Ωpbh will be weakened in a model such as ΛCDM (with Ωm,0 = 0.3, ΩΛ,0 = 0.7). To determine the importance of this effect, we can re-evaluate the integral (9.33) using the general formula (2.68) for R̃(τ) in place of (9.39). We will make the minimal assumption that PBHs constitute the only CDM, so that Ωm,0 = Ωpbh + Ωbar with Ωbar given by (4.4) as usual. Equation (2.70) shows that the parameter τf is given for arbitrary values of Ωm,0 by τf = t̃m(zf)/t̃0 where the function t̃m(z) is defined as before by (9.36).
Carrying out the integration in (9.33), we obtain the plots of bolometric intensity Q as a function of Ωpbh shown in figure 9.1. As before, Q is proportional to h0 because it goes as both ρpbh = Ωpbh ρcrit,0 ∝ h0² and t0 ∝ h0⁻¹. Since Q → 0 for β → 2 we have chosen a minimum value of β = 2.2 as being representative of ‘soft’ conditions.
9.8 Spectral intensity

The spectral intensity of the background radiation due to evaporating PBHs follows in the same way, with the SED (9.27) in place of the luminosity L(μ) and a redshift factor R̃(τ) entering the exponential. Carrying the mass distribution (9.10) through the calculation, the result can be written

Iλ(λ0) = Ipbh Ωpbh ∫_τf^1 R̃⁻³(τ) dτ ∫_0^∞ μ⁴ dμ/{(μ³ + τ)^ε [exp(μλpbh/R̃(τ)λ0) − 1]}    (9.44)

with Ipbh a constant analogous to Qpbh.
Figure 9.2. The spectral intensity of the diffuse γ-ray background from evaporating primordial black holes in flat models, as compared with experimental limits from the SAS-2, COMPTEL and EGRET instruments. The left-hand panel (a) assumes Ωm,0 = 0.06, while the right-hand panel (b) is plotted for Ωm,0 = 1 (the EdS case). All curves assume Ωpbh = 10⁻⁸ and h0 = 0.75.
These limits are three orders of magnitude stronger than the one from bolometric intensity, again confirming that PBHs in the simplest formation scenario cannot be significant contributors to the dark matter. Equation (9.46) can be put into the form of an upper limit on the PBH number-density normalization 𝒩 with the help of (9.15), giving

𝒩 < (2.2 ± 0.8) × 10⁴ h0 pc⁻³    (Ωm,0 = 0.06)
𝒩 < (3.2 ± 1.1) × 10⁴ h0 pc⁻³    (Ωm,0 = 1).    (9.47)
These numbers are in good agreement with the original Page–Hawking bound of 𝒩 < 1 × 10⁴ pc⁻³, which was obtained for h0 = 0.6 [34].
Subsequent workers have refined the γ-ray background constraints on Ωpbh and 𝒩 in a number of ways. An important development was the realization
by MacGibbon and Webber [32] that PBHs whose effective temperatures have
climbed above the rest energy of hadrons are likely to give off more photons by
indirect processes than by direct emission. This is because hadrons are composite
particles, made up of quarks and gluons. It is these elementary constituents
(rather than their composite bound states) which should be emitted from a very
hot PBH in the form of relativistic quark and gluon jets. Numerical simulations
and accelerator experiments indicate that these jets subsequently fragment into
secondary particles whose decays (especially those of the pions) produce a far
greater flux of photons than that emitted directly from the PBH. The net effect is
to increase the PBH luminosity, particularly in low-energy γ-rays, strengthening the constraint on Ωpbh by about an order of magnitude [36]. The most recent upper limit obtained in this way using EGRET data (assuming Ωm,0 = 1) is Ωpbh < (5.1 ± 1.3) × 10⁻⁹ h0⁻² [43].
Complementary bounds on PBH contributions to the dark matter have come
from direct searches for those evaporating within a few kpc of the Earth. Such
limits are subject to more uncertainty than ones based on the EBL because they
depend on assumptions about the degree to which PBHs are clustered. If there
is no clustering then (9.42) can be converted into a stringent upper bound on the local PBH evaporation rate, 𝒩̇ < 10⁻⁷ pc⁻³ yr⁻¹. This, however, relaxes to 𝒩̇ ≲ 10 pc⁻³ yr⁻¹ if PBHs are strongly clustered [33], in which case limits from direct searches could potentially become competitive with those based on the EBL. Data taken at energies near 50 TeV with the CYGNUS air-shower array have led to a bound of 𝒩̇ < 8.5 × 10⁵ pc⁻³ yr⁻¹ [44]; and a comparable limit of 𝒩̇ < (3.0 ± 1.0) × 10⁶ pc⁻³ yr⁻¹ has been obtained at 400 GeV using an imaging
atmospheric Čerenkov technique developed by the Whipple collaboration [45].
The strongest constraint yet reported has come from balloon measurements of the
cosmic-ray antiproton flux below 0.5 GeV, from which it has been inferred that 𝒩̇ < 0.017 pc⁻³ yr⁻¹ [46].
Other ideas have been advanced which could weaken the bounds on PBHs as
dark-matter candidates. It might be, for instance, that these objects leave behind
stable relics rather than evaporating completely [47]. This, however, raises a new
problem (similar to the ‘gravitino problem’ discussed in section 8.6) because such
relics would have been overproduced by quantum and thermal fluctuations in the
early Universe. Inflation can be invoked to reduce their density but must be finely
tuned if the same relics are to make up an interesting fraction of the dark matter
today [48]. A more promising possibility has been opened up by the suggestion of
Heckler [49, 50] that particles emitted from the surface interact strongly enough
above a critical temperature to form a black-hole photosphere. This would make
the PBH appear cooler as seen from a distance than its actual surface temperature,
just as the Solar photosphere makes the Sun appear cooler than its core. (In the
case of the black hole, however, one has not only an electromagnetic photosphere
but a QCD ‘gluosphere’.) The reality of this effect is still under debate [43]
but preliminary calculations indicate that it could reduce the intensity of PBH
contributions to the γ -ray background by 60% at 100 MeV and by as much as
two orders of magnitude at 1 GeV [51].
Finally, as discussed already in section 9.2, the limits obtained here can be
weakened or evaded if PBH formation occurs in such a way as to produce fewer
low-mass objects. The challenge faced in such proposals is to explain how a
distribution of this kind comes about in a natural way. A common procedure is to
turn the question around and use observational data on the present intensity of the
γ -ray background as a probe of the original PBH formation mechanism. Such an
approach has been applied, for example, to put constraints on the spectral index of
density fluctuations in the context of PBHs which form via critical collapse [37],
or inflation with a ‘blue’ or tilted spectrum [52]. Even if they do not exist, in
other words, primordial black holes provide a valuable window on conditions in
the early Universe, where information is otherwise scarce.
9.9 Higher-dimensional ‘black holes’

In view of the fact that conventional black holes are disfavoured as dark-matter
candidates, it is worthwhile to consider alternatives. One of the simplest of
these is the extension of the black-hole concept from the four-dimensional
(4D) spacetime of general relativity to higher dimensions. Higher-dimensional
relativity, also known as Kaluza–Klein gravity, has a long history and underlies
modern attempts to unify gravity with the standard model of particle physics
(see [53] for a review). The extra dimensions have traditionally been assumed
to be compact, in order to explain their non-appearance in low-energy physics.
The past few years, however, have witnessed a surge of interest in non-
compactified theories of higher-dimensional gravity [54–56]. In such theories the field equations in vacuum read
R̂_AB = 0.    (9.50)

Here R̂_AB is the 5D Ricci tensor, defined in exactly the same way as the 4D one except that spacetime indices A, B run over 0–4 instead of 0–3. Putting a 5D metric such as (9.49) into the vacuum 5D field equations (9.50), one finds soliton solutions whose density ρs(r) and gravitational mass Mg(r) behave at large r as

ρs(r) → c²ξ²κ/(2πGa²r⁴)    Mg(r) → Mg(∞) = 2c²ξκ/(Ga).    (9.53)

The second of these expressions shows that the asymptotic value of Mg is, in general, not the same as Ms [Mg(∞) = ξκMs for r ≫ 1/a], but reduces to it in
the limit ξ κ → 1. Viewed in four dimensions, the soliton resembles a hole in the
geometry surrounded by a spherically-symmetric ball of ultrarelativistic matter
whose density falls off at large distances as 1/r 4 . If the Universe does have more
than four dimensions, then objects like this should be common, being generic to
5D Kaluza–Klein gravity in exactly the same way black holes are to 4D general
relativity.
Higher-dimensional ‘black holes’ 189
Let us, therefore, assess their impact on the background radiation, using the
same methods as usual, and assuming that the fluid making up the soliton is in fact
composed of photons (although one might also consider ultrarelativistic particles
such as neutrinos in principle). We do not have spectral information on these so
we proceed bolometrically. Putting the second of equations (9.53) into the first gives

ρs(r) ≈ GMg²/(8πc²κr⁴).    (9.54)
Numbers can be attached to the quantities κ, r and Mg as follows. The first
(κ) is technically a free parameter. However, a natural choice from the physical
point of view is κ ∼ 1. For this case the consistency relation implies ξ ∼ 1
also, guaranteeing that the asymptotic gravitational mass of the soliton is close
to its Schwarzschild one. To obtain a value for r , let us assume that solitons are
distributed homogeneously through space with average separation d and mean density ρ̄s = Ωs ρcrit,0 = Ms/d³. Since ρs drops as r⁻⁴ whereas the number of solitons at a distance r climbs only as r³, the local density of solitons is largely determined by the nearest one. We can therefore replace r by d = (Ms/Ωs ρcrit,0)^(1/3). The last unknown in (9.54) is the soliton mass Mg (= Ms if
κ = 1). The fact that ρs ∝ r −4 is reminiscent of the density profile of the galactic
dark-matter halo, equation (6.14). Theoretical work on the classical tests of 5D
general relativity [64] and limits on violations of the equivalence principle [65]
also suggests that solitons are likely to be associated with dark matter on galactic
or larger scales. Let us therefore express Ms in units of the mass of the Galaxy, which from (6.15) is Mgal ≈ 2 × 10¹² M⊙. Equation (9.54) then gives the local energy density of solitonic fluid as

ρs c² ≈ (3 × 10⁻¹⁷ erg cm⁻³) h0^(8/3) Ωs^(4/3) (Ms/Mgal)^(2/3).    (9.55)
To get a characteristic value, we take Ms = Mgal and adopt our usual values h0 = 0.75 and Ωs = Ωcdm = 0.3. Let us, moreover, compare our result to the average energy density of the CMB, which dominates the spectrum of background radiation (figure 1.2). The latter is found from (5.37) as ρcmb c² = Ωγ ρcrit,0 c² = 4 × 10⁻¹³ erg cm⁻³. We therefore obtain

ρs/ρcmb ≈ 7 × 10⁻⁶.    (9.56)
This is of the same order of magnitude as the limit set on anomalous contributions
to the CMB by COBE and other experiments. We infer, therefore, that the
dark matter may consist of solitons but that they are probably not more massive
than galaxies. Similar arguments can be made on the basis of tidal effects and
gravitational lensing [63]. To go further and put more detailed constraints on
these candidates from background radiation or other considerations will require a
more detailed investigation of their microphysical properties.
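Equations (9.55) and (9.56) amount to a one-line calculation; a sketch with the characteristic values adopted above:

```python
# Sketch of eqs (9.55)-(9.56): energy density of solitonic fluid relative
# to the CMB, using the characteristic parameter values quoted in the text.
RHO_CMB_C2 = 4.0e-13    # erg cm^-3, from (5.37)

def rho_soliton_c2(h0=0.75, omega_s=0.3, mass_ratio=1.0):
    """Eq (9.55); mass_ratio is Ms expressed in units of M_gal."""
    return 3.0e-17 * h0**(8.0 / 3.0) * omega_s**(4.0 / 3.0) * mass_ratio**(2.0 / 3.0)

ratio = rho_soliton_c2() / RHO_CMB_C2   # ~7e-6, as in eq (9.56)
```

Note that the ratio grows with Ms^(2/3), which is why solitons much more massive than galaxies would conflict with the COBE limit mentioned above.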
Let us summarize our results for this chapter. We have noted that standard
(stellar) black holes cannot provide the dark matter insofar as their contributions to
the density of the Universe are effectively baryonic. Primordial black holes evade
this constraint but we have reconfirmed the classic results of Page, Hawking and
others: the collective density of such objects must be negligible, for otherwise
their presence would have been obvious in the γ -ray background. In fact, we
have shown that their bolometric intensity alone is sufficient to rule them out as
important dark-matter candidates. These constraints may be relaxed if primordial
black holes form in such a way as to favour objects of higher mass, but it is
not clear that such a distribution can be shown to arise in a natural way. As an alternative, we have considered black-hole-like objects in higher-dimensional gravity. If the world does have more than four dimensions, as suggested by modern unified-field theories, then these should exist and could be the dark
matter. This is consistent with the bolometric intensity of the EBL, but there are a
number of theoretical issues to be worked out before a more definitive assessment
of their potential can be made.
References
[1] Zeldovich Y B and Novikov I D 1966 Sov. Astron. 10 602
[2] Hawking S W 1971 Mon. Not. R. Astron. Soc. 152 75
[3] Hawking S W 1974 Nature 248 30
[4] Carr B J 1975 Astrophys. J. 201 1
[5] Carr B J 1976 Astrophys. J. 206 8
[6] Khlopov M Y, Malomed B A and Zeldovich Y B 1985 Mon. Not. R. Astron. Soc. 215 575
[7] Nasel’skii P D and Polnarëv A G 1985 Sov. Astron. 29 487
[8] Hsu S D H 1990 Phys. Lett. B 251 343
[9] Carr B J and Lidsey J E 1993 Phys. Rev. D 48 543
[10] Ivanov P, Naselsky P and Novikov I 1994 Phys. Rev. D 50 7173
[11] García-Bellido J, Linde A and Wands D 1996 Phys. Rev. D 54 6040
[12] Yokoyama J 1997 Astron. Astrophys. 318 673
[13] Crawford M and Schramm D N 1982 Nature 298 538
[14] Kodama H, Sasaki M and Sato K 1982 Prog. Theor. Phys. 68 1979
[15] Hawking S W, Moss I G and Stewart J M 1982 Phys. Rev. D 26 2681
[16] Jedamzik K 1997 Phys. Rev. D 55 R5871
[17] Hawking S W 1989 Phys. Lett. B 231 237
[18] Polnarev A and Zembowicz R 1991 Phys. Rev. D 43 1106
[19] Niemeyer J C and Jedamzik K 1998 Phys. Rev. Lett. 80 5481
[20] Cline D B and Hong W 1992 Astrophys. J. 401 L57
[21] Cline D B, Sanders D A and Hong W 1997 Astrophys. J. 486 169
[22] Green A M 2002 Phys. Rev. D 65 027301
[23] Hawkins M R S 1993 Nature 366 242
[24] Hawkins M R S 1996 Mon. Not. R. Astron. Soc. 278 787
[25] Wright E L 1996 Astrophys. J. 459 487
[26] Cline D B 1998 Astrophys. J. 501 L1
Conclusions
Why is the sky dark at night, rather than being filled with the light of distant
galaxies? We have answered this question both qualitatively and quantitatively.
The brightness of the night sky is set by the finite age of the Universe, which
limits the number of photons that galaxies have been able to contribute to the
extragalactic background light. Expansion further darkens an already-black sky
by stretching and dimming the light from distant galaxies. This is, however, a
secondary effect. If we could freeze the expansion without altering the lifetime
of the galaxies, then the night sky would brighten by no more than a factor of
two to three at optical wavelengths, depending on the evolutionary history of the
galaxies and the makeup of the cosmological fluid.
What makes up the dark matter? This question still awaits a definitive
answer. However, we have seen that the machinery which settles the first
question goes some way toward resolving this one as well. Most dark-matter
candidates are not, in fact, perfectly black. Like the galaxies, they contribute
to the extragalactic background light with characteristic signatures at specific
wavelengths. Experimental data on the spectral intensity of this light therefore tell
us what the dark matter can (or cannot) be. It cannot be vacuum energy decaying
into photons, because this would lead to levels of microwave background radiation
in excess of those observed. It cannot consist of axions or neutrinos with
rest energies in the eV range, because these would produce too much infrared,
optical or ultraviolet background light, depending on their lifetimes and coupling
parameters. It could consist of supersymmetric weakly interacting massive
particles (WIMPs) such as neutralinos or gravitinos, but data on the x-ray and
γ-ray backgrounds imply that these must be very nearly stable. The same data
exclude a significant role for primordial black holes, whose Hawking evaporation
produces too much light at γ-ray wavelengths. Higher-dimensional analogues of
black holes known as solitons are more difficult to constrain but an analysis based
on the integrated intensity of the background radiation at all wavelengths suggests
that they could be dark-matter objects if their masses are not larger than those of
galaxies.
relics with rest energies in the range 10^12–10^16 GeV [3]. Gluinos [4] and axinos
[5], the supersymmetric counterparts of gluons and axions, are other examples.
High-energy γ-rays also provide us with a crucial experimental window on
some of the dark-matter candidates predicted by higher-dimensional unified-field
theories such as string and M theory [6] and brane-world scenarios [7, 8].
The dark night sky is thus our most versatile dark-matter detector. Its
potential in this regard is summed up nicely by a quote from the eighteenth-
century English amateur astronomer and cosmologist Thomas Wright. In his
1750 book titled An original theory or new hypothesis of the Universe, he wrote
as follows:
We may justly suppose that so many radiant bodies were not created
barely to enlighten an infinite void, but to make their much more
numerous attendants visible. [9]
Of the attendants we have discussed, which is most likely to make up the dark
matter? It would be foolhardy to answer this too confidently, but we will close
this book with some opinions of our own. The results we have obtained here
support the view that the dark matter cannot involve ‘ordinary’ particles, by which
we mean those of the standard model of particle physics (including those which
may have fallen into black holes). We think it unlikely that baryons, neutrinos
or black holes will be found to form the bulk of the dark matter. Vacuum energy,
similarly, is a possibility only insofar as it does not decay into baryons or photons.
This leaves us with the other ‘exotic’ candidates. There is no doubt that the
frontrunners here are axions and supersymmetric WIMPs. These particles are
theoretically well motivated and physically well defined enough to be testable.
Both, however, arise in the context of particle-physics theories which do not yet
make room for gravity. It may be that the nature of the dark matter will continue
to resist understanding until a more fully unified theory of all the interactions is
at hand. In our view such a theory is likely to involve more than four dimensions.
Thus, while we are justified at present in directing most of our theoretical and
experimental efforts at WIMPs and axions, experience also suggests to us that
the dark matter may involve higher-dimensional objects such as solitons and
branes.
A final word is in order about the phrase ‘dark matter’, which while natural is
perhaps semantically unfortunate. It has been known since Einstein that mass and
energy are equivalent: in recent years particle physicists have found it increasingly
difficult to define the mass of a particle (as opposed, for example, to the energy
of a resonance); it is now acknowledged by workers in general relativity that the
energy of the gravitational field in that theory cannot be localized; and in higher-
dimensional extensions of general relativity, energy in spacetime is derived from
the geometry of higher dimensions in a way that intimately mixes the properties
of ‘matter’ and ‘vacuum’. The darkness of the night sky is consistent with other
cosmological data in telling us that there is something out there which gravitates
or curves spacetime. We are of the opinion that this substance, whatever we call it,
References
[1] Carr B J and Sakellariadou M 1999 Astrophys. J. 516 195
[2] Abazajian K, Fuller G M and Tucker W H 2001 Astrophys. J. 562 593
[3] Ziaeepour H 2001 Astropart. Phys. 16 101
[4] Berezinsky V, Kachelrieß M and Ostapchenko S 2002 Phys. Rev. D 65 083004
[5] Kim H B and Kim J E 2002 Phys. Lett. B 527 18
[6] Benakli K, Ellis J and Nanopoulos D V 1999 Phys. Rev. D 59 047301
[7] Hall L J and Smith D 1999 Phys. Rev. D 60 085008
[8] Hannestad S and Raffelt G G 2001 Phys. Rev. Lett. 87 051301
[9] Jaki S L 2001 The Paradox of Olbers’ Paradox 2nd edn (Pinckney, MI: Real View
Books) p 112
Appendix A

Bolometric intensity integrals
We assume that the density of relativistic matter is positive. Equation (A.1) then
has two solutions, depending on whether the Universe is open or closed. For open
models (Ω_{r,0} < 1),
$$\frac{Q}{Q_*} = \frac{1}{2(1-\Omega_{r,0})} - \frac{\sqrt{1+\Omega_{r,0}\,z_f(2+z_f)}}{2(1-\Omega_{r,0})(1+z_f)^2} + \frac{\Omega_{r,0}}{4(1-\Omega_{r,0})^{3/2}} \ln\left[\frac{\sqrt{1+\Omega_{r,0}\,z_f(2+z_f)}+\sqrt{1-\Omega_{r,0}}}{\sqrt{1+\Omega_{r,0}\,z_f(2+z_f)}-\sqrt{1-\Omega_{r,0}}} \times \frac{1-\sqrt{1-\Omega_{r,0}}}{1+\sqrt{1-\Omega_{r,0}}}\right]. \quad \text{(A.3)}$$
For the static case, (A.2) has a single solution for both open and closed models,
as follows:
$$\frac{Q_{\rm stat}}{Q_*} = \frac{1}{1-\Omega_{r,0}} - \frac{\sqrt{1+\Omega_{r,0}\,z_f(2+z_f)}}{(1-\Omega_{r,0})(1+z_f)}. \quad \text{(A.7)}$$

In the limit z_f → ∞, this reduces to

$$\frac{Q_{\rm stat}}{Q_*} \to \frac{1-\sqrt{\Omega_{r,0}}}{1-\Omega_{r,0}}. \quad \text{(A.8)}$$
These solutions are plotted in figure 2.2 of the main text.
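These closed-form solutions are easy to check numerically. The sketch below (Python; the function names are ours, and we assume the static radiation-model integrand Q_stat/Q_* = ∫₀^{z_f} dz/[(1+z)²√(1+Ω_{r,0}z(2+z))] implied by the expansion rate of chapter 2) compares (A.7) against direct midpoint quadrature and against the limit (A.8):

```python
import math

def q_stat_closed(omega_r, zf):
    """Closed-form static solution (A.7), open radiation models (omega_r < 1)."""
    s = math.sqrt(1 + omega_r*zf*(2 + zf))
    return 1/(1 - omega_r) - s/((1 - omega_r)*(1 + zf))

def q_stat_numeric(omega_r, zf, n=200000):
    """Midpoint quadrature of the assumed integrand."""
    h = zf/n
    return sum(h/((1 + (i + 0.5)*h)**2
                  * math.sqrt(1 + omega_r*(i + 0.5)*h*(2 + (i + 0.5)*h)))
               for i in range(n))

omega_r = 0.5
print(q_stat_closed(omega_r, 10.0), q_stat_numeric(omega_r, 10.0))
# For large z_f the closed form approaches the limit (A.8):
print(q_stat_closed(omega_r, 1e7), (1 - math.sqrt(omega_r))/(1 - omega_r))
```

The two quadrature values agree to several decimal places, which is a useful guard against transcription errors in formulas of this length.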
We assume that the matter density is positive, as before. Equation (A.9) then has
two solutions, depending on whether the Universe is open or closed. For open
For closed models (m,0 > 1), the solution is somewhat different:
$$\frac{Q}{Q_*} = \frac{-1}{2(\Omega_{m,0}-1)}\left[1+\frac{3\Omega_{m,0}}{2(\Omega_{m,0}-1)}\right] + \frac{\sqrt{1+\Omega_{m,0}z_f}}{2(\Omega_{m,0}-1)(1+z_f)}\left[\frac{1}{1+z_f}+\frac{3\Omega_{m,0}}{2(\Omega_{m,0}-1)}\right] + \frac{3\Omega_{m,0}^2}{4(\Omega_{m,0}-1)^{5/2}}\left[\tan^{-1}\sqrt{\frac{1+\Omega_{m,0}z_f}{\Omega_{m,0}-1}} - \tan^{-1}\sqrt{\frac{1}{\Omega_{m,0}-1}}\right]. \quad \text{(A.13)}$$
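As a cross-check on this expression, (A.13) can be compared against numerical quadrature. The sketch below (Python) assumes the matter-model integrand Q/Q_* = ∫₀^{z_f} dz/[(1+z)³√(1+Ω_{m,0}z)], which follows from the expansion rate H̃ = (1+z)√(1+Ω_{m,0}z) of the matter models:

```python
import math

def q_matter_closed(om, zf):
    """Closed-form (A.13) for closed matter-dominated models (om > 1)."""
    a = om - 1
    s = math.sqrt(1 + om*zf)
    return ((-1/(2*a))*(1 + 3*om/(2*a))
            + s/(2*a*(1 + zf))*(1/(1 + zf) + 3*om/(2*a))
            + 3*om**2/(4*a**2.5)
            * (math.atan(s/math.sqrt(a)) - math.atan(math.sqrt(1/a))))

def q_matter_numeric(om, zf, n=200000):
    """Midpoint quadrature of the assumed integrand."""
    h = zf/n
    return sum(h/((1 + (i + 0.5)*h)**3 * math.sqrt(1 + om*(i + 0.5)*h))
               for i in range(n))

print(q_matter_closed(2.0, 3.0), q_matter_numeric(2.0, 3.0))
```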
The solution of (A.10) for a static Universe also depends on the spatial curvature.
For open models,
$$\frac{Q_{\rm stat}}{Q_*} = \frac{1}{1-\Omega_{m,0}} - \frac{\sqrt{1+\Omega_{m,0}z_f}}{(1-\Omega_{m,0})(1+z_f)} + \frac{\Omega_{m,0}}{2(1-\Omega_{m,0})^{3/2}} \ln\left[\frac{\sqrt{1+\Omega_{m,0}z_f}+\sqrt{1-\Omega_{m,0}}}{\sqrt{1+\Omega_{m,0}z_f}-\sqrt{1-\Omega_{m,0}}} \times \frac{1-\sqrt{1-\Omega_{m,0}}}{1+\sqrt{1-\Omega_{m,0}}}\right]. \quad \text{(A.15)}$$
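The static matter-model solution admits the same kind of numerical check. The sketch below (Python) assumes the static integrand Q_stat/Q_* = ∫₀^{z_f} dz/[(1+z)²√(1+Ω_{m,0}z)]:

```python
import math

def q_stat_matter(om, zf):
    """Closed-form (A.15) for open matter-dominated models (om < 1)."""
    c = 1 - om
    s = math.sqrt(1 + om*zf)
    rc = math.sqrt(c)
    log_arg = ((s + rc)/(s - rc))*((1 - rc)/(1 + rc))
    return 1/c - s/(c*(1 + zf)) + om/(2*c**1.5)*math.log(log_arg)

def q_stat_matter_numeric(om, zf, n=200000):
    """Midpoint quadrature of the assumed static integrand."""
    h = zf/n
    return sum(h/((1 + (i + 0.5)*h)**2 * math.sqrt(1 + om*(i + 0.5)*h))
               for i in range(n))

print(q_stat_matter(0.3, 5.0), q_stat_matter_numeric(0.3, 5.0))
```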
Vacuum-dominated models
For models with Ω_{Λ,0} > 1, limiting values of EBL intensity occur as z_f → z_max(Ω_{Λ,0}) < ∞, as discussed in section 2.7 in the main text. In the expanding case equation (A.21) gives simply

$$\frac{Q}{Q_*} \to \frac{1}{\Omega_{\Lambda,0}} \qquad (\Omega_{\Lambda,0} > 1) \quad \text{(A.23)}$$

where we have used (2.58) for z_max. For the static case, equation (A.20) has three possible solutions, depending on whether Ω_{Λ,0} < 0, 0 < Ω_{Λ,0} < 1, or Ω_{Λ,0} > 1.
For models of the first kind,

$$\frac{Q_{\rm stat}}{Q_*} = \frac{1}{\sqrt{-\Omega_{\Lambda,0}}}\left[\cos^{-1}\left(\sqrt{\frac{-\Omega_{\Lambda,0}}{1-\Omega_{\Lambda,0}}}\,\frac{1}{1+z_f}\right) - \cos^{-1}\sqrt{\frac{-\Omega_{\Lambda,0}}{1-\Omega_{\Lambda,0}}}\right] \quad \text{(A.24)}$$

which has the following limit as z_f → ∞:

$$\frac{Q_{\rm stat}}{Q_*} \to \frac{1}{\sqrt{-\Omega_{\Lambda,0}}}\left[\frac{\pi}{2} - \cos^{-1}\sqrt{\frac{-\Omega_{\Lambda,0}}{1-\Omega_{\Lambda,0}}}\right]. \quad \text{(A.25)}$$
For models in which 0 < Ω_{Λ,0} < 1, the solution of (A.20) reads

$$\frac{Q_{\rm stat}}{Q_*} = \frac{1}{2\sqrt{\Omega_{\Lambda,0}}}\ln\left[\frac{\sqrt{1+(1-\Omega_{\Lambda,0})z_f(2+z_f)}-\sqrt{\Omega_{\Lambda,0}}}{\sqrt{1+(1-\Omega_{\Lambda,0})z_f(2+z_f)}+\sqrt{\Omega_{\Lambda,0}}} \times \frac{1+\sqrt{\Omega_{\Lambda,0}}}{1-\sqrt{\Omega_{\Lambda,0}}}\right]. \quad \text{(A.26)}$$
In the limit z f → z max , where the latter is defined as in (2.58), this expression
reduces to
$$\frac{Q_{\rm stat}}{Q_*} \to \frac{1}{2\sqrt{\Omega_{\Lambda,0}}}\ln\left(\frac{\sqrt{\Omega_{\Lambda,0}}+1}{\sqrt{\Omega_{\Lambda,0}}-1}\right) \quad \text{(A.29)}$$
a result close to that in (A.27). Figure 2.4 in the main text summarizes these
results.
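The simple limit (A.23) can also be recovered by brute-force integration. The sketch below (Python) assumes the expanding-case integrand Q/Q_* = ∫₁^{u_max} du/[u²√(Ω_{Λ,0}+(1−Ω_{Λ,0})u²)] with u = 1+z, and takes 1+z_max = √(Ω_{Λ,0}/(Ω_{Λ,0}−1)) as our reading of (2.58):

```python
import math

def q_vacuum_limit(omega_l, n=400000):
    """Midpoint quadrature up to u_max; the integrand is singular (but
    integrable) at the upper limit, so convergence is slow."""
    u_max = math.sqrt(omega_l/(omega_l - 1))
    h = (u_max - 1)/n
    total = 0.0
    for i in range(n):
        u = 1 + (i + 0.5)*h
        total += h/(u*u*math.sqrt(omega_l + (1 - omega_l)*u*u))
    return total

# (A.23): Q/Q* -> 1/omega_l for omega_l > 1
print(q_vacuum_limit(2.0))   # approaches 1/2
```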
Appendix B

Dynamics with a decaying vacuum
This appendix outlines the solution of equations (5.15), (5.16), (5.18) and (5.19)
in the flat decaying-vacuum cosmology which is the subject of chapter 5. These
equations specify the scale factor R and densities ρm , ρr and ρv of matter,
radiation and decaying vacuum energy respectively. We apply the boundary
conditions R(0) = 0, ρm (t0 ) = ρm,0 and ρr (t0 ) = ρr,0 .
The key is the differential equation (5.19), which gives R(t). This may be
solved analytically in three different regimes:
Regime 1: ρ_r + ρ_v ≫ ρ_m + ρ_c
Regime 2: ρ_r + ρ_v ≪ ρ_m, ρ_c = 0     (B.1)
Regime 3: ρ_r + ρ_v ≪ ρ_m + ρ_c, ρ_c ≠ 0
Here we distinguish between the time-varying part of the vacuum energy density
(ρv ) and its constant part (ρc ). Regime 1 describes the radiation-dominated era.
Regime 2 is a good approximation to the matter-dominated era in the Einstein–de Sitter (EdS) model, with ρ_m,0 = ρ_crit,0. Regime 3 should be used instead when ρ_m,0 < ρ_crit,0, as in the ΛCDM model (with ρ_m,0 = 0.3ρ_crit,0) or the ΛBDM model (with ρ_m,0 = 0.03ρ_crit,0). For brevity we refer to this as the vacuum-dominated regime.
Substitution back into (5.18) then gives the vacuum energy density:
$$\rho_{v,1}(t) = \frac{\alpha x_r}{(1-x_r)^2}\, t^{-2} \quad \text{(B.4)}$$
$$\rho_m(R) = \alpha_m R^{-3} \quad \text{(B.8)}$$
Substitution back into (5.18) then gives the vacuum energy in terms of the
constants αv and αm as
One does not need to solve for αv and αm in this expression. The dependence on t
is enough, as we show by using (5.16) to write down the corresponding radiation
density:
$$\rho_{r,2}(t) = \frac{1-x_m}{x_m}\,\rho_{v,2}(t) = \alpha_r\, t^{-8(1-x_m)/3} \quad \text{(B.11)}$$
where α_r is a new constant. This is fixed in terms of the present radiation density ρ_{r,0} by the boundary condition ρ_{r,2}(t_0) = ρ_{r,0}, so that α_r = ρ_{r,0} t_0^{8(1-x_m)/3}. The
vacuum density can then be written in terms of αr as well. Using (5.16) again one
has
$$\rho_{v,2}(t) = \frac{x_m}{1-x_m}\,\rho_{r,2}(t) = \frac{\alpha_r x_m}{1-x_m}\, t^{-8(1-x_m)/3}. \quad \text{(B.12)}$$
The quantities ρ_r and ρ_v will not necessarily be continuous across the phase transition t = t_eq, but R^{4(1-x)}ρ_v will be, since it is conserved by (5.17).
Matter density during regime 2 is found by substituting (B.9) directly into
(B.8). This gives
$$\rho_{m,2} = \frac{1}{6\pi G}\, t^{-2}. \quad \text{(B.13)}$$
To check this one can divide through by the critical density to obtain Ω_{m,2}(t). Recalling that the lifetime of a flat, matter-dominated Universe is t_0 = 2/(3H_0), one finds simply
$$\Omega_{m,2}(t) = \left(\frac{t}{t_0}\right)^{-2}. \quad \text{(B.14)}$$
Thus Ω_{m,2} = 1 in the limit t → t_0 as required.
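This check is easily reproduced numerically; the sketch below (Python, with an illustrative value of H₀) confirms that (B.13) evaluated at t₀ = 2/(3H₀) equals the present critical density 3H₀²/(8πG):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
H0 = 2.27e-18    # illustrative Hubble rate, s^-1 (about 70 km/s/Mpc)

t0 = 2/(3*H0)                       # lifetime of a flat matter-dominated model
rho_m2 = 1/(6*math.pi*G*t0**2)      # (B.13) at t = t0
rho_crit0 = 3*H0**2/(8*math.pi*G)   # critical density today

print(rho_m2/rho_crit0)   # -> 1 up to float rounding, i.e. the required result
```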
We do not need an expression for matter density in the radiation-dominated
era. For the sake of the completeness of the plots in figure 5.1, however, we
can extend ρm (t) into regime 1 by imposing continuity across the phase transition
t = teq . (This is an extra condition on the theory, which conserves particle number
R 3 ρm rather than density ρm .) Using equations (B.3), (B.8) and (B.13), we obtain
$$\rho_{m,1}(t) = \frac{1}{6\pi G t_{\rm eq}^2}\left(\frac{t}{t_{\rm eq}}\right)^{-3/[2(1-x_r)]}. \quad \text{(B.15)}$$
The matter density is similarly found by putting (B.17) into (B.8) and applying
the boundary condition ρm,3 (t0 ) = ρm,0 to obtain
$$\rho_{m,3}(t) = \rho_{m,0}\left[\frac{\sinh(\sqrt{6\pi G\rho_c}\, t)}{\sinh(\sqrt{6\pi G\rho_c}\, t_0)}\right]^{-2}. \quad \text{(B.20)}$$
For plotting purposes, this can be extended into regime 1 by imposing continuity
on ρm at teq , as in the preceding section. The result is
$$\rho_{m,1}(t) = \rho_{m,0}\left[\frac{\sinh(\sqrt{6\pi G\rho_c}\, t_{\rm eq})}{\sinh(\sqrt{6\pi G\rho_c}\, t_0)}\right]^{-2}\left(\frac{t}{t_{\rm eq}}\right)^{-3/[2(1-x_r)]}. \quad \text{(B.21)}$$
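A quick consistency check on (B.20): when ρ_c → 0 the sinh factors reduce to their arguments, so ρ_{m,3} → ρ_{m,0}(t/t₀)⁻² and the EdS behaviour of (B.13) is recovered. A numerical sketch (Python; the parameter values are illustrative only):

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2

def rho_m3(t, t0, rho_m0, rho_c):
    """Matter density (B.20) in the vacuum-dominated regime."""
    k = math.sqrt(6*math.pi*G*rho_c)
    return rho_m0*(math.sinh(k*t)/math.sinh(k*t0))**(-2)

t0 = 4.3e17   # illustrative present age, s
# Tiny rho_c: sinh(x) is close to x, so the ratio reduces to (t/t0)^-2:
print(rho_m3(t0/2, t0, 1.0, 1e-40))   # close to (1/2)^-2 = 4
```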
Appendix C

Absorption by galactic hydrogen
Here n_d = 0.35 cm^{-3}, r_1 = 5 kpc, r_2 = 13 kpc, r_3 = 1.9 kpc and r_0 = 9.5 kpc.
The constants σ = 5.3 × 10^{-18} cm^2 (for 14.4 eV photons) and n_d = 0.35 cm^{-3}
may be usefully combined into a single absorption coefficient k_d ≡ σn_d =
5730 kpc^{-1}. The large size of this number indicates that regions close to the
galactic plane (where n_H ≈ n_d) will be totally opaque to decay photons.
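The quoted absorption coefficient is a straightforward unit conversion, easy to verify (Python; the kpc-to-cm factor is the standard value):

```python
sigma = 5.3e-18        # cm^2, cross-section for 14.4 eV photons
n_d = 0.35             # cm^-3, disk density normalization
cm_per_kpc = 3.086e21  # 1 kpc in cm

k_d = sigma*n_d*cm_per_kpc   # kpc^-1
print(k_d)   # about 5.7e3, i.e. the quoted 5730 kpc^-1
```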
The optical depths of the columns PQ_+ and PQ_− are:

$$\tau_{p\pm}(\theta,\phi) = k_d \int_0^{d_\pm} f_d(r)\, \exp\left[-\frac{1}{2}\left(\frac{z}{z_d(r)}\right)^2\right] dr \quad \text{(C.4)}$$
Thanks to cylindrical symmetry, we need integrate only over one quadrant of the
sky. It remains to average ℰ over all points P in the halo. The average must be
weighted by the number density n_ν of decay photons to reflect the fact that more
photons are emitted near the centre of the halo than at its edges. Using (7.11) this
may be written as n_ν(r_p, z_p) = [1 + (r_p/r_⊙)^2 + (z_p/h)^2]^{-2}. So the final efficiency
factor is

$$\langle\mathcal{E}\rangle = \frac{\int_0^{r_h}\int_0^{z_{\max}(r_p)} \mathcal{E}(r_p, z_p)\, n_\nu(r_p, z_p)\, r_p\, dr_p\, dz_p}{\int_0^{r_h}\int_0^{z_{\max}(r_p)} n_\nu(r_p, z_p)\, r_p\, dr_p\, dz_p} \quad \text{(C.6)}$$

where z_max(r_p) = h√(ξ^2 − (r_p/r_⊙)^2). Numerical integration of (C.6) produces the
values quoted in equation (7.17) of the main text. Figure C.1 illustrates the shape
Figure C.1. Iso-absorption contours (light short-dashed lines) indicating the percentage
of decay photons produced inside the halo which escape absorption by neutral hydrogen
(solid lines) and reach the edge of the halo (heavy long-dashed lines). This plot assumes a
small dark-matter halo of radial size rh = 28 kpc.
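The weighted average (C.6) is straightforward to evaluate by nested quadrature. The sketch below (Python) shows only the structure of the calculation: the profile n_ν, the scale parameters and the trivial efficiency function are illustrative stand-ins, not the book's fitted values, and the true ℰ(r_p, z_p) would come from optical-depth integrals such as (C.4):

```python
import math

# Illustrative stand-in parameters (not the fitted values of chapter 7):
r_h = 28.0   # halo radius, kpc
h = 5.0      # vertical scale of the photon distribution, kpc (assumed)
r_c = 8.0    # radial scale of the photon distribution, kpc (assumed)
xi = 1.0     # flattening parameter (assumed)

def n_nu(rp, zp):
    """Assumed decay-photon density profile, cf. (7.11)."""
    return (1 + (rp/r_c)**2 + (zp/h)**2)**(-2)

def z_max(rp):
    val = xi**2 - (rp/r_c)**2
    return h*math.sqrt(val) if val > 0 else 0.0

def averaged_efficiency(eff, n=200):
    """Nested midpoint quadrature of the weighted average (C.6)."""
    num = den = 0.0
    dr = r_h/n
    for i in range(n):
        rp = (i + 0.5)*dr
        zm = z_max(rp)
        if zm == 0.0:
            continue
        dz = zm/n
        for j in range(n):
            zp = (j + 0.5)*dz
            w = n_nu(rp, zp)*rp*dr*dz
            num += eff(rp, zp)*w
            den += w
    return num/den

# Consistency check: a constant efficiency must average to itself.
print(averaged_efficiency(lambda rp, zp: 1.0))   # -> 1.0
```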
Index
  efficiency factor, 129, 132, 206
  ellipsoidal density profile, 131, 206
  luminosity (L_h), 129, 133
  mass (M_h), 130
  spectral intensity due to, 140
Neutrino oscillation, 78
Neutrinoless double-beta decay, 78
Neutrinos, 13, 67, 192
  blackbody temperature (T_ν), 103
  bound, see Neutrino halos
  density parameter (Ω_ν), 67, 77, 78, 103, 130
  free-streaming, see Free-streaming neutrinos
  in clusters of galaxies, 128
  number density (n_ν), 76
  rest mass (m_ν), 77, 127
  sterile, 193
Neutron stars, 72
Non-minimal coupling, 92
Nucleosynthesis
  primordial, 71, 94, 98, 161
  stellar, 5
OAO-2, 49, 121, 142
OGLE microlensing survey, 69
Olbers' paradox, 1
  and absorption, 3, 64
  and astronomy textbooks, 4
  and expansion, 4, 8, 35, 64, 192
  and finite age, 4, 7, 35, 64, 192
  and galaxy distribution, 3
  and spatial curvature, 4
  and the laws of physics, 4
  bolometric resolution of, 35
  in de Sitter model, 31
  spectral resolution of, 63
Optical band, 10, 120
Optical depth, 135, 207
  in dust, 136
  in neutral hydrogen, 136
Pair production, 156, 158, 181
PBHs, see Primordial black holes
Peccei–Quinn (PQ) symmetry, 113
Phase transition, 97, 173
Photinos, 147, 150, 151
Pioneer 10, 46, 49, 121
Planck, 84
Planck function, 51
Planck's law, 8
Primordial black holes (PBHs), 170, 192
  and γ-ray bursts, 173
  and γ-ray emission from halo, 173
  and MACHO events, 173
  and quasar variability, 173
  bolometric intensity due to, 180–182
  comoving number density, 175
  cut-off mass, 180
  density (ρ_pbh), 176
  density parameter (Ω_pbh), 176, 181, 185
  direct detection, 185
  equation-of-state parameter (β), 175
  formation of
    during phase transition, 173
    via collapse of small density fluctuations, 172
    via cosmic string loops, 173
    via critical collapse, 173, 186
    via inflationary mechanisms, 172, 186
  initial mass distribution, 172
  lifetime (t_pbh), 174
  local evaporation rate, 185
  luminosity, 179
  normalization, 175, 185
  photosphere, 186
  spectral intensity due to, 183, 184
  stable relics, 186
  typical mass (M_*), 175
Prognoz, 142