An overview of cosmology
Julien Lesgourgues
Theoretical Physics Division, CERN, CH-1211 Genève 23, Switzerland
LAPTH, Chemin de Bellevue, B.P. 110, F-74941 Annecy-Le-Vieux Cedex, France
Preprint number CERN-TH/2002-249
2.3.5 Large Scale Structure . . . . . . . . . . . . . . . . . . . . 52
2.4 The Inflationary Universe . . . . . . . . . . . . . . . . . . . . . . 53
2.4.1 Problems with the Standard Cosmological Model . . . . . 53
2.4.2 An initial stage of inflation . . . . . . . . . . . . . . . . . 54
2.4.3 Scalar field inflation . . . . . . . . . . . . . . . . . . . . . 56
2.4.4 Quintessence ? . . . . . . . . . . . . . . . . . . . . . . . . 58
Chapter 1
where c is the speed of the wave (see figure 1.1). He suggested that this effect could be observable for sound waves, and maybe also for light. The latter assumption was checked experimentally in 1868 by Sir William Huggins, who
found that the spectral lines of some neighboring stars were slightly shifted
toward the red or blue ends of the spectrum. So, it was possible to know the
projection along the line of sight of star velocities, vr , using
z ≡ ∆λ/λ = vr /c (1.2)
where z is called the redshift (it is negative in case of blue-shift) and c is the
speed of light. Note that the redshift gives no indication concerning the distance
of the star. At the beginning of the twentieth century, with increasingly good
instruments, people could also measure the redshift of some nebulae. The first
measurements, performed on the brightest objects, indicated a random distribution of red and blue-shifts, like for stars. Then, with more observations,
Figure 1.1: The Doppler effect.
it appeared that the statistics were biased in favor of red-shifts, suggesting that a majority of nebulae were going away from us, unlike stars. This raised new questions concerning the distance and the nature of nebulae.
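As a numerical illustration of equation (1.2), one can convert a measured line shift into a radial velocity; the sketch below uses the hydrogen Hα line with illustrative observed wavelengths, not values quoted in the text:

```python
# Converting a measured spectral shift into a radial velocity, using the
# non-relativistic Doppler formula (1.2): z = (lambda_obs - lambda_rest)/lambda_rest = v_r/c.
# The wavelengths are illustrative, not measurements from the text.

C_KMS = 299792.458   # speed of light in km/s

def radial_velocity(lambda_rest, lambda_obs):
    """v_r in km/s; positive means receding (redshift), negative approaching."""
    z = (lambda_obs - lambda_rest) / lambda_rest
    return z * C_KMS

v_red = radial_velocity(656.3, 656.5)    # H-alpha line shifted to the red
v_blue = radial_velocity(656.3, 656.1)   # same line shifted to the blue
```

A shift of 0.2 nm in the Hα line already corresponds to a velocity of order 100 km/s, which gives a feel for the precision of such measurements.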
L = l × 4πr² .   (1.3)
Figure 1.2: A grid expanding with no center: the same points A, B and C are shown at two different times.
be nothing special about the region or the galaxy in which we live. This intuitive idea was formulated by the astrophysicist Edward Arthur Milne as the
“Cosmological Principle”: the Universe as a whole should be homogeneous, with
no privileged point playing a particular role.
Was the apparently observed expansion of the Universe a proof against the
Cosmological Principle? Not necessarily. The homogeneity of the Universe is
compatible either with a static distribution of galaxies, or with a very special
velocity field, obeying a linear law:
~v = H ~r (1.4)
where ~v denotes the velocity of an arbitrary body with position ~r, and H is a
constant of proportionality. An expansion described by this law is still homo-
geneous because it is left unchanged by a change of origin. To see this, one can
make an analogy with an infinitely large rubber grid that would be stretched equally in all directions: it would expand, but with no center (see figure 1.2).
This result is not true for any other velocity field. For instance, the expansion
law
~v = H |~r| ~r (1.5)
is not invariant under a change of origin: so, it has a center.
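The invariance argument above can be checked numerically; the sketch below, with illustrative positions along a line of galaxies and H = 1, verifies that relative velocities seen from any galaxy obey the linear law (1.4), while the law (1.5) fails this test:

```python
# Compare two expansion laws along a line of galaxies: the linear Hubble law
# v = H r is left unchanged by a change of origin, while v = H |r| r is not.
# Positions are illustrative; H = 1 for simplicity.

H = 1.0
positions = [-3.0, -1.0, 0.0, 2.0, 5.0]

def v_linear(r):
    return H * r

def v_nonlinear(r):
    return H * abs(r) * r

def is_invariant(law, origin_index):
    """True if the relative velocities seen from a given galaxy obey the same
    law applied to the relative positions."""
    r0 = positions[origin_index]
    v0 = law(r0)
    return all(abs((law(r) - v0) - law(r - r0)) < 1e-12 for r in positions)

linear_ok = all(is_invariant(v_linear, i) for i in range(len(positions)))
nonlinear_ok = all(is_invariant(v_nonlinear, i) for i in range(len(positions)))
```

The linear law passes the test from every origin; the non-linear law only passes when the observer sits at its privileged center.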
Figure 1.3: The diagram published by Hubble in 1929. The labels of the horizontal (resp. vertical) axis are 0, 1, 2 Mpc (resp. 0, 500, 1000 km s⁻¹). Hubble estimated the expansion rate to be 500 km s⁻¹ Mpc⁻¹. Today, it is known to be around 70 km s⁻¹ Mpc⁻¹.
randomly distributed mass (with all masses obeying the same probability distribution). Then, make a random displacement of each object (again with all displacements obeying the same probability distribution). At small scales,
the mass density is obviously inhomogeneous for three reasons: the objects
are compact, they have different masses, and they are separated by different
distances. However, since the distribution has been obtained by performing a random shift in mass and position, starting from a homogeneous structure, it is clear, even intuitively, that the mass density smoothed over some large scale will remain homogeneous.
The Cosmological Principle should be understood in this sense. Let us sup-
pose that the Universe is almost homogeneous at a scale corresponding, say, to
the typical intergalactic distance, multiplied by thirty or so. Then, the Hub-
ble law doesn’t have to be verified exactly for an individual galaxy, because
of peculiar motions resulting from the fact that galaxies have slightly different
masses, and are not in a perfectly ordered phase like a grid. But the Hubble law should be verified on average, provided that the maximum scale of the data is not smaller than the scale of homogeneity. The scattering of the data at a
given scale reflects the level of inhomogeneity, and when using data on larger
and larger scales, the scattering must be less and less significant. This is exactly
what is observed in practice. An even better proof of the homogeneity of the
Universe on large scales comes from the Cosmic Microwave Background, as we
shall see in section 2.2.
We will come back to these issues in section 2.2, and show how the formation of inhomogeneities on small scales is currently understood and quantified within some precise physical models.
1.2 The Universe Expansion from Newtonian Gravity
It is not enough to observe the galactic motions; one should also try to explain them with the laws of physics.
r̈ = −GM(r)/r²   (1.7)
where G is Newton’s constant and M (r) is the mass contained inside the sphere
of radius r. In other words, the particle feels the same force as if it had a
two-body interaction with the mass of the sphere concentrated at the center.
Note that r varies with time, but M (r) remains constant: because of spherical
1 Going a little bit more into details, it is also relevant when an object is so heavy and so close that the escape velocity from this object - defined like the escape velocity from the Earth - is comparable to the speed of light.
symmetry, no particles can enter or leave the sphere, which always contains the same mass.
Since the Gauss theorem allows us to ignore completely the mass outside the sphere, we can make an analogy with the motion, e.g., of a satellite ejected vertically from the Earth. We know that this motion depends on the initial velocity, compared with the escape velocity from the Earth: if the initial speed is large enough, the satellite goes away indefinitely, otherwise it stops and falls down. We can see this mathematically by multiplying equation (1.7) by ṙ, and integrating it over time:
ṙ²/2 = GM(r)/r − k/2   (1.8)
where k is a constant of integration. We can replace the mass M (r) by the
volume of the sphere multiplied by the homogeneous mass density ρmass , and
rearrange the equation as
(ṙ/r)² = (8πG/3) ρmass − k/r² .   (1.9)
The quantity ṙ/r is called the rate of expansion. Since M (r) is time-independent,
the mass density evolves as ρmass ∝ r−3 (i.e., matter is simply diluted when the
Universe expands). The behavior of r(t) depends on the sign of k. If k is posi-
tive, r(t) can grow at early times but it always decreases at late times, like the
altitude of the satellite falling back on Earth: this would correspond to a Uni-
verse expanding first, and then collapsing. If k is zero or negative, the expansion
lasts forever.
In the case of the satellite, the critical value, which is the speed of liberation
(at a given altitude), depends on the mass of the Earth. By analogy, in the case
of the Universe, the important quantity that should be compared with some
critical value is the homogeneous mass density. If at any time ρmass is bigger than the critical value
ρmass = 3(ṙ/r)²/(8πG)   (1.10)
then k is positive and the Universe will re-collapse. Physically, this means that the gravitational force wins against inertial effects. In the other case,
the Universe expands forever, because the density is too small with respect to the expansion velocity, and gravitation never overcomes inertia. The case
k = 0 corresponds to a kind of equilibrium between gravitation and inertia.
The Universe expands forever, following a power–law: r(t) ∝ t^(2/3).
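The satellite analogy can be made concrete with a small numerical experiment; the sketch below (illustrative units with GM = 1, and a simple symplectic Euler integrator) launches a test mass just below and just above the escape velocity, mirroring the k > 0 and k < 0 cases:

```python
# Newtonian analogy for equations (1.7)-(1.9): a test mass launched radially
# from r0 = 1 around a point mass with GM = 1 escapes if v0 exceeds the escape
# velocity sqrt(2*GM/r0) = sqrt(2), and falls back otherwise -- mirroring the
# k < 0 and k > 0 cases. Units and numbers are illustrative.
import math

def trajectory(v0, dt=1e-3, t_max=50.0):
    """Integrate r'' = -1/r^2 with symplectic Euler; return (final r, max r)."""
    r, v, t = 1.0, v0, 0.0
    r_max = r
    while t < t_max and r > 0.05:
        v += -dt / r**2   # acceleration a = -GM/r^2 with GM = 1
        r += v * dt
        r_max = max(r_max, r)
        t += dt
    return r, r_max

v_esc = math.sqrt(2.0)

r_bound, rmax_bound = trajectory(0.9 * v_esc)   # below escape velocity: falls back
r_free, rmax_free = trajectory(1.1 * v_esc)     # above escape velocity: keeps going
```

The sub-escape launch climbs to a maximum radius and then collapses back, while the super-escape launch grows without bound, just like the recollapsing and ever-expanding Universes.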
Figure: the evolution of r(t) in the three cases k < 0, k = 0 and k > 0.
RH = cH⁻¹ ,   (1.11)
Figure 1.7: Measuring the curvature of some two-dimensional spaces. By walk-
ing four times along a distance d, turning 90 degrees left between each walk, a
small man on the plane would find that he is back at his initial point. Doing
the same thing, a man on the sphere would cross his trajectory, and would stop away from his departure point. Instead, a man on the hyperboloid would not
close his trajectory.
together with a crucial piece of information: the scale of the map as a function of the location on the map - the scale on a planisphere is not uniform! This would bring us to a mathematical formalism, called Riemannian geometry, that we don't have time to introduce here.
That was still for three dimensions. Finally, the curvature of a four-dimensional space-time is impossible to visualize intuitively, first because it has even more dimensions, and second because even in special/general relativity, there is a difference between time and space (for the readers who are familiar with special relativity, what is referred to here is the negative signature of the metric).
The Einstein theory of gravitation says that four-dimensional space-time is
curved, and that the curvature at each point is given entirely in terms of the matter content at this point. In simple words, this means that the curvature
plays more or less the same role as the potential in Newtonian gravity. But the
potential was simply a function of space and time coordinates. In GR, the full
curvature is described not by a function, but by something more complicated - like a matrix of functions obeying particular laws - called a tensor.
Finally, the definition of geodesics (the trajectories of free-falling bodies)
is the following. Take an initial point and an initial direction. They define a
unique line, called a geodesic, such that any segment of the line gives the shortest
trajectory between the two points (so, for instance, on a sphere of radius R,
the geodesics are all the great circles of radius R, and nothing else). Of course,
geodesics depend on curvature. All free-falling bodies follow geodesics, including
light rays. This leads for instance to the phenomenon of gravitational lensing
(see figure 1.8).
So, in General Relativity, gravitation is not formulated as a force or a field,
but as a curvature of space-time, sourced by matter. All isolated systems follow
geodesics which are bent by the curvature. In this way, their trajectories are
affected by the distribution of matter around them: this is precisely what gravity
means.
• the two-dimensional space-time curvature, i.e., for instance, the curvature
of the (t, x) space-time. Because of isotropy, the curvature of the (t, y)
and the (t, z) space-time have to be the same. This curvature is the one
responsible for the expansion. Together with the spatial curvature, it fully
describes gravity in the homogeneous Universe.
The second part is a little bit more difficult to understand for the reader who
is not familiar with GR, and causes a big cultural change in one’s intuitive
understanding of space and time.
dl² = a²(t) [ dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²) ]   (1.13)
where a(t) is a function of time, called the scale factor, whose time-variations
account for the curvature of two-dimensional space-time; and k is a constant
number, related to the spatial curvature: if k = 0, the Universe is flat, if k > 0,
it is closed, and if k < 0, it is open. In the last two cases, the radius of curvature
Rc is given by
Rc(t) = a(t)/√|k| .   (1.14)
When the Universe is closed, it has a finite volume, so the coordinate r is defined only up to a finite value: 0 ≤ r < 1/√k.
If k was equal to zero and a was constant in time, we could redefine the
coordinate system with (r′, θ′, φ′) = (ar, θ, φ), find the usual expression (1.12)
for distances, and go back to Newtonian mechanics. So, we stress again that
the curvature really manifests itself as k ≠ 0 (for spatial curvature) and ȧ ≠ 0 (for the remaining space-time curvature).
Note that when k = 0, we can rewrite dl² in Cartesian coordinates: dl² = a²(t)(dx² + dy² + dz²). In that case, the expression for dl² contains only the differentials dx, dy, dz,
and is obviously left unchanged by a change of origin - as it should be, because
we assume a homogeneous Universe. But when k 6= 0, there is the additional
factor 1/(1 − kr²), where r is the distance to the origin. So, naively, one could
think that we are breaking the homogeneity of the Universe, and privileging
a particular point. This would be true in Euclidean geometry, but in curved
geometry, it is only an artifact. Indeed, choosing a system of coordinates is
equivalent to mapping a curved surface on a plane. By analogy, if one draws a planisphere of half of the Earth by projecting onto a plane, one has to choose a particular point defining the axis of the projection. Then, the scale of the map
Figure 1.9: Analogy between the Friedmann coordinates for an open/closed Universe, and the mapping of a sphere by parallel projection onto a plane. Two maps are shown, with different axes of projection. On each map, the scale depends on the distance to the center of the map: the scale is smaller at the center than on the borders. However, the physical distance between two points on the sphere can be computed equally well using the two maps and their respective scaling laws.
is not the same all over the map. However, if one reads two maps with different projection axes (see figure 1.9), and computes the distance between Geneva and Rome from the two maps, using the appropriate scaling laws, he will of course
find the same physical distance. This is exactly what happens in our case: in
a particular coordinate system, the expression for physical lengths depends on
the coordinate distance to the origin, but physical lengths are left unchanged
by a change of coordinates. In figure 1.10 we push the analogy and show how
the factor 1/(1 − kr²) appears explicitly for the parallel projection of a 2-sphere
onto a plane. The rigorous proof that equation (1.13) is invariant under a change of origin would be more complicated, but essentially similar.
Free-falling bodies follow geodesics in this curved space-time. Among bodies,
we can distinguish between photons, which are relativistic because they travel at
the speed of light, and ordinary matter like galaxies, which are non-relativistic
(i.e., an observer sitting on them can ignore relativity at short distance, as we
do on Earth).
The geodesics followed by non-relativistic bodies in space-time are given by
dl = 0, i.e., fixed spatial coordinates. So, galaxies are at rest in comoving coor-
dinates. But it doesn’t mean that the Universe is static, because all distances
grow proportionally to a(t): so, the scale factor accounts for the homogeneous
Figure 1.10: We push the analogy between the Friedmann coordinates in a closed Universe, and the parallel projection of a half–sphere onto a plane. The physical distance dl on the surface of the sphere corresponds to the coordinate interval dr, such that dl = dr/cos θ = dr/√(1 − sin²θ) = dr/√(1 − (r/R)²), where R is the radius of the sphere. This is exactly the same law as in a closed Friedmann Universe.
Figure 1.11: An illustration of the propagation of photons in our Universe. The dimensions shown here are (t, r, θ): we skip φ for the purpose of representation. We are sitting at the origin, and at a time t0, we can see the light of a galaxy emitted at (te, re, θe). Before reaching us, the light from this galaxy
has traveled over a curved trajectory. In any point, the slope dr/dt is given
by equation (1.16). So, the relation between re and (t0 − te ) depends on the
spatial curvature and on the scale factor evolution. The trajectory would be a
straight line in space-time only if k = 0 and a = constant, i.e., in the limit of
Newtonian mechanics in Euclidean space. The ensemble of all possible photon
trajectories crossing r = 0 at t = t0 is called our “past light cone”, visible here
in orange. Asymptotically, near the origin, it can be approximated by a linear
cone with dl = cdt, showing that at small distance, the physics is approximately
Newtonian.
The redshift.
First, a simple calculation based on equation (1.16) - we don’t include it here
- gives the redshift associated with a given source of light. Take two observers
sitting on two galaxies (with fixed comoving coordinates). A light signal is sent
from one observer to the other. At the time of emission t1 , the first observer
measures the wavelength λ1 . At the time of reception t2 , the second observer
will measure the wavelength λ2 such that
z = δλ/λ = (λ2 − λ1)/λ1 = a(t2)/a(t1) − 1 .   (1.18)
So, the redshift depends on the variation of the scale-factor between the time
of emission and reception, but not on the curvature parameter k. This can
be understood as if the scale-factor represented the “stretching” of light in our
curved space-time. We can check that when the Universe expands (a(t2 ) >
a(t1 )), the wavelengths are enhanced (λ2 > λ1 ), and the spectral lines are
red-shifted (contraction would lead to blue-shift, i.e., to a negative redshift
parameter z < 0).
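Equation (1.18) is simple enough to be checked numerically; the scale-factor values below are illustrative:

```python
# Cosmological redshift from equation (1.18): z = a(t2)/a(t1) - 1.
# Scale-factor values are illustrative.

def redshift(a_emit, a_obs):
    return a_obs / a_emit - 1.0

z_doubled = redshift(0.5, 1.0)         # Universe doubled in size: z = 1
z_contracting = redshift(1.0, 0.8)     # contraction: blueshift, z < 0
z_cmb = redshift(1.0 / 1001.0, 1.0)    # emitted when the Universe was ~1000 times smaller
```

The last line anticipates the CMB discussed below: photons emitted when the Universe was about a thousand times smaller reach us with z ≈ 1000.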
In real life, what we see when we observe the sky is never directly the dis-
tance, size, or velocity of an object, but the light traveling from it. So, by study-
ing spectral lines, we can easily measure the redshifts. This is why astronomers,
when they refer to an object or to an event in the Universe, generally mention
the redshift rather than the distance or the time. It’s only when the function
a(t) is known - and as we will see later, we almost know it in our Universe -
that a given redshift can be related to a given time and distance.
The importance of the redshift as a measure of time and distance comes
from the fact that we don’t see our full space-time, but only our past light-cone, i.e., a three-dimensional subspace of the full four-dimensional space-time,
corresponding to all points which could emit a light signal that we would receive
today on Earth. If we remove the coordinate φ and draw the past light-cone of a given observer, it does look like a cone, of course curved like all photon
trajectories (see figure 1.11).
Also note that in Newtonian mechanics, the redshift was defined as z = v/c,
and seemed to be limited to |z| < 1. The true GR expression doesn’t have such
limitations, since the ratio of the scale factors can be arbitrarily large without
violating any fundamental principle. And indeed, observations do show many
objects - like quasars - at redshifts of 2 or even bigger. We’ll see later that we
also observe the Cosmic Microwave Background at a redshift of approximately
z = 1000!
dl = c dt (1.19)
z = a(t0)/a(t0 − dt) − 1 = 1/(1 − [ȧ(t0)/a(t0)] dt) − 1 ≃ [ȧ(t0)/a(t0)] dt .   (1.20)
z = [ȧ(t0)/a(t0)] (dl/c) .   (1.21)
So, at small redshift, we recover the Hubble law, and the role of the Hubble
parameter is played by ȧ(t0 )/a(t0 ). In the Friedmann Universe, we will directly
define the Hubble parameter as the expansion rate of the scale factor:
H(t) = ȧ(t)/a(t) .   (1.22)
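One can verify numerically that the exact redshift of equation (1.20) approaches H(t0) dt for small light travel times; the sketch below uses a(t) = t^(2/3) (the matter-domination behavior) as an illustrative example:

```python
# Check the small-redshift limit of equation (1.20): for a short light travel
# time dt, the exact z = a(t0)/a(t0 - dt) - 1 approaches H(t0) dt.
# We use a(t) = t**(2/3) (matter domination); t0 = 1 is an illustrative choice.

T0 = 1.0

def a(t):
    return t ** (2.0 / 3.0)

def H(t):
    return 2.0 / (3.0 * t)    # adot/a for a(t) = t^(2/3)

dt = 1e-4
z_exact = a(T0) / a(T0 - dt) - 1.0
z_hubble = H(T0) * dt

rel_err = abs(z_exact - z_hubble) / z_hubble
```

The first-order Hubble-law estimate agrees with the exact redshift to better than a part in a thousand for this small dt, confirming the expansion in (1.20).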
Angular diameter – redshift relation.
When looking at the sky, we don’t see directly the size of the objects, but
only their angular diameter. In Euclidean space, i.e. in the absence of gravity, the
angular diameter dθ of an object would be related to its size dl and distance r
through
dθ = dl/r .   (1.24)
Recalling that z = v/c and v = Hr, we easily find an angular diameter – redshift
relation valid in Euclidean space:
dθ = H dl/(cz) .   (1.25)
In General Relativity, because of the bending of light by gravity, the steps of the
calculation are different. Using the definition of infinitesimal distances (1.13),
we see that the physical size dl of an object is related to its angular diameter
dθ through
dl = a(te) re dθ   (1.26)
where te is the time at which the galaxy emitted the light ray that we observe
today on Earth, and re is the comoving coordinate of the object. The equation
of motion of light gives a relation between re and te :
∫_re^0 −dr/√(1 − kr²) = ∫_te^t0 c dt/a(t)   (1.27)
where t0 is the time today. So, the relation between re and te depends on a(t)
and k.
If we knew the function a(t) and the value of k, we could calculate the
integral (1.27) and re-express equation (1.26) in the form
dθ = f(z) H dl/(cz)   (1.28)
where f (z) would be a function of redshift, accounting for general relativistic
corrections, and depending on the curvature. We conclude that in the Fried-
mann Universe, for an object of fixed size and redshift, the angular diameter
depends on the curvature - as illustrated graphically in figure 1.12. Therefore,
if we know in advance the physical size of an object, we can simply measure
its redshift, its angular diameter, and immediately obtain some information on
the geometry of the Universe.
It seems very difficult to know in advance the physical size of a remote object.
However, we will see in the next chapter that some beautiful developments of
modern cosmology enabled physicists to know the physical size of some objects
visible at a redshift of z ≃ 1000 (namely, the anisotropies of the Cosmic Microwave Background). So, the angular diameter – redshift relation has been
used in the past decade in order to measure the spatial curvature of the Universe.
We will show the results in the last section of chapter two.
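As a sketch of these steps in the simplest case - flat space (k = 0) and matter domination, with units c = t0 = 1 - one can integrate equation (1.27) numerically and deduce the angular diameter from (1.26); the object size and the analytic cross-check are illustrative, not taken from the text:

```python
# Sketch of the angular diameter - redshift relation in the simplest case:
# flat space (k = 0) and matter domination, a(t) = (t/t0)**(2/3), units c = t0 = 1.
# We integrate the photon equation of motion (1.27) numerically and compare with
# the analytic result r_e = 3(1 - (1+z)**(-1/2)); the object size dl is illustrative.

def a(t):
    return t ** (2.0 / 3.0)

def comoving_distance(t_e, n=50000):
    """r_e = integral of c dt / a(t) from t_e to t0 = 1 (trapezoidal rule)."""
    h = (1.0 - t_e) / n
    s = 0.5 * (1.0 / a(t_e) + 1.0 / a(1.0))
    for i in range(1, n):
        s += 1.0 / a(t_e + i * h)
    return s * h

z = 2.0
t_e = (1.0 + z) ** (-1.5)        # from 1 + z = a(t0)/a(t_e)
r_e = comoving_distance(t_e)
r_analytic = 3.0 * (1.0 - (1.0 + z) ** (-0.5))

dl = 1e-3                         # physical size of the object (illustrative)
dtheta = dl / (a(t_e) * r_e)      # angular diameter, from equation (1.26)
```

Repeating this with a curvature term or a different a(t) changes r_e, and hence dθ, which is exactly how the angular diameter – redshift relation probes the geometry.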
But we would like to extend this technique to very distant objects (in particular, supernovae, which are observed up to a redshift of two, and are thought to have a calibrated absolute luminosity, like Cepheids). So, one needs to recompute this relation in the framework of general relativity. Because of the bending
of light rays, the trajectory of the photons traveling from an object located at
a given redshift depends both on the curvature and the scale factor. So, today,
the light will be dispersed on a sphere of variable size.
If we know several objects with the same absolute magnitude, and we plot
them on an apparent magnitude – redshift diagram, we will get a curve whose shape at small redshift is given by (1.29), and at large redshift by General Relativity corrections (depending on the spatial curvature and the scale factor). We will see in the next chapter that such an experiment has been performed for supernovae, leading to one of the most intriguing discoveries of the past years.
In summary of this section, according to General Relativity, the homoge-
neous Universe is curved by its own matter content, and the curvature can be
specified with just one number and one function: the curvature k, and the scale
factor a(t). We should be able to relate these two quantities with what causes
the curvature: the matter density.
Together with the propagation of light equation, this law is the key ingredient
of the Friedmann-Lemaître model.
In special/general relativity, the total energy of a particle is the sum of its rest energy E0 = mc² plus the energy associated with its momentum. So, if we consider only non-relativistic particles like those forming galaxies, we get ρ = ρmass c². Then,
the Friedmann equation looks exactly like the Newtonian expansion law (1.9),
except that the function r(t) (representing previously the position of material
points) is replaced by the scale factor a(t). Of course, the equations look the
same, but they are not equivalent. First, we have already seen that only at very small distances is the distinction between the scale factor a(t) and the classical position r(t) irrelevant; on large distances – like the Hubble radius – the
difference of interpretation between the two is crucial. Second, we have seen
that the term proportional to k seems to break the homogeneity of the Universe
in the Newtonian formalism, while in General Relativity, when it is correctly
interpreted as the spatial curvature term, it is perfectly consistent with the
Cosmological Principle.
Actually, there is a third crucial difference between the Friedmann law and
the Newtonian expansion law. In the previous paragraph, we only considered
non-relativistic matter like galaxies. But the Universe also contains relativistic
particles traveling at the speed of light, like photons (and also neutrinos if their
mass is very small). A priori, their gravitational effect on the Universe expansion
could be important. How can we include them?
For ultra–relativistic matter like photons, the energy is not given by the rest
mass but by the frequency ν or the wavelength λ:
E = hν = hc/λ. (1.31)
ρ̇ = −3 (ȧ/a) ρ ⇒ ρ ∝ a⁻³ .   (1.33)
For relativistic matter like photons, we get
ρ̇ = −3 (ȧ/a)(1 + 1/3) ρ = −4 (ȧ/a) ρ ⇒ ρ ∝ a⁻⁴ .   (1.34)
Finally, in quantum field theory, it is well–known that the vacuum can have an
energy density different from zero. The Universe could also contain this type
of energy (which can be related to particle physics, phase transitions and spon-
taneous symmetry breaking). For vacuum, one can show that p = −ρ. This
means that the vacuum energy is never diluted and remains constant. This con-
stant energy density was called by Einstein – who introduced it in a completely
different way and with other motivations – the Cosmological Constant. We will
see that this term is probably important in our Universe.
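The three dilution laws - ρ ∝ a⁻³ for matter, a⁻⁴ for radiation, constant for vacuum - all follow from a single continuity equation; writing p = wρ (a standard parametrization, not introduced in the text):

```latex
% Continuity equation in an expanding Universe, with equation of state p = w\rho:
\dot\rho = -3\,\frac{\dot a}{a}\,(1+w)\,\rho
\quad\Longrightarrow\quad
\rho \propto a^{-3(1+w)} .
% w = 0   (non-relativistic matter):  \rho \propto a^{-3}
% w = 1/3 (radiation):                \rho \propto a^{-4}
% w = -1  (vacuum energy):            \rho \propto a^{0} = \text{const}
```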
Chapter 2
1. ultra–relativistic particles, with v = c, p = ρ/3, ρ ∝ a−4 . This
includes photons, massless neutrinos, and possibly other particles that would have a very small mass and would be traveling at the speed of
light. The generic name for this kind of matter, which propagates like
electromagnetic radiation, is precisely “radiation”.
where ρR is the radiation density and ρM the matter density. The order in
which we wrote the four terms on the right–hand side – radiation, matter,
spatial curvature, cosmological constant – is not arbitrary. Indeed, they evolve
with respect to the scale factor as a⁻⁴, a⁻³, a⁻² and a⁰. So, if the scale factor keeps growing, and if these four terms are present in the Universe, there is a chance that they all dominate the expansion of the Universe one after the other
(see figure 2.1). Of course, it is also possible that some of these terms do not
exist at all, or are simply negligible. For instance, some possible scenarios would
be:
• only matter domination, from the initial singularity until today (we’ll come
back to the notion of Big Bang later).
• radiation domination → matter domination today.
• radiation dom. → matter dom. → curvature dom. today
• radiation dom. → matter dom. → cosmological constant dom. today
But all the cases that do not respect the order (like for instance: curvature
domination → matter domination) are impossible.
During each stage, one component strongly dominates the others, and the
behavior of the scale factor, of the Hubble parameter and of the Hubble radius
are given by:
1. Radiation domination:
ȧ²/a² ∝ a⁻⁴ ,  a(t) ∝ t^(1/2) ,  H(t) = 1/(2t) ,  RH(t) = 2ct .   (2.3)
So, the Universe is in decelerated power–law expansion.
Figure 2.1: Evolution of H² as a function of the scale factor, showing the successive a⁻⁴, a⁻³, a⁻² and constant contributions.
2. Matter domination:
ȧ²/a² ∝ a⁻³ ,  a(t) ∝ t^(2/3) ,  H(t) = 2/(3t) ,  RH(t) = (3/2) ct .   (2.4)
Again, the Universe is in power–law expansion, but it decelerates more
slowly than during radiation domination.
3. Negative curvature domination (k < 0):
ȧ²/a² ∝ a⁻² ,  a(t) ∝ t ,  H(t) = 1/t ,  RH(t) = ct .   (2.5)
An open Universe dominated by its curvature is in linear expansion.
4. Positive curvature domination: if k > 0, and if there is no cosmological
constant, the right–hand side finally goes to zero: expansion stops. Afterwards, the scale factor starts to decrease. H is negative, but the right–hand side
of the Friedmann equation remains positive. The Universe recollapses. We
know that we are not in such a phase, because we observe the Universe
expansion. But a priori, we might be living in a closed Universe, slightly
before the expansion stops.
5. Cosmological constant domination:
ȧ²/a² → constant ,  a(t) ∝ exp(√(Λ/3) t) ,  H = c/RH = √(Λ/3) .   (2.6)
The Universe ends up in exponentially accelerated expansion.
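The first two power laws can be recovered numerically from the Friedmann equation itself; the sketch below (illustrative parameters H0 = 1, ΩR = 10⁻⁴, ΩM = 1, so that equality occurs at a = 10⁻⁴) integrates dt = d ln a / H and measures the local slope d ln a/d ln t deep in each era:

```python
# Numerical sketch of the Friedmann equation, (adot/a)^2 = H0^2 (OM_R/a^4 + OM_M/a^3):
# deep in the radiation era the solution should behave as a ~ t^(1/2), and deep in
# the matter era as a ~ t^(2/3). Parameter values are illustrative (H0 = 1, with
# radiation-matter equality at a_eq = OM_R/OM_M = 1e-4).
import math

H0, OM_R, OM_M = 1.0, 1e-4, 1.0

def hubble(a):
    return H0 * math.sqrt(OM_R / a**4 + OM_M / a**3)

def elapsed_time(a0, a1, n=20000):
    """t(a1) - t(a0), integrating dt = d(ln a)/H with the midpoint rule."""
    la0, la1 = math.log(a0), math.log(a1)
    dla = (la1 - la0) / n
    return sum(dla / hubble(math.exp(la0 + (i + 0.5) * dla)) for i in range(n))

# Time since the singularity: analytic radiation-era start t ~ a^2/(2 H0 sqrt(OM_R)),
# plus the numerical integral from a_start onwards.
A_START = 1e-9
T_START = A_START**2 / (2.0 * H0 * math.sqrt(OM_R))

def t_of(a):
    return T_START + elapsed_time(A_START, a)

def slope(a, eps=0.01):
    """Local logarithmic slope d ln a / d ln t around scale factor a."""
    return 2.0 * eps / math.log(t_of(a * math.exp(eps)) / t_of(a * math.exp(-eps)))

slope_rad = slope(1e-7)   # deep radiation era: close to 1/2
slope_mat = slope(1.0)    # deep matter era: close to 2/3
```

The measured slopes reproduce the analytic exponents of (2.3) and (2.4) to better than a percent, and the crossover happens around a_eq as expected.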
So, in all cases, there seems to be a time in the past at which the scale factor
goes to zero, called the initial singularity or the “Big Bang”. The Friedmann
description of the Universe is not supposed to hold until a(t) = 0. At some time,
when the density reaches a critical value called the Planck density, we believe
that gravity has to be described by a quantum theory, where the classical notion
of time and space disappears. Proposals for such theories exist, called “string
theories”. Sometimes, they try to address the initial singularity problem, and to
build various scenarios for the origin of the Universe. Anyway, this field is still
very speculative, and of course, our understanding of the origin of the Universe
will always break down at some point. A reasonable goal is just to go back as
far as possible, on the basis of testable theories.
The future evolution of the Universe heavily depends on the existence of a
cosmological constant. If the latter is exactly zero, then the future evolution
is dictated by the curvature (if k > 0, the Universe will end up with a “Big
Crunch”, where quantum gravity will show up again, and if k ≤ 0 there will be
eternal decelerated expansion). If instead there is a positive cosmological term,
then the Universe necessarily ends up in eternal accelerated expansion.
ΩR + ΩM − Ωk + ΩΛ = 1. (2.13)
Ω0 ≡ Ω R + Ω M + Ω Λ (2.14)
is equal to one. In that case, as we already know, the total density of matter,
radiation and Λ is equal at any time to the critical density
ρc(t) = 3c²H²(t)/(8πG) .   (2.15)
Note that the parameters Ωx, where x ∈ {R, M, Λ}, could have been defined as the present density of each species divided by the present critical density:
Ωx = ρx0 /ρc0 .   (2.16)
So far, we conclude that the evolution of the Friedmann Universe can be de-
scribed entirely in terms of four parameters, called the “cosmological parame-
ters”:
ΩR , ΩM , ΩΛ , H0 .   (2.17)
One of the main purposes of cosmology is to measure the value of the cosmo-
logical parameters.
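Plugging the quoted H0 ≈ 70 km s⁻¹ Mpc⁻¹ into equation (2.15) gives a feel for the numbers; the sketch below evaluates the corresponding critical mass density in SI units (constants rounded to four digits):

```python
# Critical density today, from equation (2.15), for H0 = 70 km/s/Mpc (the value
# quoted in the text). Written as a mass density rho_c = 3 H0^2/(8 pi G);
# constants in SI units, rounded to four digits.
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22         # meters per megaparsec
M_PROTON = 1.6726e-27   # proton mass, kg

H0 = 70.0 * 1000.0 / MPC                      # s^-1
rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)     # kg/m^3

protons_per_m3 = rho_c / M_PROTON             # about half a dozen protons per cubic meter
```

The result, of order 10⁻²⁶ kg/m³, i.e. a few proton masses per cubic meter, shows how dilute a critical-density Universe actually is.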
Figure 2.2: On the top, evolution of the square of the Hubble parameter as a
function of the scale factor in the Hot Big Bang scenario. We see the two stages
of radiation and matter domination. On the bottom, an idealization of a typical
photon trajectory. Before decoupling, the mean free path is very small due to the
many interactions with baryons and electrons. After decoupling, the Universe becomes transparent, and the photon travels in a straight line, indifferent to the surrounding distribution of electrically neutral matter.
– the present temperature of this cosmological black–body had to be around a few degrees Kelvin. This would correspond to typical wavelengths of the order
of one millimeter, like microwaves.
After these stages, we enter into a series of phase transitions that are much
better understood, and well constrained by the observations. These are:
1. at t = 1–100 s, T = 10⁹–10¹⁰ K, ρ ∼ (0.1–1 MeV)⁴, nucleosynthesis: formation of light nuclei, in particular hydrogen, helium and lithium. By comparing the theoretical predictions with the observed abundance of light elements in the present Universe, it is possible to give a very precise estimate of the total density of baryons in the Universe: ΩB h² = 0.020 ± 0.002.
2. at t ∼ 10⁴ yr, T ∼ 10⁴ K, ρ ∼ (1 eV)⁴, the radiation density equals the
matter density: the Universe goes from radiation domination to matter
domination.
3. at t ∼ 10⁵ yr, T ∼ 2500 K, ρ ∼ (0.1 eV)⁴, recombination of atoms and
decoupling of photons. After that time, the Universe is transparent: the
photons free–stream along geodesics. So, by looking at the CMB, we
obtain a picture of the Universe at decoupling. Nothing has changed in
the distribution of photons between 10⁵ yr and today, except for an overall redshift of all wavelengths (implying ρ ∝ a⁻⁴, and T ∝ a⁻¹).
4. after recombination, the small inhomogeneities of the smooth matter dis-
tribution are amplified. This leads to the formation of stars and galaxies,
as we shall see later.
The success of the Hot Big Bang scenario relies on the existence of a radiation–
dominated stage followed by a matter–dominated stage. However, an additional
stage of curvature or cosmological constant domination is not excluded.
In a matter–dominated Universe, H = 2/(3t). Then, using the current
estimate of H0 , we get t0 = 2/(3H0 ) ∼ 9 Gyr. If we introduce a negative
curvature or a positive cosmological constant, it is easy to show (using the
Friedmann equation) that H decreases more slowly in the recent epoch. In that
case, for the same value of H0 , the age of the Universe is obviously larger. For
instance, with ΩΛ ≃ 0.9, we would get t0 ∼ 18 Gyr.
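These two age estimates can be checked numerically. The following sketch (assuming H0 = 70 km s^−1 Mpc^−1 and a flat Universe; the numerical values are illustrative, not from the text) computes t0 = 2/(3H0) for a matter–dominated Universe, and integrates t0 = ∫ da/(a H(a)) with H(a) = H0 √(ΩM a^−3 + ΩΛ) when a cosmological constant is included:

```python
from scipy.integrate import quad

H0 = 70.0                      # assumed value, in km/s/Mpc
KM_PER_MPC = 3.0857e19         # kilometers in one megaparsec
H0_SI = H0 / KM_PER_MPC        # H0 in 1/s
GYR = 3.156e16                 # seconds in a gigayear

# Matter-dominated, flat Universe: t0 = 2 / (3 H0)
t0_matter = 2.0 / (3.0 * H0_SI) / GYR

def age(omega_m, omega_l):
    """Age of a flat Universe: t0 = integral of da / (a H(a)),
    with H(a) = H0 sqrt(Omega_M a^-3 + Omega_Lambda)."""
    integrand = lambda a: 1.0 / (a * H0_SI * (omega_m * a**-3 + omega_l) ** 0.5)
    t, _ = quad(integrand, 1e-8, 1.0)
    return t / GYR

print(f"matter only:                 t0 = {t0_matter:.1f} Gyr")   # ~9.3 Gyr
print(f"Omega_L = 0.9, Omega_M = 0.1: t0 = {age(0.1, 0.9):.1f} Gyr")  # ~18 Gyr
```

The integral converges at a → 0 because the integrand behaves as a^{1/2} during matter domination.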
We leave the answer to these intriguing issues for section 2.3.
Figure 2.3: A sketchy view of the galaxy rotation curve issue. The genuine
orbital velocity of the stars is measured directly from the redshift. From the
luminosity distribution, we can reconstruct the orbital velocity under the as-
sumption that all the mass in the galaxy arises from the observed luminous
matter. Even by varying the unknown normalization parameter b, it is impos-
sible to obtain agreement between the two curves: their shapes are different,
with the reconstructed velocity decreasing faster with r than the genuine ve-
locity. So, there has to be some non–luminous matter around, deepening the
potential well of the galaxy.
We know that the CMB temperature is approximately the same in all directions
in the sky. This proves that in the early Universe, at least until a redshift
z ∼ 1000, the distribution of matter was very homogeneous. So, we can treat the
inhomogeneities as small linear perturbations with respect to the homogeneous
background. For a given quantity x, the evolution eventually becomes non–linear
when the relative density inhomogeneity δx (t, ~r) ≡ δρx (t, ~r)/ρ̄x (t) becomes of
order unity.
Note that the Fourier transformation is defined with respect to the comoving
coordinates ~r. So, k is the comoving wavenumber, and 2π/k the comoving
wavelength. The physical wavelength is
λ(t) = 2π a(t)/k. (2.24)
This way of performing the Fourier transformation is the most convenient one,
leading to independent evolution for each mode. It accounts automatically for
the expansion of the Universe. If no specific process occurs, δx (t, ~k) will remain
constant, but each wavelength λ(t) is nevertheless stretched proportionally to
the scale factor.
Of course, to a certain extent, the perturbations in the Universe are ran-
domly distributed. The purpose of the cosmological perturbation theory is not
to predict the individual value of each δx (t, ~k) in each direction, but just the
spectrum of the perturbations at a given time, i.e., the root–mean–square of all
δx (t, ~k) for a given time and wavenumber, averaged over all directions. This
spectrum will be denoted simply by δx (t, k).
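The direction–averaged spectrum described above can be sketched numerically. In this toy example (the grid size, box size and white–noise field are arbitrary assumptions, not taken from the text), we Fourier transform a random field with respect to comoving coordinates, bin |δ(~k)|^2 in shells of fixed |k|, and take the root mean square in each shell:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 1.0                      # grid points per side, comoving box size (arbitrary units)
delta = rng.normal(size=(N, N, N))  # toy random field delta_x(t, r) on a comoving grid

# Fourier transform with respect to the comoving coordinates r
delta_k = np.fft.fftn(delta)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # comoving wavenumbers
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k_mag = np.sqrt(kx**2 + ky**2 + kz**2)

# Root-mean-square of delta(k) in shells of fixed |k|: the spectrum delta_x(t, k)
bins = np.linspace(k_mag.min(), k_mag.max(), 20)
which = np.digitize(k_mag.ravel(), bins)
power = np.abs(delta_k.ravel()) ** 2
spectrum = np.array([np.sqrt(power[which == i].mean())
                     for i in range(1, len(bins)) if np.any(which == i)])
```

For a white–noise field, the resulting spectrum is roughly flat, as expected; a realistic cosmological field would have scale–dependent structure.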
The full linear theory of cosmological perturbations is quite sophisticated.
Here, we will simplify considerably. The most important density perturbations
to follow during the evolution of the Universe are those of photons, baryons,
CDM, and finally, the curvature perturbations. As far as curvature is con-
cerned, the homogeneous background is fully described by the Friedmann ex-
pression for infinitesimal distances, i.e., by the scale factor a(t) and the spatial
curvature parameter k. We need to define some perturbations around this aver-
age homogeneous curvature. The rigorous way to do it is rather technical, but
at the level of this course, the reader can admit that the most relevant curvature
perturbation is a simple function of time and space, which is almost equivalent
to the usual Newtonian gravitational potential Φ(t, ~r) (in particular, at small
distance, the two are completely equivalent).
Figure 2.4: The horizon dH (t1 , t2 ) is the physical distance at time t2 between
two photons emitted in opposite directions at time t1 .
note that if the origin r = 0 coincides with the point of emission, the horizon
can be expressed as
dH (t1 , t2 ) = 2 ∫_0^{r2} dl = 2 a(t2 ) ∫_0^{r2} dr/√(1 − kr^2 ). (2.25)

But the equation of propagation of light, applied to one of the photons, also gives

∫_{t1}^{t2} c dt/a(t) = ∫_0^{r2} dr/√(1 − kr^2 ). (2.26)

Combining the two equations, we get

dH (t1 , t2 ) = 2 a(t2 ) ∫_{t1}^{t2} c dt/a(t). (2.27)
Physically, since no information can travel faster than light, the horizon rep-
resents the maximal scale on which a process starting at time t1 can have an
impact at time t2 . In the limit in which t1 is very close to the initial singularity,
dH represents the causal horizon, i.e., the maximal distance at which two points
in the Universe can be in causal contact. In particular, perturbations with wave-
lengths λ(t2 ) > dH (t1 , t2 ) are acausal, and cannot be affected by any physical
process – if they were, it would mean that some information has traveled faster
than light.
During radiation domination, a(t) ∝ t^{1/2} , and we can compute

dH (t1 , t2 ) = 4 c t2^{1/2} [t2^{1/2} − t1^{1/2}]. (2.28)

In the limit in which t1 ≪ t2 , we find dH (t1 , t2 ) ≃ 4 c t2 , which is twice the
Hubble radius RH (t2 ) = c/H(t2 ) = 2 c t2 . So, during radiation domination, the
causal horizon coincides with the Hubble radius, up to a factor of two.
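Equation (2.28) can be checked numerically against the integral definition (2.27). This is a minimal sketch, assuming radiation domination with a(t) ∝ t^{1/2} and units where c = 1 (an arbitrary normalization of the scale factor):

```python
from scipy.integrate import quad

c = 1.0                      # units where c = 1
a = lambda t: t ** 0.5       # radiation domination, arbitrary normalization

def d_H(t1, t2):
    """Causal horizon d_H(t1, t2) = 2 a(t2) * integral of c dt / a(t), eq. (2.27)."""
    integral, _ = quad(lambda t: c / a(t), t1, t2)
    return 2 * a(t2) * integral

t1, t2 = 1e-6, 1.0
analytic = 4 * c * t2**0.5 * (t2**0.5 - t1**0.5)   # eq. (2.28)
assert abs(d_H(t1, t2) - analytic) < 1e-6
# For t1 << t2, d_H -> 4 c t2, i.e. twice the Hubble radius R_H = c/H = 2 c t2
print(d_H(t1, t2))   # ≈ 3.996
```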
Since each wavelength λ(t) = 2πa(t)/k evolves with time at a different rate
than the Hubble radius RH (t), a perturbation of fixed wavenumber k can be
super–horizon (λ RH ) at some initial time, and sub–horizon (λ RH ) at
a later time. This is illustrated on figure 2.5. While the scale factor always
decelerates, the Hubble radius grows linearly: so, the Fourier modes enter one
after another inside the causal horizon. The modes which enter earlier are
those with the smallest wavelengths.
What about the perturbations that we are able to measure today? Can they
be super–horizon? In order to answer this question, we first note that the largest
distance that we can see today in our past–light–cone is the one between two
photons emitted at decoupling, and reaching us today from opposite direction
in the sky. Because the Universe was not transparent to light before decoupling,
with current technology, we cannot think of any observation probing a larger
distance.
So, in order to compute the maximal wavelength accessible to observation,
we can make a short calculation, quite similar to that of the horizon, but with
reversed photon trajectories: the two photons are emitted at (t1 , r1 ), and they
reach us today at (t0 , 0) from opposite directions. The propagation–of–light
equation gives
Z t0 Z 0
c dt −dr
= √ . (2.32)
t1 a(t) r1 1 − kr2
But today, two points located at coordinate r1 in opposite directions are sepa-
rated by the physical distance
Z r1 Z r1 Z t0
dr c dt
2 dl = 2 a(t0 ) √ = 2a(t0 ) = dH (t1 , t2 ). (2.33)
0 0 1 − kr 2
t1 a(t)
We conclude that the causal horizon evaluated today gives an upper bound on
the distances accessible to any observation (whatever the time of decoupling
is). Our knowledge of the Universe is limited to a finite sphere of physical
radius RH (t0 ). So, all observable cosmological perturbations are sub–horizon
today, while in the past, they were super–horizon (see figure 2.5). The largest
observable scale has entered inside the horizon very recently, with a wavelength
approximately equal to
RH (t0 ) = c/H0 ≃ (3 × 10^8 m s^−1 )/(70 km s^−1 Mpc^−1 ) ≃ 4 × 10^3 Mpc. (2.34)
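The value quoted in equation (2.34) is simple unit arithmetic (assuming H0 = 70 km s^−1 Mpc^−1, the value used throughout this section):

```python
c_km_s = 2.998e5          # speed of light in km/s
H0 = 70.0                 # assumed Hubble constant, in km/s/Mpc
R_H0 = c_km_s / H0        # Hubble radius today, in Mpc
print(f"R_H(t0) = {R_H0:.0f} Mpc")   # ~4283 Mpc
```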
Figure 2.6: For any species, this type of diagram can be used to understand
qualitatively the evolution of the Fourier spectrum as a function of time. Each
Fourier mode of fixed comoving wavenumber k enters into the causal horizon
when it crosses the purple line (which corresponds to λ(t) = RH (t), i.e., to
k = 2πa(t)/RH (t)).
the same density perturbations, and can be viewed as single fluid. In addi-
tion, since baryons have a mass, they feel gravitational forces: so, the entire
baryon–photon fluid is coupled with the gravitational potential.
So, in each well of gravitational potential, the baryon–photon fluid tends to
be compressed by gravity, but the relativistic photon pressure p = ρ/3 tends to
resist this compression. The dynamics of the fluid is governed by two antago-
nistic forces, as for any oscillator. So, for each Fourier mode, any departure from
equilibrium – such as a non–zero initial perturbation – will lead to oscillations.
These oscillations propagate through the fluid exactly like sound waves in the
air: because of this analogy, the perturbations in the baryon–photon fluid before
decoupling are called acoustic oscillations.
Of course, the oscillations can occur only for causal wavelengths: because of
causality, wavelengths larger than the Hubble radius do not feel any force. So, a
given mode with a given non–zero initial amplitude starts oscillating only when
it enters into the horizon (see figure 2.7). The initial amplitude of each mode
is given by the primordial spectrum of perturbations. Of course, if the Universe
had been perfectly homogeneous from the very beginning (with no initial
perturbations), there would be no acoustic oscillations – and no formation of
structure either; so, we must assume some primordial perturbations. At this
stage, the primordial spectrum has to be assumed arbitrarily. Only in the last
section, we will see a precise mechanism called “inflation” leading to a given
primordial spectrum. On figure 2.7, we start from a flat spectrum, independent
of k – to a good approximation, this corresponds to what is observed today, and
also to what is predicted by the theory of inflation.
When the photons decouple from matter, their mean free path grows
rapidly from zero to infinity, and they travel in straight lines with essentially no
interaction. During this free–streaming, all their characteristics – momentum,
energy, wavelength, spectrum, temperature – do not evolve, except under the
effect of homogeneous expansion. So, δT and T̄ are both shifted proportionally
to 1/a(t), but the distribution of δT /T̄ (t, k) observed today is exactly the same
as at the time of decoupling. Observing the temperature anisotropies in the sky
amounts exactly to taking a picture of the Universe at the time of decoupling.
So, in order to know the present Fourier spectrum of temperature fluctua-
tions, it is enough to follow the acoustic oscillations until decoupling. As eas-
ily understood from figure 2.7, the Fourier modes which are still outside the
Hubble radius at decoupling have no time to oscillate. Smaller wavelengths
have enough time for one oscillation, even smaller ones for two oscillations, etc.
The Fourier spectrum of temperature fluctuations consists of a flat plateau for
k < 2πa(tdec )/RH (tdec ), and then a first peak, a second peak, a third peak, etc.
Note that according to this model, the perturbations oscillate, but they
keep the same order of magnitude (there is no large amplification). Therefore,
if we start from small primordial perturbations, we are able to describe all the
evolution of temperature fluctuations with the linear theory of perturbations.
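The qualitative behavior described above – a flat plateau for super–horizon modes and bounded oscillations for sub–horizon ones – can be caricatured as follows. This is only a toy parametrization (the cos^2(k r_s) form, the unit sound horizon r_s, and the horizon–entry wavenumber are assumptions, not the full linear theory):

```python
import numpy as np

r_s = 1.0                 # comoving sound horizon at decoupling, arbitrary units
k_hor = np.pi / r_s       # rough wavenumber entering the horizon at decoupling (assumption)

k = np.linspace(0.01, 10.0, 2000)
# Flat primordial plateau for super-horizon modes; oscillations of bounded
# amplitude (no large amplification) for modes that entered the horizon:
spectrum = np.where(k < k_hor, 1.0, np.cos(k * r_s) ** 2)

# Peaks at k_n ~ n * pi / r_s: modes caught at an extremum of their
# oscillation at decoupling (first, second, third peak)
peaks = [n * np.pi / r_s for n in (1, 2, 3)]
```

Note that the spectrum stays of order one everywhere, consistent with the remark above that the perturbations oscillate without large amplification.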
Figure 2.8: The first genuine “picture of the Universe” at the time of decou-
pling, 100 000 years after the initial singularity, and 15 billion years before
the present epoch. Each blue (resp. red) spot corresponds to a slightly colder
(resp. warmer) region of the Universe at that time. This map, obtained by the
American satellite COBE in 1992, covers the entire sky: so, it pictures a huge
sphere centered on us. The fluctuations are only of order 10−5 with respect to
the average value T0 = 2.728 K. They are the “seeds” for the present structure
of the Universe: each red spot corresponds to a small overdensity of photons
and baryons at the time of decoupling, which was later enhanced, leading to
galaxies and clusters of galaxies today.
Figure 2.9: The map of CMB anisotropies obtained by the balloon experiment
Boomerang in 2001. Unlike COBE, Boomerang only analyzed a small patch of
the sky, but with a much better angular resolution of a few arc–minutes. The
black (resp. yellow) spots correspond to colder (resp. warmer) regions.
There are still many CMB experiments going on. The satellite WMAP
goes on acquiring data in order to reach even better resolution. In 2007, the
European satellite “Planck” will be launched. Planck is expected to perform
the best possible measurement of the CMB temperature anisotropies, with such
great precision that most cosmological parameters should be measured with
only one per cent uncertainty.
Figure 2.10: The Fourier decomposition of the Boomerang map reveals the
structure of the first three acoustic oscillations. These data points account for
the spectrum of (δT )2 in (µK)2 – we know from COBE that δT is roughly of
order 10−5 T0 ' 30 µK. The spectrum is not shown as a function of wavenumber
k, but of multipole number l. This is because the anisotropies are not observed
in 3-dimensional space, but on a two-dimensional sphere: the sphere of redshift
z ∼ 1000 centered on us. For purely geometric reasons, a spherical map cannot
be expanded in Fourier modes, but rather in Legendre multipoles. However, the
interpretation is the same. The first, second and third peak correspond to the
modes that could oscillate one, two or three times between their entry into the
Hubble radius and the time of decoupling.
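The correspondence between angular scales and multipoles can be estimated with the rough rule l ∼ π/θ (an order–of–magnitude relation assumed here, not derived in this course). Taking the first acoustic peak to span about one degree on the sky (an illustrative assumption):

```python
import math

theta_deg = 1.0                  # assumed angular size of the first peak, in degrees
theta = math.radians(theta_deg)
l_peak = math.pi / theta         # rough multipole: l ~ pi / theta
print(round(l_peak))             # ~180
```

This is indeed the right ballpark: the first peak in figure 2.10 sits at l of a few hundred.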
Figure 2.11: The spectrum of the Boomerang map is perfectly fitted by the
theoretical predictions. For a given choice of cosmological parameters – such as
ΩB , ΩCDM , ΩΛ , ΩR , H0 , the parameters describing the primordial spectrum,
etc.– the theoretical calculation is performed by a numerical code of typically
2000 lines. By comparing with the experimental data, one can extract the
parameter values, as we will see in section 2.3.
tions inside the Hubble radius during radiation domination, this time would be
the same for all small wavelengths, because they would grow at the same rate,
starting from the same value. But because there is a very slow amplification of
perturbations during this epoch, the most amplified wavelengths are the smallest
ones – since they entered earlier into the Hubble radius. So, the smallest wave-
lengths are the first ones to become non–linear. As a consequence, the smallest
structures in the Universe are the oldest ones: this is called hierarchical structure
formation.
Figure 2.12: The full-sky map of CMB anisotropies obtained by the satellite
WMAP in 2003. The blue (resp. red) spots correspond to colder (resp. warmer)
regions. This map has been cleaned of any foreground contamination, including
that of the Milky Way: it shows only the CMB contribution.
ies will also appear. Finally, in the present Universe, the scale of non–linearity
is expected to be around 20 or 40 Mpc. This is one order of magnitude bigger
than clusters of galaxies: so, even the clusters are not smoothly distributed. In
fact, they tend to organize themselves along huge filaments, separated by voids
of size 20 to 40 Mpc. On these huge scales, the structure of the Universe looks
very much like a sponge... But by smoothing the present distribution of matter
over scales larger than 40 Mpc, one finds again a continuous distribution, with
small inhomogeneities described by the linear Fourier spectrum of cosmological
perturbation theory.
Over the past decades, astronomers have built some very large three-
dimensional maps of the galaxy distribution – even larger than the scale of
non–linearity. On figure 2.15, we can see two thin “slices” of the surrounding
Universe. Inside these slices, each point corresponds to a galaxy seen by the
“2dF Galaxy Redshift Survey”. The radial coordinate is the redshift, i.e., the
distance between the object and us. Even at redshift z < 0.1, we can see some
Figure 2.13: Comparison between the COBE and WMAP anisotropy maps. In
both figures, the red stripe in the middle is the foreground contamination from
the Milky Way. The orientation of the two maps is the same in order to allow
for direct visual comparison. The resolution of WMAP is 30 times better, but
it is clear that on large angular scales, the two maps reveal the same structure.
Figure 2.14: The spectrum of the WMAP map is perfectly fitted by the theoretical
predictions, for a given choice of cosmological parameters – namely, for the
model shown here, ΩB , ΩCDM , ΩΛ , the so-called “optical depth to reionization”
(a quantity not defined in this course), and the two parameters describing a
power–law primordial spectrum.
Figure 2.15: The distribution of galaxies in two thin slices of the neighboring
Universe, obtained by the 2dF Galaxy Redshift Survey. The radial coordinate
is the redshift, or the distance between the object and us.
and the number of neutrino families Nν . For each value of the baryon density
and of Nν , the theory predicts a given abundance of hydrogen, helium, lithium,
etc. So, by observing the abundance of these elements in the Universe – for
instance, using spectroscopy – it is possible to constrain the baryon density.
Current observations lead to the prediction ΩB h^2 = 0.020 ± 0.002.
Figure 2.16: The Fourier spectrum of matter perturbations, reconstructed from
a large galaxy redshift survey (here, the Sloan Digital Sky Survey). The data
are compared with a theoretical spectrum, obtained by integrating numerically
over a system of differential equations, accounting for the evolution of linear
perturbations. In the calculation, the cosmological parameters were chosen in
order to obtain a maximum likelihood fit to the data. Comparing the data
with the linear spectrum is meaningful only for the largest wavelengths, k <
0.2h Mpc−1 , which have not been affected by the non–linear evolution. For
smaller scales, the agreement between the data and the theoretical prediction is
only a coincidence.
the most complete and precise cosmological test. Future CMB experiments like
MAP and Planck should be able to measure each parameter with great precision.
At the moment, some particular parameters or combinations of parameters are
already measured unambiguously, using the CMB anisotropy map of WMAP
(and also of other experiments like ACBAR and CBI). One example is the
baryon density, which is related in an interesting way (that we will not detail
here) to the relative amplitude of the acoustic peaks. The WMAP data gives
ΩB h2 = 0.022 ± 0.001. (2.37)
This constraint is as precise as that from nucleosynthesis, and the two are in
perfect agreement with each other – which is really striking, since these two
measurements are made with completely different techniques: one uses nuclear
physics,
applied to the Universe when it had a density of order (1 MeV)^4 , while the other
uses fluid mechanics, applied to a much more recent era with density (0.1 eV)^4 .
The perfect overlap between the two constraints indicates that our knowledge
of the Universe is impressively good, at least during the epoch between t ∼ 1 s
and t ∼ 100 000 yr.
We have seen that the position of the acoustic peak is given by the Hubble
radius at decoupling, whose value is predicted by the theory. So, using the
angular diameter–redshift relation (see section 1.3.5), it is possible to extract
some information both on the spatial curvature and on the evolution of the
scale factor a(t). A detailed study shows that the leading effect is that of
spatial curvature. So, the position of the acoustic peak gives a measurement of
Ωk = 1 − Ω0 . The result is
Figure 2.17: Hubble diagram (effective magnitude mB versus redshift z, compared
with the predictions for several values of (ΩM , ΩΛ )) of the Calan/Tololo
(Hamuy et al., A.J. 1996) and high–redshift
supernovae, from the 1998 results of the “Supernovae Cosmology Project”. Although
it is not obvious visually from the figure, a detailed statistical analysis reveals
that a flat matter–dominated Universe (with ΩM = 1, ΩΛ = 0) is excluded. This
result has been confirmed by more recent data at larger redshift. According to
Figure 2.18: Combined constraints on (ΩM , ΩΛ ) from CMB and Supernovae
experiments. The simple case of a spatially flat Universe with no cosmological
constant corresponds to the lower right corner of the plot. In blue, the CMB
allowed region favors a spatially flat Universe with ΩM + ΩΛ ' 1. In orange, the
Supernovae constraint is orthogonal to the CMB one. By combining the two
results, one obtains the favored region limited by the black ellipses, centered
near (ΩM , ΩΛ ) = (0.3, 0.7).
In conclusion, current experiments agree on the following matter budget in
the present Universe:
Ω0 = ΩB + ΩCDM + ΩΛ ≃ 1
ΩΛ ≃ 0.73
ΩCDM ≃ 0.23
ΩB ≃ 0.04
So, the Universe seems to be accelerating today, and 96% of its energy density
is non–baryonic and of essentially unknown nature! The measured value of
the cosmological constant shows that the Universe started to be Λ–dominated
only recently: at redshifts z ≥ 2, the Universe was still matter–dominated. We
seem to be living precisely during the period of transition between matter–
domination and Λ–domination!
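The claim that Λ–domination began only recently follows from a simple scaling: ρM ∝ (1 + z)^3 while ρΛ stays constant, so the two densities were equal when (1 + z)^3 = ΩΛ /ΩM . A one–line check, using the matter budget above:

```python
# Matter density scales as (1+z)^3 while rho_Lambda stays constant, so
# Lambda-matter equality occurs at (1+z)^3 = Omega_Lambda / Omega_M.
omega_l, omega_m = 0.73, 0.27
z_eq = (omega_l / omega_m) ** (1.0 / 3.0) - 1.0
print(f"z = {z_eq:.2f}")   # ~0.39: Lambda domination began only very recently
```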
The origin of fluctuations.
The horizon problem can be reformulated in a slightly different way. We have
seen on figure 2.5 that during radiation and matter domination, the Hubble
radius (and the causal horizon) grow faster than the physical wavelength of
each perturbation. So, all the wavelengths observable today on cosmological
scales were larger than the horizon in the early Universe. So far, we have not
proposed any mechanism for the generation of primordial perturbations. But we
face the following problem: if primordial perturbations are acausal, how can
they be generated without violating causality (i.e., the fact that no information
can travel faster than the speed of light)?
So, at the end of inflation, the causal horizon is much bigger than the Hubble
radius. Afterwards, during radiation and matter domination, they grow at the
same rate, and the causal horizon remains larger. So, an initial stage of inflation can
Figure 2.19: Comparison of the Hubble radius with the physical wavelength
of a few cosmological perturbations. During the initial stage of accelerated
expansion (called inflation), the Hubble radius grows more slowly than each
wavelength. So, cosmological perturbations find their origin inside RH . Then,
each mode exits from the Hubble radius during inflation and re-enters during
radiation or matter domination.
solve the horizon problem: at the time of decoupling, if the causal horizon is
at least one thousand times larger than the Hubble radius, it encompasses the
whole observable Universe. Then, all the points in the CMB map are in causal
contact, and may have acquired the same temperature in the early Universe.
In the inflationary scenario, it is important not to confuse the different
causal horizons. A horizon is always defined with respect to an initial time. We
have seen that if the horizon is defined with respect to the initial singularity, or
to the beginning of inflation, then it can be very large. But the horizon defined
with respect to any time after the end of inflation is still close to RH . So, the
causal distance for a physical process starting after inflation is unchanged. In
particular, the characteristic scale that appears in the evolution of cosmological
perturbations is still the Hubble radius. In inflationary cosmology, one must be
very careful, because there are two different causal horizons, which are relevant
for different problems.
Let us finally come back to the origin of fluctuations. We can update
figure 2.5 by adding a preliminary stage of inflation. In general, the ratio of an
arbitrary physical wavelength to the Hubble radius is given by

λ(t)/RH (t) = [2π a(t)/k] / [c/H(t)] = 2π ȧ(t)/(c k).

So, under the simple assumption that ä > 0, we see that this ratio increases
during inflation: each wavelength grows faster than the Hubble radius. We
conclude that each Fourier mode observable today on cosmological scales started
inside the Hubble radius, crossed it during inflation, and re-entered later (see
figure 2.19). In addition, all these wavelengths could have been smaller than
the causal horizon at any time. This helps a lot in finding a realistic mechanism
for the generation of fluctuations, as we will see in the next section.
All these arguments – and also some other ones – led a few cosmologists like
Guth, Starobinsky and Hawking to introduce the theory of inflation in 1979.
Today, it is one of the cornerstones of cosmology.
2.4.3 Scalar field inflation
So, the first three problems of section 2.4.1 can be solved under the assumption
of a long enough stage of accelerated expansion in the early Universe. How can
this be implemented in practice?
First, by combining the Friedmann equation (1.30) in a flat Universe with the
conservation equation (1.32), it is easy to find that

ä/a = − (4πG/3) (ρ + 3p),

so that accelerated expansion (ä > 0) is possible only for a fluid with negative
pressure, p < −ρ/3.
Slow-roll.
First, let us assume that just after the initial singularity, the energy density
is dominated by a scalar field, with a potential flat enough for slow–roll. In any
small region where the field is approximately homogeneous and slowly–rolling,
accelerated expansion takes place: this small region becomes exponentially large,
encompassing the totality of the present observable Universe. Inside this region,
the causal horizon becomes much larger than the Hubble radius, and any initial
spatial curvature is driven almost to zero – so, some of the main problems
of the standard cosmological model are solved. After some time, when the
field approaches the minimum of its potential, the slow–roll condition breaks, and
inflation ends: the expansion becomes decelerated again (this is not true for all
potentials, it depends on additional assumptions; for instance, with any simple
potential of the form V (ϕ) = ϕ^{2n} , slow–roll has to break in a vicinity of the
minimum).
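Where slow–roll breaks for a potential V (ϕ) = ϕ^{2n} can be estimated with the standard slow–roll parameter ε = (M_p^2 /2)(V′/V)^2, where M_p is the reduced Planck mass (a definition assumed here, not introduced in this course; slow–roll requires ε < 1):

```python
import numpy as np

M_p = 1.0                        # reduced Planck mass, in units where M_p = 1

def epsilon(phi, n):
    """Slow-roll parameter eps = (M_p^2 / 2) (V'/V)^2 for V ~ phi^(2n):
    V'/V = 2n / phi, so eps = 2 n^2 M_p^2 / phi^2."""
    return 0.5 * M_p**2 * (2 * n / phi) ** 2

# For V = phi^2 (n = 1), slow roll (eps < 1) holds only for phi > sqrt(2) M_p:
phi = np.linspace(0.1, 10.0, 1000)
ok = phi[epsilon(phi, 1) < 1.0]
print(ok.min())    # just above sqrt(2) ~ 1.41
```

This makes the statement above quantitative: for such potentials, slow–roll necessarily breaks when the field reaches a vicinity of the minimum, at ϕ of order the Planck mass.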
Reheating.
At the end of inflation, the kinetic energy of the field is bigger than the
potential energy; in general, the field is quickly oscillating around the minimum
of the potential. According to the laws of quantum field theory, the oscillating
scalar field will decay into fermions and bosons. This could explain the origin
of all the particles of matter and radiation. Because the particles generally end
up with a thermal distribution, this stage is called “reheating”.
Current observations of CMB anisotropies are the best evidence in favor of in-
flation, and with future CMB experiments, it should be possible to reconstruct
the shape of the inflationary potential to some extent; that would be really fan-
tastic: some observations made today of the Universe at decoupling would give
us details about what happened just after the initial singularity – at energies of
order ρ^{1/4} ∼ 10^{16} GeV!
Inflation is still a very active field of research, especially as far as the con-
nection with particle physics is concerned. First, the scalar field invoked for
inflation is ad hoc: it is introduced just for cosmological purposes. There are
several proposals concerning the role of this field in particle physics models. For
instance, it could be a Higgs boson associated with the GUT symmetry, or it
could have to do with extra dimensions in string theory. Some people try to
work out how these assumptions could be tested, at least in some indirect way
(e.g., through the shape of the primordial spectrum of perturbations). Second,
the theory of reheating is still being investigated in detail, in a more and more
realistic way from the point of view of particle physics.
2.4.4 Quintessence ?
It is a pity that the theory of inflation, which is able to solve so many crucial is-
sues in cosmology, doesn’t say anything about the Cosmic Coincidence problem.
Inflation predicts that ΩM + ΩΛ = 1, in agreement with what is observed today,
but it doesn’t give any hint on the splitting between matter and cosmological
constant – while the hint from particle physics is either ΩΛ = 0 or ΩΛ = 1, at
odds with observations.
As long as we explain the supernovae data with a cosmological constant, it
appears as an incredible coincidence that ρΛ is of the same order of magnitude
as ρM just in the present Universe, while in the early Universe it was incredibly
smaller...
We have seen that during slow–roll inflation, the pressure of the scalar field
is negative, almost like for a cosmological constant. In the past five years, many
people have suggested explaining the supernovae data with a slow–rolling scalar
field. So, there would be two stages of scalar field inflation, one in the early
Universe, and one today. A priori, they would be caused by two different scalar
fields, called respectively the inflaton and the quintessence field (although some
models try to unify the two). The big advantage of these models is that the
energy density responsible for the cosmic acceleration today doesn’t have to
be constant in the past. In the early Universe, it could very well be of the
same order of magnitude as that of radiation, and only slightly smaller. During
radiation domination, the density of quintessence and radiation would decay
at the same rate, and after radiation–matter equality, quintessence would enter
into a stage of slow–roll. Then, its energy density would remain almost constant,
and it would start to dominate today, leading to the acceleration of the Universe.
Such classes of models are under investigation. However, they have their
own problems, and they don’t solve the Cosmic Coincidence problem in a fully
convincing way. In fact, a wide variety of models have been proposed in order
to explain the supernovae data, but none of them succeeds in explaining why
the Universe started to accelerate only in the very near past. Together with the
nature of dark matter, the nature of the cosmological term is the biggest enigma
of modern cosmology – and these two questions are far from being small details,
since altogether, Λ and CDM account for 96% of the present energy density in
the Universe! So, cosmology has made very impressive progress in the past
decades, but it seems that many other exciting discoveries could be waiting
for us in the near future!
Acknowledgments
I would like to thank all the participants of the school for their stimulating
interest, disturbing questions, and interesting comments!