An Introduction To The Theory of Vector Bundles

Download as pdf or txt
Download as pdf or txt
You are on page 1of 57

An introduction to the theory of vector bundles

by Enrique Arrondo(*)

Version of May 25th, 2023


This is still a draft version, and any suggestion to improve the presentation of this material will be very
welcome.

1. Preliminaries of Algebraic Geometry


2. The notion of vector bundle
3. Operations with vector bundles
4. Intersection theory and Chern classes
5. The splitting principle and applications
6. Some advanced intersection theory

These notes are based on few courses I gave at the Universidad Complutense de Madrid
and they are an expanded version of [A], the course I gave in the First Summer School on
Complex Geometry held in Villarrica in 2010. They intend to be essentially self-contained
in the understanding, but not in the proofs. We included a first section on the minimum
preliminaries that the reader should understand on Algebraic Geometry. The scope is to
show how we algebraic geometers handle vector bundles and Chern classes to deal with
computations in intersection theory. Most of the ideas, as well as more precise proofs and
constructions, can be found in [F].

(*) Departamento de Álgebra, Geometrı́a y Topologı́a, Facultad de Ciencias Matemáticas,


Universidad Complutense de Madrid, 28040 Madrid, Spain, arrondo@mat.ucm.es

1
1. Preliminaries of Algebraic Geometry

Definition. An affine set over K is a subset X ⊂ AnK defined by a set S ⊂ K[x1 , . . . , xn ],


i.e.
X = V (S) := {x ∈ AnK | f (x) = 0 for all f ∈ S}.

Remark 1.1. It is clear that S defines the same affine variety as the ideal it generates.
Hence, by Hilbert Basis Theorem (in particular, any affine set can be defined by a finite
number of polynomials). In fact, the biggest ideal defining an affine set X ⊂ AnK is

I(X) := {f ∈ K[x1 , . . . , xn ] | f (x) = 0 for all x ∈ X}.

The set of polynomial functions on X is naturally identified with the quotient O(X) :=
K[x1 , . . . , xn ]/I(X).

Exercise 1.2. Prove that the set of affine sets contained in a given affine set X ⊂ AnK
define the closed sets of a topology on X. Show that a basis of such topology is the sets
of the form
DX (f ) := X \ V (f ).

Definition. The Zariski topology on an affine set X ⊂ AnK is the topology whose closed
sets are the affine sets of AnK contained in X.

Remark 1.3. Observe that, for a non-empty basic open set DX (f ) of an affine set
X ⊂ AnK (i.e. f ∈
/ I(X)) we have an injective map

DX (f ) → An+1
K
1
(x1 , . . . , xn ) 7→ (x1 , . . . , xn , f (x1 ,...,x n)
)

whose image is the affine set X 0 ⊂ An+1


K defined by the equations of X plus the equation
xn+1 f − 1. Hence, the ring of polynomial functions on X 0 is in bijection with the set
of quotients of a polynomial function on X with a power of the polynomial function f¯
determined by f . This is nothing but the ring of fractions O(X)f¯. This means that a
“polynomial function” on DX (f ) should be an element of O(X)f¯.

Exercise 1.4. Prove that, any time DX (f ) = DX (g), we have a natural isomorphism of
rings O(X)f¯ ∼
= O(X)ḡ (you will need to assume that K is algebraically closed to apply the
Nullstellensatz).
At this point, we can give the right notion of “polynomial” function for any open set.

2
Definition. A regular function on an open set U ⊂ X of an affine set X is a function
g : U → K such that, for each x ∈ U , any basic open set DX (f ) ⊂ U of x, the restriction
g|DX (f ) can be expressed as an element of O(X)f¯. We will write OX (U ) for the ring of
regular functions on U .

Remark 1.5. It can be proved (using again the Nullstellensatz) that the above definition
is equivalent to just the existence, for all x ∈ U , of only one basic neighborhood DX (f ) ⊂ U .
The advantage of our definition is that it implies easily that the set of regular functions on
DX (f ) is O(X)f¯. The important point is that, once we decided that the ring of regular
functions on DX (f ) is O(X)f¯, there is only one possible choice of sheaf OX .

Definition. The structure sheaf of an affine set X ⊂ AnK is the sheaf of rings OX assigning
to each open set U ⊂ X its ring of regular functions.
We are assuming the reader knows the notion of sheaf and morphism or isomorphism
of sheaves. However, we recall the following definition, which will be crucial along the
notes.

Definition. Given a sheaf (of rings or any other category) F on a topological space X,
the direct image of F under a continuous map ϕ : X → Y is the sheaf ϕ∗ F on Y defined
by assigning, to any open set V ⊂ Y ,
(ϕ∗ F)(V ) = F(ϕ−1 (V )).

With all this, we can finally give the following:

Definition. An algebraic variety over a field K is a topological space X endowed with a


sheaf of rings OX (in which each OX (U ) is a subring of the ring of functions from U to
S
K) such that there exists an open covering X = i Ui and for every open set Ui there is
a homeomorphism ϕi : Ui → Xi with an affine set Xi such that the morphism of sheaves
OXi → ϕi ∗ OUi given by the morphisms (for each open set U ⊂ Xi )
OXi (U ) → OUi (ϕ−1
i (U ))
f 7→ f ◦ ϕi
is an isomorphism (see Remark 1.12 for a precision about this definition).

Exercise 1.6. Prove that a projective set (i.e. a subset of PnK defined by homogeneous
polynomials) is an algebraic variety (of course, the open covering should be given by the
complements of the coordinate hyperplanes).

Definition. A map ϕ : X → Y between two algebraic varieties is a morphism (or regular


map) if it is a continuous map and the map ϕ0 : OY → ϕ∗ OX given by
OY (V ) → OX (ϕ−1 (V ))
f 7→ f ◦ϕ

3
is a morphism of sheaves. If ϕ is homeomorphism and ϕ0 is an isomorphism of sheaves,
the map ϕ is said to be an isomorphism.

Remark 1.7. At a first glance it could seem that the definition of variety is not natural.
First of all, we want neighborhoods of the points to be isomorphic to affine sets, instead of
open subsets of them. This is not restrictive at all, since Remark 1.3 shows that the basic
open sets of an affine set are still isomorphic to affine sets. On the other hand, imitating
what one does for defining manifolds, a natural definition would be to impose to have an
atlas whose open sets are isomorphic to open sets of AnK , which would also allow to include
the notion of dimension in the definition. However, this definition is not correct, because
open sets are very big in the Zariski topology, and in fact only few varieties would fit in
that definition (more precisely, the so-called rational varieties).

Proposition 1.8. The cartesian product of two algebraic varieties is an algebraic variety
in the natural way. Moreover:
(i) The product of two affine varieties X ⊂ AnK and Y ⊂ Am
K is an affine variety X × Y ⊂
n+m
AK .
(ii) O(X × AnK ) = O(X)[x1 , . . . , xn ].

Proof: The proof relies on part (i), which is clear, since the product of X ⊂ AnK and
n+m
Y ⊂ Am K s the affine variety X × Y ⊂ AK , defined by the equations of X and the
equations of Y . Now, in general, if X = Ui and Y = Vj with Ui ∼ = Xi , Vj ∼
S S
= Yj , where
S
Xi , Yj are affine subsets, then we have a covering X × Y = i,j (Ui × Vj ) in which each
Ui × Vj is isomorphic to the affine variety Xi × Yj .
For part (ii), we will show that, given the first projection p : X × AnK → X, the sheaf
p∗ OX×AnK is given by
p∗ OX×AnK (U ) = OX (U )[x1 , . . . , xn ]
(so proving our result when taking U = X). Since the assignment U 7→ OX (U )[x1 , . . . , xn ]
is a sheaf, and a sheaf is determined by its values on a basis of open sets, it is enough to
prove the above equality when U is affine. But it is clear that, if Y ⊂ Am K is an affine set
with ideal I(Y ) ⊂ K[y1 , . . . , ym ], the product Y × AK is the affine variety of Am+n
n
K whose
ideal I ⊂ K[y1 , . . . , ym , x1 , . . . , xn ] is the one generated by I(Y ). Hence

O(Y ×AnK ) = K[y1 , . . . , ym , x1 , . . . , xn ]/I = (K[y1 , . . . , ym ]/I)[x1 , . . . , xn ] = O(Y )[x1 , . . . , xn ],

as wanted.

Exercise 1.9. Given algebraic varieties X, Y , prove that the projections p : X × Y → X


and q : X × Y → Y are regular maps and that, given any algebraic variety Z with regular

4
maps p0 : Z → X and q 0 : Z → Y , there is a unique regular map ϕ : Z → X × Y such that
p ◦ ϕ = p0 and q ◦ ϕ = q 0 .

Example 1.10. There is a natural construction coming from the product, with different
names depending on the context. Given two morphisms ϕ : X → Y and ϕ0 : X 0 → Y , its
fiber product is the subset of the product

X ×Y X 0 := {(x, x0 ) ∈ X × X 0 | ϕ(x) = ϕ0 (x0 )}.

We also say that, considering the projections p1 , p2 over X and X 0 , we also say that the
commutative diagram
p1
X ×Y X 0 −→ X

p2 ϕ
y y
ϕ0
X0 −→ Y
is a Cartesian square. It has the universal property that, given any other variety Z with
morphisms q1 : Z → X and q2 : Z → X 0 such that ϕ ◦ q1 = ϕ0 ◦ q2 , then there exists a
unique morphism ψ : Z → X ×Y X 0 such that pi ◦ ψ = qi for i = 1, 2. Another way of
interpreting a Cartesian square is to pull back the morphism ϕ via the morphism ϕ0 . In
this way, we get another morphism p2 X ×Y X 0 → X 0 with the property that, for each
x0 ∈ X, its fiber p−1 0
2 (x ) is naturally isomorphic to the fiber ϕ
−1
(ϕ0 (x0 )) of ϕ0 (x0 ) of the
original morphism (of course, also the top morphism can also be regarded as the pullback
of the bottom morphism). Two typical examples are:
1) The pullback a morphism ϕ : X → Y via an inclusion i : Y 0 ,→ Y is nothing but
the restriction map ϕ|ϕ−1 (Y 0 ) : ϕ−1 (Y 0 ) → Y 0 .
2) If, moreover, ϕ is again an inclusion of a subvariety X of Y , then the pullback is
now the intersection of the subsets X and Y 0 .

We pass now to introduce a way of constructing algebraic variety by glueing subvari-


eties.
S
Proposition 1.11. Let X be an algebraic variety and let X = i Ui be an open covering
such that for every open set Ui there is an isomorphism ϕi : Ui → Xi with an affine set.
For each i, j, define Xij = ϕi (Ui ∩ Uj ) and ψij = ϕj |Ui ∩Uj ◦ ϕ−1
i |Xij : Xij → Xji . Then:

(i) ψii = idXi .


(ii) For each i, j, k, it holds ψik = ψjk ◦ ψij (in particular, ψji is the inverse of ψji ).
Reciprocally, given a collection {Xi } of affine sets and, for each i, j and open subset
Xij ⊂ Xi satisfying properties (i) and (ii), there is an algebraic variety X (unique up to

5
S
isomorphism) with an open covering X = i Ui and isomorphisms ϕi : Ui → Xi such that
Xij = ϕi (Ui ∩ Uj ) and ψij ◦ ϕi |Ui ∩Uj = ϕj |Ui ∩Uj .

Proof: It is clear that conditions (i) and (ii) hold for a covering as in the statement. So we
assume now that we have the collection of affine sets as in the statement. We consider in
`
the disjoint union i Xi the relation ≡ defined by xi ≡ xj iff ψij (xi ) = xj (in particular,
xi ∈ Xij and xj ∈ Xji ). This is an equivalence relation precisely because of (i) and (ii).
` `
We thus consider the quotient π : i Xi → X := ( i Xi )/ ≡ equipped with the obvious
topology. Then, for each i, the restriction πi : Xi → Ui := π(Xi ) is a homeomorphism
between Xi and an open set Ui ⊂ X. To define the structure sheaf OX it is enough to
define its restriction to each Ui , and, if U ⊂ Ui we define

OX (U ) = {f ◦ πi −1 −1
|U : U → K | f ∈ OXi (πi (U ))}.

This is well defined, because, if also U ⊂ Uj , then ψij maps isomorphically the open subset
πi−1 (U ) ⊂ Xi into the open subset πj−1 (U ) ⊂ Xj . In particular, ψij ◦ πi−1 = πj−1 and
OXi (πi−1 (U )) = {f ◦ ψij | f ∈ OXj (πj−1 (U ))}.

{f ◦ πi −1 −1 −1
|U : U → K | f ∈ OXi (πi (U ))} = {f ◦ ψij ◦ πi : U → K | f ∈ OXj (πj−1 (U ))},

so that the rings obtained using i and j coincide. It is clear that the maps ϕi := πi−1
satisfy the wanted conditions.
Finally, if both X (with isomorphisms ϕi : Ui ≤ Xi ) and X 0 with isomorphisms
ϕ0i : Ui0 ≤ Xi ) satisfy the conditions of the statements, we can produce an isomorphism
−1
ϕ : X → X 0 from the isomorphisms ϕ0i ◦ ϕi : Ui → Ui0 , which glue in the intersections
because
−1 −1
(ϕ0j ◦ ϕj )|Ui ∩Uj = (ψij ◦ ϕ0i |Ui ∩Uj )−1 ◦ (ψij ◦ ϕi |Ui ∩Uj ) = (ϕ0i ◦ ϕi )|Ui ∩Uj .

Remark 1.12. To be completely honest, there is another condition for the notion of
variety, namely to be separated. This means that the diagonal ∆X ⊂ X × X is closed.
Contrary to the intuition, this is not always true. The typical example consists of glueing
two affine lines X1 , X2 taking X12 and X21 as the line minus the same point and glueing
them across the identity map. We thus produce a variety X which is a line in which one
point is substituted by two points, so that in X × X we have an affine plane in which one
point is replaced by four points, while the diagonal passes only through two of the points,
the other two being in the closure. Separated varieties enjoy good properties (which we
will use freely throughout the notes):

6
1) Given two open affine sets U, V ⊂ X, their intersection is also affine. Indeed, U ∩ V
is naturally isomorphic to the intersection in X × X of U × V (which is affine) and the
diagonal ∆ ⊂ X × X (which is closed by assumption). Therefore U ∩ V is isomorphic to a
closed subset of an affine set so that is it affine. This property is not essential, but it helps
to avoid long turns when trying to prove something by covering by affine sets.
2) The graph of a morphism ϕ : X → Y is closed when Y is separated (when X = Y
and ϕ = idX the graph is nothing but the diagonal). This is so because the graph of ϕ
is the inverse image by idX × ϕ : X × Y → Y × Y (which is a continuous map) of the
diagonal of Y (which is closed by assumptions).

Exercise 1.13. Prove that the diagonals of AnK and PnK are closed sets, and conclude that
any affine or projective set is separated. While proving it, detect why the same proof does
not work for the example of Remark 1.12.

Example 1.14. Let G(k, n) be the set of linear subspaces of dimension k in PnK (called
Grassmannian). It is classically represented as a projective variety via the Plücker embed-
ding in which the linear space spanned by the points whose coordinates are the rows of
the matrix
a00 . . . a0n
 
. .. 
A =  .. .
ak0 . . . akn
(n+1)−1
is mapped to the point in PKk+1 whose coordinates (called Plücker coordinates) are the
maximal minors of A (we leave as an exercise to prove that this map is well defined and
injective, and that its image is a projective set; reading the rest of the example could help
a lot). Specifically, for a choice of k + 1 ordered columns i0 < . . . < ik of A, we will denote

a0i0 ... a0ik


pi0 ,...,ik := ... .. .
.
aki0 ... akik

We will give an alternative way of showing that G(k, n) is an algebraic variety by using the
glueing method of Proposition 1.11. The covering we will use will be given by the subsets
Ui0 ,...,ik consisting of the subspaces whose Plücker coordinate pi0 ,...,ik is not zero (prove as
an exercise that this is the set of linear subspaces not meeting the (n − k − 1)-dimensional
subspace of equations xi0 = . . . = xik = 0). Observe that a subspace in Ui0 ,...,ik meets
necessarily each subspace xi0 = . . . = xc ij = . . . = xik = 0 in exactly one point pj , whose
coordinate xij is not zero, so that we can take it to be one. Hence we can recover the
linear subspace as the span of p0 , . . . , pk , which we can take as the rows of a matrix Ai0 ,...,ik
(observe, as an exercise, that the entries of Ai0 ,...,ik are, up to a sign, Plücker coordinates

7
divided by pi0 ,...,ik ). For example, an element Λ ∈ G(k, n) is in U0,...,k when the subspace
Λ can be generated by the rows of a matrix
 
1 ... 0 a0,k+1 ... a0n
 .. .. .. .. .. 
A0,...,k =. . . . . 
0 ... 1 ak,k+1 . . . akn

(for a general Ui0 ,...,ik one would get the same, but with the identity matrix in the rows
i0 , . . . , ik of the corresponding matrix Ai0 ,...,ik ). An element of Ui0 ,...,ik is thus uniquely
determined by a choice of (k + 1)(n − k) elements (the entries of A outside the identity
matrix), so that it can be identified with an affine space, which we denote by Xi0 ,...,ik .
Observe that, inside that vector space, the subset of those corresponding to another set
Uj0 ,...,jk is an open set Xi0 ,...,ik ;j0 ,...,jk defined by the not vanishing of the determinant
ot the submatrix Ai0 ,...,ik [j0 , . . . , jk ] of Ai0 ,...,ik corresponding to the columns j0 , . . . , jk .
Moreover, since the different matrices whose files determine the same linear space as the
rows of Ai0 ,...,ik are obtained by left-multiplying Ai0 ,...,ik by a non-singular matrix, and
Aj0 ,...,jk has the identity in the columns j0 , . . . , jk , we have an identity

Aj0 ,...,jk = Ai0 ,...,ik [j0 , . . . , jk ]−1 Ai0 ,...,ik

or, equivalently,
Ai0 ,...,ik = Ai0 ,...,ik [j0 , . . . , jk ]Aj0 ,...,jk .

In particular, the glueing map Xj0 ,...,jk ;i0 ,...,ik → Xi0 ,...,ik ;j0 ,...,jk (which maps the entries
of Aj0 ,...,jk to the entries of Ai0 ,...,ik ) is a regular map whose inverse is, symmetrically,
regular, so this is an isomorphism. We leave as an exercise (tedious to write up clearly) to
check that these glueing maps satisfy the conditions of Proposition 1.11.

Remark 1.15. As we warned in Remark 1.7, it is very rare that, as it happens in the
(k+1)(n−k)
previous example, that G(k, n) can be covered by open sets isomorphic to AK
(hence G(k, n) is a rational variety). Of course, this is saying that G(k, n) is smooth of
dimension (k + 1)(n − k), but we cannot take this as definition of dimension. Another
attempt will be to use the number of equations, but this is not true neither. The first
non-trivial example in the projective space is to consider the image of the map

ϕ : P1K → P3K

(t0 : t1 ) 7→ (t30 : t20 t1 : t0 t21 : t31 )

which is called the twisted cubic, and we will denote it by C. It is not difficult to prove
that ϕ is an isomorphism, so that C must be a curve. Being embedded in P3K would

8
suggest that C is expected to be defined by two equations, but one can prove that its ideal
I(C) ⊂ K[x0 , x1 , x2 , x3 ] is generated by the polynomials

x0 x2 − x21 , x0 x 3 − x 1 x 2 , x1 x3 − x22

and it is impossible to generate it by only two polynomials. However, if we restrict to the


affine open set x0 6= 0, the first two equations can be written as
x2  x 2 x3 x1 x2
1
= , =
x0 x0 x0 x0 x0
and from this we get
x1 x3  x 2 x  x 2
1 2 2
= =
x0 x0 x0 x0 x0
so that the third equation can be derived from the first two. In fact, it can be proved that,
if x is a smooth point of an irreducible projective variety X ⊂ PnK of dimension m (X is
said to be irreducible if it cannot be decomposed in a non-trivial way as the union of two
closed subsets; if you do not feel comfortable with this notion, in the smooth case you are
allowed to think that irreducible is equivalent to be connected), then X can be defined by
n − m equations in a neighborhood of x (if you want, you can take this as the definition
of smooth point), and the result remains true if you replace PnK with any other smooth
variety of dimension n. One interesting fact is that you can write the three generators of
the above I(C) as the minors of the matrix
 
x0 x1 x2
x1 x2 x3

and this will allow us to guess that the right codimension is two and we will also even to
compute the degree of C.
We also state without proof the other main result about dimension (and its relation
with morphisms) that will be needed throughout the notes.

Theorem 1.16. Let ϕ : X → Y be a morphism and assume X is irreducible of dimension


n. Then:
(i) If X is a projective variety, then also f (X) is an irreducible projective variety.
(ii) For a general X, the closure of f (X) is irreducible and contains an open set U such
that for each y ∈ U , the dimension of ϕ−1 (y) is dim(X) − dim (f (X)).

Proof: See §5.2 and §6.3 of Chapter I of [Sh].

Example 1.17. A trivial case in which to apply Theorem 1.16 is when X = Y × Z and
ϕ is the first projection. Since, in this case, ϕ−1 (y) is isomorphic to Z for all y ∈ Y , it

9
follows that dim(X) = dim(Y ) + dim(Z). Of course we do not need to have a product
to have all fibers isomorphic to the same Z. The case we will use frequently is when ϕ
S
is locally trivial with fiber Z, i.e. when there is an open covering Y = i Vi such that for
each i there is an isomorphism ψi : ϕ−1 (Vi ) → Vi × Z such that the composition
ψ −1
i −1
ϕ|ϕ−1 (V )
Vi × Z −→ϕ (Vi ) −→ i Vi

is the first projection. We have two important properties of locally trivial morphisms:
1) Given a fiber square
ν0
0 −→
X X

 0 ϕ
yϕ y
ν
Y 0 −→ Y
if ϕ is locally trivial with fiber Z, then also ϕ0 is, since the pullback by ν of the projection
Vi × Z → Vi is clearly the projection ν −1 (Vi ) × Z → ν −1 (Vi ).
2) Any regular map ψ : X → W such that any restriction to ϕ−1 (y) is a constant
map, then the map ϕ̄ : Y → W mapping y to the value by ϕ of any element in ϕ−1 (y) is
a regular map. This is because when ϕ : X = Y × Z → Y is the first projection, fixing an
element z ∈ Z we can define ϕ̄ by ϕ̄(y) = ϕ(y, z).

10
2. The notion of vector bundle
In affine and projective geometry, varieties are defined as zeros of polynomials, but
there is a difference: in the affine case, a polynomial determines a well-defined function
that we can evaluate and decide whether it is zero or not; however, in projective geometry,
a homogeneous polynomial of positive degree never defines a function, and it only makes
sense to say whether the polynomial vanishes or not at some point. And we cannot take
regular functions instead of polynomials, since the only possible regular functions of the
projective space are the constant functions. The main idea is that a regular function
f : X → K can be identified with the map s : X → X × K given by s(x) = (x, f (x)),
i.e. a regular map s whose composition with the first projection is the identity map. In
the language of vector bundles we are going to introduce now, this means a section of the
trivial line bundle X × K. Before giving the precise definitions, we start with a couple of
examples.

Example 2.1. Let F ∈ K[x0 , . . . , xn ] be a homogeneous polynomial of degree d. If we


consider fi , the dehomogenization of F with respect of the variable xi , now fi becomes a
function on the affine open set Ui = {xi 6= 0}. Let us try to analyze why it is not possible
to glue all these functions fi to get a function on Pn . If we write the coordinates in Ui as
x0 xi−1 xi+1 xn F
xi , . . . , xi , xi , . . . , xi , then fi = xd . Therefore the reason why we cannot glue together
i
xd
the functions on Ui and Uj is that fj is obtained from fi when multiplying by i
xd
. This
j
means that any hypersurface V (F ) of Pn can be described locally by functions fi , but they
can be glued together only after a multiplication.
If now X ⊂ Pn has arbitrary codimension r (or any other ambient space), we observed
in Remark 1.15 that, when X is smooth, we can find an open covering such that, on each
open set, X is defined by exactly r equations. The question now is: is it possible somehow
to glue together those local functions in a similar way as in the previous example? Although
the answer is negative in general, we present a positive case in order to give a clearer idea
of what we mean.

Example 2.2. Let X ⊂ P2 be the subset consisting of the points (1 : 0 : 0), (0 : 1 : 0) and
(0 : 0 : 1). At each open set Ui = {xi 6= 0}, the set X ∩ Ui consists of one point, so its affine
ideal is generated by exactly two elements. More precisely, the ideals of X ∩ U0 , X ∩ U1
and X ∩ U2 are respectively generated by xx01 , xx20 ; xx01 , xx21 ; and xx02 , xx12 . For any λ ∈ K \ {0}
(the reader will understand soon why we do not take just λ = 1), it is possible to relate
the two first sets of generators by the expression
x   x1 2  x 
x0
1 ( x 0
) 0 x1
0

 =  
x2 x1 x2 x1 x2
x
(1 − λ) x2
λ x0 x
0 0 1

11
and the matrix in the expression has entries regular in U0 ∩ U1 and is invertible in that
open set. Similarly, there is a relation
 x   0x
λ x21 (1 − λ0 ) xx0 x2 2
 x 
0 0
x1 1
x2
 =  
x2 x2 2 x1
x1 0 ( x1 ) x2

Putting together the last two relations we get


x   0 x1 x2 0 x2
 x 
1
x0 λ x02 (1 − λ ) x0
0
x2
 =  
x2 0 x2 2 0 0 x22 x1
x0 λ (1 − λ)( x0 ) (λλ − λ + 1) x0 x1 x2

Thus if we want to avoid divisions by zero (and get a matrix similar to the previous ones),
we need λλ0 − λ0 + 1 = 0, i.e. λ0 = 1−λ
1
. Hence, if λ 6= 0, 1, the above matrices provides a
way of glueing together the pairs of equations in the different open sets.
The precise definition is the following (in which the reader can replace the category
of algebraic varieties with any category of topological spaces):

Definition. A vector bundle of rank r (or line bundle if r = 1) over an algebraic variety
X is an algebraic variety F equipped with a morphism π : F → X so that there exists a
S
covering X = i∈I Ui by open subsets such that:
(i) For each i ∈ I there is an isomorphism ψi : π −1 (Ui ) → Ui × Kr satisfying that the
composition π ◦ ψi−1 : Ui × Kr → Ui is the first projection.
(ii) For each i, j ∈ I there is an (r × r)-matrix Aij (called transition matrix, or transition
function if r = 1) whose entries are regular functions in Ui ∩ Uj satisfying that the
composition

−1
ϕij = ψj ◦ ψi|Ui ∩Uj
: (Ui ∩ Uj ) × Kr → π −1 (Ui ∩ Uj ) → (Ui ∩ Uj ) × Kr

takes the form


ϕij (x, v) = (x, Aij (x)v).

Remark 2.3. It is clear that any subcovering of the covering still satisfies conditions
(i) and (ii), so we will always work, when necessary, with sufficiently fine coverings (for
example, given a finite number of vector bundles, we can always take a covering that works
for all of them). If only one open set is needed, i.e. F = X ×Kr and π is the first projection,
we say that F is a trivial bundle of rank r. A covering for which we have conditions (i) and
(ii) will be called trivializing covering. Condition (i) is saying that, for any x ∈ X the set
Fx := π −1 (x) (called the fiber of the vector bundle at the point x), is bijective to Kr , and
that locally the fibers are glued to produce a trivial vector bundle. From this, it is clear

12
that, in condition (ii), the first coordinate of ϕij (x, v) must be x. Thus condition (ii) is
just saying that the fibers of F glue together in different trivial representations in a linear
way. In other words, the fibers of the vector bundle have to be regarded as vector spaces.

Definition. A morphism of vector bundles is a regular map ϕ : F → F0 such that π = π 0 ◦ϕ


(i.e. it sends fibers Fx to fibers Fx0 ) and the induced map ϕx : Fx → F0x is linear. If the
maps ϕx are all isomorphisms, the ϕ is said to be an isomorphism of vector bundles.
S
Proposition 2.4. Let X be an algebraic variety with an open covering X = i∈I Ui . Let
F be a vector bundle over X with transition matrices Aij corresponding to the covering.
Then:
(i) Aii is the identity matrix Ir .
(ii) Ajk Aij = Aik (in particular, Aij and Aji are inverse to each other).

Reciprocally, given a set of matrices {Aij }i,j∈I satisfying (i) and (ii) there is a vector
bundle over X (unique up to isomorphism) whose matrices are the given matrices.
Proof: This is nothing but Proposition 1.11 in which we glue the varieties Ui ×Kr along the
open subsets (Ui ∩ Uj ) × Kr ⊂ Ui × Kr in which we glue (Ui ∩ Uj ) × Kr with (Uj ∩ Ui ) × Kr
using the map ϕij (x, v) = (x, Aij (x)v).

Definition. Asection of a vector bundle π : F → X is a regular map s : X → F such that


π ◦ s = idX (i.e. the image of any point x is a vector in the fiber Fx ). We will denote by
Γ(X, F) the set of sections of F.

Remark 2.5. We will handle most of the concepts of vector bundles via their transition
matrices.
1) For example, a morphism ϕ : F → F0 of vector bundles of ranks r and r0 can be
regarded (taking a common trivializing covering {Ui } for both vector bundles) as a set of
−1
matrices Ai ∈ M atr0 ×r (OX (Ui )) (defining the maps Ui × Kr ∼= π −1 (Ui ) → π 0 (Ui ) ∼
=
r0
Ui × K subject to the compatibiliy conditions

A0ij Ai |Ui ∩Uj = Aj |Ui ∩Uj Aij .

2) Similarly, a section of a vector bundle is determined by a colection fi1 , . . . , fir of


regular functions on each open set Ui , with the compatibility condition

fj1 |Ui ∩Uj


   
fi1 |Ui ∩Uj
.. ..
 = Aij  .
   
 . .
fjr |Ui ∩Uj fir |Ui ∩Uj

13
Observe that every vector bundle has always the so-called zero section, mapping each
point to the zero vector of its fiber, and corresponding to taking all functions fi to be zero.
Observe also that we can define the zero locus of a section, which is the subset of X in
which the image of the section is the zero vector. It corresponds, in each Ui to the locus
on which the functions fi1 , . . . , fir vanish, so that the zero locus is a closed subset of X.
3) Also, given any possible operation with vector spaces, we can extend it naturally
to the corresponding operation of vector bundles. For example, if F is a vector bundle
with transition matrices, then the dual vector bundle F∗ is determined by the transition
matrices (A−1 )t .

Example 2.6. With the language of transition matrices, in Example 2.1, we have defined,
for each d ≥ 0 (although the construction is valid for any d ∈ Z) a line bundle Ld over PnK
xd
with transition functions i
xd
and each homogeneous polynomial of degree d can be regarded
j
as a section of Ld . Also, in Example 2.2 we have constructed, for each λ 6= 0, 1, a vector
bundle Fλ of rank two on P2 , with transition matrices for the open sets Ui = {xi 6= 0}:

( xx10 )2 1 x2 −λ x0 x2 1 x1 x2 −λ x2
     
0 1−λ x1 1−λ x21 1−λ x20 1−λ x0
A10 =   , A21 =   , A20 =  
(1 − λ) xx1 x2 2 λ xx01 0 ( xx21 )2 ( xx20 )2 0
0

and each Fλ has a section vanishing at the three coordinates points. Observe that the dual
vector bundle Eλ∗ has transition matrices:

(1 − λ) xx12 λ−1 x0
 x2     
0 λ−1 x0 x2 0 0 λ x2
x21 λ x21
A010 =   , A021 =   , A020 =  .
x21 x20
0 1 x0 λ xx0 x2 1 x22 x22
1 x0 x1
λ x22
λ x1 2

Let us use this to identify the bundles Fλ in the next example.

Example 2.7. Let us define the cotangent bundle of PnK , for any field K, in terms of
its transition matrices with respect to the open sets Ui = {xi 6= 0}. We first define a
differential form on Ui as a formal expression of the form

x0 xi−1 xi+1 xn
ω = f0 d( ) + . . . + fi−1 d( ) + fi+1 d( ) + . . . + fn d( )
xi xi xi xi

where f0 , . . . , fi−1 , fi+1 , . . . , fn are regular functions depending on the affine coordinates
x0 xi−1 xi+1 xn
xi , . . . , xi , xi , . . . , xi . To restrict this to any Ui ∩ Uj we write, for any k 6= i:

 ( xk ) 
xk x xj xk xk xj xi
d( ) = d xji = d( ) − 2 d( )
xi ( xj ) xi xj xi xj

14
so that we get

ω=
 
xj x0 x0 xi−1 xi+1 xn xi xn
f0 d( )+. . .+(− f0 −. . .− fi−1 − fi+1 −. . .− fn )d( )+. . .+fn d( ) .
xi xj xi xi xi xi xj xj
In other words, the transition matrix from Ui to Uj is given by
 1 0 ... 0 ... 0 
 0 1 ... 0 ... 0 
 . . . ..
. .. ..

 .
xj  . 
Aij =  − x0 − x1 . . . − xj . . . − xn .

xi  xi xi xi xi 
 . .. .. .. 
 .
. . . .

0 0 ... 0 ... 1
This provides a vector bundle of rank n on Pn that we denote by ΩPn . For example, in
the case n = 2, we have
 x2   x1   x0 
− x20 − xx0 x2 2 x2 0 0 x2
1 1
A10 =   , A21 =   , A20 =  2 .
x0 x1 x21 −x0 x0 x1
0 x0 − x2 − x2 x2
− x2
x1 2 2 2 2

Exercise 2.8. Show that the matrices


     
1−λ 0 λ−1 0 −1 0
A0 = , A1 = , A2 =
0 1 0 λ 0 −λ
define an isomorphism between ΩP2 and the vector bundle F∗λ defined in Example 2.6 (this
shows that all the bundles Fλ are isomorphic to the tangent bundle TP2 , in particular
isomorphic to each other).

Exercise 2.9. If X ⊂ P4K is the quadric of equation x0 x3 + x1 x4 − x22 = 0, find a vector


bundle of rank two over X having a section whose zero locus is the line x0 = x1 = x2 (this
vector bundle is called the spinor bundle of X).

Example 2.10. Let us consider U ⊂ Pn × Kn+1 to be the subset of pairs (x, v) such that
v is a vector in the vector line of Kn+1 defining the point x ∈ Pn . Then the first projection
U → Pn endows U with the structure of a line bundle over Pn (called the tautological line
bundle over Pn ). Indeed, for any i = 0, . . . , n, we consider the open set D(xi ). Writing
any x ∈ D(xi ) as (a0 : . . . : ai−1 : 1 : ai+1 : . . . : an ), we get that a vector v ∈ Kn+1 such
that (x, v) ∈ U can be written in a unique way as v = (λa0 , . . . , λai−1 , λ, λai+1 , . . . , λan ).
We have therefore a bijection ψi : D(xi ) × K → p−1 (D(xi )) given by
 b0 bn 
ψi (b0 : . . . : bn ), λ = (b0 : . . . : bn ), (λ , . . . , λ )
bi bi

15
The map ψj−1 ◦ ψi : D(xi xj ) × K → D(xi xj ) × K is thus defined by

bj 
ψj−1 ◦ ψi (b0 : . . . : bn ), λ = (b0 : . . . : bn ), λ

.
bi
b
Since (b0 : . . . : bn ) 7→ bji is a regular function over D(xi xj ), the maps ϕi := ψi−1 define a
line bundle structure over Pn , and precisely U ∼ = L−1 .

Example 2.11. In a similar (and more general) way, consider the subset U ⊂ G(k, n) ×
Kn+1 consisting of pairs (Λ, v) such that v is a vector in the (k + 1)-dimensional linear
subspace of Kn+1 defining Λ. As in the above example, the second projection π : U →
G(k, n) defines on U a structure of vector bundle of rank k + 1, called the tautological
sub-bundle of the Grassmannian. This is proved as in Example 2.10, following the ideas
of Example 1.14. On each open set Ui0 ...ik := {pi0 ...ik 6= 0}, if (Λ, v) ∈ π −1 (Ui0 ...ik ) the
vector v can be written, in a unique way, as

( λi0 ...ik ,0 ... λi0 ...ik ,k ) Ai0 ...ik

i.e. a linear combination of the rows of Ai0 ...ik . This gives the required isomorphism

ψi0 ...ik : π −1 (Ui0 ...ik ) → Ui0 ...ik × Kk+1 .

If we consider now another open sets Uj0 ...jk , we can thus write

( λi0 ...ik ,0 ... λi0 ...ik ,k ) Ai0 ...ik = ( λi0 ...ik ,0 ... λi0 ...ik ,k ) Ai0 ,...,ik [j0 , . . . , jk ]Aj0 ,...,jk ,

which shows that we have an equality

( λj0 ...jk ,0 ... λj0 ...ik ,k ) = ( λi0 ...ik ,0 ... λi0 ...ik ,k ) Ai0 ,...,ik [j0 , . . . , jk ].

Therefore the transpose of the matrices Ai0 ,...,ik [j0 , . . . , jk ] form a set of transition matrices
for U.

Example 2.12. Let us see the first applications of the above construction. The fiber of
~
U at each Λ ∈ G(k, n) is naturally identified with the (k + 1)-dimesional lineal space Λ
whose projectivization is Λ. If we dualize the bundle inclusion we get an epimorphism of
vector bundles G(k, n) × (Kn+1 )∗ → U∗ , where now the fiber of U∗ at each Λ ∈ G(k, n)
~ We can take now the d-th symmetric power, and get
is the space of linear forms on Λ.
another epimorphism

G(k, n) × K[x0 , . . . , xn ]d = G(k, n) × S d (Kn+1 )∗ → S d U∗

16
where by K[x0 , . . . , xn ]d we mean the vector space of (homogeneous) forms of degree d in
x0 , . . . , xn . Now the fiber of S d U∗ at each Λ ∈ G(k, n) is the space of forms of degree d
on Λ.~ Observe that, given a non-zero form F ∈ K[x0 , . . . , xn ]d , it obviously determines a
section sF of G(k, n) × K[x0 , . . . , xn ]d , and then the composition
s
F
G(k, n)−→G(k, n) × K[x0 , . . . , xn ]d → S d U∗
yields another section of S d U∗ . That section vanishes at the subspaces Λ ∈ G(k, n) such
~ i.e. such that Λ is contained in the hypersurface of Pn defined
that F restricts to zero at Λ, K
by F = 0 (hence the section of U∗ is not zero). Hence, the set Σk (X) of linear spaces of
dimension k contained in a hypersurface X of degree d in PnK is the zero locus of a section
of S d U∗ , which is a vector bundle of rank k+d k+d
 
d . Since a section is defined locally by d
regular functions, we expect this to be the codimension of Σk (X) in G(k, n). We will see
that the above epimorphism of bundles is enough to conclude that this is so in general. In
particular, a general surface of degree d ≥ 4 in P3K is expected not to contain lines, while
a general cubic surface is expected to contain a finite number of lines. Another scope of
these notes is to give techniques to compute numbers like this.
Let us see a more general kind of example, involving more than one section, for which
we will consider the above example with d = 1.

Lemma 2.13. Fix H1 , . . . , Hm ∈ K[x0 , . . . , xn ]1 linearly independent linear forms and


identify them with m sections of the vector bundle U∗ over G(k, n). Then the locus on
which the restriction morphism
ϕ : G(k, n)× < H1 , . . . , Hm >,→ G(k, n) × (Kn+1 )∗ → U∗
has rank at most t is the set of k-spaces Λ meeting the linear subspace H1 = . . . = Hm = 0
in dimension at least k − t.

Proof: For a k-space Λ, the rank of ϕ at Λ is the dimension of the span of the restrictions
~ of H1 , . . . , Hm . Hence the rank is at most t if and only if the intersection of Λ with
to Λ
the linear subspace H1 = . . . = Hm = 0 is at least k − t.

In this case, we can compute by hand (using results about dimension that we will be
by granted) the dimension (or, what is better, the codimension) of the subset of G(k, n)
of linear subspaces meeting a given linear space in more dimension than expected:

Proposition 2.14. For any linear space Λ0 ⊂ Pn of dimension k 0 and any l ≤ k, k 0 , the
Schubert variety Ω ⊂ G(k, n) of linear subspaces meeting Λ0 in at least dimension l has
codimension (l + 1)(n + l − k − k 0 ) in G(k, n).

Proof: We consider the incidence variety I ⊂ G(l, k 0 ) × G(k, n) of pairs (Λ00 , Λ) where Λ00
is a subspace of Λ0 contained also in Λ. Clearly Ω is the image of I under the second

17
projection, and the fiber by this projection of a general Λ in the image is just one element.
This implies that I and its image have the same dimension. Hence it is enough to check
that I has dimension (k + 1)(n − k) − (l + 1)(n + l − k − k 0 ). For this we will use the first
projection. Since G(l, k 0 ) has dimension (l + 1)(k 0 − l), it is enough to prove that the fibers
of the first projection have dimension (k − l)(n − k). In fact, we will prove that these fibers
are isomorphic to G(k − l − 1, n − l − 1).
Indeed, fixing a linear space Λ00 ⊂ Λ0 of dimension l, let A ⊂ PnK be a linear subspace
of dimension n − l − 1 skew with Λ00 . The fiber of Λ00 under the second projection is the
set of linear subspaces of dimension k containing Λ00 , and, taking intersections with A, this
set is in bijection with the set of linear subspaces of dimension k − l − 1 in A. It is a not
so difficult exercise (specially if you choose nice coordinates) to prove that this bijection is
in fact an isomorphism, which concludes the proof.

Remark 2.15. Observe that the result of the above proposition is symmetric in k and
k 0 , i.e. the Schubert variety of k 0 -spaces of Pn meeting a k-space Λ in at least dimension
l has also codimension (l + 1)(n + l − k − k 0 ) in G(k 0 , n). The way of remembering the
above result is that, in the language of the Lemma 2.13, the locus in which the morphism
ϕ between a vector bundle of rank m = n − k 0 and the (k + 1)-rank vector bundle U∗ over
G(k, n) has rank at most t = k − l has codimension (m − t)(k + 1 − t) in G(k, n). The point
now is to prove the same result, at least in a general situation, when we replace G(k, n)
with any arbitrary variety and ϕ by any morphism of vector bundles over it. For example,
the twisted cubic can regarded (see Remark 1.15) as the dependency locus of the section
(x0 , x1 , x2 ) and (x1 , x2 , x3 ) of L1 ⊕ L1 ⊕ L1 (dependency locus means the locus on which
the sections have no maximal rank). Since this is a vector bundle of rank three, we would
expect this dependency locus to have codimension two, which is now the correct one.

18
3. Operations with vector bundles
We will introduce the main tools to make precise the ideas of Remark 2.15. Specifically,
we will see that, when the equations of a subvariety can be described as a degeneracy locus
of sections of a vector bundle, then we have a expected codimension from such a description.
Moreover, under good conditions, coincides with the right one. As we have seen, we can
compute codimensions when dealing with sections of the (dual of the) tautological bundle
over a Grassmannian. The idea will be to see that, at least under good conditions, any
possible example comes from a Grassmannian. For this we need to see how to produce
morphisms to Grassmannians (using vector bundles) and then pull our computations back
to our original variety. The first step will be to define in general a pullback of vector
bundles.

Proposition 3.1. Let π : F → Y be a vector bundle of rank r over an algebraic variety


Y and let ϕ : X → Y be a regular map. Then ϕ∗ F := {(x, v) ∈ X × F | ϕ(x) =
π(v)} is an algebraic variety and the first projection π 0 : ϕ∗ F → X provides ϕ∗ F a
S
structure of vector bundle over X such that, if Y = i Vi is a trivializing covering for F
with transition matrices Aij , then X = i ϕ−1 (Vi ) is a trivializing covering for ϕ∗ F with
S

transition matrices Aij ◦ϕ|ϕ−1 (Vi ∩Vj ) (meaning a matrix in which each entry is ϕ|ϕ−1 (Vi ∩Vj )
composed with the corresponding entry of Aij ).

Proof: Take Y = i Vi be a trivializing covering for F. Then X = i ϕ−1 (Vi ) is an open


S S

covering of X. Since for each i we have isomorphisms ψi : π −1 (Vi ) ∼


= Vi × Kr , we can
build a commutative diagram (in which the two squares containing the bottom arrow are
Cartesian)
π 0−1 (ϕ−1
 (Vi )) → π −1 (V
 i)

= yψi
 0 ∼
= yψi

ϕ|ϕ−1 (V ) ×idKr
ϕ−1 (V
i) × K
r i
−→ Vi ×
K
r
 
y y
ϕ|ϕ−1 (V )
ϕ−1 (Vi ) −→ i Vi
producing an isomorphism ψi0 defined by
−1
ψi0 (x, v) = x, ψi−1 (ϕ(x), v) ∈ ϕ∗ F.


If we now take two different open sets, we have an isomorphism

−1
ϕij = ψj ◦ ψi|U i ∩Uj
: (Ui ∩ Uj ) × Kr → π −1 (Ui ∩ Uj ) → (Ui ∩ Uj ) × Kr

defined by
ϕij (y, v) = (y, Aij (y)v),

19
or, equivalently,
ψi−1 (y, v) = ψj−1 y, Aij (y)v .


Therefore, for all x ∈ ϕ−1 (Ui ∩ Uj ),


−1 −1
ψ 0 i (x, v) = x, ψi−1 (ϕ(x), v) = x, ψj−1 (ϕ(x), Aij (ϕ(x))v) = ψ 0 j (x, Aij (ϕ(x))v),
 

showing that ϕ∗ F is indeed a vector bundle, with transition matrices Aij ◦ ϕ.

Definition. Given a regular map ϕ : X → Y of algebraic varieties and a vector bundle


F on Y , the vector bundle ϕ∗ F over X defined in Proposition 3.1 is called (see Example
1.10) the pull-back of the vector bundle.

Remark 3.2. Let ϕ : X → Y be a regular map and let F be a vector bundle over Y .
There is a natural homomorphism ϕ∗ : Γ(Y, F ) → Γ(X, ϕ∗ F ) mapping s : Y → F to

ϕ∗ s : X → ϕ∗ F ⊂ X × F

x 7→ x, s(ϕ(x))

If ϕ∗ s = 0, this means that s is zero at the image of ϕ, so that ϕ∗ is injective if ϕ is


surjective. On the the hand, given a section s̃ : X → ϕ∗ F, it is clear that s̃ restricts
to the fiber ϕ−1 (y) of a point of y ∈ Y to a map s̃|ϕ−1 (Y ) : ϕ−1 (y) → ϕ−1 (y) × Fy . In
particular, if the only regular functions on ϕ−1 (y) are constant, then s̃|ϕ−1 (Y ) is constant.
Hence, under good conditions, one should expect that the map s : Y → F mapping each
y to the common value in Fy determined by s̃|ϕ−1 (Y ) is a regular map (this happens if ϕ
is locally trivial, as proved in Example 1.17), so it is a section s of F such that ϕ∗ s = s̃.
Hence, when we are under the previous hypotheses, the map ϕ∗ : Γ(Y, F ) → Γ(X, ϕ∗ F ) is
an isomorphism.

Definition. Given a vector bundle F on an algebraic set X and a linear subspace V ⊂


Γ(X, F), we define the evaluation map as the map evV : X × V → F defined by evV (x, s) =
s(x). When V = Γ(X, F), we usually write evV = evF . If evV is surjective we say that F
is generated by the sections of V . If evF is surjective, we will say that F is generated by its
global sections (or globally generated, or spanned).

Proposition 3.3. Let F be a vector bundle of rank r on X that is generated by the


sections of V ⊂ Γ(X, F ). Then there is a natural regular map ϕV : X → G(r − 1, P(V ∗ ))
such that ϕ∗V (U∗ ) = F.

Proof: We define the map as follows. Since, by hypothesis, the evaluation map evV :
X × V → F is surjective, its dual F∗ → X × V ∗ is injective. This means that, for any

20
x ∈ X, the fiber F∗x is an r-dimensional subspace of V ∗ . We thus define ϕV (x) to be the
corresponding (r − 1)-dimensional subspace of P(V ∗ ).
To see that ϕV is a regular map we use coordinates. We first choose a basis s1 , . . . , sn
of V and take coordinates λ1 , . . . , λn with respect to it. Since the regularity of the map
can be checked locally, we restrict to open sets U ⊂ X on which F|U is isomorphic to
U × Kr . On such open set U , any section si is represented by regular functions fi1 , . . . , fir
on U . In particular, the evaluation map evV restricted to U can be described by
 

λ1
 f11 (x) . . . fn1 (x)  λ1 
. ..   ..  .
(x,  .. ) 7→  ...

.  .
λn f1r (x) . . . fnr (x) λn

Hence the dual map is given (with respect to the dual bases) by the transpose matrix
 
f11 (x) ... f1r (x)
 .. ..  .
 . . 
fn1 (x) . . . fnr (x)

In particular, ϕV (x) is the linear subspace of V ∗ generated by the points whose coordinates
are the columns of the above matrix. Therefore the Plücker coordinates of the subspace
are the maximal minors of the matrix, which are all regular functions on U . This shows
that ϕV is a regular map.
Finally, since U is the sub-bundle of G(r − 1, P(V ∗ )) × V ∗ consisting of the pairs (Λ, v)
such that v is a vector in the subspace of V ∗ determined by Λ, it follows that ϕ∗V (U) is
the sub-bundle of X × V ∗ consisting of the pairs (x, v) such that v is in the subspace of
V ∗ determined by ϕV (x). By the definition of ϕV , this is equivalent to say v ∈ F∗x , so that
it immediately follows that ϕ∗V (U) is precisely F∗ .

Remark 3.4. Consider the morphism ϕV : X → G(r − 1, dim V − 1). By Proposition


3.3, the locus Xk on which m sections in V has rank at most k is the pull-back by ϕV of
the locus on which m sections of U∗ has rank at most k. Hence, if we take the sections
to be independent, by Lemma 2.13, the locus Xk is the pull-back of the Schubert variety
of (r − 1)-spaces whose intersection with the subspace of dimension k 0 = dim V − 1 − m
(defined by the m sections) has dimension at least r − 1 − k. This Schubert variety has
codimension (r − k)(m − k), so that we expect Xk to have that codimension (or, in special
cases, smaller codimension). Let us see that this is the case.

Theorem 3.5. Let V ⊂ Γ(X, F) be a set of sections generating F. Then, for a general
choice of m sections in V , the locus on which they have rank at most k has codimension

21
(r − k)(m − k) in X. In particular, the zero locus of a general section of F will have
codimension r.

Proof: With the notation in Remark 3.4, consider the morphism the incidence variety
I ⊂ X × G(k 0 , dim V − 1) of pairs (x, Λ), such that ϕ(x) and Λ meet in dimension at least
r − 1 − k. From the above considerations, the statement to prove is equivalent to the fact
that the general fiber of the second projection from I has codimension (r − k)(m − k) in
X. To prove this, it is enough to prove (see Theorem 1.16) that I has dimension equal to

dim G(k 0 , dim V − 1) + dim X − (r − k)(m − k).

But this is so because the fiber at any x of the first projection from I is the set of k 0 -spaces
meeting the (r − 1)-space ϕV (x) in dimension at least r − 1 − k and, by Proposition 2.14,
this has codimension (r − k) (dim V − 1) + (r − 1 − k) − (r − 1) − k 0 , i.e. dimension


dim G(k 0 , dim V − 1) − (r − k)(m − k).

Example 3.6. As observed in Remark 2.15, Theorem 3.5 is saying that, with this point
of view, it is now natural to expect the twisted cubic to have codimension two. One
more step, to which we will devote the next section, is to compute the degree in case the
dimension is the correct one.

Proposition 3.7. Let X be an algebraic variety and let p : X × PnK → PnK denote the
second projection. Then Γ(X × PnK , p∗ Ld ) is naturally isomorphic to O(X)[x0 , . . . , xn ]d ,
the space of homogeneous polynomials of degree d in O(X)[x0 , . . . , xn ].

Proof: For any i = 0, . . . , n, we have p∗ Ld |X×Ui ∼ = X × Ui × K. Hence the ring of sections


of this restriction is O(X × Ui ), i.e. (see Proposition 1.8(ii)) the ring of polynomials
in the indeterminates xx0i , . . . , xxi−1
i
, xxi−1
i
, . . . , xxni with coefficients in O(X). We can thus
represent such a section as xFai , where Fi is a homogeneous polynomial of degree a in
i
the indeterminates x0 , . . . , xn and coefficients in O(X). Of course, we can take the same
exponent a for all i = 0, . . . , n, and we can also assume a ≥ d. Since the transition function
xd
from X × Ui to X × Uj is xdi we have
j

xdi Fi Fj
d a = a
xj xi xj

i.e.
xa−d
j Fi = xa−d
i Fj .

22
It follows that we can write Fi = xa−d
i F and Fj = xa−d
j F for some homogeneous polynomial
of degree d (you do not need any hypothesis on O(X) to be a factorization domain; just
check by hand that the coefficients of the monomials of Fi not containing at least xa−d
i are

zero). It is clear that F does not depend on the choice of i, j, so that any section of p Ld
determines such a homogeneous polynomial. Since, reciprocally, it is obvious that such an
F determines a section of p∗ Ld , the result follows.

Example 3.8. We can apply the above result to the case X is just one point, so that
p is just the identity map. We thus get that the space of sections of Ld is the space of
homogeneous polynomials of degree d. Since the definition of Ld makes sense even if d < 0,
we get that there are vector bundles not having other sections apart from the zero section.
In the limit case d = 0, in which L0 is nothing but the trivial bundle, we get that the
only regular functions on PnK are the constant functions. This result is true in general for
irreducible (or, more generally, connected) projective sets, and it is proved using the fact
that the the image by a regular map of a projective set is a projective set. Since we will
use that the regular functions defined on a Grassmannian G(k, n) are constant, we sketch
an alternative proof. Given a regular map

ϕ : G(k, n) → K

we want to show that, given Λ, λ0 ∈ G(k, n); we have ϕ(Λ) = ϕ(Λ0 ). It is a simple execise
to show that there is a finite chain of k-dimensional subspaces

Λ = Λ0 , Λ1 , . . . , Λs = Λ0

such that, for all i = 1, . . . , s, the subspaces Λi−1 , Λi meet along a (k − 1)-dimensional
subspace Ai , and hence their span is a (k + 1)-dimensional subspace Bi . Now the result
follows by proving (again it is an easy exercises choosing good coordinates) that each subset
Li ⊂ G(k, n) of those subspaces contained in Bi and containing Ai is isomorphic to P1K .
Therefore, ϕ|Li is constant, which implies

ϕ(Λ) = ϕ(Λ0 ) = ϕ(Λ1 ) = . . . = ϕ(Λs ) = ϕ(Λ0 )

as wanted.
As it happens for regular functions, and as observed in Example 3.8, vector bundles
can have no non-zero sections, but they could have in open sets. It is thus convenient to
extend the notion of sheaf of regular functions to the notion of sheaf of sections of a vector
bundle. Let us see first the structure we can expect to have.

23
Remark 3.9. Since sections of a vector bundle π : F → X over an algebraic variety take
values in a vector, they can also be added and multiplied by a constant, so that Γ(X, F) is
a vector space. But we can improve it, since the constant by which we multiply can be vary
with the point. In other words, Γ(X, F) has the structure of O(X)-module, since given
s ∈ Γ(X, F) and f ∈ O(X), we can define f s ∈ Γ(X, F) as (f s)(p) = f (p)s(p) (we leave
as an exercise to check that this f s, and also the sum of two sections, is indeed a section).
We can do this for any open set and consider, for each U ⊂ X, the OX (U )-module
Γ(U, π −1 (U )). Observe that, for those open sets for which there exists an isomorphism
ϕ : π −1 (U ) → U × Kr , we have an isomorphism Γ(U, π −1 (U )) → Γ(U, U × Kr ) (mapping
s 7→ ϕ ◦ s), hence it is isomorphic to OX (U )⊕r , hence it is a free module (defined by, so
that, in this case, the module is free. Moreover, for each U 0 ⊂ U , we have a commutative
diagram
Γ(U, π −1 (U )) → Γ(U, U × Kr )
↓ ↓
Γ(U , π (U )) → Γ(U , U 0 × Kr )
0 −1 0 0

given by restricting from U to U 0 . This shows that also Γ(U 0 , π −1 (U 0 )) is free, and a basis
comes from the restriction of a basis of Γ(U, π −1 (U )).

Definition. A sheaf of OX -modules over an algebraic variety is a sheaf F that assigns


to each open set U ⊂ X a OX (U )-module F(U ). It is said to be free of rank r if it is
⊕r
isomorphic to OX , and it is said to be locally free of rank r if there is an open covering
S
X = i Ui such that each F|Ui is a free.

Lemma 3.10. Given an algebraic variety X, the operation of taking sheaves of sections
defines a bijection between the set of isomorphism classes of vector bundles of rank r over
X and the set of isomorphism classes of locally free sheaves of rank r over X.

Proof: We have seen in Remark 3.9 that the sheaf of sections of a vector bundle of rank
r is locally free (we leave the details to the reader), so it is enough to reverse the process.
S
Given a locally free sheaf F of rank r, we take an open covering X = i Ui such that
⊕r
there is an isomorphism if sheaves ψi : F|Ui → OU i
. It is then clear that F|Ui is the
r
sheaf of sections of Ui × K . Given two open sets Ui , Uj of the covering we thus have an
isomorphism ψi−1 |U ∩U ψj |U ∩U
⊕r i j i j ⊕r
OUi ∩Uj −→ F|Ui ∩Uj −→ OU i ∩Uj
.

In particular, there is an automorphism of OX (Ui ∩ Uj )⊕r , which is given necessarily by a


matrix Aij . We leave to the reader to check the technical details to show that the matrices
Aij form a set of transition matrices of a vector bundle whose sheaf of sections is F

Remark 3.11. Although in literature most of algebraic geometers use interchangeably


vector bundles and locally free sheaves, we will instead try to use different symbols, mainly

24
calligraphical letters for sheaves and bold letters for vector bundles. For example, the
standard notation for the sheaf of sections of Ld on PnK is denoted by OPnK (d), and we will
also denote with U to the sheaf of sections of the tautological bundle U of G(k, n). We
will use however this equivalence to define two operators on sheaves any time we have a
regular map ϕ : X → Y :
1) Given the sheaf of sections F of a vector bundle F of rank r over Y , we will define
its inverse image ϕ∗ F as the sheaf of sections of the pull-back ϕ∗ F, and this is still a
locally free sheaf of rank r (now over X). There is a notion of inverse image of sheaves,
at least in the category of the so-called coherent sheaves, which coincides with the one we
just defined in the locally free case, but we will skip that.
2) Given any sheaf F over X, we can still define its direct image ϕ∗ F. In principle
we have a problem, because now, for each open set V ⊂ Y , in principle (ϕ∗ F)(V ) :=
F(ϕ−1 (V )) is only an OX (ϕ−1 (V ))-module. But the morphism OY → ϕ∗ OX attached to
ϕ gives a homomorphism of rings OY (V ) → OX (ϕ−1 (V )) that gives to any OX (ϕ−1 (V ))-
module an extra structure of OY (V )-module. When dealing with locally free sheaves, it is
not true in general that their direct images are locally free, but just coherent sheaves.

We defined a subbundle U ⊂ G(k, n) × Kn+1 as an incidence variety, but it seems


much more natural to work with the incidence variety in G(k, n) × PnK . For this, it seems
natural to “projectivize” U (i.e. its fibers). This process can be done in general:

Theorem 3.12. Given a vector bundle π : F → X over an algebraic variety, then there is
`
an algebraic variety P(F ) parametrizing the disjoint union x∈X P(Fx ) such the natural
map π̄ : P(F) → X is a locally trivial morphism. Moreover, there is a tautological line
bundle UF ⊂ π̄ ∗ F whose sheaf of sections OF (−1) satisfies π̄∗ OF (d) = S d F ∗ (i.e. the
sheaf of sections of the d-th symmetric product S d F∗ of the dual F∗ ).
S
Proof: If X = i Ui is a trivializing covering for F with transition matrices Aij , then
P(F) can be constructed by glueing together the corresponding Xi := Ui × P(Kr ) through
the automorphisms induced by the transition matrices. So it is clear that P(F) is an
algebraic variety, and π̄ is a morphism, since it is locally defined by the first projection
Xi := Ui × P(Kr ) → Ui (If you prefer to avoid glueing varieties, you can also construct
P(F) by considering equivalence classes in F minus the zero section, in which two elements
are equivalent if and only if are proportional vectors in the fiber of the same point).
Recall that we defined

π̄ ∗ F = {(α, v) ∈ P(F) × F | π̄(α) = π(v)}

i.e. the set of pairs of a point α ∈ P(Fx ) and a vector v ∈ Fx for the same point x. So
~ ⊂ Fx represented by α, and
we can impose the condition that the vector v is in the line α

25
this defines
UF = {(α, v) ∈ π̄ ∗ F | v ∈ α
~ }.

Let us see that the first projection p1 : UF → P(F) defines a line bundle structure on
UF . On each open set Xi of P(F ), the restriction of the vector bundle π̄ ∗ F is nothing
but Xi × Kr = Ui × P(Kr ) × Kr , and the restriction of UF is nothing but Ui × U, where
U = L−1 is the tautological line subbundle over P(Kr ). It is an exercise to prove that
all these L−1 on the different Xi glue together to produce a line bundle structure on UF .
However, we prefer to explicitly construct the transition functions (see Remark 3.13). We
thus decompose each open set
r
[
Xi := Ui × P(K ) = r
Ui × {x0i0 6= 0}
i0 =1

and repeat on each Ui × {x0i0 =


6 0} the same construction as in Example 2.10. From the
isomorphism
ψi−1 : Ui × Kr → π −1 (Ui ) ⊂ F

and the one induced by it

ψ̄i : Ui × P(Kr ) → π̄ −1 (Ui ) ⊂ P(F)

we define an isomorphism, for each i0 ,

−1 0 −1 0
ψi,i 0 : Ui × {xi0 6= 0} × K → p1 (Ui × {xi0 6= 0}) ⊂ UF

a1 ar 
(x, (a1 , . . . , an ), λ 7→ ψ̄i (x, (a1 : . . . : ar )), (ψi−1 (λ( , . . .

))
ai0 ai0
defining a trivialization of UF in Ui,i0 . Given now two open sets Ui,i0 and Uj,j 0 , we first
recall that we have a transition matrix Aij such that, for each x ∈ Ui ∩ Uj ,

ψj ψi−1 (x, (a1 , . . . , ar ) = x, Aij (x) ( (a1 , . . . , ar )t ) =: x, (b1 , . . . , br )


  

where b1 , . . . , br are linear combinations, with coefficients entries of Aij (x), of a1 , . . . , ar .


We thus have

−1
 −1 a1 ar 
ψj,j 0 ψi,i0 (x, (a1 , . . . , an ), λ = ψj,j 0 ψ̄i (x, (a1 : . . . : ar )), (ψi (λ , . . . )) =
ai0 ai0

λ −1  bj 0 
= ψj,j 0 ψ̄i (x, (a1 : . . . : ar )), (ψi (a1 , . . . , ar )) = (x, (b1 , . . . , bn ), λ
ai0 ai0
bj 0
so that UF has transition functions ai0 .

26
To prove π̄∗ OF (d) = S d F ∗ , it suffices to check the sections of both sheaves coincide for
the open sets on which F trivializes, since they form a basis of the Zariski topology. Take
thus U ⊂ X such that π −1 (U ) ∼ = U × Kr and take x0 , . . . , xr−1 to be coordinates for Kr .
Therefore, π̄ −1 (U ) ∼ = U × P(Kr ), where P(Kr ) is the projective space with homogeneous
coordinates x0 , . . . , xr−1 , the restriction of UF is U ×L−1 and the restriction of π̄ is nothing
but the second projection.
Now, on one hand, OF (d) is the sheaf of sections of S d U∗F , whose restriction to
U × P(Kr ) is U × Ld = π̄ ∗ Ld . By Proposition 3.7, we have Γ(U × P(Kr ), π̄ ∗ Ld ) =
O(U )[x0 , . . . , xr−1 ]d .
On the other hand, since F|U ∼= U × Kr , it follows S d F∗ ∼
= U × S d (Kr )∗ = U ×
|U
K[x0 , . . . , xr−1 ]d , whose space of sections is again O(U )[x0 , . . . , xr−1 ]d (we need to give a
regular function for any monomial of degree d). This completes the proof.

Definition. A projective bundle is a map π̄ : P(F) → X as above. The variety P(F) is


also called the projectivization of the vector bundle F. It is important to warn that we are
following the notation in [F], but for many texts (for example in [H]), the projectivization
of a vector space means not the set of lines in the vector space, but of quotients of rank
one, or, in other words, the set of hyperplanes. Hence, the projectivization P(F) in [H]
means what we would write P(F∗ ).

Remark 3.13. We wrote UF for this new tautological line bundle (or OF (−1) for the
corresponding sheaf of sections) because it depends on F, and not only on P(F). Indeed,
for any line bundle L, it is clear that P(F ⊗ L) ∼ = P(F). However, UF⊗L is different.
Indeed, if fij are the transition functions of L (as usual, we can assume that F and L
have a common trivializing covering), then the transition matrices of F × L are fij Aij .
b 0 b 0
Therefore, the transition functions aj 0 we got in the proof of Theorem 3.12 become fij aj 0
i i
for UF⊗L . This proves UF⊗L ∼ = UF ⊗ π̄ ∗ L. For example, it is clear that P(X ⊗ K) = X,
with UX⊗K = X × K. Hence, for any line bundle L over X, we have P(L) = X with
UL = L.
With this notion of projectivization, we get that the projectivation of the universal
subbundle of G(k, n) is the wanted incidence variety:

Proposition 3.14. Let U be the tautological vector subbundle over G(k, n). Then P(U)
is naturally isomorphic to the incidence variety X = {(Λ, x) ∈ G(k, n) × PnK | x ∈ Λ}.
Moreover, if q : X → PnK denotes the second projection, then OU (d) ∼
= q ∗ (OPn (d)).

Proof: The first part follows at once from the very definition. Indeed, we defined U as
the subbundle of the trivial bundle G(k, n) × Kn+1 consisting of pairs (Λ, v) such that the

27
vector v is in the (k + 1)-subdimensional space of Kn+1 corresponding to Λ. Hence the
projectivization of this fact yields the wanted incidence variety. Recall also that the bundle
structure of π : U → G(k, n) was given by the restriction of the first projection.
In the case k = 0, the universal subbundle of PnK , which we showed to be L−1 , is
now the incidence variety in PnK × Kn+1 of pairs (x, v) such that the vector v is in the line
determined by x, and π. Hence, by definition, q ∗ L−1 ⊂ P(U) × L−1 is the set of pairs
(Λ, x), (x0 , v)) such that x = x0 .


On the other hand, recall that UF was defined as the subbundle of π ∗ U ⊂ P(U) × U
of pairs (Λ, x), (Λ0 , v) such that Λ = Λ0 .


It is thus clear that UF is isomorphic to q ∗ L−1 , i.e. the sheaves OU (−1) and
q ∗ (OPn (−1)) are isomorphic. Dualizing and taking d-th symmetric powers we conclude
the proof.

Exercise 3.15. Prove that the projection q : P(U) → Pnk is locally trivial with fiber
G(k−1, n−1) [Hint: For any D(xi ), check that the map q −1 (D(xi ) → D(xi )×G(k−1, n−1)
mapping each (x, Λ) to (x, Λ∩{xi = 0} is an isomorphism, where G(k−1, n−1) is identified
with the Grassmannian of (k − 1)-dimensional subspaces in {xi = 0}].

Corollary 3.16. Γ(G(k, n), S d U∗ ) is canonically isomorphic to K[x0 , . . . , xn ]d , the space


of homogeneous polynomials of degree d in K[x0 , . . . , xn ].

Proof: Following the notation of Proposition, 3.14, we have, using Theorem 3.12 for the
first equality

Γ(G(k, n), S d U∗ ) = (π̄∗ OU (d))(G(k, n)) = (OU (d))(P(U)) = Γ(P(U), q ∗ Ld ) = Γ(PnK , Ld )

where the last equality comes from Remark 3.2 (and Example 3.8 to use that the only
regular functions on a Grassmannian are the constant functions) using Exercise 3.15. Now
the result follows from Example 3.8.

28
4. Intersection theory and Chern classes
In this section we will try to show the main techniques in Intersection Theory, intro-
ducing in particular the notion of Chern classes. A good survey of all this can be found
in Appendix A of [H], while the technical details are in [F]. The ambient space will be a
smooth irreducible affine variety X. To illustrate how all this works we will start by inter-
secting with hypersurfaces, since for hypersurfaces can imitate the behaviour of Example
2.1 and get any hypersurface as the zero locus of a line bundle.

Example 4.1. It is a fact in Algebraic Geometry that, in a smooth algebraic variety


X, any (irreducible) subvariety Y of codimension one is locally defined by one equation.
S
This means that, given such Y , there is an open covering X = i Ui such that, for each
i, the intersection Y ∩ Ui is the zero locus of a regular function fi ∈ OX (Ui ). It is then
f
possible to prove that, given i, j, the quotient fji is a regular function in Ui ∩ Uj not having
f
zeros. Hence we get a line bundle LY with transition functions fji having a section whose
zero locus is Y (we leave as an exercise to prove that the choice of different local functions
fi produces a line bundle isomorphic to LY . This can give an idea of how to define the
intersection in X of Y with any (irreducible) subvariety Z ⊂ X: restrict the line bundle
LY to Z and define the intersection of Y and Z as the zero locus of a nonzero section of
LY |Z . We find several problems, which we will need to deal with:
1) It is not necessarily true that LY |Z has actually some nonzero section.
2) In case LY |Z has a nonzero section, its zero locus has the expected dimension (the
dimension of Z minus one), but it is not necessarily irreducible: it can have several com-
ponents (fortunately all of them of the same expected dimension), with some multiplicity.
It is thus reasonable to deal not with irreducible varieties, but with cycles of irreducible
varieties, i.e. linear combinations (with integral coefficients) of irreducible subvarieties.
3) Again in the case of LY |Z having nonzero sections, different sections will have
different zero loci, so we need to deal with some equivalence relation (spoiler: in the case
of hypersurfaces, the reasonable equivalence relation will be to correspond to come from
the same line bundle).
Before dealing with these three points, we illustrate this with an example.

Example 4.2. Let consider the set

B := { (x0 : x1 : x2 ), (u0 : u1 ) ∈ P2K × P1K | u0 x0 + u1 x1 = 0}




and let p1 , p2 be the two projections. Observe that the fibers of p1 are all of them just one
point, except for the point (0 : 0 : 1), in which

E := p−1 1
1 (0 : 0 : 1) = {(0 : 0 : 1)} × PK .

29
In fact, P1K can be naturally with the pencil of lines passing through (0 : 0 : 1) and the
map p1 : B → P2K (called the blow-up of P2K at the point (0 : 0 : 1)) consists of replacing the
point (0 : 0 : 1) with the so-called exceptional divisor E, identified with the set of directions
at the point (0 : 0 : 1). Let us try to compute the intersection of E with itself with the
technique we described in Example 4.1. We first compute explicitly the line bundle LB
over B corresponding to the hypersurface E ⊂ B. The basic open sets we can use are
xi uj 6= 0. Since we will restrict LB to be, we will only need the open sets x2 ui 6= 0.
Observe then that E has local equations
x1
in the open set x2 u0 6= 0
x2

and
x0
in the open set x2 u1 6= 0
x2
so that the transition function from the first open set to the second one is xx01 = − uu01 . Hence,
the restriction of L to E (identified with the projective line of homogeneous coordinates
u0 , u1 ) has transition function − uu10 from the open set u0 6= 0 to u1 6= 0. Therefore
LE|E = OP1 (−1). We thus find the first problem mentioned in Example 4.1. However,
allowing cycles with negative coefficients, LE|E corresponds to minus the class of one
point, and we get equivalent cycles independently of the chosen point. As a conclusion,
the self-intersection of E in B is minus the class of a point.

Definition. Given a smooth irreducible algebraic variety X of dimension n, a cycle of


codimension r is a linear combination n1 Y1 +. . .+ns Ys of irreducible varieties of dimension
n − r, with all ni ∈ Z (if r = 1, the cycle is called a divisor). The group of all cycles of
codimension r is denoted by Z r (X).

Remark 4.3. Since to any irreducible hypersurface Y we can associate a line bundle,
given any divisor D = n1 Y1 + . . . + ns Ys ∈ A1 (X), we can associate to it the line bundle
LD = L⊗n ns
Y1 ⊗ . . . ⊗ LYs (if each Yk is described locally in Ui by fik , then LD is the line
1

n
Πk fjkk
bundle with transition functions n
Πk fikk
. This gives a homomorphism

Div(X) → P ic(X)
D 7→ LD

where Div(X) is the group of divisors of X and P ic(X) is the so-called Picard group of X,
i.e. the group of isomorphism classes of line bundles with the group law given the tensor
product (this is indeed a homomorphism since the transition functions of a tensor product
of line bundles are the product of the transition functions of the factors). It is a much
deeper result that the map is surjective, i.e that any line bundle L is isomorphic to some

30
LD . The reason is that, if X is in some sense complete (the right notion is proper, but
you can interpret it as being a projective variety), then there is a sufficiently “big” line
bundle L0 such that L ⊗ L0 has nonzero sections (in the projective case, L0 would be the
restriction of Ld for a sufficiently big d). Then the zero locus of a section of L ⊗ L0 is a
divisor D, so that L ⊗ L0 ∼ = LD . But the fact that L0 is “big” implies that it also has a
nonzero section, with zero locus a divisor D0 . Hence we can write L0 ∼= LD0 and

L = LD ⊗ L−1
D 0 = LD−D 0 .

At this point, if we want classes of divisors to correspond to isomorphism classes of line


bundles, we need to define an equivalence relation (called linear equivalence) in which two
divisors D, D0 are linearly equivalent if LD ∼ = LD0 . We thus need to understand when
a divisor D is equivalent to zero, i.e. when it defines the trivial line bundle. Writing
D = D0 − D00 (the positive part plus the negative part), if X = i Ui is a trivializing
S

covering for LD , on each Ui the divisor D0 will be defined by regular functions fi0 , while
f0
j
f 00
D00 will be defined by regular functions fi00 . Hence the transition functions of LD are j
f0
,
i
f 00
i
and the fact that LD is trivial implies the existence of regular functions fi ∈ OX (Ui )
without zeros such that, on each Ui ∩ Uj we have

fj0
fj00
fj = fi0
fi
fi00

fi fi00
so that on each Ui we can define a (rational) function fi0 that extends to the whole X.
The point is that now we can define

W := { x, (t0 : t1 ) ∈ X × P1K | t0 fi (x)fi00 (x) + t1 fi0 (x) = 0}




(well-defined independently to which Ui contains the point x). From this definition it is
clear that, if p : W → P1K is the second projection, we have that p−1 (0 : 1)−p−1 (1 : 0) = D.

Definition. Given a smooth irreducible variety X the Chern class of a line bundle L over
X, denoted by c1 (L) is the class of any divisor D such that L = LD . If L is the sheaf of
section of L, we will also define its Chern class c1 (L) := c1 (L). In other words, we have an
isomorphism c1 : P ic(X) → A1 (X), where A1 (X) is the quotient of Div(X) by the linear
equivalence of divisors.
In general codimension we try now to imitate as much as possible the situation in
codimension one. Unfortunately, the situation is far from being a direct generalization

31
of what we have seen: it is not true that any subvariety of codimension r (even if it is
smooth) is the zero locus of a section of a vector bundle of rank r. So we cannot establish
a bijection between the set of vector bundles of rank r over an algebraic varieties and the
set of some equivalence classes of cycles of codimension r. We start by giving the right
notion of equivalence classes of cycles.

Definition. Given a smooth irreducible algebraic variety X of dimension n, for any sub-
variety W ⊂ X × P1K of dimension n − r + 1, such that the image of the second projection
p : W → P1K contains an open set, we can associate to any t ∈ P1K its fiber p−1 (t). With
the natural identification of X × {t} with X and a suitable definition of multiplicity of the
components of the fiber, we can consider p−1 (t) as a cycle in Z r (X). We define Ratr (X)
to be the subgroup of Z r (X) generated by cycles of the form p−1 (t1 ) − p−1 (t2 ) for all
possible subvarieties W ⊂ X × P1K and all possible t1 , t2 ∈ P1K (a cycle in Ratr (X) is said
to be rationally equivalent to zero. Finally, define Ar (X) := Z r (X)/Ratr (X).

Example 4.4. The easiest example is when X = PnK . For A0 (PnK ) there is nothing to
say (it always happens that A0 (X) is generated by the class of X, and the multiplication
by that class is the identity). If we pass to the other extreme, also An (PnK ) is easy in this
case. Indeed, given two points (a0 : . . . : an ), (b0 : . . . : bn ) ∈ PnK , we consider
 
 x0 ... xn
W := { (x0 : . . . : xn ), (t0 : t1 ) | rk = 1}
a0 t0 + b0 t1 ... an t0 + bn t1

(this is nothing but the graph of a parametrization of the line passing through the two
points). Then the fiber of (1 : 0) under the second projection is (a0 : . . . : an ), while the
fiber of (0 : 1) is (b0 : . . . : bn ). Hence the classes of two points are always the same, so
An (PnK ) is generated by the class of one point (observe that we could arrive to this because
we could parametrize the line through two points, but for a general X there could be no
rational curve through two points). It is also easy to study A1 (PnK ) knowing that any
irreducible subvariety of PnK of dimension n − 1 is always defined by a homogeneous poly-
nomial. Hence, if we have two subvarieties Y0 and Y1 defined respectively by homogeneous
polynomials F0 , F1 ∈ K[x0 , . . . , xn ] of the same degree, we can now consider

W := { (x0 : . . . : xn ), (t0 : t1 ) | t0 F0 (x0 , . . . , xn ) + t1 F1 (x0 , . . . , xn ) = 0}

(observe that we absolutely need F0 and F1 to have the same degree if we want W to be
well-defined). Taking again the fibers of (1 : 0) and (0 : 1) under the second projection, we
get now that the classes of Y0 and Y1 are the same. It can be proved. in general, it can be
proved that a cycle n1 Z1 + . . . + nm Zm of codimension r is rationally equivalent to zero
if and only if n1 deg(Z1 ) + . . . + nm deg(Zm ) = 0. Hence any Ar (PnK ) (with r = 0, . . . , n is

32
isomorphic to Z via the degree map. For example, A1 (PnK ) is generated by the class h of
hyperplane. In this case, it is easy to give to A(PnK ) := r Ar (PnK ) a (graded) ring structure.
L

Since r general hyperplanes meet transversally in a linear subspace of codimension r, we get


that Ar (PnK ) is generated by the class of hr . Hence, as a graded ring, A(PnK ) is isomorphic
to Z[t]/(tn+1 ), in which we identify h with the class of t. This is nothing but encoding
Bézout’s Theorem. For example, when n = 2, the only interesting product is

A1 (P2K ) × A1 (P2K ) → A2 (P2K )

and we know that, if a curve of degree d1 (whose class is d1 h) and another curve of degree
d2 meet in a finite number of points, then the number of points of intersection, counted
with multiplicity, is d1 d2 . This coincides with the fact that the product of d1 h and d2 h is
d1 d2 h2 .

Remark 4.5. The notion of rational equivalence essentially means that we can deform
one cycle into another through a rational curve. This is why things worked in the above
example. If we replace in the definition P1K with an arbitrary algebraic curve, we get was it
called algebraic equivalence, and we again have a multiplication of classes. When K = C, a
smooth complex algebraic variety of dimension n is a topological variety of dimension 2n,
and any cycle in Z r (X) defines a cycle in H 2r (X, Z), and we can also define homological
equivalence and use the intersection product we have in that context. It can be proved that
cycles algebraically equivalent to zero are homologically equivalent to zero. The interested
reader can be find more details about the different equivalences of cycles in §19.3 of [F].

Remark 4.6. Given a morphism ϕ : X → Y , we have the notion of pull-back of vector


bundles, in particular we can define a homomorphism of groups ϕ∗ : P ic(Y ) → P ic(Y ).
Hence, composing with the isomorphisms c1 , we have a homomorphism of cycles (which
we will denote with the same symbol)

ϕ∗ : A1 (Y ) → A1 (X)

which, by definition, satisfies ϕ∗ (c1 L) = c1 (ϕ∗ L). This is not defined in the natural way
mapping the class of a divisor n1 Z1 +. . .+ns Zs to the class of n1 ϕ−1 (Z1 )+. . .+ns ϕ−1 (Zs )
(since each Zj is locally defined by one equation, one expects each ϕ−1 (Zj ) to be also
defined by one equation), because it is not true that any ϕ−1 (Zj ) is a hypersurface (in
fact, the only other possibility is that ϕ−1 (Zj ) is the whole X, i.e. that Zj is contained
in the image of ϕ). The fact that ϕ∗ is well-defined means that we can move any divisor
in its equivalence class such that each ϕ−1 (Zj ) has codimension one in X (maybe it is not
irreducible, and even some component could appear with multiplicity, so that each ϕ1 ((Zj )

33
could be a divisor by itself). The point now is that this situation can be repeated for any
codimension, in the sense that we can define for any r ∈ N a homomorphism

ϕ∗ : Ar (Y ) → Ar (X)

by choosing good representatives of the cycles. This map is also called pull-back.

Remark 4.7. When dealing with sheaves, there is also a push-forward operation, so we
can try to define one for classes of cycles. In fact, it exists, but it does not preserve codi-
mension, but dimension. Hence, if we have a morphism ϕ : X → Y from algebraic varieties
of respective dimensions n and m, there is a well-defined push-forward homomorphism

ϕ∗ : An−r (X) → Am−r (Y )

such that the class of a subvariety Z ⊂ X of dimension r to the class of the sum of the
r-dimensional components of ϕ(Z), each of them counted with multiplicity equal to the
degree of the map Z → ϕ(Z). For example, for the blow-up map p1 : B → P2K of Example
4.2, the map p1 ∗ : A1 (B) → A1 (P2K ) will map the class of E to zero. Observe that, when
we have an inclusion i : Y ,→ X of a subvariety, the map i∗ is nothing but sending the class
of a cycle in Y to the class of the same cycle, but now regarded as a cycle in X. This map
is not necessarily injective (for example, if i : C → PnK is the embedding of a non-rational
curve, the class of two different points in C is always different, but their image by i∗ is
necessarily the same, as we saw in Example 4.4)

Remark 4.8. We cheated if the reader got the impression that the pull-back construc-
tion can be done with not that much difficulty. In fact, the key part of constructing an
intersection product is hidden inside the pull-back construction. Indeed, once we know
the pull-back exists, the pull-back of the inclusion i : Y ,→ X of an irreducible subvariety
is nothing but computing the intersection with Y of cycles of X. In other words, the
multiplication map with a subvariety Y of codimension s is nothing but the composition
i∗ i
A( X)−→Ar (Y )−→A
∗ r+s
(X)

so that we can define a product

Ar (X) × As (X) → Ar+s (X)


L r
making A(X) := r A (X) a graded ring (called the Chow ring of X). This product
satisfies that, if Y, Y ⊂ X are irreducible subvarieties of respective codimension r and r0
0

whose intersection Y ∩ Y 0 consists of (n − r − s)-dimensional components Z1 , . . . , Zm with


respective multiplicities n1 , . . . , nm , then the intersection product of the classes of Y and

34
Y 0 is the class of n1 Z1 + . . . + nm Zm . Observe that pull-backs behave well with products,
in the sense that, given a morphism ϕ : X → Y , the pull-back map ϕ∗ A(Y ) → A(X) is a
homomorphism of (graded) rings.

Remark 4.9. We will also give by granted several properties of the pullback and push-
forward of cycles. Everything is based on a first main result: Assume we have a Cartesian
square p2
X ×Y P −→ P 
p1 π
y y
ϕ
X −→ Y
P
if we have (the class of) a cycle α = i ni Zi on P , its pushforward by π is π∗ α =
P
i ni π(Zi ) where the sum is now restricted to the subvarieties Zi such that π(Zi ) has the
same dimension. On the other hand, if the cycle is well-chosen such that the inverse image
by p2 of each Zi has the same codimension in X ×Y P as the one of Zi in P , then the
pullback of α is p∗2 α = i ni p−1
P
2 (Zi ). Since (see Remark 1.10) the maps π and p1 have
the same fibers, and the same happens for ϕ and p2 , it is natural to think that

ϕ∗ π∗ α = p1 ∗ p∗2 α.

This is in fact true if ϕ and π have good properties. For example, there is no problem when
ϕ and π are projective bundles or, more generally, locally trivial maps (and we will apply
this result mainly in this situation), since all the fibers are the same. There is however
another main case. Consider that the Cartesian square is now
i
ϕ−1(Z) −→ X

ϕ −1 ϕ
y |ϕ (Z) y
j
Z −→ Y

where i, j are the inclusion of the corresponding subvarieties. Then, for any cycle α on X,
we have
j ∗ ϕ∗ α = ϕ|ϕ−1 (Z) ∗ i∗ α


so that, composing with j∗ in both sides and using the commutativity of the diagram, we
get
[Z]ϕ∗ α = j∗ j ∗ ϕ∗ α = j∗ ϕ|ϕ−1 (Z) ∗ i∗ α = ϕ∗ i∗ i∗ α = ϕ∗ ([ϕ−1 (Z)]α)


Taking linear combinations of subvarieties Z, we get the relation

ϕ∗ (α · ϕ∗ β) = ϕ∗ (α) · β

for α ∈ A(X) and β ∈ A(Y ), called projection formula, and valid for good morphisms (for
example, locally trivial maps). We will give all this for granted.

35
In fact, the actual construction of an intersection theory is much slower, starting with
the intersection by divisors, i.e. Chern classes of line bundles, as we already did (in this new
language, the intersection product of a subvariety i : Y ,→ X with a divisor corresponding

to a line bundle L is defined as [Y ] · c1 (L) = i∗ c1 (L|Y ) ) and continuing by defining the
intersection with Chern classes of vector bundles, as we will define next.

Lemma 4.10. Let F be a vector bundle of rank r over X and let π̄ : P(F) → X be the
corresponding projective bundle. Then, π̄∗ (c1 (OF (1)r−1 π̄ ∗ α) = α for any α ∈ A(X). In
particular, π̄ ∗ : A(X) → A(P(F)) is a monomorphism of rings.

Proof: Since c1 (OF (1)) is the hyperplane divisor in the fibers of π̄ (which are projective
spaces of dimension r − 1), the class of c1 (OF (1))r−1 in A(P(F)) is the class of a subvariety
that maps generically 1:1 to X. This implies π̄∗ (c1 (OF (1))r−1 ) = [X]. Since this is the
unity in A(X), the result follows now from the projection formula.

Example 4.11. We can wonder now if we take higher powers of c1 (OF (1)). We have
π̄∗ (c1 (OF (1)r−1+i ) ∈ Ai (X). For example (see Remark 3.13), if F is a line bundle L, we
have that π̄ is the identity and c1 (OL (1)) = c1 (L∗ ) = −c1 (L). Hence π̄∗ (c1 (OF (1)r−1+i ) =
(−1)i c1 (L)i . Observe that we thus get the coefficients of the inverse series of 1 + c1 (L)t ∈
A(X)[t] (since the coefficients ti for i > n are zero, the series is in fact a polynomial).
Hence we can recover c1 (L) from the series of values of π̄∗ (c1 (OF (1)r−1+i ). This suggests
the following:

Definition. Let F be a vector bundle of rank r over X, and let π̄ : P(F) → X be


the corresponding projective bundle. The i-th Segre class of F is defined as si (F) =
π̄∗ (c1 (OF (1))r−1+i ) ∈ Ai (X). We call Segre polynomial of F to s(F) = s0 (F ) + s1 (F )t +
s2 (F )t2 + . . . (observe that s0 (F) = [X], and we will write that class as 1). We define
the Chern polynomial of F to be the formal inverse of s(F) as a formal series. If we write
c(F) = c0 (F) + c1 (F)t + c2 (F)t2 + . . . (again c0 (F) = 1), then ci (F) ∈ Ai (X) is called the
i-th Chern class of F. Observe that we immediately get

c1 (F) = −s1 (F),

c2 (F) = s1 (F)2 − s2 (F),

and so on.

Example 4.12. Consider on G(k, n) the vector bundle U of rank r = k + 1. Recall


that π̄ : P(U) → G(k, n) is the first projection of the incidence variety of G(k, n) × PnK ,
and c1 (OU (1)) is the pullback by the second projection of the hyperplane class of OPn (1).

36
Thus, for any i, the product c1 (OU (1)r−1+i is represented by a set of pairs (Λ, x) such
that x is contained in a fixed linear subspace Λ0 of codimension r − 1 + i = k + i. Hence
si (U) = 0 if i > n − k, and if 0 ≤ i ≤ n − k, then si (U) is represented by the class of

Ω(Λ0 ) = {Λ ∈ G(k, n) | Λ ∩ Λ0 6= ∅}

where Λ0 ⊂ PnK of dimension n − k − i. As already checked in Proposition 2.14, this


has codimension i in G(k, n). On one extreme, sn−k (U) is the class of the set of linear
subspaces passing through one point; in particular, one gets that sn−k (U)k+1 is the class
of one point of G(k, n). On the other extreme, s1 (U) is the class of

Ω(A) = {Λ ∈ G(k, n) | Λ ∩ A 6= ∅}

where A ⊂ PnK is a linear subspace of dimension n − k − 1. If we want now to compute


s1 (U)2 , we need to intersect Ω(A) with another Ω(A0 ), where A0 ⊂ PnK is another linear
subspace of dimension n − k − 1. We can try to take A, A0 in a special position, namely
spanning a linear space B of dimension n − k, or, equivalently, meeting in a linear space
C of dimension n − k − 2. It is then clear that

Ω(A) ∩ Ω(A0 ) = {Λ ∈ G(k, n) | dim(Λ ∩ B) ≥ 1} ∪ {Λ ∈ G(k, n) | Λ ∩ C 6= ∅}

and both sets have the expected codimension two, according to Proposition 2.14. Since the
class of {Λ ∈ G(k, n) | Λ ∩ C 6= ∅} is s2 (U), it follows that c2 (U) (which is s1 (U)2 − s2 (U))
is the class of {Λ ∈ G(k, n) | dim(Λ ∩ B) ≥ 1}. In an alternative way, we could have also
dealt with this intersection using the projection formula, which in this case reads

s1 (U)2 = π̄∗ (c1 (OU (1))k+1 ) [Ω(A0 )] = π̄∗ c1 (OU (1))k+1 )π̄ ∗ [Ω(A0 )]
 

so that we need to decompose the intersection in the incidence variety of P(U) ⊂ G(k, n) ×
PnK
{(Λ, x) ∈ P(U) | x ∈ A} ∩ {(Λ, x) ∈ P(U) | Λ ∩ A0 6= ∅} =
= {(Λ, x) ∈ P(U) | dim(Λ ∩ B) ≥ 1, x ∈ A} ∪ {(Λ, x) ∈ P(U) | x ∈ C}
whose image in G(k, n) is (generically 1:1) again {Λ ∈ G(k, n) | dim(Λ ∩ B) ≥ 1} ∪ {Λ ∈
G(k, n) | Λ ∩ C 6= ∅}. In fact, if you want to build intersection theory from the very
beginning, what you should do is to take this as the definition of product: first you start
(as we did) defining the intersection with one divisor, and from this not only constructing
Segre classes, but also defining the product by them as

si (F)α = π̄∗ (c1 (OF (1))r−1+i π̄ ∗ α) ∈ Ai+j (X).

for any α ∈ Aj (X). Since Chern classes are defined through Segre classes, this also defines
the multiplication by Chern classes.

37
Lemma 4.13. Let ϕ : X → Y be a regular map and let F be a vector bundle of rank r
over Y . Then si (ϕ∗ F) = ϕ∗ (si (F)) for any i = 1, . . . , r. As a consequence, also ci (ϕ∗ F) =
ϕ∗ (ci (F)) for any i = 1, . . . , r.

Proof: By definition, si (ϕ∗ F) = π̄∗0 (c1 (Oϕ∗ F (1))r−1+i ), where π̄ 0 : P(ϕ∗ F) → X is the
projection. We have a Cartesian square
ϕ0
P(ϕ∗ F) −→ P(F)

 0 
yπ̄ yπ̄
ϕ
X −→ Y

where π̄ 0 and ϕ0 are the projections, and c1 (Oϕ∗ F (1)) = ϕ0 (c1 (OF (1))) (prove this as an
exercise). We thus have (see Remark 4.9)


si (ϕ∗ F) = π̄∗0 (c1 (Oϕ∗ F (1))r−1+i ) = π̄∗0 ϕ0 (c1 (OF (1))) = ϕ∗ π̄∗ (c1 (OF (1))) = ϕ∗ (si (F))

as wanted.

38
5. The splitting principle and applications
In this section we will show how to make computations with Chern classes in the
Chow ring of a variety. The main tool will be the so-called splitting principle. We will
illustrate the technique with many concrete examples and applications.

Lemma 5.1. Let F be a vector bundle of rank r over an algebraic variety and line bundles
L1 , . . . , Lr . Then the following are equivalent:
(i) There exists a sequence of epimorphisms F = Fr  Fr−1  . . .  F1  F0 = 0 such
that the kernel of each Fi  Fi−1 is isomorphic to Li .
(ii) There exists a filtration 0 = F0 ⊂ F1 ⊂ . . . ⊂ Fr = F such that Fi /Fi−1 ∼
= Lr−i+1 .

Proof: Assume we have a sequence like in (i). For each i, let F0i be the kernel of the
composed surjection F → Fr−i . Then, for each i we have a commutative diagram

0 → F0i−1 → F → Fr−i+1 → 0
↓ || ↓
0 → F0i → F → Fr−i → 0

and the snake lemma proves that F0i /F0i−1 ∼


= Lr−i+1 .
To prove that (ii) implies (i), it suffices to apply to F∗ the implication we just proved
and then dualize.

Lemma 5.2. Let F be a vector bundle in the conditions of Lemma 5.1. If F has a global
section without zeros, then c1 (L1 ) . . . c1 (Lr ) = 0.

Proof: We take a filtration 0 = F0 ⊂ F1 ⊂ . . . ⊂ Fr = F such that Fi /Fi−1 ∼


= Lr−i+1 and
let s : X → F be a section with no zeros. We prove the result by induction on the rank r,
the result being clear for r = 1. If r > 1, consider the composition

s0 : X → F → F/Fr−1 ∼
= L1 .

We distinguish three cases:


–If s0 has no zeros, this means that L1 is the trivial bundle, so that c1 (L1 ) = 0 and
the result is clear.
–If the composition is identically zero, then s comes from a section of Fr−1 , which
necessarily has no zeros. By induction hypothesis, c1 (L2 ) . . . c1 (Lr ) = 0, and the result
follows also in this case.
–Finally, if s0 is not identically zero but has some zeros, it vanishes at a hypersurface
X 0 ⊂ X. Restricting to X 0 we have that the section s|X 0 : X 0 → F|X 0 necessarily comes

39
from a section of Fr−1|X 0 . By construction, this last section does not vanish, and hence,
by induction hypothesis, c1 (L2|X 0 ) . . . c1 (Lr|X 0 ) = 0. In other words, if i : X 0 → X is the
inclusion, we are saying

0 = c1 (i∗ L2 ) . . . c1 (i∗ Lr ) = i∗ c1 (L2 ) . . . c1 (Lr )




As a consequence, we get (see Remark 4.8)

0 = i∗ i∗ c1 (L2 ) . . . c2 (Lr ) = [X 0 ] · c1 (L2 ) . . . c2 (Lr ) = c1 (L1 ) · c1 (L2 ) . . . c1 (Lr )


  

concluding the proof.

Theorem 5.3. Let F be a vector bundle in the conditions of Lemma 5.1. Then c(F) =
 
1 + c1 (L1 )t . . . 1 + c1 (Lr )t .

Proof: Write σ1 , . . . , σr for the elementary symmetric functions in c1 (L1 ), . . . , c1 (Lr ). With
this notation, the statement is equivalent to the identity

(1 + σ1 t + . . . + σr tr )(1 + s1 (F)t + s2 (F)t2 + . . .) = 1

i.e. for each i > 0


si (F) + si−1 (F)σ1 + . . . + si−r (F)σr = 0.

By definition of Segre classes, this is equivalent to prove

π̄∗ ξ i+r−1 + π̄∗ (ξ i+r−2 )σ1 + . . . + π̄∗ (ξ i−1 )σr = 0 (∗)

where ξ = c1 OF (1) and π̄ : P(F) → X is the projection. Since there is an inclusion of


bundles UF ⊂ π̄ ∗ F, it follows that π̄ ∗ F ⊗ U∗F has a section with no zeros. By Lemma 5.2,
we have

0 = c1 (π̄ ∗ L1 ⊗ U∗F ) . . . c1 (π̄ ∗ Lr ⊗ U∗F ) = (π̄ ∗ c1 (L1 ) + ξ) . . . (π̄ ∗ c1 (Lr ) + ξ)

or, equivalently,
ξ r + ξ r−1 π̄ ∗ σ1 + . . . + ξπ̄ ∗ σr−1 + π̄ ∗ σr = 0.

For each i > 0 if we multiply the above identity by ξ i−1 , take direct image by π̄ and apply
the projection formula, we get (*).

Remark 5.4. We can always assume that we are in the situation of Theorem 5.3 (this
is what is called the splitting principle). Indeed, given a line bundle F of rank r, we can
always consider the projectivization P(F), and we thus know that the pull-back induces a

40
monomorphism A(X) ,→ A(P(F)). Moreover, there is an inclusion of bundles UF ⊂ π̄ ∗ F
and hence its quotient is a vector bundle of rank r − 1. Iterating the process, we get a
morphism ϕ : X 0 → X such that ϕ∗ : A(X) → A(X 0 ) is a monomorphism and such that
ϕ∗ F has a chain of epimorphisms as in Lemma 5.1, so that c(ϕ∗ F) splits as a product of
linear factors. Let us see how to use it in a few examples:
1) If ϕ∗ F has a sequence of epimorphisms as in Lemma 5.1, with kernels L1 , . . . , Lr ,
then ϕ∗ F∗ has a filtration as in Lemma 5.1, with cokernels L∗1 , . . . , L∗r . Hence, we get

1 + ϕ∗ (c1 (F))t + . . . + ϕ∗ (cr (F))tr = 1 + c1 (L1 )t . . . 1 + cr (Lr )t


 

and
1 + ϕ∗ (c1 (F∗ ))t + . . . + ϕ∗ (cr (F∗ ))tr = 1 + c1 (L∗1 )t . . . 1 + c1 (L∗r )t =
 

 
= 1 − c1 (L1 )t . . . 1 − c1 (Lr )t .

Comparing coefficients, we get that ϕ∗ (ci (F∗ )) = −ϕ∗ (ci (F)) if i is odd and ϕ∗ (ci (F∗ )) =
ϕ∗ (ci (F)) if i is even. Since ϕ∗ is injective, we get that ci (F∗ ) = −ci (F) if i is odd and
ci (F∗ ) = ci (F) if i is even. For example (see Example 4.12), on G(k, n) we get that c1 (U∗ )
is the class of Ω(A) = {Λ ∈ G(k, n) | Λ ∩ A 6= ∅} for a linear space A ⊂ PnK of dimension
n − k − 1, while c2 (U∗ ) is the class of {Λ ∈ G(k, n) | dim(Λ ∩ B) ≥ 1}, where B is a linear
space of dimension n − k.
2) Assume ϕ∗ F has a sequence of epimorphisms as in Lemma 5.1, with kernels
L1 , . . . , Lr . Then ϕ∗ (F⊗L) has a sequence of epimorphisms with kernels L1 ⊗ϕ∗ L, . . . , Lr ⊗
ϕ∗ L. This implies that the Chern polynomial of ϕ∗ (F ⊗ L) is

c(ϕ∗ (F ⊗ L)) = 1 + (c1 (L1 ) + c1 (ϕ∗ L))t . . . 1 + (c1 (Lr ) + c1 (ϕ∗ L))t .
 

In particular,

ϕ∗ c1 (F ⊗ L) = c1 (L1 ) + . . . + c1 (Lr ) + rϕ∗ L = ϕ∗ (c1 (F) + rc1 (L))




so that
c1 (F ⊗ L) = c1 (F) + rc1 (L).

For the top Chern class, instead of giving a concrete formula in terms of the Chern classes
of F and L, we will obtain an expression that will be more useful later on. First of all,
I hope that the reader already understood that it is useless to carry out ϕ∗ through the
computations. What we are doing is to get an extension of A(X) in which the polynomial
c(F) splits. So, writing simply c(F) = (1 + α1 t) . . . (1 + αr t) (the symbols α1 , αr are
someteimes called Chern roots of F), what we are saying is that cr (F ⊗ L) = (α1 +
c1 (L)) . . . (αr + c1 (L)).

41
3) If F is a rank.two vector bundle, being in any of the conditions of Lemma 5.1 is
equivalent to have an exact sequence
0 → L → F → L0 → 0
where L, L0 are line bundles, so that c(F) = 1 + c1 (L)t 1 + ci (L0 )t and hence
 

c1 (F) = c1 (L) + c1 (L0 )


c2 (F) = c1 (L)c1 (L0 ).
If we want to compute, for instance, the Chern classes of S 3 F, the third symmetric power
of F, we first observe that we have a sequence of surjective maps
S 3 F  S 2 F ⊗ L0  F ⊗ L0 ⊗ L0  L0 ⊗ L0 ⊗ L0  0
with kernels
L ⊗ L ⊗ L, L ⊗ L ⊗ L0 , L ⊗ L0 ⊗ L0 , L0 ⊗ L0 ⊗ L0
so that
c(F) = 1 + 3c1 (L)t 1 + (2c1 (L) + c1 (L0 ))t 1 + (c1 (L) + 2c1 (L0 ))t 1 + 3c1 (L0 )t
   

from where we conclude


c1 (S 3 F) = 6c1 (L) + 6c1 (L0 ) = 6c1 (F)
c2 (S 3 F) = 11c1 (L)2 + 32c1 (L)c1 (L0 ) + 11c1 (L0 )2 = 11c1 (F)2 + 10c2 (F)
c3 (S 3 F) = 6c1 (L)3 + 48c1 (L)2 c1 (L0 ) + 48c1 (L)c1 (L0 )2 + 6c1 (L0 )3 = 6c1 (F)3 + 30c1 (F)c2 (F)
c4 (S 3 F) = 18c1 (L)3 c1 (L) + 45c1 (L)2 c1 (L)2 + 18c1 (L)c1 (L)3 = 18c1 (F)2 c2 (F) + 9c2 (F)2

Remark 5.5. Observe that Theorem 5.3 and the splitting principle imply, in particular,
ci (F) = 0 if i > r. Moreover, we have proved the identity
ξ r + π̄ ∗ c1 (F)ξ r−1 + . . . + π̄ ∗ cr−1 (F)ξ + π̄ ∗ cr (F) = 0.
In fact, that relation is the one of minimal degree for ξ with coefficients in π̄ ∗ A(X). To
see this, assume there is a relation
π̄ ∗ αi ξ i + π̄ ∗ αi−1 ξ i−1 + . . . + π̄ ∗ α1 ξ + π̄ ∗ α0 = 0
with i < r. Multiplying by ξ r−i−1 and taking q∗ we get αi = 0. An iteration process shows
then αi = αi−1 = . . . = α1 = α0 = 0. More generally, it can be proved that the Chow ring
of P(F) is
A(P(F)) = A(X)[ξ]/(ξ r + π̄ ∗ c1 (F)ξ r−1 + . . . + π̄ ∗ cr−1 (F)ξ + π̄ ∗ cr (F)).
In fact, some authors take as definition of Chern classes the coefficients of this minimal
relation.

42
Theorem 5.6. If 0 → F0 → F → F00 → 0 is an exact sequence of vector bundles, then
c(F) = c(F0 )c(F00 ).

Proof: By the splitting principle, we can assume that F0 , F00 have filtrations

0 = F00 ⊂ F01 ⊂ . . . ⊂ F0r0 = F0

with line bundles L0i = F0i /F0i−1 and

0 = F000 ⊂ F001 ⊂ . . . ⊂ F00r00 = F00

with line bundles L00i = F00i /F00i−1 . By Theorem 5.3, it is enough to check that F has a
filtration

0 = F00 ⊂ F01 ⊂ . . . ⊂ F0r0 = F0 = Fr0 ⊂ Fr0 +1 ⊂ . . . ⊂ Fr0 +r00 = F

with Fr0 +i /Fr0 +i−1 = L00i . For this, we consider the commutative diagram defining the
vector bundle Fr+r0 −1 .

0 0
↓ ↓
0 → F0 → Fr0 +r00 −1 → F00r00 −1 → 0
|| ↓ ↓
0 → F0 → F → F00 → 0
↓ ↓
L00r00 = L00r00
↓ ↓
0 0

We can iterate the above process. Specifically, we can now define Fr+r0 −2 through the
diagram
0 0
↓ ↓
0 → F0 → Fr0 +r00 −2 → F00r00 −2 → 0
|| ↓ ↓
0 → F0 → Fr0 +r00 −1 → F00r00 −1 → 0
↓ ↓
00 00
Lr00 −1 = Lr00 −1
↓ ↓
0 0

43
Proposition 5.7. Let F be a vector bundle of rank r over X, and assume that F has
a section s : X → F whose zero locus Z is empty or has codimension r in X. Then
cr (F) = [Z].

Proof: The case in which Z is empty is in Lemma 5.2, so that we will assume Z not to be
empty. By the splitting principle, we can assume that there is a filtration

0 = F0 ⊂ F1 ⊂ . . . ⊂ Fr = F

with line bundles Li = Fi /Fi−1 . Let us proceed as in Lemma 5.2, so that we make
induction on r, the case r = 1 being trivial. If r > 1, consider the composition
s
s0 : X −→F → F/Fr−1 = Lr .

The section s0 cannot be identically zero, since this would imply that s of comes from a
section of Fr−1 , so that its zero locus, being defined locally by only r − 1 equations, would
have codimension at most r − 1, contrary to our hypothesis. If X 0 is the zero locus of s0
(X 0 is not empty, since it contains Z), we will have that the restriction s|X 0 comes from
a section s̄0 of Fr−1|X 0 and the zero locus of s̄0 is Z, which has codimension r − 1 in X 0 .
By induction hypothesis, we have cr−1 (Fr−1|X 0 ) = [Z] as classes in A(X 0 ). Hence, writing
i : X 0 ,→ X for the inclusion, as a class in A(X), we have

[Z] = i∗ cr−1 (Fr−1|X 0 ) = i∗ i∗ cr−1 (F) = [X 0 ] · cr−1 (F) = c1 (Lr )cr−1 (Fr−1 ) = cr (F)
 

where the last equality comes from Theorem 5.6.

Example 5.8. We can find now a concrete application putting together all we have
seen so far. We have seen in Example 2.12 that giving the equation of a cubic surface
in P3K is the same as giving a section of S 3 U∗ over G(1, 3) (which is a vector bundle of
rank four), and its zero locus will be the set of lines contained in the corresponding cubic
surface of P3K . On the other hand, by Theorem 3.5, such zero locus will be of the expected
codimension four for a general cubic surface, i.e. there will be a finite number of lines
contained in a general cubic surface. Moreover, the class of this finite number of lines is
given, by Proposition 5.7, by c4 (S 3 U∗ ). We have seen in example 3) of Remark 5.4 that

c4 (S 3 U∗ ) = 18c1 (U∗ )2 c2 (U∗ ) + 9c2 (U∗ )2

and in example 1) that c1 (U∗ ) is the class of the set of lines meeting a given line, while
c2 (U∗ ) is the class of the set of lines contained in a given plane. It is thus clear that
c2 (U∗ ) is the class of lines contained in two different planes, i.e. the class of one element

44
of G(1, 3) (the intersection of the two planes). On the other hand, we already computed
c1 (U)2 (which is s1 (U)2 ) in Example 4.12, and it was the sum of the classes of the set of
lines contained in a plane (which is c2 (U∗ )) and of the set of lines passing through a given
point (which is s2 (U)). We thus have
c1 (U)2 c2 (U∗ ) = c2 (U∗ )2 + s2 (U)c2 (U∗ )
and the first summand we have seen to be the class of one element of G(1, 3), while
the second summand is zero, since there are no lines passing through one given point
a contained in a given plane if the point is chosen not to be in the plane. Summing
up, c4 (S 3 U∗ ) is the class of 27 elements of G(1, 3), i.e. there are 27 lines contained in a
general cubic hypersurface. The reader who needs to become familiar with these techniques
is invited to spend a few hours to imitate these computations to conclude that there are
2875 lines contained in a general quintic hypersurface in P4K .

Example 5.9. Theorem 5.6 allows to make computations as with the splitting principle,
but without a complete splitting in line bundles. For example, and following the notation
of Remark 5.4, let F, F0 two vector bundles for which ¸(F) = (1 + α1 t) . . . (1 + αr t) and
¸(F0 ) = (1 + α10 t) . . . (1 + αr0 0 t). Then, interpreting each αj0 as c1 (L0j ), where L01 , . . . , L0r0 are
the kernels of a sequence of epimorphisms for F0 , we get
c(F ⊗ F0 ) = c(F ⊗ L01 ) . . . c(F ⊗ L0r0 )
and the computations of Remark 5.4 show thus
crr0 (F ⊗ F0 ) = cr (F ⊗ L01 ) . . . cr (F ⊗ L0r0 ) = Πi,j (αi + αj0 ).

Lemma 5.10. Let F, F0 be vector bundles on X of respective ranks r, r0 . Then


ar0 ar0 +1 . . . ar0 +r−1
ar0 −1 ar0 . . . ar0 +r−2
crr0 (F∗ ⊗ F0 ) = .. .. .. ..
. . . .
ar0 −r+1 ar0 −r+2 . . . ar 0
where ai is the coefficient of ti in the expansion of the series s(F)c(F0 ).

Proof: By Example 5.9, if we write c(F) = Πi (1 + αi t) and c(F0 ) = Πj (1 + αj0 t), we have
c(F∗ ⊗ F0 ) = Πi,j (1 + (αj0 − αi )t). Hence we have
1 c1 ... cr 0 ... 0
0 1 ... cr−1 cr ... 0
.. .. .. ..
. . . .
crr0 (F∗ ⊗ F0 ) = Πi,j (αj0 − αi ) = 0 ... 0 1 c1 ... cr
1 c01 ... c0r0 −1 c0r0 ... 0
.. .. ..
. . .
0 ... 1 c01 ... c0r0 −1 c0r0

45
(the last inequality is typical from resultant theory: consider both sides as polynomials in
the variables α1 , . . . , αr , α10 , . . . , αr0 0 , observe that the left side divides to the right sides,
r r
and that both sides have the same degree and the same coefficient for α10 . . . αr0 0 ). The
trick now is to multiply the above determinant by
1 s1 s2 . . . sr0 +r−1
0 1 s1 . . . sr0 +r−2
1 = 0 0 1 . . . sr0 +r−3
.. ..
. .
0 0 0 ... 1
so that we get
1 0 ... 0 0 ... 0
0 1 ... 0 0 ... 0
.. .. ..
. . .
crr0 (F∗ ⊗ F0 ) = 0 0 ... 1 0 ... 0
1 a1 . . . ar0 −1 ar0 ... ar0 +r−1
.. .. .. ..
. . . .
1 a1 ... ar0 −1 ar0
from where the result follows.

Example 5.11. We can now interpret Example 4.12 in the language of Chern classes. In
fact, if Q is the cokernel of the inclusion U ⊂ G(k, n) × Kn+1 we have from Theorem 5.6,
the equality
c(Q)c(U) = c(G(k, n) × Kn+1 ) = c(G(k, n) × K)n+1 = 1.
This implies ci (Q) = si (U) for all i, and we checked in Example 4.12 that these Segre
classes are represented by a set of subspaces meeting a linear space Λ0 of dimension n−k−i.
Observe that a linear space of dimension k in PnK is a linear space of dimension n − k − 1 in
(PnK )∗ , so that there is a natural identification of G(k, n) with G(n − k.1, n). It is easy to
check that, under this identification, the role of the bundles U∗ and Q are interchanged.
Hence, the class ci (U∗ ), interpreted as a class in G(n − k − 1, n), can be represented by
a set of subspaces of dimension n − k − 1 meeting a linear space of (PnK )∗ of dimension
k − i + 1. Hence, as a class in G(k, n), we can represent ci (U∗ ) by a set of linear spaces Λ
of dimension k whose span with a linear space Λ0 of dimension n − k + i − 2 is contained in
a hyperplane, i.e. dim(Λ, Λ0 ) ≥ i − 1. By Lemma 2.13, this corresponds now to the locus
on which k − i + 2 independent sections of U∗ have rank at most k − i + 1.
Using Proposition 3.3, as we used it in Theorem 3.5, the above example shows that
the i-th Chern class of a globally generated vector bundle can be represented by the locus
on which r − i + 1 independent sections have rank at most r − i. We will prove, however,
a much more general result.

46
Theorem 5.12 (Giambelli-Porteous). Let F be a vector bundle of rank r over X and
write ci = ci (F). Then, for any choice of m sections of F, the locus Xk on which these
sections have rank at most k is empty or has codimension at most (m−k)(r−k). Moreover,
if equality holds, then the class of Xk in A(m−k)(r−k) (X) is

cr−k cr−k+1 . . . cr−k+m−k−1


cr−k−1 cr−k ... cr−k+m−k
[Xk ] = .. .. .. ..
. . . .
cr−m+1 cr−m+2 ... cr−k

Proof: It is enough to assume the m sections to be independent, so that they span a


subspace V ⊂ Γ(X, F) of dimension m. Consider the Grassmannian G(m − k − 1, P(V ))
of subspaces of dimension m − k in V . Let p1 , p2 denote the two projections from X ×
G(m − k − 1, P(V )). The composition map

ϕ : p∗2 U → p∗2 (G(m − k − 1, P(V )) × V ) = X × G(m − k − 1, P(V )) × V = p∗1 (X × V ) → p∗1 F

is a morphism of vector bundles on X × G(m − k − 1, P(V )) that assigns to any (x, P(V 0 ), s)
such that s ∈ V 0 the pair (P(V 0 ), s(x)). Hence a point x ∈ X lies in Xk if and only if there
exists V 0 ⊂ V and s ∈ V 0 such that ϕ(x, P(V 0 ), s) = 0. In other words, Xk is the image
by p1 of the zero locus X̃k of the section of p∗2 U∗ ⊗ p∗1 F corresponding to ϕ.
Observe that p∗2 U∗ ⊗ p∗1 F has rank (m − k)r, so that X̃k is empty or has dimension
at least dim X + (m − k)k − (m − k)r = dim X − (m − k)(r − k).
Thus the class of X̃k is c(m−k)r (p∗2 U∗ ⊗ p∗1 F), and by Lemma 5.10 we have

ar ar+1 ... ar+m−k−1


ar−1 ar ... ar+m−k
[X̃k ] = .. .. .. ..
. . . .
ar−m+k+1 ar−m+k+2 ... ar

where ai is the coefficient of ti in s(p∗2 U)c(p∗1 F). Thus the summands of ai take the form
p∗2 (sl (U))p∗1 (ci−l (F)), where necessarily l ≤ k (because sl (U) = 0 if l > k, as we have seen
in Example 4.12). Hence, when developing the previous determinant, the summands take
the form
p∗2 sl1 (U) . . . slm−k (U) p∗1 ci−l1 (F) . . . ci−lm−k (F)
 

and l1 + . . . + lm−k ≤ (m − k)k. The push-forward by p1 of summands for which l1 + . . . +


lm−k < (m − k)k is zero by the projection formula, because

p1 ∗ p∗2 (sl1 (U) . . . slm−k (U)) = 0




47
(since p∗2 (sl1 (U) . . . slm−k (U)) has degree bigger than the dimension of X). Thus, when
taking p1 ∗ , the only surviving summand of each ai is p∗2 (sk (U ))p∗1 (ci−k (F)), so that

cr−k cr−k+1 . . . cr−k+m−k−1


cr−k−1 cr−k ... cr−k+m−k
p1 ∗ [X̃k ] = p1 ∗ (p∗2 (sk (U))m−k )p∗1 .. .. .. ..
. . . .
cr−m+1 cr−m+2 ... cr−k

and, using the projection formula and the fact that p1 ∗ (p∗2 (sk (U))m−k ) = 1, we get the
wanted result.

Remark 5.13. If F is globally generated, Giambelli-Porteous formula gives a geometric


interpretation of its Chern classes. We know that , for i = 1, . . . , r = rkF, the dependency
locus of r − i + 1 general sections of F has the expected codimension i, so that the above
Theorem implies that the class of this dependency locus is precisely ci (F). When i = r, we
already proved that. When i = 1, if V is the vector space spanned by r general sections
of F, the dependency locus of the sections is the locus on which

evV : X × V → F

has rank at most r − 1, and this is the same as the locus in which the morphism
r
^ r
^ r
^
evV : X × V → F

is zero, and this is nothing but the zero locus of a section of the determinant line bundle
Vr
det F = F. Hence we get c1 (F) = c1 (det F). This last result is true even if F is not
globally generated, and it is easily proved using the splitting principle. The typical case
is when X is smooth and F = ΩX , the cotangent bundle (we will not give the details to
construct it in Algebraic Geometry). Its determinant is called the canonical line bundle
and the class of divisors associated to it are called canonical divisors and represented by
KX .

Remark 5.14. A more general result is true if you substitute the morphism evV :
X × Km → F by a morphism ϕ : E → F in which now E is an arbitrary vector bundle of
rank m. Then the same result holds for the locus in which ϕ has rank ta most k, with the
same formula for [Xk ], in which now ci stands for the coefficient of ti in c(F)c(E)−1 . The
proof is the same, but now you substitute G(m − 1, P(V ) with a relative Grassmannian
π̄G(m − 1, P(E)) → X generalizing the notion of P(E) (which would be the case m = 0)
consisting of a fibration over X in which the fiber on each x ∈ X is now G(m − 1, P(Ex )).

48
Such a G(m − 1, P(E)) still has a universal subbundle UE ⊂ π̄ ∗ E of rank m, playing the
role of U in the above proof. A clear example is a morphism ϕ : E → F of vector bundles
of the same rank r. Then the locus in which ϕ has rank at most r − 1 is the locus in which
Vr Vr V Vr
the map ϕ: E → F is zero. Hence, if ϕ is not identically zero (i.e. the locus
of points in which ϕ has rank r is not empty), this morphism is zero in the zero locus of
the corresponding non-zero section of F ⊗ ( E)−1 , an this class is
V V

^ ^ ^ ^
c1 ( F ⊗ ( E)−1 ) = c1 ( F) − c1( E) = c1 (F) − c1 (E)

and this is the coefficient of t in c(F)c(E)−1 .

Example 5.15. Let us see a couple of applications.


1) First we go back to Example 3.6, and apply Giambelli-Porteous formula to the case
of the twisted cubic. Since C is the dependency locus of two sections of L1 ⊕ L1 ⊕ L1 and

c(L1 ⊕ L1 ⊕ L1 ) = (1 + ht)3 = 1 + 3ht + 3h2 t2 + h3 t3 ,

[C] = c2 (L1 ⊕ L1 ⊕ L1 ) = 3h2 ,

i.e. C has degree three.


2) If we want an example in which we need to use an actual determinant, we can
consider the space of 3 × 3 matrices (up to multiplication) of rank one. We can identify
the proportionality classes of matrices with P8K , via the identification
 
x0 x1 x2
(x0 : x1 : x2 : x3 : x4 : x5 : x6 : x7 : x8 ) ↔  x3 x4 x5  .
x6 x7 x8

Now the set of matrices of rank one is the locus in which the three sections (x0 , x1 , x2 ),
(x3 , x4 , x5 ), (x6 , x7 , x8 ) of L1 ⊕ L1 ⊕ L1 have rank one. By Giambelli-Porteous formula,
this locus has class

c2 (L1 ⊕ L1 ⊕ L1 ) c3 (L1 ⊕ L1 ⊕ L1 ) 3h2 h3


= = 6h4
c1 (L1 ⊕ L1 ⊕ L1 ) c2 (L1 ⊕ L1 ⊕ L1 ) 3h 3h2

so that this locus has degree four.

49
6. Some advanced intersection theory
In this final section we introduce some advanced intersection theory, in which we will
just state the main results.

Remark 6.1. First of all, we have defined the notion of pullback and pushforward for
both sheaves and the Chow ring. Surprisingly enough, there is a deep connection between
the two categories, although it works fine only for pullbacks (so it seems more natural to
consider vector bundles rather than sheaves). Of course, given any vector bundle F over
X, we can associate to it a polynomial c(F) ∈ A(X)[t], but we would like to associate
directly an element of ch(F) ∈ A(X). Recall that any time we have an exact sequence

0 → F0 → F → F00 → 0

we have c(F) = c(F0 )c(F00 ), but now we are looking for an additive map, i.e.

ch(F) = ch(F0 ) + ch(F00 ).

To get this, the best idea seems to be to use the Chern roots, since, if the Chern roots
of F0 are α10 , . . . , αr0 0 and the Chern roots of F00 are α100 , . . . , αr0000 , then the Chern roots of
F as above are α10 , . . . , αr0 0 , α100 , . . . , αr0000 . However, it is not a good idea to define ch(F) =
α1 + . . . + αr for a vector bundle of Chern roots α1 , . . . , αr , since this is nothing but c1 (F),
and we miss the information about the other Chern classes. Observe that, to have the
additivity property, it is enough to define

ch(F) = f (α1 ) + . . . + f (αr )

for any function f . To choose a nice f , let us impose another condition, namely

ch(F0 ⊗ F00 ) = ch(F0 )ch(F00 ).

Since the Chern roots of F0 ⊗ F00 are all the αi0 + αj00 , we have to choose f in such a way
that
X X
f (αi0 + αj00 ) = (f (α10 ) + . . . + f (αr0 0 ) f (α100 ) + . . . + f (αr0000 )) = f (αi0 )j(αj00 )

i.j i,j

and a natural choice is to ask f (αi0 + αj00 ) = f (αi0 )f (αj00 ). Hence, in order to have both
the additive and the multiplicative properties, we define the Chern character of a vector
bundle F of Chern roots α1 , . . . , αr as

ch(F) := eα1 + . . . + eαr .

50
These exponentials make sense, since we can write formally

x2 x3
ex = 1 + x + + + ...
2 6

although we need to take coefficients in Q (we will consider then A(X)Q := A(X) ⊗ Q).
For example, computing the first terms,

α12 α2
ch(F) = (1 + α1 + + . . .) . . . (1 + αr + r + . . .) =
2 2

α12 + . . . + αr2 c1 (F)2 − 2c2 (F)


= r + (α1 + . . . + αr ) + + . . . = r + c1 (F) + + ...
2 2

Exercise 6.2. Check that the first terms of the Chern character are

c1 (F)2 − 2c2 (F) c1 (F)3 − 3c1 (F)c2 (F) + 3c3 (F)


ch(F) = r + c1 (F) + + +
2 6

c1 (F)4 − 4c1 (F)2 + 4c1 (F)c3 (F) + 2c2 (F)2 − 4c4 (F)
+ + ...
24
Check also that it is possible to recover the Chern classes of a vector bundle from its Chern
character.

Remark 6.3. At this point, by the additivity of the Chern character, any time we have
an exact sequence
0 → F0 → F → F00 → 0

we can think of identifying the bundle F with the sum F0 and F00 . To do this, we can
define the free abelian group generated by all the possible bundles, and make the quotienf
by the subgroup generated by all F − F0 − F00 as above. This is called the Grothendieck
group K 0 (X), and we can regard the Chern character as a map

ch : K 0 (X) → A(X)Q .

Despite of its name, K 0 (X) is also a ring since the tensor product is well-defined, in the
sense that, given another bundle F000 , we also have an exact sequence

0 → F0 ⊗ F000 → F ⊗ F000 → F00 ⊗ F000 → 0

so that we have the equality of classes [F ⊗ F000 ] = [F0 ⊗ F000 ] + [F00 ⊗ F000 ]. In this way, ch
is in fact a homomorphism of rings. If we pass from the category of vector bundles to the
one of locally free sheaves, and enlarging it to coherent sheaves, we can now to consider

51
a new group K0 (X) of equivalence classes of coherent sheaves in which we now consider
[F] = [F 0 ] + [F 00 ] if there is an exact sequence of sheaves

0 → F 0 → F → F 00 → 0.

When X is smooth, K0 (X) and K 0 (X) are isomorphic, since any coherent sheaf admits
a finite resolution by locally free sheaves (if you do not know the precise definition of
coherent sheaf, you can think of this property as a definition). This means that we can
define the Chern character (and hence the Chern classes) for any coherent sheaf. We will
be always in the smooth hypothesis, and we will simply write K(X) for either K0 (X) or
K 0 (X). We will also write K(X)Q := K(X) ⊗ Q.

Example 6.4. Let us see what the situation in the projective space is. First of all, it is
a consequence of the Hilbert’s Syzygy Theorem that any coherent sheaf F over PnK has a
resolution of the type

0 → Fn → Fn−1 → . . . → F1 → F0 → F → 0

where each Fi is a direct sum of sheaves of the form OPnK (a). Hence K(PnK ) is generated by
the classes of these sheaves. On the other hand, there is an exact sequence (called Koszul
complex) of the form

0 → OPnK (−n − 1) → OPnK (−n)⊕(n+1) → . . . → OPnK (−1)⊕(n+1) → OPnK → 0


n+1
(in which the general term takes the form OPnK (−i)⊕( i ) ) showing that the class of each
OPnK (a) is generated by the classes of OPnK (a − n − 1), . . . , OPnK (a − 1) or by the classes of
OPnK (a + 1), . . . , OPnK (a + n + 1). As a consequence K(PnK ) is generated by the classes of

OPnK , OPnK (1), ... OPnK (n − 1), OPnK (n).

Since there respective Chern characters are

1
2 n−1 n
1 +h + h2 ... h
+ (n−1)! + hn!
2 n−1 n
1 +2h +22 h2 ... h
+2n−1 (n−1)! +2n hn!
.. ..
. .
2 n−1 n
1 +(n − 1)h +(n − 1)2 h2 ... h
+(n − 1)n−1 (n−1)! +(n − 1)n hn!
2 n−1 n
1 +nh +n2 h2 ... h
+nn−1 (n−1)! +nn hn!

which are clearly (using a Vandermonde determinant) a basis the Q-vector space A(PnK )Q =
Q[h]/(hn+1 ), it follows immediately that ch : K(PnK )Q → A(PnK )Q is an isomorphism.

52
Theorem 6.5. If X is a smooth variety, then ch : K(X)Q → A(X)Q is an isomorphism
of rings that preserves pullbacks.

Remark 6.6. One important fact about coherent sheaves is that they have a cohomology
theory. The starting point is that, given an exact sequence of sheaves

0 → F 0 → F → F 00 → 0,

over an algebraic variety X, it is not true that the corresponding sequence of sections (even
on any open set U )
0 → F 0 (U ) → F(U ) → F 00 (U ) → 0

is exact, although it fails only on the right side. However, there is a cohomology theory in
which H 0 (X F ) = F(X), such that. for any short exact sequence as above, there is a long
exact sequence

0 → H 0 (X, F 0 ) → H 0 (X, F) → H 0 (X, F 00 ) → H 1 (X, F 0 ) → H 1 (X, F) → H 1 (X, F 00 ) → . . .

→ H n−1 (X, F 0 ) → H n−1 (X, F) → H n−1 (X, F 00 ) → H n (X, F 0 ) → H n (X, F) → H n (X, F 00 ) → 0

(where n = dim X). As a consequence, if we define the Euler characteristic of F as

χ(F) = Σ_i (−1)^i dim H^i(X, F),

then we have χ(F) = χ(F') + χ(F'') any time we have a short exact sequence of sheaves as
above. This means that it makes sense to define the Euler characteristic of classes in K(X).

Example 6.7. Coming back to the example of the projective space, and forgetting about
tensoring with Q, we have seen in Example 6.4 that K(P^n_K) is a free abelian group with
basis O_{P^n_K}, O_{P^n_K}(1), . . . , O_{P^n_K}(n − 1), O_{P^n_K}(n), since we proved that these classes
generate and are linearly independent. As a consequence, the class of any sheaf F over P^n_K
can be written as

[F] = a_0 [O_{P^n_K}] + a_1 [O_{P^n_K}(1)] + . . . + a_{n−1} [O_{P^n_K}(n − 1)] + a_n [O_{P^n_K}(n)]

for some a_0, . . . , a_n ∈ Z. This implies that

χ(F) = a_0 χ(O_{P^n_K}) + a_1 χ(O_{P^n_K}(1)) + . . . + a_{n−1} χ(O_{P^n_K}(n − 1)) + a_n χ(O_{P^n_K}(n)).

This equals a_0 \binom{n+0}{0} + a_1 \binom{n+1}{1} + . . . + a_{n−1} \binom{2n−1}{n−1} + a_n \binom{2n}{n}, since it is possible to prove
that H^i(P^n_K, O_{P^n_K}(a)) = 0 for all i > 0 and any a ≥ 0, so that χ(O_{P^n_K}(a)) = dim H^0(P^n_K, O_{P^n_K}(a)) = \binom{n+a}{a}
for a ≥ 0. The important part is that there is a precise formula depending on a_0, a_1, . . . , a_{n−1}, a_n.
Since, by the Chern character isomorphism, the coefficients a_0, a_1, . . . , a_{n−1}, a_n are uniquely
determined by the Chern classes of F, there is a formula for χ(F) depending on the Chern
classes of F. This is a Riemann-Roch type theorem, which can be generalized to any ambient space X.
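In down-to-earth terms, once the coefficients a_0, . . . , a_n are known, computing χ(F) is pure arithmetic. A minimal sketch (our own illustration) of the formula above:

    # Euler characteristic on P^n from the coefficients of [F] in the basis
    # [O], [O(1)], ..., [O(n)], using chi(O_{P^n}(k)) = binom(n + k, k).
    from math import comb

    def euler_characteristic(a, n):
        return sum(a_k * comb(n + k, k) for k, a_k in enumerate(a))

    print(euler_characteristic([1, 0, 0], 2))    # chi(O_{P^2}) = 1
    # the Koszul relation on P^2 gives [O(3)] = [O] - 3[O(1)] + 3[O(2)], so
    print(euler_characteristic([1, -3, 3], 2))   # chi(O_{P^2}(3)) = binom(5, 2) = 10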

Theorem 6.8 (Hirzebruch-Riemann-Roch). For any vector bundle F with Chern roots
α_1, . . . , α_r, define its Todd class as

td(F) = α_1/(1 − e^{−α_1}) · · · α_r/(1 − e^{−α_r}) =

= (1 + α_1/2 + α_1^2/12 − α_1^4/720 + . . .) · · · (1 + α_r/2 + α_r^2/12 − α_r^4/720 + . . .) =

= 1 + c_1(F)/2 + (c_1(F)^2 + c_2(F))/12 + c_1(F)c_2(F)/24 +

+ (−c_1(F)^4 + 4c_1(F)^2 c_2(F) + c_1(F)c_3(F) + 3c_2(F)^2 − c_4(F))/720 + . . .
If X is smooth, T_X is its tangent bundle and F is any sheaf over X, then

χ(F) = deg(ch(F) · td(T_X))

where deg means taking the degree (i.e. the number of points) of the part of degree
n = dim X of the element of A(X)_Q.
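The coefficients appearing in the expansion of the Todd class are easy to check symbolically. The following sketch (ours, not from the notes) expands α/(1 − e^{−α}) as a power series and, for a bundle of rank two (so that c_3 = c_4 = 0), compares the product of the two factors with the stated expression in c_1 and c_2 up to degree four.

    # Checking the Todd class expansion with SymPy.
    from sympy import symbols, exp, series, expand

    a, b = symbols('a b')

    def todd_factor(x, order=5):
        # power series of x / (1 - exp(-x)): 1 + x/2 + x^2/12 + 0*x^3 - x^4/720 + ...
        return series(x / (1 - exp(-x)), x, 0, order).removeO()

    c1, c2 = a + b, a * b
    lhs = expand(todd_factor(a) * todd_factor(b))
    rhs = expand(1 + c1/2 + (c1**2 + c2)/12 + c1*c2/24
                 + (-c1**4 + 4*c1**2*c2 + 3*c2**2)/720)
    difference = expand(lhs - rhs)
    # all monomials of total degree at most 4 cancel
    assert all(difference.coeff(a, i).coeff(b, j) == 0
               for i in range(5) for j in range(5 - i))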

Example 6.9. Let us apply the above result to the case in which X is a smooth curve
of genus g and F = O_X(D), the sheaf of sections of L_D, where D is a divisor of degree d.
In this situation,
td(T_X) = 1 − K_X/2

where K_X is the canonical divisor, which has degree 2g − 2, and

ch(O_X(D)) = 1 + D.

Hence
ch(O_X(D)) · td(T_X) = (1 + D)(1 − K_X/2) = 1 + (D − K_X/2)

so that, taking degrees, χ(F) = d − (2g − 2)/2, i.e.

χ(F) = d + 1 − g

which is the classical Riemann-Roch theorem. More generally, if F is a vector bundle of


rank r with sheaf of sections F, we leave it as an exercise to prove that χ(F) = d + r(1 − g), where
d is the degree of F, i.e. the degree of its determinant, which is defined as the degree of
any of its associated divisors.

Example 6.10. If X is now a surface and D is a divisor, we have now

td(T_X) = 1 − K_X/2 + (K_X^2 + c_2(T_X))/12

and
ch(O_X(D)) = 1 + D + D^2/2

so that

ch(O_X(D)) · td(T_X) = (1 + D + D^2/2)(1 − K_X/2 + (K_X^2 + c_2(T_X))/12) =

= 1 + (D − K_X/2) + ((D^2 − D·K_X)/2 + (K_X^2 + c_2(T_X))/12).
Using the same notation for the degree of a product, and using also the Noether formula
(K_X^2 + c_2(T_X))/12 = χ(O_X), we get the standard Riemann-Roch theorem for surfaces

χ(O_X(D)) = χ(O_X) + (D^2 − D·K_X)/2.

We now leave it as an exercise to find the formula for the sheaf of sections of a vector bundle
of arbitrary rank.
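This graded multiplication can also be reproduced symbolically. In the sketch below (our own illustration), a dummy variable t records the codimension of each graded piece, D and K_X are treated as commuting symbols of degree one, c_2(T_X) as a symbol of degree two, and the coefficient of t^2 in the product is exactly the degree-two term displayed above.

    # The graded computation behind Riemann-Roch on a surface.
    from sympy import symbols, expand, simplify

    D, K, c2, t = symbols('D K c2 t')

    ch = 1 + D*t + D**2/2 * t**2                 # ch(O_X(D)), graded by powers of t
    td = 1 - K/2 * t + (K**2 + c2)/12 * t**2     # td(T_X), graded by powers of t

    degree2 = expand(ch * td).coeff(t, 2)        # the piece living in A^2(X)
    expected = (D**2 - D*K)/2 + (K**2 + c2)/12
    assert simplify(degree2 - expected) == 0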

Remark 6.11. As we have seen, when F is globally generated, we can imitate the
situation for line bundles: a general section vanishes along a cycle Y of codimension r
(when r = 1, we only need the section not to be identically zero), and we can define the
intersection of Y with any subvariety i : Z ,→ X as c_r(F|_Z) (regarding the intersection as
a cycle in Z) or as i_*(c_r(F|_Z)) (as a cycle in X). In particular, when Y = Z (i.e. the
cycle consists of a single irreducible subvariety), the self-intersection of Y can be computed
as i_*(c_r(F|_Y)). The main difference now is that it is not true that any irreducible subvariety
of codimension r is the zero locus of a section of a vector bundle, even when the subvariety
is smooth. However, in this case it is at least possible to compute the self-intersection.
The first observation is that, when Y is a smooth subvariety of codimension r, there is a
natural epimorphism from the cotangent bundle of X (restricted to Y) to the cotangent
bundle of Y, defining a kernel

0 → N^*_{Y,X} → i^*Ω_X → Ω_Y → 0

called the conormal bundle of Y in X, whose dual N_{Y,X} is called the normal bundle of Y
in X, which has rank r. If we are in the good situation as above, in which Y is the zero locus
of a section of a vector bundle F of rank r over X, it follows that F|_Y = N_{Y,X}, so that

i^*[Y] = c_r(N_{Y,X})

(and hence [Y]^2 = i_*(c_r(N_{Y,X}))). The interesting point is that the same relation (called
the self-intersection formula) remains true even if the vector bundle does not exist.

Example 6.12. Let us apply the self-intersection formula to a concrete example. We will
consider a smooth curve i : C ,→ P^2_K of degree d and genus g. Then, the class of C in
A(P^2_K) = Z[h]/(h^3) is dh, so that [C]^2 = d^2 h^2, which is the class of d^2 points. On the
other hand, from the exact sequence

0 → N^*_{C,P^2_K} → i^*Ω_{P^2_K} → Ω_C → 0

we get

c_1(N_{C,P^2_K}) = c_1(i^*T_{P^2_K}) + c_1(Ω_C) = 3i^*h + c_1(O_C(K_C))

(we used that the determinant of the tangent bundle of P^2_K is L_3), and this is the class
of 3d + (2g − 2) points in C. Hence i_*(c_1(N_{C,P^2_K})) is the class of 3d + (2g − 2) points in
P^2_K. As a consequence,

d^2 = 3d + (2g − 2)

which is equivalent to

g = (d − 1)(d − 2)/2,

the known formula for the genus of a smooth plane curve of degree d.
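The last step is elementary, but it is reassuring to let a computer confirm that the relation d^2 = 3d + (2g − 2) really does produce the classical genus formula (a one-line sketch of ours):

    # Solving d^2 = 3d + (2g - 2) for g and factoring the result.
    from sympy import symbols, solve, factor

    d, g = symbols('d g')
    print(factor(solve(d**2 - 3*d - (2*g - 2), g)[0]))   # (d - 2)*(d - 1)/2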

Example 6.13. We can use the ideas of Remark 6.11 to prove that the twisted cubic
cannot be described as the zero locus of a section of a vector bundle of rank two over P^3_K.
If it were so, then the normal bundle would be the restriction of a vector bundle over P^3_K,
and in particular its Chern classes would be restrictions of classes in P^3_K. Observe that, by
the self-intersection formula, the second Chern class of the normal bundle is indeed the
restriction of the class of the curve as an element of A^2(P^3_K), so that we need to prove that
the first Chern class of the normal bundle is not the restriction of a class in A^1(P^3_K). The
most practical way to do so is to work directly on P^1_K instead of on the twisted cubic, to
which it is isomorphic via the embedding

i : P^1_K → P^3_K

given by L_3, so that i^*L'_1 = L_3 (we use primes to distinguish the line bundles L'_d over P^3_K
from the line bundles L_d over P^1_K). From this point of view, the conormal bundle of the
twisted cubic is isomorphic to the kernel in the exact sequence

0 → N^* → i^*Ω_{P^3_K} → Ω_{P^1_K} → 0.

From this, we get

c_1(N) = c_1(Ω_{P^1_K}) − c_1(i^*Ω_{P^3_K}) = c_1(L_{−2}) − i^*c_1(L'_{−4}) = c_1(L_{−2}) − c_1(L_{−12}) = c_1(L_{10})

and this cannot be of the type i^*c_1(L'_a) = c_1(L_{3a}) for any a ∈ Z.
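Numerically, the obstruction is nothing more than a divisibility statement, which the following sketch (ours) spells out:

    # Degrees on P^1 for the twisted cubic C in P^3: c_1(Omega_{P^1}) has degree -2,
    # c_1(Omega_{P^3}) has degree -4 and pulls back multiplied by 3 (the degree of C),
    # so c_1(N) has degree (-2) - 3*(-4) = 10, which is not a multiple of 3, while any
    # class restricted from A^1(P^3) has degree divisible by 3.
    deg_c1_N = (-2) - 3 * (-4)
    print(deg_c1_N, deg_c1_N % 3)   # 10 1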
