MATH3031 Ch 2 notes


Chapter 2: Manifolds

2.1: Reboot of Multivariable Calculus


As you know, Rn denotes the set of n-tuples of real numbers, i.e.

Rn = {(x1 , x2 , . . . , xn ) | xi ∈ R for i = 1, . . . , n}.

When we write, for example, x ∈ Rn , this means that “x” stands for such an n-tuple, and
we will denote by xi ∈ R the ith entry of x. Similarly for a ∈ Rn , p ∈ Rn , etc. (we will
not put arrows over the letters or lines under them).

Using this notation, we can write the formulas defining addition of two elements
x, y ∈ Rn in the following space-saving way: (x + y)i = xi + y i for all i = 1, . . . , n.

Similarly, for a scalar α ∈ R and x ∈ Rn , the element αx ∈ Rn is defined by (αx)i = αxi


for all i = 1, . . . , n.

These operations make Rn into a vector space, and the notation gives us a convenient
way of defining the standard basis vectors of Rn : e1 , . . . , en ∈ Rn are defined, for each
i = 1, . . . , n and each j = 1, . . . , n, by
$$(e_i)^j = \delta_i^j := \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}$$

(this function $\delta_i^j$ is called the Kronecker delta; it will come up a lot in later chapters).

We also give the formulas for the Euclidean inner product and Euclidean norm in
terms of this notation:
$$\langle x, y \rangle := \sum_{i=1}^n x^i y^i$$

is the inner product of any two vectors x, y ∈ Rn ; and


$$|x| := \sqrt{\langle x, x \rangle} = \left( \sum_{i=1}^n (x^i)^2 \right)^{1/2}$$

is the norm (or magnitude) of any vector x ∈ Rn (note that we’ve dropped the double
lines used in previous courses; this shouldn’t bother you or make you worry whether norm
or absolute value is meant, since the norm and absolute value are exactly the same thing
when n = 1). The inner product and norm have the usual properties, and geometric
interpretation, which we’ll review as needed.

Finally, let’s recall that a map L : Rn → Rm is called linear iff

L(x + y) = L(x) + L(y) and L(αx) = αL(x)

holds for all x, y ∈ Rn and all α ∈ R. You probably recall that a linear map L : Rn → Rm
corresponds to an m × n matrix (meaning, a matrix with m rows and n columns). The
(standard) correspondence is given as follows, using the notation introduced above: For
each i = 1, . . . , n we consider the basis vector ei ∈ Rn . Since it gets mapped to some
vector L(ei ) ∈ Rm , we can express this as a linear combination of the standard basis
vectors e1 , . . . , em of Rm , i.e.
$$L(e_i) = \sum_{j=1}^m L_i^j e_j, \quad \text{for some } L_i^1, \dots, L_i^m \in \mathbb{R}.$$

This gives us a matrix of real numbers, denoted $(L_i^j)$, where the subscript i denotes which
column an entry is in (and ranges from 1 to n) and the superscript j denotes which row
an entry is in (and ranges from 1 to m).

An important idea in this course is that differentiability of a function f : Rn → Rm


means that it can be “approximated, locally” by linear maps, and that the derivative
should be thought of as the linear maps that give this approximation. The exact sense of
this is given in the following definition:

Definition 2.1: (a) Let U ⊂ Rn be open, f : U → Rm a map (function), and p ∈ U .


Then f is differentiable at p iff there exists a linear map L : Rn → Rm such that
$$\lim_{h \to 0} \frac{|f(p+h) - f(p) - L(h)|}{|h|} = 0.$$
If f is differentiable at every point p ∈ U , we say f is differentiable.

(b) If f : U → Rm is differentiable at p ∈ U , then the linear map given by the definition


in (a) is unique and we call it the derivative of f at p, denoted by Df (p) := L : Rn → Rm .

Example 2.2: If the function f : Rn → Rm is a constant function, i.e. there is a ∈ Rm


such that f (x) = a for all x ∈ Rn , then f is differentiable and its derivative at all points
p ∈ Rn is Df (p) = 0 : Rn → Rm (the linear map which sends all vectors x ∈ Rn to the
zero vector 0 ∈ Rm ). This is easy to see, since f (p + h) = f (p) = a, so
$$\lim_{h \to 0} \frac{|f(p+h) - f(p) - Df(p)(h)|}{|h|} = \lim_{h \to 0} \frac{|0|}{|h|} = 0.$$

Example 2.3: If L : Rn → Rm is a linear map, then we would hope that it can be


approximated at every point p by some linear map – namely by L itself! In other words,
we should expect that L is differentiable and its derivative at any point p is given by
DL(p) = L : Rn → Rm . This is indeed the case, as L(p + h) = L(p) + L(h) by linearity,
from which we see
$$\lim_{h \to 0} \frac{|L(p+h) - L(p) - DL(p)(h)|}{|h|} = \lim_{h \to 0} \frac{|0|}{|h|} = 0.$$

Now, you are probably not used to thinking of the derivative this way, and certainly
you wouldn’t usually use Definition 2.1 to calculate the derivative of most maps you are
given. Instead, you would use the notion of partial derivatives:

Definition 2.4: (a) If U ⊂ Rn is open and f : U → R is a real-valued function and
1 ≤ i ≤ n, then the ith partial derivative of f at p ∈ U , if it exists, is the number given
by the limit
$$\lim_{h \to 0} \frac{f(p + h e_i) - f(p)}{h}.$$
If it exists, we denote this number by Di f (p) (the notation $\frac{\partial f}{\partial x^i}(p)$ is also used).

(b) If f = (f 1 , . . . , f m ) : U → Rm , then the ith partial derivative at p ∈ U , if it exists,


is the column vector in Rm given, with respect to the standard basis, by
$$D_i f(p) = \begin{pmatrix} D_i f^1(p) \\ D_i f^2(p) \\ \vdots \\ D_i f^m(p) \end{pmatrix}.$$

The following theorem tells us that these two notions of derivatives are equivalent
under reasonable assumptions:

Theorem 2.5: If U ⊂ Rn is open and f : U → Rm is a function which is differentiable


at p ∈ U , then all partial derivatives Di f (p) exist, and we have

Di f (p) = Df (p)(ei ),

for 1 ≤ i ≤ n. In particular, the m × n matrix corresponding to Df (p) : Rn → Rm is


given by the Jacobian matrix
$$(D_i f^j(p)) = \begin{pmatrix} D_1 f^1(p) & D_2 f^1(p) & \cdots & D_n f^1(p) \\ D_1 f^2(p) & D_2 f^2(p) & \cdots & D_n f^2(p) \\ \vdots & \vdots & \ddots & \vdots \\ D_1 f^m(p) & D_2 f^m(p) & \cdots & D_n f^m(p) \end{pmatrix}.$$

On the other hand, if all partial derivatives Di f (x) exist for x in a neighborhood of p,
and they are continuous in x, then f is differentiable at p, and the derivative of f at p is
given in matrix form by the Jacobian above.

(For a proof, see [Spivak], Theorem 2-7.) We will generally work with functions that
have continuous partial derivatives (such f are said to be C 1 ), and moreover assume that the
functions given by these partial derivatives, Di f : U → Rm , themselves have continuous
partial derivatives, written Dj (Di f ) : U → Rm for each j = 1, . . . , n, and so on. If this
can be repeated r times (with any combination of partial derivatives), then f is said to
be C r . If this is true, then we can take partial derivatives up to order r in any order: for
example, if f is C 2 , then Di (Dj f ) = Dj (Di f ) for all i, j = 1, . . . , n. A function is said to
be smooth, or C ∞ , iff it is C r for all r = 1, 2, . . ..
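Aside (our own numerical illustration, with an arbitrarily chosen C ∞ function; not part of the notes): the equality Di (Dj f ) = Dj (Di f ) can be seen concretely by approximating both mixed partials with central differences.

```python
# Central-difference check that mixed partials agree for a C^2 function
# (illustrative sketch; the sample function and point are our own choices).
import numpy as np

f = lambda x, y: np.sin(x * y) + x**3 * y
x0, y0, h = 0.7, -0.4, 1e-4

d1f = lambda x, y: (f(x + h, y) - f(x - h, y)) / (2 * h)   # approx D_1 f
d2f = lambda x, y: (f(x, y + h) - f(x, y - h)) / (2 * h)   # approx D_2 f

d2_d1 = (d1f(x0, y0 + h) - d1f(x0, y0 - h)) / (2 * h)      # D_2(D_1 f)
d1_d2 = (d2f(x0 + h, y0) - d2f(x0 - h, y0)) / (2 * h)      # D_1(D_2 f)
print(d2_d1, d1_d2)   # agree up to the finite-difference error
```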

The payoff for thinking of derivatives of functions f : U → Rm as good linear approx-


imations is conceptual: it allows us to look for solutions to problems involving non-linear
functions by using the simpler model of linear algebra as our guide.

A prime example is the inverse function theorem, which answers the question, for
U ⊂ Rn open and p ∈ U , when it is possible to “locally invert” a function f : U → Rn
near p. In other words, we ask when it is possible to find open subsets V ⊂ Rn with
p ∈ V and W ⊂ Rn with f (p) ∈ W , such that the restriction f : V → W is one-to-
one and has a differentiable inverse g = f −1 : W → V (so g ◦ f (x) = x for all x ∈ V
and f ◦ g(y) = y for all y ∈ W ). If this is true, we say that f is a local diffeomorphism near p.

If we were only worried about linear maps L : Rn → Rn , the answer is simple: we


know from linear algebra that L is invertible if and only if its determinant, det(L), is
non-zero. So, by the philosophy that the derivative Df (p) : Rn → Rn is the best linear
approximation to f near p, we can guess that f is a local diffeomorphism near p if and
only if det(Df (p)) ≠ 0. This guess turns out to be correct:

Theorem 2.6 (Inverse Function Theorem): Let U ⊂ Rn be an open subset, f : U →


Rn a differentiable function, and p ∈ U . Then f is a local diffeomorphism near p if and
only if
$$\det(Df(p)) \neq 0.$$
If this is the case, then det(Df (x)) ≠ 0 for all x in a sufficiently small neighborhood of p,
and the derivative of the function f −1 at f (x) is given by
$$D(f^{-1})(f(x)) = [Df(x)]^{-1},$$
where the right-hand side denotes the inverse of the linear map Df (x) : Rn → Rn .

We will see, in the next section, that a direct consequence of the Inverse Function
Theorem is the Implicit Function Theorem, which allows us to generate a lot of (impor-
tant) examples of manifolds as level sets of functions, analogous to the level surfaces in
R3 that were discussed in Chapter 1.1. For now, let us discuss a few examples of how the
Inverse Function Theorem is applied to aid us with changes of coordinates in Rn :

Example 2.7 (Polar Coordinates): (a) Define f : R2 → R2 by
$$f(r, \theta) = \begin{pmatrix} r\cos\theta \\ r\sin\theta \end{pmatrix}$$
(the change from Polar to Cartesian coordinates in two dimensions). This is clearly a
differentiable function, and its Jacobian matrix is easily computed as
$$Df(r, \theta) = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix},$$
from which we see that det(Df (r, θ)) = r. Thus, the Inverse Function Theorem tells us
that f is a local diffeomorphism near all points (r, θ) where r ≠ 0. Of course, to ensure
that f is one-to-one we need to restrict θ to an interval of width less than 2π.

(b) In three dimensions, define f : R3 → R3 by
$$f(r, \theta, \varphi) = \begin{pmatrix} r\cos\theta\cos\varphi \\ r\sin\theta\cos\varphi \\ r\sin\varphi \end{pmatrix}$$
(the change from Polar-Spherical to Cartesian coordinates in three dimensions). Then we
have
$$Df(r, \theta, \varphi) = \begin{pmatrix} \cos\theta\cos\varphi & -r\sin\theta\cos\varphi & -r\cos\theta\sin\varphi \\ \sin\theta\cos\varphi & r\cos\theta\cos\varphi & -r\sin\theta\sin\varphi \\ \sin\varphi & 0 & r\cos\varphi \end{pmatrix},$$
and from this we can calculate
$$\det(Df(r, \theta, \varphi)) = r^2 \cos\varphi.$$
Therefore, the Inverse Function Theorem tells us that f is a local diffeomorphism near
all points (r, θ, ϕ) with r ≠ 0 and ϕ ≠ (2k + 1)π/2 for any integer k. In other words, except
along the z-axis, we can find a differentiable inverse to f transforming (locally) Cartesian
coordinates (x, y, z) back into Polar-Spherical coordinates (r, θ, ϕ). (For points along the
z-axis, we need to modify the formula for f .)
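Aside (our own numerical sanity check of both determinant formulas; the finite-difference helper and the sample point are our choices, not part of the notes):

```python
# Approximate the Jacobians of the polar and spherical coordinate maps by
# central differences and compare their determinants with r and r^2*cos(phi).
import numpy as np

def jacobian(f, p, h=1e-6):
    """Finite-difference Jacobian: columns are the partial derivatives D_i f(p)."""
    cols = []
    for i in range(len(p)):
        e = np.zeros(len(p)); e[i] = h
        cols.append((f(p + e) - f(p - e)) / (2 * h))
    return np.column_stack(cols)

polar = lambda v: np.array([v[0] * np.cos(v[1]), v[0] * np.sin(v[1])])
spherical = lambda v: np.array([v[0] * np.cos(v[1]) * np.cos(v[2]),
                                v[0] * np.sin(v[1]) * np.cos(v[2]),
                                v[0] * np.sin(v[2])])

r, theta, phi = 2.0, 0.8, 0.3
print(np.linalg.det(jacobian(polar, np.array([r, theta]))), "vs", r)
print(np.linalg.det(jacobian(spherical, np.array([r, theta, phi]))),
      "vs", r**2 * np.cos(phi))
```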

2.1 TUT Problems

1. For k = 1, 2, 3, . . ., we write (Rn )k = Rn × Rn × . . . × Rn (k times). A map


T : (Rn )k → Rm
“eats” k vectors in Rn and “spits out” a single vector in Rm . We say that T is multilinear
iff it is “linear in each slot”, i.e. iff for each i = 1, . . . , k we have
T (v1 , . . . , vi−1 , αvi + βw, vi+1 , . . . , vk ) = αT (v1 , . . . , vi−1 , vi , vi+1 , . . . , vk )
+ βT (v1 , . . . , vi−1 , w, vi+1 , . . . , vk )
for any vectors w, v1 , . . . , vk ∈ Rn and any scalars α, β ∈ R.

Use Definition 2.1 to show that a multilinear map T : (Rn )k → Rm is differentiable at


every point p = (p1 , . . . , pk ) ∈ (Rn )k , and that its derivative at p,
DT (p) : (Rn )k → Rm ,
is the linear map given, for any vector v = (v1 , . . . , vk ) ∈ (Rn )k , by
$$DT(p)(v) = \sum_{i=1}^k T(p_1, \dots, p_{i-1}, v_i, p_{i+1}, \dots, p_k).$$

2. For any map f : R2 → R2 , write f = (u, v), where u, v : R2 → R are the component
functions. Let J : R2 → R2 be the linear map with matrix
$$J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.$$

(a) Calculate J 2 .

(b) Interpret the identity Df ◦ J = J ◦ Df in terms of equations relating the partial
derivatives ux = ∂u/∂x, uy , vx and vy . Use this to formulate complex differentiability in terms
of the map J, identifying C with R2 in the usual way.

2.2: Manifolds
The Chinese mathematician, astronomer and geographer Zhang Heng (78-139 AD) is
credited as one of the first to make use of a mathematical grid reference in cartography.
The Book of Later Han states that Zhang “cast a network of coordinates about heaven
and earth, and reckoned on the basis of it”. In this section, we introduce the modern
mathematical notion of spaces for which it is possible to “cast a network of coordinates”
that can be used for “reckoning”: these are manifolds, the generalisation of surfaces, on
which differential and integral calculus can be developed.

Definition 2.8: (a) For n any non-negative integer, a parametrised n-manifold is given
by a map
r : U → Rn+k , with M = r(U ),
where U ⊂ Rn is a domain (open, connected subset) and k is some non-negative integer.
(The case n = 2, k = 1 just gives us a parametrised surface as in Definition 1.1.)

(b) In this course, we almost always work with regular parametrisations (often without
explicitly mentioning this). This means that the map r : U → Rn+k is differentiable; the
vectors D1 r(x), . . . , Dn r(x) ∈ Rn+k are linearly independent for all x ∈ U ; and the map
r : U → M is one-to-one, with continuous inverse.

(c) A (general, regular) n-dimensional manifold is any subset M ⊂ Rn+k that can be
“patched together” out of regular parametrised n-manifolds. More precisely, this means
that for any point p ∈ M , it is possible to find ε > 0 and a regular parametrisation
r : U → Rn+k of the subset M ∩ B(p, ε), for B(p, ε) = {y ∈ Rn+k | |y − p| < ε} the
open ball of radius ε around p. For a point p ∈ M , such a parametrisation is called a local
coordinate system near p.

Example 2.9: (a) The easiest example is if M ⊂ Rn is any open subset of Rn . Then
we can take U = M and r : U → Rn the identity map (in other words, r(x) = x for all
x ∈ U ). This shows us that M is a regular parametrised n-manifold. It includes the case
M = Rn , since Rn itself is an open subset of Rn (we never said that subsets have to be
proper subsets), thus showing that Rn is an n-dimensional manifold. (That “ought to” be
true, since n-dimensional manifolds are in some sense “modelled on” Rn .) Note that in
this example k = 0.

(b) Slightly more interesting than example (a), consider the case where M ⊂ Rn+k is
a linear subspace of dimension n. By definition, this means that M has a basis consisting
of n vectors, so let’s suppose that v1 , . . . , vn ∈ M is a basis. We can use this to define a
parametrisation of M , with map r : Rn → Rn+k : for any x ∈ Rn , define
$$r(x) = x^1 v_1 + x^2 v_2 + \dots + x^n v_n = \sum_{i=1}^n x^i v_i.$$

(Remember, xi is the ith coordinate of the vector x ∈ Rn for each i = 1, . . . , n.) You
should try on your own to verify that this is a regular parametrisation of M ; some details
will be discussed in the lecture video.

(c) We can generalise explicit surfaces (see Example 1.2(b) in Chapter 1.1) to get
explicit n-manifolds M ⊂ Rn+k for any non-negative integers n and k: Suppose that
f : U → Rk is a differentiable map, for U ⊂ Rn some domain. This defines a subset
M ⊂ Rn+k , by letting

M := {y ∈ Rn+k | (y 1 , . . . , y n ) ∈ U and y n+j = f j (y 1 , . . . , y n ) for all j = 1, . . . , k}.

This subset is just the graph of the function f , as is clear from writing it in the more
compact notation
M = {(x, f (x)) | x ∈ U }.
(With this notation, it should be kept in mind that x ∈ Rn and f (x) ∈ Rk for any x ∈ U ,
so (x, f (x)) is technically an element of the Cartesian product Rn × Rk , which we are
silently identifying with Rn+k .) You should try, yourself, to use the function f to write
down a formula for a parametrisation r : U → Rn+k of M ; and then verify that this
parametrisation is regular. We can discuss this more in lecture videos if there’s a desire
to do so.

(d) We can also generalise level surfaces (Example 1.2(d)); for now, we’ll just discuss
how this is done for k = 1 (k > 1 will be discussed later in Chapter 2). Let g : W → R
be a differentiable function, for W ⊂ Rn+1 an open subset, and let c ∈ R be a value such
that the subset
M = g −1 (c) := {y ∈ W | g(y) = c}
is non-empty. Suppose, moreover, that the gradient ∇g(y) is non-zero for all y ∈ M .
Then it is a theorem that M ⊂ Rn+1 is a (general, regular) n-dimensional manifold.

We list two particular cases of Example 2.9(d) as theorems of their own:

Theorem 2.10 (The n-Sphere): For any integer n ≥ 0 and any real number R > 0,
the n-sphere of radius R, $S_R^n \subset \mathbb{R}^{n+1}$, defined as
$$S_R^n := \{y \in \mathbb{R}^{n+1} \mid |y| = R\},$$
is an n-dimensional manifold.

Proof: Define g : Rn+1 → R by letting
$$g(y) = |y|^2 = \sum_{i=1}^{n+1} (y^i)^2$$
for any y ∈ Rn+1 . This function has gradient given by ∇g(y) = 2y for all y ∈ Rn+1 , which
is clearly non-zero for all vectors y ≠ 0. Hence, since $S_R^n = g^{-1}(R^2)$, and this set does not
contain the 0-vector of Rn+1 , it follows from the theorem mentioned in Example 2.9(d)
that $S_R^n$ is an n-manifold. (QED)

Theorem 2.11 (The Special Linear Group): For any integer n ≥ 1, let SL(n, R)
denote the set of real n × n matrices with determinant 1, which we consider as a subset
of $\mathbb{R}^{n^2}$ by identifying n × n matrices with vectors with n2 coordinates. Then SL(n, R) is a
manifold of dimension n2 − 1 (it is also a group under the operation of matrix multiplication,
called the special linear group).

Proof: Identifying n × n matrices with vectors in $\mathbb{R}^{n^2}$, we have the map $\det : \mathbb{R}^{n^2} \to \mathbb{R}$
given by taking the determinant of a matrix. Using Question 1 from the Chapter 2.1 TUT
Problems, you can derive the following formula for the derivative $D\det(m) : \mathbb{R}^{n^2} \to \mathbb{R}$
at a point $m \in \mathbb{R}^{n^2}$. We think of m as an n × n matrix with column vectors
m1 , . . . , mn ∈ Rn . Then, using the same notation for any other vector $x \in \mathbb{R}^{n^2}$, we can
show (see TUT problems for this chapter) that
$$D\det(m)(x) = \sum_{i=1}^n \det(m_1, \dots, m_{i-1}, x_i, m_{i+1}, \dots, m_n),$$
where $(m_1, \dots, m_{i-1}, x_i, m_{i+1}, \dots, m_n)$ denotes the n × n matrix with the indicated column
vectors. In particular, if m ∈ SL(n, R) then
$$D\det(m)(m) = \sum_{i=1}^n \det(m) = n.$$

But in general the gradient of an R-valued function g satisfies ⟨∇g(p), v⟩ = Dg(p)(v), so
this means that
$$\langle \nabla\det(m), m \rangle = n \neq 0,$$
which implies ∇det(m) ≠ 0 for all m ∈ SL(n, R). Thus, since SL(n, R) is a level set,
$$SL(n, \mathbb{R}) = \det{}^{-1}(1),$$
the theorem mentioned in Example 2.9(d) tells us that SL(n, R) is an (n2 − 1)-manifold.
(QED)
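Aside (our own numerical sanity check of the derivative-of-det formula, with randomly chosen matrices; not part of the proof):

```python
# Compare (det(m + t*x) - det(m)) / t with the sum over column replacements.
import numpy as np

rng = np.random.default_rng(2)
n = 4
m, x = rng.normal(size=(2, n, n))

def column_replacement_sum(m, x):
    total = 0.0
    for i in range(n):
        a = m.copy()
        a[:, i] = x[:, i]          # replace the i-th column of m by that of x
        total += np.linalg.det(a)
    return total

t = 1e-7
numeric = (np.linalg.det(m + t * x) - np.linalg.det(m)) / t
print(numeric, column_replacement_sum(m, x))   # agree up to O(t)
```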

2.3: Implicit Manifolds
In this section, we’re going to state and prove the theorem mentioned in Example
2.9(d), in its general form. This theorem gives us a large and important class of exam-
ples, called implicit manifolds (they generalise implicit surfaces).

Linear Algebra Example 2.12: Before talking about the general theory of implicit
manifolds, let’s consider the “baby example” from linear algebra. Let L : Rn+k → Rk be
a linear map, and a ∈ Rk a fixed vector. Then the level set

M := L−1 (a) = {y ∈ Rn+k | L(y) = a}

is just the set of solutions to a system of k linear equations depending on n + k variables.


We know how to describe these solutions from Linear Algebra. Suppose the set of solutions
is non-empty. Then pick SOME solution y1 ∈ M (i.e. L(y1 ) = a). The first important
observation is that any OTHER solution y2 ∈ M can be written as y2 = y1 + y0 , for some

y0 ∈ Ker(L) := {y | L(y) = 0}.

This fact can be easily proven: if y2 ∈ M then L(y2 ) = a and hence

L(y2 − y1 ) = L(y2 ) − L(y1 ) = a − a = 0.

This shows that y0 := y2 − y1 ∈ Ker(L), and clearly with y0 thus defined we have
y2 = y1 + y0 , as claimed.

Now, from linear algebra we know that Ker(L) ⊂ Rn+k is a linear subspace. (You
should prove this, as an exercise, if you’re unsure.) Furthermore, its dimension is given
by the following formula, known as the “Rank-Nullity Theorem” of linear algebra (ditto
for this):
dim(Ker(L)) = (n + k) − rank(L),
where the integer rank(L) is given by any of the following three (equivalent) definitions:

rank(L) = number of linearly independent row vectors of L


= number of linearly independent column vectors of L
= dim(L(Rn+k )).

Using any of these definitions, it follows that rank(L) ≤ k. The case where equality
holds is the main one we’re interested in (we often say L has maximal rank ), and we get:

Theorem 2.13 (“Baby” Implicit Manifold Theorem): Let L : Rn+k → Rk be a


linear map, and suppose that a ∈ Rk is a choice of vector for which the level set

M = L−1 (a) = {y ∈ Rn+k | L(y) = a}

is non-empty. If L has maximal rank, i.e. rank(L) = k, then M ⊂ Rn+k is an n-dimensional manifold.

Proof: Exercise, using the facts from Example 2.12 and adapting (slightly) the method
from Example 2.9(b).

Theorem 2.14 (Implicit Manifold Theorem): Let V ⊂ Rn+k be an open subset and
g : V → Rk a differentiable map, and suppose that a ∈ Rk is a choice of vector for which
the level set
M = g −1 (a) = {y ∈ Rn+k | g(y) = a}
is non-empty. If p ∈ M is a point where the derivative Dg(p) : Rn+k → Rk has maximal
rank, then M has a regular parametrisation as an n-manifold near p. If Dg(p) has maximal
rank for all p ∈ M , then M ⊂ Rn+k is an n-dimensional manifold.

Idea of the Proof: The philosophy of this theorem, as for the Inverse Function Theorem,
is that we can “carry over” the result from the linear algebra version above: since the
derivative Dg(p) is a “good linear approximation” of g near p, what holds for this linear
approximation should be transferable to g, at least “near p”. In fact, this works, and we
use the Inverse Function Theorem to make the proof go through.

To get a feel for what’s actually going on in the proof given below (and in exam-
ples where we apply it), it helps to think of it in the following way: the level set is
defined by k equations (which may now be non-linear) depending on n + k variables.
We can prove the result if we manage, in each of these k equations, to make a different
one of the variables the subject of that equation. For example, we could try to make
y n+1 the subject of the first equation g 1 (y) = a1 (this means rewriting the equation as
y n+1 = f 1 (y 1 , . . . , y n , y n+2 , . . . , y n+k ) for some real-valued function f 1 ); then try to make
y n+2 the subject of the second equation g 2 (y) = a2 ; etc., until we get to making y n+k the
subject of the last equation g k (y) = ak . If this can be done, in some neighbourhood of
p, then it means we have succeeded in writing M (locally) as the graph of the function
f = (f 1 , . . . , f k ), and this gives us a regular parametrisation using the solution to Exam-
ple 2.9(c).

Of course, we can’t do all of this explicitly, since we don’t know the form of the func-
tion g. We only know that rank(Dg(p)) = k. So we will use this fact to build a certain
function that does the job for us (actually, its inverse does the job, using the Inverse
Function Theorem) and then we get the function f almost as if by magic.

It’s also clear that we won’t necessarily be able to make the last k variables into the
subjects of our equations near a given point p. (To see this, and get some more intuition
for the abstract proof, consider the case of an implicit surface in R3 , i.e. the case where
we have V ⊂ R3 open and g : V → R, so n = 2 and k = 1. Then, for well-known examples
like a cylinder, with g(x, y, z) = x2 + y 2 , or a sphere, with g(x, y, z) = x2 + y 2 + z 2 , there
are many points where it’s not possible to make z the subject of the defining equation.)
However, it’s enough to make ANY k different variables into the subjects of the equa-
tions. This only differs from making the last k variables the subject by a combination
of rotations of Rn+k needed to interchange the variables. Since such a rotation doesn’t
change the regularity of our parametrisation, we will make our lives easier by starting off
with the assumption (which does not limit the generality) that g and p are of the form
where a solution for the last k variables is possible. How this works will be clear below.
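Aside (our own concrete illustration of “making a variable the subject”, using the circle g(x, y) = x2 + y 2 = 1 near p = (0.6, 0.8); not part of the proof): since D2 g(p) = 1.6 ≠ 0, Newton’s method applied in the y-slot alone recovers the local graph y = f (x) whose existence the theorem guarantees.

```python
# Solve g(x, y) = x^2 + y^2 = 1 for y near p = (0.6, 0.8) by Newton iteration
# in y only; this numerically realises the local graph y = f(x).
import numpy as np

g = lambda x, y: x**2 + y**2

def solve_for_y(x, y_guess=0.8, c=1.0):
    y = y_guess
    for _ in range(20):                 # fixed iteration count keeps the sketch short
        y -= (g(x, y) - c) / (2 * y)    # 2*y = D_2 g(x, y), nonzero near p
    return y

for x in [0.5, 0.6, 0.7]:
    print(x, solve_for_y(x), np.sqrt(1 - x**2))   # matches the explicit graph
```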

Actual Proof: We use the definition of rank(Dg(p)) = k that tells us that the matrix of
Dg(p) has k linearly independent column vectors. Without loss of generality, we assume
that the last k columns are linearly independent (this is the assumption discussed above,
which will make it possible for us to solve for the last k variables). Since the ith column
of Dg(p), for 1 ≤ i ≤ n + k, is given by

Dg(p)(ei ) = Di g(p)

(in other words, the ith partial derivative of g at p), this means that the vectors

Dn+1 g(p), . . . , Dn+k g(p) ∈ Rk

are assumed to be linearly independent.

We define a function ϕ : V → Rn+k by letting

ϕ(y) := (y 1 , . . . , y n , g 1 (y), . . . , g k (y)), y = (y 1 , . . . , y n+k ) ∈ V.

Taking partial derivatives of the component functions, we see that this function is differ-
entiable and its derivative is given, in block form, by
$$D\varphi(y) = \begin{pmatrix} I_{n \times n} & 0_{n \times k} \\ (D_i g(y))_{i=1}^{n} & (D_j g(y))_{j=n+1}^{n+k} \end{pmatrix}, \quad y \in V.$$

(Note that the blocks on top each have n rows, the blocks on bottom each have k rows,
while each block on the left has n columns and each block on the right has k columns, so
that overall the size of the matrix is (n + k) × (n + k).) This shows that the determinant
is given by
$$\det(D\varphi(y)) = \det\big(D_{n+1} g(y) \cdots D_{n+k} g(y)\big),$$
which shows in particular that det(Dϕ(p)) ≠ 0, since Dn+1 g(p), . . . , Dn+k g(p) are linearly
independent and hence the k × k matrix having them as column vectors has non-zero
determinant (linear algebra).

Thus, the Inverse Function Theorem lets us conclude that ϕ is a local diffeomorphism
near p. We will assume that the open set V is taken to be small enough that ϕ : V → W
is a diffeomorphism for some open subset W ⊂ Rn+k containing ϕ(p). Let ψ : W → V be
its inverse, so ψ(ϕ(y)) = y for all y ∈ V and ϕ(ψ(w)) = w for all w ∈ W . Then define an
open subset U ⊂ Rn and a map f : U → Rk by letting

U = {x ∈ Rn | (x, a) ∈ W },

where we identify an element of the Cartesian product, (x, a) ∈ Rn × Rk , with an element


in Rn+k in the obvious way; and, letting π : Rn+k → Rk denote the projection onto the
last k coordinates (so π(y) = (y n+1 , . . . , y n+k ) for y ∈ Rn+k ), we define, for x ∈ U ,

f (x) = π(ψ(x, a)).

Finally, we verify that M is the graph of f near p, meaning that we need to show

M ∩ V = {(x, f (x)) | x ∈ U }.

To see this, let y ∈ M ∩ V . Then y ∈ M ⇒ g(y) = a, and y ∈ V means that ϕ(y) ∈ W
and y = ψ(ϕ(y)). But

ϕ(y) = (y 1 , . . . , y n , g(y)) = (y 1 , . . . , y n , a), (1)

which shows that (y 1 , . . . , y n ) ∈ U for all y ∈ M ∩ V . On the other hand, using the
definition we have

f (y 1 , . . . , y n ) = π(ψ(y 1 , . . . , y n , a))
= π(ψ(ϕ(y))), by (1)
= π(y), since ψ, ϕ are inverses.

This shows that (y n+1 , . . . , y n+k ) = f (y 1 , . . . , y n ) for all y ∈ M ∩ V , in other words we


have made the last k variables the subject, expressing M ∩ V as the graph of f .

On the other hand, if x ∈ U is any point, then we see that (x, f (x)) ∈ M ∩ V as
follows: Since (x, a) ∈ W ⇒ ψ(x, a) ∈ V . Writing y := ψ(x, a), then since ϕ and ψ are
inverses,

(x, a) = ϕ(ψ(x, a))


= ϕ(y)
= (y 1 , . . . , y n , g(y)).

This shows that g(y) = a, and so y ∈ M ; also, it shows that (y 1 , . . . , y n ) = x ∈ U . Since


(y n+1 , . . . , y n+k ) = f (x) by definition of y = ψ(x, a) and f (x) = π(ψ(x, a)), this shows us
that y = (x, f (x)), and hence (x, f (x)) ∈ M ∩ V . (QED)

Example 2.15: As an application of this theorem, giving a new family of manifolds of


arbitrary dimensions, we define, for any positive integer n ≥ 1, the n-dimensional torus
(or n-torus) T n ⊂ R2n . Define the function g : R2n → Rn by letting

g(x1 , y 1 , x2 , y 2 , . . . , xn , y n ) = ((x1 )2 + (y 1 )2 , (x2 )2 + (y 2 )2 , . . . , (xn )2 + (y n )2 )

for any point (x1 , y 1 , . . . , xn , y n ) ∈ R2n . Fix the vector a = (1, 1, . . . , 1) ∈ Rn , and define

T n = g −1 (a).

We claim that T n is an n-dimensional manifold.

Solution: To prove this, we just have to show that the hypotheses of Theorem 2.14 are
satisfied. The derivative of g is easily calculated (note that the size of the matrix is n
rows and 2n columns):
$$Dg(x^1, y^1, \dots, x^n, y^n) = \begin{pmatrix} 2x^1 & 2y^1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 2x^2 & 2y^2 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 2x^n & 2y^n \end{pmatrix}$$
This shows that the derivative has maximal rank (i.e. rank(Dg(p)) = n) at all points
p = (x1 , y 1 , . . . , xn , y n ) ∈ R2n for which we have (xi , y i ) ≠ (0, 0) for all i = 1, . . . , n. But
this is clearly the case for all points of T n , since

g(x1 , y 1 , . . . , xn , y n ) = a ⇔ (xi )2 + (y i )2 = 1 for all i = 1, . . . , n.

Hence, Dg(p) : R2n → Rn has maximal rank for all p ∈ T n , which by Theorem 2.14
implies that T n ⊂ R2n is an n-dimensional manifold.
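Aside (our own numerical confirmation at a sample point, with n = 3 and arbitrarily chosen angles; not part of the solution):

```python
# Build Dg at a point of T^n and confirm that its rank is n.
import numpy as np

n = 3
angles = np.array([0.3, 1.1, 2.5])            # each pair (x^i, y^i) on the unit circle
p = np.empty(2 * n)
p[0::2], p[1::2] = np.cos(angles), np.sin(angles)

Dg = np.zeros((n, 2 * n))
for i in range(n):
    Dg[i, 2 * i] = 2 * p[2 * i]               # derivative of (x^i)^2 + (y^i)^2 in x^i
    Dg[i, 2 * i + 1] = 2 * p[2 * i + 1]       # ... and in y^i

print(np.linalg.matrix_rank(Dg))              # prints 3 = n: maximal rank
```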

Note: From the defining equations of T n , it is a Cartesian product of n circles (1-spheres),

T n = S 1 × . . . × S 1.

We will see, later in this chapter, how to identify the 2-torus T 2 = S 1 × S 1 (note: T 2 ⊂ R4
by this description) with the more familiar “torus of revolution” which you know as a
surface in R3 . This is one example showing that we can have two n-manifolds that are
basically the same, even though they “live in” Euclidean spaces of different dimension.
In other words, the integer k in our definition of a manifold is not really fundamental
to the manifold M that we are actually interested in. In technical terms, k is called the
codimension and we have defined M as an embedded submanifold of Euclidean space, with
codimension k.

2.4: Tangent Spaces and Derivatives
In this section, we’ll see how the coordinate systems on a manifold allow us to define
tangent spaces and to make sense of the derivative of a map between two manifolds.

Before discussing all of that, we need to be able to talk about what happens if we
change parametrisation, or “change coordinates”, on our manifold. The key to this is the
following result, which we will not worry about proving:

Lemma 2.16 (Change of Coordinates): Let M ⊂ Rn+k be a (general, regular) n-
dimensional manifold, and suppose that rα : Uα → Rn+k , rβ : Uβ → Rn+k are two local
coordinate systems for M near p. Then the change of coordinates map
$$h_{\alpha\beta} := r_\alpha^{-1} \circ r_\beta : V_\beta \to V_\alpha$$
is a diffeomorphism, where $V_\beta = r_\beta^{-1}(r_\alpha(U_\alpha))$ and $V_\alpha = r_\alpha^{-1}(r_\beta(U_\beta))$ are open subsets of
Rn given by restricting the domains of rβ and rα , respectively.

Definition 2.17 (Tangent Space): For M ⊂ Rn+k a manifold and p ∈ M a point, the
tangent space of M at p, Tp M , is the set of tangent vectors of curves through p in M :
$$T_p M := \{\gamma'(0) \mid \gamma : (-\varepsilon, \varepsilon) \to M \text{ with } \gamma(0) = p\}.$$

Theorem 2.18: Let M ⊂ Rn+k be a manifold and p ∈ M a point. If r : U → Rn+k is a


local coordinate system for M near p, and q = r−1 (p) ∈ U , then

Tp M = Dr(q)(Rn ) = span{D1 r(q), . . . , Dn r(q)}.

In particular, Tp M is an n-dimensional linear subspace of Rn+k .

If M is an implicit manifold for some function g : V → Rk and a value a ∈ Rk (that


is, if M = g −1 (a) and g, a satisfy the hypotheses of Theorem 2.14), then

Tp M = Ker(Dg(p)) = {v ∈ Rn+k | Dg(p)(v) = 0}.

Proof: First, note that since Dr(q) : Rn → Rn+k is a linear map, its image is a linear
subspace of Rn+k and satisfies

Dr(q)(Rn ) = {Dr(q)(v) | v ∈ Rn }
= span{Dr(q)(e1 ), . . . , Dr(q)(en )}
= span{D1 r(q), . . . , Dn r(q)}.

Therefore, since the vectors D1 r(q), . . . , Dn r(q) ∈ Rn+k are always linearly independent
for a regular parametrisation r, this linear subspace is n-dimensional.

Second, we can show that this linear subspace is contained in Tp M with an argument
analogous to what was done in the lectures and TUT problems to prove part of Theorem
1.4 in Chapter 1.1. This means we let α1 , . . . , αn ∈ R be any real coefficients and show
that there is a curve γ = γα : (−ε, ε) → M with γ(0) = p and

$$\gamma'(0) = \alpha^1 D_1 r(q) + \dots + \alpha^n D_n r(q). \tag{2}$$

This curve is defined by letting h : (−ε, ε) → U be given by
$$h(t) = q + t\alpha = \begin{pmatrix} q^1 \\ \vdots \\ q^n \end{pmatrix} + t \begin{pmatrix} \alpha^1 \\ \vdots \\ \alpha^n \end{pmatrix},$$

and letting γ = r ◦ h : (−ε, ε) → M . Then γ(0) = r(h(0)) = r(q) = p, and its derivative
γ′(0) is shown to satisfy (2) using the Chain Rule.

The remainder of the proof will be done only in the case that M is an implicit manifold.
In fact, this is not a restriction since ANY manifold can be written, near a point p ∈ M ,
as an implicit manifold. This means that for a suitable function g : V → Rk , a ∈ Rk
and an open set V ⊂ Rn+k with p ∈ V , we have M ∩ V = g −1 (a) (the proof does not
involve any major new ideas, but is too “technical” to justify including it). So, suppose
M = g −1 (a) for some g : V → Rk and a ∈ Rk satisfying the hypotheses of Theorem 2.14.
We show that Tp M ⊂ Ker(Dg(p)) as follows: suppose γ : (−ε, ε) → M is a curve in M
through p, and define h = g ◦ γ : (−ε, ε) → Rk . Then h(t) = a for all t ∈ (−ε, ε) since
M = g −1 (a), so h′(0) = 0 (the 0-vector in Rk ). But by the Chain Rule,
$$h'(0) = (g \circ \gamma)'(0) = Dg(\gamma(0))(\gamma'(0)) = Dg(p)(\gamma'(0)).$$
This shows that γ′(0) ∈ Ker(Dg(p)), and hence Tp M ⊂ Ker(Dg(p)).

Combining the two inclusions that we have proven, we have

Dr(q)(Rn ) ⊂ Tp M ⊂ Ker(Dg(p)).

However, the sets on the left and the right are both linear subspaces of Rn+k , and they
both have dimension n. (We discussed the dimension of Dr(q)(Rn ) above, and the dimen-
sion of Ker(Dg(p)) is seen to be n using the Rank-Nullity Theorem of linear algebra, and
the fact that Dg(p) has rank k.) But this implies that the two linear subspaces are in fact
equal, and therefore Tp M must also equal both of them, which concludes the proof. (QED)
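Aside (our own numerical illustration of Theorem 2.18 for the sphere S 2 = g −1 (1) with g(y) = |y|2 ; the point p, the direction v and the curve are our choices): the velocity of a curve in S 2 through p is orthogonal to ∇g(p) = 2p, i.e. it lies in Ker(Dg(p)).

```python
# Velocity of a curve on the unit sphere lies in Ker(Dg(p)), i.e. is
# orthogonal to grad g(p) = 2p.
import numpy as np

p = np.array([0.0, 0.6, 0.8])                  # a point of S^2 (|p| = 1)
v = np.array([1.0, 0.0, 0.0])                  # unit vector with <p, v> = 0

gamma = lambda t: np.cos(t) * p + np.sin(t) * v   # stays on S^2, gamma(0) = p

t = 1e-6
velocity = (gamma(t) - gamma(-t)) / (2 * t)    # approximates gamma'(0) = v
print(np.dot(2 * p, velocity))                 # ~0: velocity in Ker(Dg(p))
```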

Definition 2.19 (Differentiable Functions): Let M ⊂ Rn+k be an n-manifold, and


f : M → R a real-valued function. For a point p ∈ M , we say that the function f is
differentiable at p iff for any system of local coordinates r : U → Rn+k for M near p, the
function f ◦ r : U → R is differentiable at q = r−1 (p) ∈ U . The function f is differentiable
iff it is differentiable at every point p ∈ M .

Note: We’ve defined differentiability of a function f : M → R by referring to ALL local


coordinate systems for M near a given point p. Since there are infinitely many such
local coordinate systems, this might make you worry about the possibility of ever figuring
out whether a given function is differentiable. Luckily, that’s not a problem, as a result of
the following Lemma, which also tells us how the tangent space is related to differentiable
functions.

Lemma 2.20: Let M ⊂ Rn+k be an n-manifold, f : M → R a real-valued function defined
on M , and p ∈ M a point.

(a) If rα : Uα → Rn+k and rβ : Uβ → Rn+k are two local coordinate systems near p,
then f ◦ rα is differentiable at $q_\alpha = r_\alpha^{-1}(p) \in U_\alpha$ if and only if f ◦ rβ is differentiable at
$q_\beta = r_\beta^{-1}(p) \in U_\beta$.

Define, for a parametrisation r : U → Rn+k of M and i = 1, . . . , n, the ith coordinate


function $x^i = x^i_r : M \to \mathbb{R}$ by letting

xi (p) = π i (r−1 (p)), p ∈ M,

where π i : Rn → R denotes projection onto the ith coordinate in Rn , and define the ith
partial derivative with respect to this parametrisation as

$$\frac{\partial f}{\partial x^i}(p) := D_i(f \circ r)(q), \qquad q = r^{-1}(p) \in U.$$
Then

(b) The coordinate functions xi : M → R are differentiable, and the ith partial
derivatives exist at a point p ∈ M if and only if f is differentiable at p. For two different
local coordinate systems rα , rβ near p, the partial derivatives of a differentiable function
are related by
$$\frac{\partial f}{\partial x_\beta^i}(p) = \sum_{j=1}^n \frac{\partial x_\alpha^j}{\partial x_\beta^i}(p)\, \frac{\partial f}{\partial x_\alpha^j}(p). \tag{3}$$

Proof: Suppose rα and rβ are two parametrisations near a point p. Then for Vα , Vβ ⊂ Rn
as in Lemma 2.16, we know that the change of coordinates map

$$h_{\alpha\beta} = r_\alpha^{-1} \circ r_\beta : V_\beta \to V_\alpha$$

is a diffeomorphism (a differentiable bijection with differentiable inverse map). Therefore,


if f : M → R is any real-valued function, then on Vβ we can write

$$f \circ r_\beta = f \circ r_\alpha \circ r_\alpha^{-1} \circ r_\beta = f \circ r_\alpha \circ h_{\alpha\beta}.$$

Thus, since qβ ∈ Vβ and qα ∈ Vα , the fact that hαβ is a diffeomorphism, together with the
Chain Rule, implies that f ◦ rβ is differentiable at qβ if and only if f ◦ rα is differentiable
at qα , proving (a).

To prove (b), we note first that xi ◦ r = π i : U → R, so since the projection map
π i : Rn → R is linear (and hence differentiable), this shows that xi is differentiable using
(a). Moreover, writing f ◦ rβ = f ◦ rα ◦ hαβ as above, then $x_\alpha^j \circ r_\beta = \pi^j \circ h_{\alpha\beta}$ for any
j = 1, . . . , n and hence
$$Dh_{\alpha\beta}(q_\beta)(e_i) = \sum_{j=1}^n \frac{\partial x_\alpha^j}{\partial x_\beta^i}(p)\, e_j.$$
Thus, using the Chain Rule for the derivative, we calculate
$$\begin{aligned}
\frac{\partial f}{\partial x_\beta^i}(p) &= D_i(f \circ r_\beta)(q_\beta) \\
&= D(f \circ r_\beta)(q_\beta)(e_i) \\
&= D(f \circ r_\alpha)(h_{\alpha\beta}(q_\beta))\big[Dh_{\alpha\beta}(q_\beta)(e_i)\big] \\
&= D(f \circ r_\alpha)(q_\alpha)\left[\sum_{j=1}^n \frac{\partial x_\alpha^j}{\partial x_\beta^i}(p)\, e_j\right] \\
&= \sum_{j=1}^n \frac{\partial x_\alpha^j}{\partial x_\beta^i}(p)\, \frac{\partial f}{\partial x_\alpha^j}(p). \qquad \text{(QED)}
\end{aligned}$$

2.2-2.4 TUT Problems

1. Let f : U → Rk be a differentiable function, for U ⊂ Rn an open subset.

(a) For the graph of f , Mf , defined as

Mf := {(x, f (x)) | x ∈ U } ⊂ Rn+k ,

prove that Mf is a differentiable n-manifold by giving a regular parametrisation r : U →


Rn+k of all points of Mf .

(b) Given a point p ∈ Mf , write p = (q, f (q)) for a unique point q ∈ U . Use Theorem
2.18 to show that the tangent space of Mf at p is given by

Tp Mf = {(v, Df (q)(v)) | v ∈ Rn }.

2. Let V1 , V2 ⊂ R3 be two open subsets and gi : Vi → R, i = 1, 2, two differentiable


functions with nowhere-vanishing gradients: ∇gi (x, y, z) ≠ (0, 0, 0) for (x, y, z) ∈ Vi .
Given a value a ∈ R, denote by

$$S_{i,a} = g_i^{-1}(a) = \{(x, y, z) \in V_i \mid g_i(x, y, z) = a\}$$

the implicit surface defined by the equation gi (x, y, z) = a.

(i) Using Theorem 2.14 (Implicit Manifold Theorem), prove that for a given value
a ∈ R, Si,a ⊂ R3 is either empty or a 2-dimensional manifold (i.e. a general regular
surface).

(ii) Let a, b ∈ R be two values such that S1,a and S2,b have nonempty intersection, and
denote their intersection by Γa,b ⊂ R3 :

Γa,b = S1,a ∩ S2,b = {(x, y, z) | g1 (x, y, z) = a and g2 (x, y, z) = b}.

Use Theorem 2.14 to prove that if the gradient vectors ∇g1 (x, y, z) and ∇g2 (x, y, z) are
linearly independent for all (x, y, z) ∈ Γa,b , then Γa,b is a 1-dimensional manifold (i.e.
a regular differentiable curve). Use Theorem 2.18 to show that in this case, for any
p = (x, y, z) ∈ Γa,b , we have

Tp Γa,b = ∇g1 (p)⊥ ∩ ∇g2 (p)⊥ .

(iii) Use the results of (i) and (ii) to prove that the subset

H = {(x, y, z) | x2 + y 2 − z 2 = 1}

is a 2-dimensional manifold, and that if P ⊂ R3 is any 2-dimensional plane through the


origin, then P ∩ H is either empty or a 1-dimensional manifold. (Hint: H is already
given as an implicit surface, $H = g_1^{-1}(1)$, for an obvious choice of differentiable function
g1 : R3 → R; P can also be written as an implicit surface, $P = g_2^{-1}(0)$, where

g2 (x, y, z) = n1 x + n2 y + n3 z,

for n = (n1 , n2 , n3 ) ∈ R3 any non-zero vector that is normal to the plane P .)

3. The Special Linear Group (SL(n, R)): For this problem and the next, let n ≥ 2
be a given integer and let Mn = Mn (R) denote the set of n × n matrices with real
entries. We identify Mn with $\mathbb{R}^{n^2}$ whenever needed, and we will also use the notation
m = (m1 , . . . , mn ) for any m ∈ Mn , where m1 , . . . , mn denote the columns of a matrix
m, which identifies Mn = (Rn )n (see Question 1 of Chapter 2.1 TUT problems).

(a) Show that the determinant map det : Mn → R is multilinear if we think of it as


a map det : (Rn )n → R. Hence, by Question 1 of the Chapter 2.1 TUT problems, if
m ∈ Mn is any matrix, then the derivative of det at m, D det(m) : Mn → R, is given, for
x ∈ Mn , by
$$D\det(m)(x) = \sum_{i=1}^n \det(m_1, \dots, m_{i-1}, x_i, m_{i+1}, \dots, m_n).$$

(b) Define
SL(n, R) = det−1 (1) = {m ∈ Mn | det(m) = 1}.
Use the result of (a), and Theorem 2.14, to show that this is a manifold of dimen-
sion n2 − 1. It is also a group, under the operation of matrix multiplication, since
det(mp) = det(m) det(p) for m, p ∈ Mn , from Linear Algebra. Moreover, the opera-
tions of taking inverses and matrix multiplication are differentiable, and hence we have
what is called a Lie group.

(c) Let e ∈ SL(n, R) denote the n × n identity matrix (which has 1s along the diagonal
entries and 0s elsewhere). Use Theorem 2.18 and the result of part (a) to show that the
tangent space of SL(n, R) at e consists of all trace-free matrices:

Te SL(n, R) = {x ∈ Mn | Tr(x) = 0}.


(Recall that the trace of an n × n matrix is the sum of its diagonal entries: $\operatorname{Tr}(x) = \sum_{i=1}^n x_i^i$
for a matrix $x = (x_i^j)$.)
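Aside (our own illustration, not a solution; it uses the matrix exponential, which is not developed in these notes): if Tr(x) = 0 then γ(t) = expm(tx) is a curve in SL(n, R) through e with γ′(0) = x, because det(expm(tx)) = exp(t Tr(x)) = 1.

```python
# A trace-free matrix exponentiates to a curve inside SL(n, R).
import numpy as np
from scipy.linalg import expm   # matrix exponential

x = np.array([[0.5, 1.0, 0.0],
              [2.0, -0.2, 0.3],
              [0.0, 1.0, -0.3]])        # trace: 0.5 - 0.2 - 0.3 = 0
print(np.trace(x))                       # ~0
for t in [0.5, 1.0, 2.0]:
    print(np.linalg.det(expm(t * x)))    # all ~1.0: expm(t*x) lies in SL(3, R)
```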

4. The Orthogonal Group (O(n)): With Mn as in Question 3, now we let Sn ⊂ Mn


denote the set of symmetric n × n matrices. In other words,

$$S_n = \{m \in M_n \mid m^T = m\} = \{m = (m_i^j) \mid m_i^j = m_j^i \text{ for all } i, j = 1, \dots, n\},$$

where mT denotes the transpose of a matrix m, which you should remember from Linear
Algebra (if not, please ask for further clarification). Since a symmetric matrix is com-
pletely determined by just its entries that are on the diagonal and above it (i.e. by the
entries mji with i ≥ j), we see that the dimension of Sn (as a vector space) is

$$\dim(S_n) = 1 + 2 + \dots + n = \frac{n(n+1)}{2},$$
and hence we can identify Sn = Rn(n+1)/2 if we want to.

(a)* Given a matrix m ∈ Mn , let g(m) = mmT , the (matrix) product of m with its
transpose. Show that this defines a map

g : Mn → Sn ,
(which, using the above remarks, we can think of as a map $g : \mathbb{R}^{n^2} \to \mathbb{R}^{n(n+1)/2}$). Moreover,
show that the derivative of this map is given, at any m ∈ Mn , by the linear map Dg(m) :
Mn → Sn defined, for any x ∈ Mn , by

Dg(m)(x) = mxT + xmT .
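Aside (our own finite-difference check of the claimed derivative, with randomly chosen m and x; not a solution to the problem):

```python
# Compare (g(m + t*x) - g(m)) / t with m x^T + x m^T for g(m) = m m^T.
import numpy as np

rng = np.random.default_rng(3)
m, x = rng.normal(size=(2, 3, 3))

g = lambda m: m @ m.T
t = 1e-7
numeric = (g(m + t * x) - g(m)) / t
print(np.allclose(numeric, m @ x.T + x @ m.T, atol=1e-5))   # True
```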

(b) Define the orthogonal group, O(n) ⊂ Mn , as

O(n) = g −1 (e) = {m ∈ Mn | mmT = e},

where e ∈ Sn is the n × n identity matrix (as in Question 3), which is clearly symmetric.
Use the result of (a) and Theorem 2.14 to prove that O(n) is a manifold of dimension

$$\dim(O(n)) = \frac{n(n-1)}{2}.$$
Show that O(n) is closed under matrix multiplication, using the fact that (mp)T = pT mT
for any m, p ∈ Mn (this is a fact from Linear Algebra; please ask if you don’t believe it).
Hence, O(n) is another example of a Lie group.

(c) Using the result of part (a), and Theorem 2.18, show that the tangent space of
O(n) at the identity e ∈ O(n) consists of all anti-symmetric n × n matrices:

Te O(n) = {x ∈ Mn | xT = −x}.
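Aside (our own illustration, in the same spirit as the SL(n, R) aside above): for anti-symmetric x, γ(t) = expm(tx) is a curve in O(n) through e with γ′(0) = x, since expm(x) expm(x)T = expm(x) expm(−x) = e.

```python
# An anti-symmetric matrix exponentiates to an orthogonal matrix.
import numpy as np
from scipy.linalg import expm

x = np.array([[0.0, 1.5, -0.4],
              [-1.5, 0.0, 0.7],
              [0.4, -0.7, 0.0]])            # x^T = -x
q = expm(x)
print(np.allclose(q @ q.T, np.eye(3)))      # True: q lies in O(3)
```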

(d)* By the above, O(2) is a manifold of dimension 1. Can you identify it with any
1-dimensional manifold we have already met? (Hint: Consider a 2 × 2 matrix that rotates
R2 about the origin. What form does it have, and why should it be in O(2)? On the
other hand, consider a matrix that reflects R2 (say about the horizontal axis). Is this in
O(2)? Is it in the family of matrices describing rotations?)

2.5: Differentiable Maps Between Manifolds
Definition/Lemma 2.21 (Differentiable Maps): Let M ⊂ Rn+k be an n-manifold,
N ⊂ Rm+l an m-manifold, ϕ : M → N a map between them, and p ∈ M a point. Then
the following are equivalent, and if any one of them holds we say that ϕ is differentiable
at p (as usual, we say ϕ is differentiable iff it is differentiable at all points p ∈ M ):

(a) For some local coordinate system r = rM : U → Rn+k for M near p, the function
ϕ ◦ r : U → Rm+l is differentiable at the point q = r−1 (p) ∈ U ;

(b) For each j = 1, . . . , m + l, and π j : Rm+l → R the jth coordinate function, the
real-valued function π j ◦ ϕ : M → R is differentiable at p (in the sense of Definition 2.19);

(c) For some choices of local coordinate systems r = rM : U → Rn+k for M near p,
and s = sN : V → Rm+l for N near ϕ(p), the map

ϕsr := s−1 ◦ ϕ ◦ r : U → V

is differentiable at q = r−1 (p) ∈ U .

We will not go through the proof, since it is very similar to the proof of Lemma 2.20.
Note that Lemma 2.20 immediately implies that the definition is independent of which
choices of local coordinate systems one makes in either (a) or (c).

Of course, we expect that a differentiable map ϕ : M → N should have a derivative


of some sort, and hopefully you’d guess that its derivative, at each point p ∈ M , ought
to be a linear map defined on the tangent space Tp M . This is absolutely correct, as the
following Definition/Lemma (once again, stated without proof) makes clear:

Definition/Lemma 2.22 (Derivatives): If M is an n-manifold, N an m-manifold, and


ϕ : M → N a map between them that is differentiable at a point p ∈ M , then the deriva-
tive of ϕ at p is the linear map Dϕ(p) : Tp M → Tϕ(p) N , defined in any of the following
equivalent ways:

(a) If r = rM : U → Rn+k is a local coordinate system for M near p, then for


v ∈ Tp M , write v = Dr(q)(α) = α1 D1 r(q) + . . . + αn Dn r(q), for α = (α1 , . . . , αn ) ∈ Rn
and q = r−1 (p) ∈ U (this can be done by Theorem 2.18). Then define

Dϕ(p)(v) = D(ϕ ◦ r)(q)(α).

(b) Given any differentiable curve γ : (−ε, ε) → M with γ(0) = p, define
$$D\varphi(p)(\gamma'(0)) := (\varphi \circ \gamma)'(0) \in T_{\varphi(p)} N.$$

(c) If r = rM : U → Rn+k is a local coordinate system for M near p, and s = sN : V →


Rm+l a local coordinate system for N near ϕ(p), then for v ∈ Tp M , write v = Dr(q)(α)
as in part (a), and define

$$D\varphi(p)(v) = Ds(\varphi_{sr}(q))\big(D\varphi_{sr}(q)(\alpha)\big),$$
with ϕsr : U → V as defined in part (c) of Def./Lemma 2.21.

Using these equivalent definitions, it can be proven that the derivatives of differentiable
maps between manifolds have all the usual properties, such as the Chain Rule, etc.
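Aside (our own numerical illustration of Definition/Lemma 2.22(b); the map and the curve are our choices): ϕ(x, y) = (x2 − y 2 , 2xy), i.e. z ↦ z 2 in complex notation, sends S 1 to S 1 , and pushing a tangent vector forward along a curve lands in the tangent space at the image point.

```python
# Push a tangent vector of S^1 forward along a curve, following Def. 2.22(b).
import numpy as np

phi = lambda v: np.array([v[0]**2 - v[1]**2, 2 * v[0] * v[1]])   # z -> z^2 on R^2

theta0 = 0.6
gamma = lambda t: np.array([np.cos(theta0 + t), np.sin(theta0 + t)])  # curve in S^1

t = 1e-6
push = (phi(gamma(t)) - phi(gamma(-t))) / (2 * t)   # approximates (phi o gamma)'(0)
q = phi(gamma(0))                                   # image point phi(p) in S^1
print(push, np.dot(push, q))                        # dot product ~0: push in T_q S^1
```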
