
Multivariable Calculus and Linear Algebra

A Concise Review

Author: Krisnajit Rajeshkhanna


Institute: South Brunswick High School
Date: May 31, 2022

A world of mathematical nuance.


Contents

Chapter 1  Parametric Equations and Polar Coordinates
1.1 Parametric Curves
1.2 Calculus with Parametric Equations
1.3 Polar Coordinates
1.4 Area and Length of Polar Curves
1.5 Conic Sections in Cartesian Coordinates

Chapter 2  Infinite Sequences and Series
2.1 Sequences
2.2 Series
2.3 The Integral Test and Estimate of Sums
2.4 The Comparison Tests
2.5 Alternating Series Test
2.6 Absolute Convergence, Root, and Ratio Tests
2.7 Strategy for Testing Series
2.8 Power Series
2.9 Representation of Functions as Power Series
2.10 Taylor and Maclaurin Series
2.11 Applications of Taylor Polynomials

Chapter 3  Vectors and the Geometry of Space
3.1 Three-Dimensional Coordinate Systems
3.2 Vectors
3.3 The Dot Product
3.4 The Cross Product
3.5 Equations of Lines and Planes
3.6 Cylinders and Quadric Surfaces
3.7 Cartesian, Cylindrical, and Spherical Coordinates

Chapter 4  Vector Functions
4.1 Vector Functions and Space Curves
4.2 Derivatives and Integrals of Vector Functions
4.3 Arc Length and Curvature
4.4 Motion in Space: Velocity and Acceleration
4.5 Differential Distances

Chapter 5  Partial Derivatives
5.1 Functions of Several Variables
5.2 Limits and Continuity
5.3 Partial Derivatives
5.4 Tangent Planes and Linear Approximations
5.5 The Chain Rule
5.6 Directional Derivatives and the Gradient Vector
5.7 Maximum and Minimum Values
5.8 Lagrange Multipliers
5.9 Gradient and Laplacian in Other Coordinates

Chapter 6  Multiple Integrals
6.1 Double Integrals Over Rectangles
6.2 Double Integrals over General Regions
6.3 Double Integrals in Polar Coordinates
6.4 Applications of Double Integrals
6.5 Surface Area
6.6 Triple Integrals
6.7 Triple Integrals in Other Coordinates
6.8 Change of Variables in Multiple Integrals

Chapter 7  Vector Calculus
7.1 Vector Fields
7.2 Line Integrals
7.3 The Fundamental Theorem for Line Integrals
7.4 Green's Theorem
7.5 Curl and Divergence
7.6 Parametric Surfaces and Their Areas
7.7 Surface Integrals
7.8 Stokes' Theorem
7.9 The Divergence Theorem
7.10 Summary

Chapter 8  Second-Order Differential Equations
8.1 Second-Order Linear Equations
8.2 Nonhomogeneous Linear Equations
8.3 Applications of Second-Order Differential Equations
8.4 Series Solutions

Chapter 9  MC Appendix
9.1 Numbers, Inequalities, and Absolute Values
9.2 Coordinate Geometry and Lines
9.3 Graphs of Second-Degree Equations
9.4 Trigonometry
9.5 Sigma Notation

Chapter 10  Vectors
10.1 The Geometry and Algebra of Vectors
10.2 Length and Angle: The Dot Product
10.3 Lines and Planes

Chapter 11  Systems of Linear Equations
11.1 Introduction to Systems of Linear Equations
11.2 Direct Methods for Solving Linear Systems
11.3 Spanning Sets and Linear Independence

Chapter 12  Matrices
12.1 Matrix Operations
12.2 Matrix Algebra
12.3 The Inverse of a Matrix
12.4 The LU Factorization
12.5 Subspaces, Basis, Dimension, and Rank
12.6 Introduction to Linear Transformations

Chapter 13  Eigenvalues and Eigenvectors
13.1 Introduction to Eigenvalues and Eigenvectors
13.2 Determinant
13.3 Eigenvalues and Eigenvectors of n × n Matrices
13.4 Similarity and Diagonalization

Chapter 14  Orthogonality
14.1 Orthogonality in Rn
14.2 Orthogonal Complements and Orthogonal Projections
14.3 The Gram-Schmidt Process and the QR Factorization
14.4 Orthogonal Diagonalization of Symmetric Matrices

Chapter 15  Vector Spaces
15.1 Vector Spaces and Subspaces
15.2 Linear Independence, Basis, and Dimension
15.3 Change of Basis
15.4 Linear Transformations
15.5 The Kernel and Range of a Linear Transformation
15.6 The Matrix of a Linear Transformation

Chapter 16  Distance and Approximation
16.1 Inner Product Spaces
16.2 Norms and Distance Functions
16.3 Least Squares Approximation
16.4 The Singular Value Decomposition
16.5 Applications

Chapter 17  LA Appendix
17.1 Mathematical Notation and Methods of Proof
17.2 Mathematical Induction
17.3 Complex Numbers
17.4 Polynomials
Chapter 1 Parametric Equations and Polar Coordinates

1.1 Parametric Curves


If x and y are both given as functions of a third variable t (known as the parameter) then:

x = x(t) and y = y(t)

Parametric Curve: The curve formed by plotting the points (x, y) = (x(t), y(t)).

Ex: x(t) = cos(t) and y(t) = sin(t) for 0 ≤ t ≤ 2π traces out the unit circle.

1.2 Calculus with Parametric Equations


Parametric First Derivative: Consider the parameterizations x = x(t), y = y(t), and y = y(x). Obtaining dy/dx is as follows:

y(t) = y(x(t))

d/dt [y(t)] = d/dt [y(x(t))]

dy/dt = (dy/dx) · (dx/dt)

dy/dx = (dy/dt) / (dx/dt), where dx/dt ≠ 0   (1.1)

Parametric Second Derivative: Applying the d/dx operator to dy/dx gives the second parameterized derivative, via the chain rule once again:

d²y/dx² = d/dx (dy/dx) = d/dt (dy/dx) · dt/dx

d²y/dx² = [d/dt (dy/dx)] / (dx/dt)   (1.2)

Ex: Let x(t) = 3t² + 1 and y(t) = 3t² + 5t. Obtain d²y/dx²:

dx/dt = 6t and dy/dt = 6t + 5

dy/dx = (dy/dt)/(dx/dt) = (6t + 5)/(6t) = 1 + 5/(6t)

d²y/dx² = [d/dt (dy/dx)]/(dx/dt) = (−5/(6t²))/(6t) = −5/(36t³)
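As a quick sanity check, the same derivatives can be computed with SymPy (an illustration, not part of the original notes; assumes SymPy is installed):

import sympy as sp

t = sp.symbols('t')
x = 3*t**2 + 1
y = 3*t**2 + 5*t

dydx = sp.diff(y, t) / sp.diff(x, t)           # dy/dx = (dy/dt)/(dx/dt), Equation 1.1
d2ydx2 = sp.diff(dydx, t) / sp.diff(x, t)      # Equation 1.2

print(sp.simplify(dydx))      # (6*t + 5)/(6*t), i.e. 1 + 5/(6t)
print(sp.simplify(d2ydx2))    # -5/(36*t**3)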
Area: Consider the function y = y(x) along a ≤ x ≤ b:

A = ∫_a^b y(x) dx

Now the parameterizations x = x(t) and y = y(t), where α ≤ t ≤ β, make the area the following:

A = ∫_α^β y(x(t)) dx · (dt/dt)

A = ∫_α^β y(x(t)) · (dx/dt) dt

A = ∫_α^β y(x(t)) · x′(t) dt   (1.3)

Arc Length: Consider the function y = y(x) along a ≤ x ≤ b. The infinitesimal change in position is as follows:

ds = √(dx² + dy²)

To obtain the length of the curve, we integrate these tiny bits of position from a ≤ x ≤ b:

L = ∫_a^b ds

L = ∫_a^b √(dx² + dy²)

L = ∫_a^b √(1 + (dy/dx)²) dx   (1.4)

Now the parameterizations x = x(t) and y = y(t), where α ≤ t ≤ β, make the length the following:

L = ∫_α^β √(dx² + dy²) · (dt/dt)

L = ∫_α^β √(dx² + dy²) · (1/dt) dt

L = ∫_α^β √((1/dt²)(dx² + dy²)) dt

L = ∫_α^β √((dx/dt)² + (dy/dt)²) dt   (1.5)

Ex: Find the arc length of a circle given by x(t) = r cos(t) and y(t) = r sin(t), where r is a constant radius:

L = ∫_0^{2π} √((dx/dt)² + (dy/dt)²) dt = ∫_0^{2π} √(r² sin²t + r² cos²t) dt = ∫_0^{2π} r dt = 2πr

Surface Area: When a curve y = y(x), parameterized as x = x(t) and y = y(t), is rotated about the x-axis, the surface area is the following:

S = ∫_α^β 2πy √((dx/dt)² + (dy/dt)²) dt   (1.6)

When the curve is instead rotated about the y-axis, the surface area is the following:

S = ∫_α^β 2πx √((dx/dt)² + (dy/dt)²) dt   (1.7)

Ex: Find the surface area of a sphere given by x(t) = r cos(t) and y(t) = r sin(t), where r is a constant radius:

S = ∫_0^π 2πy √((dx/dt)² + (dy/dt)²) dt = ∫_0^π 2π r sin t √(r²(sin²t + cos²t)) dt = ∫_0^π 2πr² sin t dt = 4πr²

1.3 Polar Coordinates


Polar Coordinates: A transformation from the coordinates (x, y) to (r, θ) where r is the radial distance
from the origin, and θ is the angle formed from the x-axis to the radial position:
Cartesian to Polar Coordinates: x = r cos θ & y = r sin θ (1.8)

p y
Polar to Cartesian Coordinates: r = x2 + y 2 & θ = arctan ( ) (1.9)
x

Consider the polar function r = r(θ). Obtaining dy/dx is as follows:

x(θ) = r(θ) cos θ & y(θ) = r(θ) sin θ

dx/dθ = (dr/dθ) cos θ − r(θ) sin θ & dy/dθ = (dr/dθ) sin θ + r(θ) cos θ

dy/dx ≡ (dy/dθ)/(dx/dθ) = (r′ sin θ + r cos θ)/(r′ cos θ − r sin θ)   (1.10)

Ex: Obtain dy/dx given r(θ) = 1 + sin(θ) when θ = π/3:

x(θ) = (1 + sin θ)(cos θ) & y(θ) = (1 + sin θ)(sin θ)

x(θ) = cos θ + sin θ cos θ & y(θ) = sin θ + sin²θ

dx/dθ = −sin θ + cos 2θ & dy/dθ = cos θ + sin 2θ

dy/dx = (cos θ + sin 2θ)/(cos 2θ − sin θ)

At θ = π/3: dy/dx = (1 + √3)/(−1 − √3) = −1

1.4 Area and Length of Polar Curves


Area of Polar Functions: Beginning with the area of a circular sector:


A = (1/2) r² θ

Now let r be a function of θ, defined as r = r(θ). The area of such a generalized polar function is as follows:

dA = (1/2) (r(θ))² dθ

A = ∫_α^β (1/2) (r(θ))² dθ   (1.11)

Arc Length of a Polar Function: Looking back at Equation 1.5, if θ is the parameter, the length of the curve is as follows:

L = ∫_α^β √((dx/dθ)² + (dy/dθ)²) dθ   (1.12)

Rearranging to obtain the length in terms of r and θ:

dx/dθ = (dr/dθ) cos θ − r(θ) sin θ & dy/dθ = (dr/dθ) sin θ + r(θ) cos θ

(dx/dθ)² + (dy/dθ)² = (dr/dθ)²(cos²θ + sin²θ) + r²(sin²θ + cos²θ)   (the cross terms ±2r(dr/dθ) sin θ cos θ cancel)

(dx/dθ)² + (dy/dθ)² = (dr/dθ)² + r²

L = ∫_α^β √(r² + (dr/dθ)²) dθ   (1.13)

Ex: Find the area and length enclosed by one loop of the four-leaved rose r = cos(2θ):

A = ∫_{−π/4}^{π/4} (1/2) cos²(2θ) dθ = ∫_0^{π/4} (1/2)(1 + cos 4θ) dθ = π/8

L = ∫_{−π/4}^{π/4} √(cos²(2θ) + 4 sin²(2θ)) dθ = (1/2)[E(2θ | −3)]_{−π/4}^{π/4} = E(π/2 | −3) ≈ 2.42

(Note: E(φ | m) here is the incomplete elliptic integral of the second kind; E(π/2 | −3) is the complete elliptic integral with parameter m = −3.)
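A quick numerical check of both results (illustrative only, not from the original notes; assumes NumPy and SciPy are available):

import numpy as np
from scipy.integrate import quad

area, _ = quad(lambda th: 0.5 * np.cos(2*th)**2, -np.pi/4, np.pi/4)
length, _ = quad(lambda th: np.sqrt(np.cos(2*th)**2 + 4*np.sin(2*th)**2),
                 -np.pi/4, np.pi/4)

print(area, np.pi/8)   # both ~0.3927
print(length)          # ~2.42, matching the elliptic-integral value above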


1.5 Conic Sections in Cartesian Coordinates


Parabola: An equation of the parabola with vertex at (0, 0), focus (0, p), and directrix y = −p is given by the following:

x² = 4py   (1.14)

Interchanging x and y, changing the focus to (p, 0) and the directrix to x = −p, the equation is given by the following:

y² = 4px   (1.15)

Ellipse: An equation of the ellipse with vertices (±a, 0), foci on the x-axis at the points (±c, 0), where c² = a² − b², is given by the following:

x²/a² + y²/b² = 1, where a ≥ b > 0   (1.16)

Interchanging x and y moves the vertices to (0, ±a) and the foci to (0, ±c), where c² = a² − b², giving the following:

x²/b² + y²/a² = 1, where a ≥ b > 0   (1.17)

Hyperbola: An equation of the hyperbola with vertices (±a, 0), foci (±c, 0) where c² = a² + b², and asymptotes y = ±(b/a)x, is given by the following:

x²/a² − y²/b² = 1   (1.18)

Interchanging x and y, changing the vertices to (0, ±a) and the foci to (0, ±c) where c² = a² + b², with asymptotes y = ±(a/b)x, gives the following:

y²/a² − x²/b² = 1   (1.19)

Ex: Find the foci, vertices, and asymptotes of ax² − by² = r (with a, b, r > 0):

Standard form: x²/(√(r/a))² − y²/(√(r/b))² = 1

Foci: (±√(r/a + r/b), 0) = (±√(r(a + b)/(ab)), 0)

Vertices: (±√(r/a), 0)

Asymptotes: y = ±√(a/b) x

Chapter 2 Infinite Sequences and Series

2.1 Sequences
Sequence: Can be thought of as a list of numbers written in a definite order

a1 , a2 , a3 , · · · , an , · · ·

Limit of a Sequence: If L exists, the sequence an is convergent. If L does not exist, the sequence is
divergent.

lim_{n→∞} aₙ = L, or aₙ → L as n → ∞   (2.1)

Squeeze Theorem Revisited: Analogous to basic limits and derivatives, when it comes to sequences, if
two known sequences converge to L, then the sequence in question must also converge to L

If: aₙ ≤ bₙ ≤ cₙ and lim_{n→∞} aₙ = lim_{n→∞} cₙ = L

Then: lim_{n→∞} bₙ = L.

Increasing, Decreasing, and Monotonic Sequences:


1. A sequence aₙ is defined to be increasing if aₙ < aₙ₊₁ for all n ≥ 1, that is, a₁ < a₂ < a₃ < ⋯
2. A sequence aₙ is defined to be decreasing if aₙ > aₙ₊₁ for all n ≥ 1, that is, a₁ > a₂ > a₃ > ⋯
3. A sequence is defined to be monotonic if it is either increasing or decreasing

Boundedness of Sequences:
1. A sequence an is bounded above if there is a number M such that
an ≤ M for all n ≥ 1
2. A sequence an is bounded below if there is a number m such that
m ≤ an for all n ≥ 1
3. If it is bounded above and below, then an is a bounded sequence
4. Every bounded, monotonic sequence is convergent

2.2 Series
Series: The sum of all terms in a sequence

S = Σ_{n=1}^∞ aₙ = a₁ + a₂ + a₃ + ⋯ + aₙ + ⋯   (2.2)

The partial sum of a series is denoted by the following:

Sₙ = Σ_{i=1}^n aᵢ = a₁ + a₂ + ⋯ + aₙ   (2.3)

Convergence and Divergence of Series:

1. If the sequence of partial sums {Sₙ} is convergent and lim_{n→∞} Sₙ = S exists as a real number, then the series Σ aₙ is convergent.
2. If the sequence {Sₙ} is divergent, then the series Σ aₙ is divergent.

Geometric Series: Let's consider the following series, known as the geometric series:

S = a + ar + ar² + ar³ + ⋯ + arⁿ⁻¹ + ⋯ = Σ_{n=1}^∞ arⁿ⁻¹, where a ≠ 0

1. If r = 1, then we have:

Sₙ = a + a + ⋯ + a = na → ±∞

Since lim_{n→∞} Sₙ doesn't exist, the geometric series diverges in this case.

2. If r ≠ 1, then we have:

Sₙ = a + ar + ar² + ar³ + ⋯ + arⁿ⁻¹
rSₙ = ar + ar² + ar³ + ⋯ + arⁿ⁻¹ + arⁿ
Sₙ − rSₙ = a − arⁿ

Sₙ = a(1 − rⁿ)/(1 − r)   (2.4)

If we look at the convergence condition |r| < 1:

lim_{n→∞} Sₙ = lim_{n→∞} a(1 − rⁿ)/(1 − r) = a/(1 − r) − a/(1 − r) · lim_{n→∞} rⁿ = a/(1 − r)

Therefore, our geometric series evaluates to the following, given the convergence condition |r| < 1:

S = Σ_{n=1}^∞ arⁿ⁻¹ = a + ar + ar² + ⋯ = a/(1 − r)   (2.5)

Ex. Is the series Σ_{n=1}^∞ 2²ⁿ · 3¹⁻ⁿ convergent or divergent?

Σ_{n=1}^∞ (2²)ⁿ · 3 · 3⁻ⁿ = Σ_{n=1}^∞ 3 · (4/3)ⁿ = Σ_{n=1}^∞ 4 · (4/3)ⁿ⁻¹

Notice this is a geometric series with a = 4 and r = 4/3. Because the convergence condition for geometric series, |r| < 1, does not hold, the series is divergent.
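A tiny numerical illustration of Equations 2.4 and 2.5 for a convergent case (not from the original notes; plain Python, with a = 4 and r = 0.5 chosen only as an example):

# Partial sums of a geometric series versus the closed forms a(1 - r**n)/(1 - r)
# and a/(1 - r); here |r| < 1, so the series converges to a/(1 - r) = 8.
a, r = 4.0, 0.5
partial = 0.0
for n in range(1, 21):
    partial += a * r**(n - 1)

closed_20 = a * (1 - r**20) / (1 - r)
print(partial, closed_20, a / (1 - r))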


Test for Divergence: If lim_{n→∞} aₙ does not exist or if lim_{n→∞} aₙ ≠ 0, then the series Σ_{n=1}^∞ aₙ is divergent.

Ex. Show that the series Σ_{n=1}^∞ n²/(5n² + 4) diverges.

lim_{n→∞} aₙ = lim_{n→∞} n²/(5n² + 4) = lim_{n→∞} 1/(5 + 4/n²) = 1/5 ≠ 0

2.3 The Integral Test and Estimate of Sums


Integral Test: Suppose f is a continuous, positive, decreasing function on [1, ∞) with aₙ = f(n). Comparing Riemann sums of f with the terms of the series gives

a₂ + a₃ + ⋯ + aₙ ⩽ ∫_1^n f(x) dx ⩽ a₁ + a₂ + ⋯ + aₙ₋₁

(i) If ∫_1^∞ f(x) dx is convergent, then the left inequality gives

Σ_{i=2}^n aᵢ ⩽ ∫_1^n f(x) dx ⩽ ∫_1^∞ f(x) dx

since f(x) ⩾ 0. Therefore

sₙ = a₁ + Σ_{i=2}^n aᵢ ⩽ a₁ + ∫_1^∞ f(x) dx = M, say.

Since sₙ ⩽ M for all n, the sequence {sₙ} is bounded above. Also

sₙ₊₁ = sₙ + aₙ₊₁ ⩾ sₙ

since aₙ₊₁ = f(n + 1) ⩾ 0. Thus {sₙ} is an increasing bounded sequence and so it is convergent by the Monotonic Sequence Theorem. This means that Σ aₙ is convergent.

(ii) If ∫_1^∞ f(x) dx is divergent, then ∫_1^n f(x) dx → ∞ as n → ∞ because f(x) ⩾ 0. But the right inequality gives

∫_1^n f(x) dx ⩽ Σ_{i=1}^{n−1} aᵢ = sₙ₋₁

and so sₙ₋₁ → ∞. This implies that sₙ → ∞ and so Σ aₙ diverges.

Thus our test for series consists of the following:

(i) If ∫_1^∞ f(x) dx is convergent, then Σ_{n=1}^∞ aₙ is convergent.
(ii) If ∫_1^∞ f(x) dx is divergent, then Σ_{n=1}^∞ aₙ is divergent.


Remainder Theorem: Where Rₙ is the remainder,

∫_{n+1}^∞ f(x) dx ⩽ Rₙ ⩽ ∫_n^∞ f(x) dx

Ex. Test the series Σ_{n=1}^∞ 1/(1 + n²) for convergence or divergence.

∫_1^∞ 1/(1 + x²) dx = [tan⁻¹(x)]_1^∞ = π/2 − π/4 = π/4 ⟶ convergent

P-Series: The p-series Σ_{n=1}^∞ 1/nᵖ is convergent if p > 1 and divergent if p ≤ 1.

Ex. For what values of p is the series Σ_{n=1}^∞ 1/nᵖ convergent or divergent?

∫_1^∞ 1/xᵖ dx = [x^(1−p)/(1 − p)]_1^∞ ⟶ Conv: p > 1 & Div: p ≤ 1

2.4 The Comparison Tests


Direct Comparison Test: Suppose that Σ aₙ and Σ bₙ are series with positive terms.
1. If Σ bₙ is convergent and aₙ ≤ bₙ for all n, then Σ aₙ is also convergent.
2. If Σ bₙ is divergent and aₙ ≥ bₙ for all n, then Σ aₙ is also divergent.

Limit Comparison Test: Suppose that Σ aₙ and Σ bₙ are series with positive terms. If

lim_{n→∞} aₙ/bₙ = c

where c is a finite number and c > 0, then either both series converge or both diverge.

Ex. Determine whether the series Σ_{n=1}^∞ 5/(2n² + 4n + 3) converges or diverges.

Given: aₙ = 5/(2n² + 4n + 3) ⟶ Let: bₙ = 5/(2n²)

Through the p-series test, Σ bₙ converges since p = 2 > 1, so now using direct comparison between aₙ and bₙ:

Σ 5/(2n²) conv. & 5/(2n² + 4n + 3) ⩽ 5/(2n²) ⟹ Σ 5/(2n² + 4n + 3) conv.

2.5 Alternating Series Test


Alternating Series: A series whose terms are alternately positive and negative



Σ_{n=1}^∞ (−1)ⁿ⁻¹ aₙ = a₁ − a₂ + a₃ − a₄ + ⋯

Alternating Series Test:

If the alternating series Σ_{n=1}^∞ (−1)ⁿ⁻¹ bₙ = b₁ − b₂ + b₃ − b₄ + ⋯, with bₙ > 0, satisfies

(i) bₙ₊₁ ⩽ bₙ for all n
(ii) lim_{n→∞} bₙ = 0

then the series is convergent.

2.6 Absolute Convergence, Root, and Ratio Tests


Absolute Convergence: Given a series Σ aₙ, if the series of absolute values Σ |aₙ| is convergent, then Σ aₙ is absolutely convergent.

Ex: Given the series Σ_{n=1}^∞ (−1)ⁿ⁻¹/n², determine its convergence.

Σ_{n=1}^∞ (−1)ⁿ⁻¹/n² = 1 − 1/2² + 1/3² − 1/4² + ⋯

converges by the Alternating Series Test, and since

Σ_{n=1}^∞ |(−1)ⁿ⁻¹/n²| = Σ_{n=1}^∞ 1/n² = 1 + 1/2² + 1/3² + 1/4² + ⋯

is a convergent p-series with p = 2, the series is absolutely convergent.

Conditional Convergence: Given a series Σ aₙ that is convergent but whose series of absolute values Σ |aₙ| is not, then Σ aₙ is conditionally convergent.

Ex: Given the series Σ_{n=1}^∞ (−1)ⁿ⁻¹/n, determine its convergence.

Σ_{n=1}^∞ (−1)ⁿ⁻¹/n = 1 − 1/2 + 1/3 − 1/4 + ⋯

converges via the Alternating Series Test, but its series of absolute values

Σ_{n=1}^∞ |(−1)ⁿ⁻¹/n| = Σ_{n=1}^∞ 1/n = 1 + 1/2 + 1/3 + 1/4 + ⋯

is divergent via the p-series test (p = 1). Therefore the series is conditionally convergent.

Ratio Test:
1. If lim_{n→∞} |aₙ₊₁/aₙ| = L < 1, then the series Σ_{n=1}^∞ aₙ is absolutely convergent.
2. If lim_{n→∞} |aₙ₊₁/aₙ| = L > 1 or lim_{n→∞} |aₙ₊₁/aₙ| = ∞, then the series Σ_{n=1}^∞ aₙ is divergent.
3. If lim_{n→∞} |aₙ₊₁/aₙ| = 1, the Ratio Test is inconclusive.

Root Test:
1. If lim_{n→∞} ⁿ√|aₙ| = L < 1, then the series Σ_{n=1}^∞ aₙ is absolutely convergent.
2. If lim_{n→∞} ⁿ√|aₙ| = L > 1 or lim_{n→∞} ⁿ√|aₙ| = ∞, then the series Σ_{n=1}^∞ aₙ is divergent.
3. If lim_{n→∞} ⁿ√|aₙ| = 1, the Root Test is inconclusive.

2.7 Strategy for Testing Series


1. Check whether the series is a p-series.
2. Check whether the series is a geometric series.
3. If the series is similar in form to a p-series or a geometric series, perform a comparison test.
4. If lim_{n→∞} aₙ ≠ 0, the series diverges by the Test for Divergence.
5. If the series is in the form of an alternating series, perform the Alternating Series Test.
6. If factorials or products are present, try the Ratio Test.
7. If aₙ is in the form (bₙ)ⁿ, use the Root Test.
8. If aₙ = f(n) where ∫_1^∞ f(x) dx is an easily evaluated integral, use the Integral Test.

2.8 Power Series


Power Series: A series of the form

P(x) = Σ_{n=0}^∞ cₙ xⁿ = c₀ + c₁x + c₂x² + ⋯

(compare with a geometric series having a = cₙ and r = x). When centered about x = a, the series becomes

P(x) = Σ_{n=0}^∞ cₙ(x − a)ⁿ = c₀ + c₁(x − a) + c₂(x − a)² + ⋯

A power series has exactly three possibilities:
1. The series converges only when x = a.
2. The series converges for all x.
3. There is a positive number R such that the series converges if |x − a| < R and diverges if |x − a| > R.

2.9 Representation of Functions as Power Series


Functions can be expressed as power series. The prototype is

f(x) = 1/(1 − x) = 1 + x + x² + x³ + ⋯ = Σ_{n=0}^∞ xⁿ, for |x| < 1

Notice: This is just a geometric series with r = x and a = 1.


Ex: Convert f(x) = x³/(x + 2) into a power series.

f(x) = x³/(x + 2) = x³ · 1/(2 + x) = (x³/2) · 1/(1 − (−x/2)) = (x³/2) Σ_{n=0}^∞ (−x/2)ⁿ = Σ_{n=0}^∞ ((−1)ⁿ/2ⁿ⁺¹) xⁿ⁺³

Differentiating Power Series: Let f(x) = c₀ + c₁(x − a) + c₂(x − a)² + ⋯ = Σ_{n=0}^∞ cₙ(x − a)ⁿ. Then

f′(x) = c₁ + 2c₂(x − a) + 3c₃(x − a)² + ⋯ = Σ_{n=1}^∞ n cₙ(x − a)ⁿ⁻¹

Integrating Power Series: Let f(x) = c₀ + c₁(x − a) + c₂(x − a)² + ⋯ = Σ_{n=0}^∞ cₙ(x − a)ⁿ. Then

∫ f(x) dx = C + c₀(x − a) + c₁(x − a)²/2 + c₂(x − a)³/3 + ⋯ = C + Σ_{n=0}^∞ cₙ(x − a)ⁿ⁺¹/(n + 1)

Ex. Find a power series representation of f(x) = tan⁻¹(x), where f(0) = 0.

f′(x) = 1/(1 + x²) = 1/(1 − (−x²)) = Σ_{n=0}^∞ (−x²)ⁿ = Σ_{n=0}^∞ (−1)ⁿ x²ⁿ

f(x) = ∫ f′(x) dx = ∫ Σ_{n=0}^∞ (−1)ⁿ x²ⁿ dx = Σ_{n=0}^∞ (−1)ⁿ ∫ x²ⁿ dx = Σ_{n=0}^∞ (−1)ⁿ x²ⁿ⁺¹/(2n + 1) + C, and f(0) = 0 ⟹ C = 0

f(x) = tan⁻¹(x) = Σ_{n=0}^∞ (−1)ⁿ x²ⁿ⁺¹/(2n + 1)
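As an illustration (not part of the original notes; plain Python, and arctan_series is just a hypothetical helper name), the partial sums of this series approach arctan(x) for |x| < 1:

import math

def arctan_series(x, terms):
    # sum_{n=0}^{terms-1} (-1)^n x^(2n+1) / (2n+1)
    return sum((-1)**n * x**(2*n + 1) / (2*n + 1) for n in range(terms))

for terms in (5, 20, 80):
    print(terms, arctan_series(0.5, terms), math.atan(0.5))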

2.10 Taylor and Maclaurin Series


Taylor Series: If we solve analytically for cₙ in the power series representation of a function, we get cₙ = f⁽ⁿ⁾(a)/n!. Thus, the Taylor series is defined by

f(x) = Σ_{n=0}^∞ (f⁽ⁿ⁾(a)/n!)(x − a)ⁿ = f(a) + (f′(a)/1!)(x − a) + (f″(a)/2!)(x − a)² + ⋯   (2.6)

Maclaurin Series: The Maclaurin series is the simplest form of the Taylor series, where the function is centered around a = 0. Thus, the Maclaurin series is defined by

f(x) = Σ_{n=0}^∞ (f⁽ⁿ⁾(0)/n!) xⁿ = f(0) + (f′(0)/1!) x + (f″(0)/2!) x² + ⋯   (2.7)


Binomial Series: If k is any real number and |x| < 1, then

(1 + x)ᵏ = Σ_{n=0}^∞ (k choose n) xⁿ = 1 + kx + (k(k − 1)/2!) x² + (k(k − 1)(k − 2)/3!) x³ + ⋯   (2.8)

Common Maclaurin Series: A few common Maclaurin series are listed below along with their general expansions and radii of convergence:

1/(1 − x) = Σ_{n=0}^∞ xⁿ = 1 + x + x² + ⋯   (R = 1)

eˣ = Σ_{n=0}^∞ xⁿ/n! = 1 + x + x²/2! + ⋯   (R = ∞)

sin(x) = Σ_{n=0}^∞ (−1)ⁿ x²ⁿ⁺¹/(2n + 1)! = x − x³/3! + x⁵/5! − ⋯   (R = ∞)

cos(x) = Σ_{n=0}^∞ (−1)ⁿ x²ⁿ/(2n)! = 1 − x²/2! + x⁴/4! − ⋯   (R = ∞)

tan⁻¹(x) = Σ_{n=0}^∞ (−1)ⁿ x²ⁿ⁺¹/(2n + 1) = x − x³/3 + x⁵/5 − ⋯   (R = 1)

ln(1 + x) = Σ_{n=1}^∞ (−1)ⁿ⁻¹ xⁿ/n = x − x²/2 + x³/3 − ⋯   (R = 1)

(1 + x)ᵏ = Σ_{n=0}^∞ (k choose n) xⁿ = 1 + kx + (k(k − 1)/2!) x² + ⋯   (R = 1)

2.11 Applications of Taylor Polynomials


Approximating Functions: Suppose f(x) is represented by its Taylor series centered at x = a,

f(x) = Σ_{n=0}^∞ (f⁽ⁿ⁾(a)/n!)(x − a)ⁿ

Then we can approximate f(x) with the function Tₙ(x), the n-th degree Taylor polynomial of f centered at x = a,

Tₙ(x) = Σ_{k=0}^n (f⁽ᵏ⁾(a)/k!)(x − a)ᵏ

Notice that by approximating with Tₙ(x), we end up with a remainder Rₙ(x) given by

|Rₙ(x)| = |f(x) − Tₙ(x)|   (2.9)
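A small SymPy illustration of Tₙ(x) and the remainder for f(x) = sin(x) about a = 0 (assumes SymPy; the choice of f and of the evaluation point 0.5 is just for demonstration):

import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
for n in (3, 5, 7):
    Tn = sp.series(f, x, 0, n + 1).removeO()    # n-th degree Maclaurin polynomial
    Rn = sp.Abs(f - Tn).subs(x, 0.5)            # |R_n(0.5)| = |f(0.5) - T_n(0.5)|
    print(n, Tn, float(Rn))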

Chapter 3 Vectors and the Geometry of Space

3.1 Three-Dimensional Coordinate Systems


Distance Formula (Point to Point): The distance |P₁P₂| between the points P₁(x₁, y₁, z₁) and P₂(x₂, y₂, z₂) is

|P₁P₂| = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²)   (3.1)

Ex: The distance from the point P(2, −1, 7) to the point Q(1, −3, 5) is

|PQ| = √((1 − 2)² + (−3 + 1)² + (5 − 7)²) = √(1 + 4 + 4) = 3

Equation of a Sphere: With a center of C(h, k, l) and radius r is:


(x − h)2 + (y − k)2 + (z − l)2 = r2 (3.2)

Ex: Centered at the origin with radius r = 1 (a.k.a. unit sphere):


x2 + y 2 + z 2 = 1

3.2 Vectors
Vector Addition: If ⃗u and ⃗v are vectors positioned so the initial point of ⃗v is at the terminal point of ⃗u,
then the sum ⃗u + ⃗v is the vector from the initial point of ⃗u to the terminal point of ⃗v .

Vector Multiplication: If c is a scalar and v⃗ is a vector, then the scalar multiple cv⃗ is the vector whose length is |c| times the length of v⃗ and whose direction is the same as v⃗ if c > 0 and opposite to v⃗ if c < 0. If c = 0 or v⃗ = 0⃗, then cv⃗ = 0⃗.

Magnitude of a Vector: The magnitude of the three-dimensional vector a⃗ = ⟨a₁, a₂, a₃⟩ is

∥a⃗∥ = √(a₁² + a₂² + a₃²)   (3.3)

Properties of Vectors: If ⃗a, ⃗b, and ⃗c are vectors in Vn and c and d are scalars, then

1. ⃗a + ⃗b = ⃗b + ⃗a 2. ⃗a + (⃗b + ⃗c) = (⃗a + ⃗b) + ⃗c


3. ⃗a + 0 = ⃗a 4. ⃗a + (−⃗a) = ⃗0
5. c(⃗a + ⃗b) = c⃗a + c⃗b 6. (c + d)⃗a = c⃗a + d⃗a
7. (cd)⃗a = c(d⃗a) 8. 1⃗a = ⃗a

Unit Vectors: Known as standard basis vectors, point in the direction of positive xyz-axes, and have a
magnitude of 1
î = ⟨1, 0, 0⟩ ĵ = ⟨0, 1, 0⟩ k̂ = ⟨0, 0, 1⟩ (3.4)

Ex: Find the unit vector in the direction of the vector a⃗ = 2i⃗ − j⃗ − 2k⃗:

∥a⃗∥ = √(2² + (−1)² + (−2)²) = 3

â = a⃗/∥a⃗∥ = (2i⃗ − j⃗ − 2k⃗)/3 = (2/3)i⃗ − (1/3)j⃗ − (2/3)k⃗ = ⟨2/3, −1/3, −2/3⟩

3.3 The Dot Product

Dot Product: If ⃗a = ⟨a1 , a2 , a3 ⟩ and ⃗b = ⟨b1 , b2 , b3 ⟩, then the dot product of ⃗a and ⃗b is the number
⃗a · ⃗b = a1 b1 + a2 b2 + a3 b3 (3.5)

Ex: Find the dot product of a⃗ = ⟨−1, 7, 4⟩ and b⃗ = ⟨6, 2, −1/2⟩:

⟨−1, 7, 4⟩ · ⟨6, 2, −1/2⟩ = (−1)(6) + (7)(2) + (4)(−1/2) = 6

Properties of Dot Products: If ⃗a, ⃗b, and ⃗c are vectors in V3 and c is a scalar, then

1. ⃗a · ⃗a = ∥⃗a∥2 2. ⃗a · ⃗b = ⃗b · ⃗a

3. ⃗a · (⃗b + ⃗c) = ⃗a · ⃗b + ⃗a · ⃗c 4. (c⃗a) · ⃗b = c(⃗a · ⃗b) = ⃗a · (c⃗b)

5. ⃗0 · ⃗a = 0

Angle Between Vectors: If θ is the angle between the vectors ⃗a and ⃗b, then by using the law of cosines,


we can get one expression for ∥⃗a − ⃗b∥2 given by


∥⃗a − ⃗b∥2 = ∥⃗a∥2 − 2∥⃗a∥∥⃗b∥ cos θ + ∥⃗b∥2

By using properties 1, 2, and 3 of the dot product, we can find another expression given by
∥⃗a − ⃗b∥2 = (⃗a − ⃗b) · (⃗a − ⃗b)
= ⃗a · ⃗a − 2⃗a · ⃗b + ⃗b · ⃗b
= ∥⃗a∥2 − 2⃗a · ⃗b + ∥⃗b∥2

Therefore, by combining both expressions for ∥⃗a − ⃗b∥2 , we get


∥⃗a∥2 − 2⃗a · ⃗b + ∥⃗b∥2 = ∥⃗a∥2 − 2∥⃗a∥ ∥⃗b∥ cos θ + ∥⃗b∥2
−2⃗a · ⃗b = −2∥⃗a∥∥⃗b∥ cos θ

⃗a · ⃗b = ∥⃗a∥ ∥⃗b∥ cos θ (3.6)

Note: Two vectors ⃗a and ⃗b are orthogonal if and only if ⃗a · ⃗b = 0.

Projections: Given vectors ⃗a and ⃗b, the component of ⃗b along the direction of ⃗a is known as the projection
of ⃗b onto ⃗a

In general, the projection is given by the following formula:

proj_a⃗ b⃗ = (b⃗ · a⃗ / ∥a⃗∥²) a⃗   (3.7)

Ex: Find the projection of b⃗ = ⟨1, 1, 2⟩ onto a⃗ = ⟨−2, 3, 1⟩:

b⃗ · a⃗ = (1)(−2) + (1)(3) + (2)(1) = 3 and ∥a⃗∥² = (−2)² + 3² + 1² = 14

proj_a⃗ b⃗ = (3/14)⟨−2, 3, 1⟩

3.4 The Cross Product

Cross Product: If ⃗a = ⟨a1 , a2 , a3 ⟩ and ⃗b = ⟨b1 , b2 , b3 ⟩, then the cross product of ⃗a and ⃗b is the vector


⃗a × ⃗b = ⟨a2 b3 − a3 b2 , a3 b1 − a1 b3 , a1 b2 − a2 b1 ⟩ (3.8)

Angle Between Vectors: If θ is the angle between the vectors a⃗ and b⃗, then from the definitions of the cross product and the magnitude of a vector we have

∥a⃗ × b⃗∥² = (a₂b₃ − a₃b₂)² + (a₃b₁ − a₁b₃)² + (a₁b₂ − a₂b₁)²
= a₂²b₃² − 2a₂a₃b₂b₃ + a₃²b₂² + a₃²b₁² − 2a₁a₃b₁b₃ + a₁²b₃² + a₁²b₂² − 2a₁a₂b₁b₂ + a₂²b₁²
= (a₁² + a₂² + a₃²)(b₁² + b₂² + b₃²) − (a₁b₁ + a₂b₂ + a₃b₃)²
= ∥a⃗∥²∥b⃗∥² − (a⃗ · b⃗)²
= ∥a⃗∥²∥b⃗∥² − ∥a⃗∥²∥b⃗∥² cos²θ
= ∥a⃗∥²∥b⃗∥²(1 − cos²θ)
= ∥a⃗∥²∥b⃗∥² sin²θ

∥a⃗ × b⃗∥ = ∥a⃗∥∥b⃗∥ sin θ   (3.9)

(Note: The vector ⃗a × ⃗b is orthogonal to both ⃗a and ⃗b)

Properties of Cross Products: If ⃗a, ⃗b, and ⃗c are vectors in V3 and c is a scalar, then

1. ⃗a × ⃗b = −⃗b × ⃗a 2. (c⃗a) × ⃗b = c(⃗a × ⃗b) = ⃗a × (c⃗b)

3. ⃗a × (⃗b + ⃗c) = ⃗a × ⃗b + ⃗a × ⃗c 4. (⃗a + ⃗b) × ⃗c = ⃗a × ⃗c + ⃗b × ⃗c

5. ⃗a · (⃗b × ⃗c) = (⃗a × ⃗b) · ⃗c 6. ⃗a × (⃗b × ⃗c) = (⃗a · ⃗c)⃗b − (⃗a · ⃗b)⃗c

Volume of a Parallelepiped: The volume of the parallelepiped determined by the vectors ⃗a, ⃗b, and ⃗c is
the magnitude of their scalar triple product
V = |⃗a · (⃗b × ⃗c)| (3.10)

Ex: Given a⃗ = ⟨1, 4, −7⟩ and b⃗ = ⟨2, −1, 4⟩, find (1) the angle between the vectors, (2) a normal vector n⃗ that is perpendicular to both a⃗ and b⃗, and (3) the volume of the parallelepiped formed with c⃗ = ⟨0, −9, 18⟩.

(1): cos θ = (a⃗ · b⃗)/(∥a⃗∥∥b⃗∥) = [(1)(2) + (4)(−1) + (−7)(4)] / [√(1² + 4² + (−7)²) √(2² + (−1)² + 4²)] = −10/√154 ⟹ θ = cos⁻¹(−10/√154)

(2): n⃗ = (a⃗ × b⃗)/∥a⃗ × b⃗∥ = ⟨16 − 7, −14 − 4, −1 − 8⟩ / √((16 − 7)² + (−14 − 4)² + (−1 − 8)²) = (1/(9√6))⟨9, −18, −9⟩ = (1/√6)⟨1, −2, −1⟩

(3): V = |c⃗ · (a⃗ × b⃗)| = |⟨0, −9, 18⟩ · ⟨9, −18, −9⟩| = 0
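The same computations can be reproduced with NumPy (illustrative only; assumes NumPy is installed):

import numpy as np

a = np.array([1.0, 4.0, -7.0])
b = np.array([2.0, -1.0, 4.0])
c = np.array([0.0, -9.0, 18.0])

cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
axb = np.cross(a, b)
n = axb / np.linalg.norm(axb)             # unit vector normal to both a and b
volume = abs(np.dot(c, axb))              # scalar triple product

print(np.degrees(np.arccos(cos_theta)))   # angle between a and b, in degrees
print(n)                                  # ~ (1, -2, -1)/sqrt(6)
print(volume)                             # 0.0, so c lies in the plane of a and b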

3.5 Equations of Lines and Planes

Equations of Lines:
Vector form: r⃗ = r⃗₀ + t v⃗
Parametric form: x = x₀ + at, y = y₀ + bt, z = z₀ + ct
Symmetric form: (x − x₀)/a = (y − y₀)/b = (z − z₀)/c

Equations of Planes:
Vector form: n⃗ · (r⃗ − r⃗₀) = 0
Scalar form: a(x − x₀) + b(y − y₀) + c(z − z₀) = 0
General form: ax + by + cz + d = 0

Distance Formula (Point to Plane): The distance D between the point P (xo , yo , zo ) and plane
ax + by + cz + d = 0 can be written as
D = |ax₀ + by₀ + cz₀ + d| / √(a² + b² + c²)   (3.11)


3.6 Cylinders and Quadric Surfaces

3.7 Cartesian, Cylindrical, and Spherical Coordinates


Cartesian Coordinates: We represent any given point in this coordinate system with a set of distances x,
y, and z.

Cylindrical Coordinates: We represent any given point in this coordinate system with a radial distance r from the z-axis, an angle θ measured in the xy-plane from the positive x-axis, and a height z. We can convert between Cartesian and cylindrical coordinates with the following equations:

x = r cos θ, y = r sin θ, z = z   ⟺   r = √(x² + y²), θ = tan⁻¹(y/x), z = z   (3.12)

Spherical Coordinates: We represent any given point in this coordinate system with a radial distance ρ from the origin, an angle θ measured in the xy-plane from the positive x-axis, and an angle ϕ measured from the positive z-axis. We can convert between Cartesian and spherical coordinates with the following equations:

x = ρ cos θ sin ϕ, y = ρ sin θ sin ϕ, z = ρ cos ϕ   ⟺   ρ = √(x² + y² + z²), θ = tan⁻¹(y/x), ϕ = tan⁻¹(√(x² + y²)/z)   (3.13)
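A small helper following Equation 3.13, using the same angle convention as above (illustrative only, not from the original notes; the function names are hypothetical; plain Python):

import math

def cart_to_sph(x, y, z):
    rho = math.sqrt(x*x + y*y + z*z)
    theta = math.atan2(y, x)                      # angle in the xy-plane
    phi = math.atan2(math.sqrt(x*x + y*y), z)     # angle from the positive z-axis
    return rho, theta, phi

def sph_to_cart(rho, theta, phi):
    return (rho * math.cos(theta) * math.sin(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(phi))

print(sph_to_cart(*cart_to_sph(1.0, 2.0, 3.0)))   # ~ (1.0, 2.0, 3.0)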

Chapter 4 Vector Functions

4.1 Vector Functions and Space Curves


Vector Function: A function whose domain is a set of real numbers governed by scalar functions, and
whose range is a set of vectors given by
⃗r(t) = ⟨x(t), y(t), z(t)⟩ (4.1)

Limits of Vector Functions: The limit of a vector function is the limit of each of its components
lim ⃗r(t) = ⟨ lim x(t), lim y(t), lim z(t)⟩ (4.2)
t→∞ t→∞ t→∞ t→∞

4.2 Derivatives and Integrals of Vector Functions


Derivative of a Vector Function: Given r⃗ = r⃗(t), the derivative of r⃗ is defined just as for scalar functions:

r⃗′(t) = dr⃗/dt = lim_{h→0} [r⃗(t + h) − r⃗(t)] / h   (4.3)

We can now find the derivative of a vector function in terms of its components:

r⃗′(t) = lim_{Δt→0} (1/Δt)[r⃗(t + Δt) − r⃗(t)]
= lim_{Δt→0} (1/Δt)[⟨x(t + Δt), y(t + Δt), z(t + Δt)⟩ − ⟨x(t), y(t), z(t)⟩]
= lim_{Δt→0} ⟨(x(t + Δt) − x(t))/Δt, (y(t + Δt) − y(t))/Δt, (z(t + Δt) − z(t))/Δt⟩
= ⟨x′(t), y′(t), z′(t)⟩

r⃗′(t) = dr⃗/dt = ⟨x′(t), y′(t), z′(t)⟩   (4.4)

Differentiation Rules: If ⃗u and ⃗v are differentiable vector functions, c is a scalar, and f is a real-valued
function, then
1. d/dt [u⃗(t) + v⃗(t)] = u⃗′(t) + v⃗′(t)
2. d/dt [c u⃗(t)] = c u⃗′(t)
3. d/dt [f(t) u⃗(t)] = f′(t) u⃗(t) + f(t) u⃗′(t)
4. d/dt [u⃗(t) · v⃗(t)] = u⃗′(t) · v⃗(t) + u⃗(t) · v⃗′(t)
5. d/dt [u⃗(t) × v⃗(t)] = u⃗′(t) × v⃗(t) + u⃗(t) × v⃗′(t)
6. d/dt [u⃗(f(t))] = u⃗′(f(t)) f′(t)

Integral of a Vector Function: Given r⃗ = r⃗(t), the integral of r⃗ is defined just as for scalar functions:

∫_a^b r⃗(t) dt = lim_{n→∞} Σ_{i=1}^n r⃗(tᵢ*) Δt   (4.5)

We can now find the integral of a vector function in terms of its components:

∫_a^b r⃗(t) dt = lim_{n→∞} Σ_{i=1}^n ⟨x(tᵢ*), y(tᵢ*), z(tᵢ*)⟩ Δt
= lim_{n→∞} ⟨Σ_{i=1}^n x(tᵢ*) Δt, Σ_{i=1}^n y(tᵢ*) Δt, Σ_{i=1}^n z(tᵢ*) Δt⟩
= ⟨∫_a^b x(t) dt, ∫_a^b y(t) dt, ∫_a^b z(t) dt⟩

∫_a^b r⃗(t) dt = ⟨∫_a^b x(t) dt, ∫_a^b y(t) dt, ∫_a^b z(t) dt⟩   (4.6)

4.3 Arc Length and Curvature


Arc Length of a Vector Function: We know the arc length of parameterized equations in 2D from before:

L = ∫_a^b √((dx/dt)² + (dy/dt)²) dt

Notice each term under the square root is a component of the vector function r⃗(t), squared, so the arc length generalizes for any vector function as

L = ∫_a^b ∥r⃗′(t)∥ dt   (4.7)

Arc Length Function: Suppose the arc length of r⃗ is measured not up to a fixed bound b but up to an arbitrary point t; then the arc length function becomes

s(t) = ∫_a^t ∥r⃗′(u)∥ du   (4.8)

If we differentiate both sides with respect to t, using the Fundamental Theorem of Calculus we get

ds/dt = ∥r⃗′(t)∥   (4.9)

Unit Tangent Vector: We know that for a vector function r⃗(t) its derivative is tangent to the curve, so the unit tangent vector is

T⃗ = r⃗′/∥r⃗′∥   (4.10)

Curvature: Defined as the reciprocal of the radius of curvature; in terms of vectors it is given by

κ = ∥dT⃗/ds∥ = ∥T⃗′∥/v = ∥r⃗′ × r⃗″∥/∥r⃗′∥³   (4.11)

Principal Unit Normal Vector: A vector that points toward "the center" of the curve, indicating the direction in which the curve is turning at each point:

N⃗ = T⃗′/∥T⃗′∥   (4.12)

Binormal Vector: A vector that is perpendicular to both the unit tangent vector and the principal unit normal vector:

B⃗ = T⃗ × N⃗   (4.13)

Torsion: A scalar that explains how much the curve twists along its path:

τ = −(dB⃗/ds) · N⃗   (4.14)
We can derive an expression for torsion in terms of the tangent and normal vectors as follows:

τ = −(d(T⃗ × N⃗)/ds) · N⃗
= −(dT⃗/ds × N⃗ + T⃗ × dN⃗/ds) · N⃗
= −((T⃗′/v) × N⃗ + T⃗ × dN⃗/ds) · N⃗
= −((∥T⃗′∥/v) N⃗ × N⃗ + T⃗ × dN⃗/ds) · N⃗
= −(T⃗ × dN⃗/ds) · N⃗   (since N⃗ × N⃗ = 0⃗)

τ = −(T⃗ × dN⃗/ds) · N⃗   (4.15)


4.4 Motion in Space: Velocity and Acceleration


Velocity Vector: The first time derivative of the position vector r⃗; it points tangent to the curve:

v⃗(t) = lim_{h→0} [r⃗(t + h) − r⃗(t)]/h = r⃗′(t)   (4.16)

Speed: The magnitude of the velocity vector:

v(t) = ∥v⃗(t)∥ = ∥r⃗′(t)∥ = s′(t)   (4.17)

Acceleration Vector: The first time derivative of the velocity vector v⃗(t); it has centripetal and tangential components:

a⃗(t) = lim_{h→0} [v⃗(t + h) − v⃗(t)]/h = v⃗′(t)   (4.18)

We can decompose the acceleration vector into its centripetal and tangential components by using the other vectors from Section 4.3:

a⃗ = dv⃗/dt = d(vT⃗)/dt = (dv/dt) T⃗ + v dT⃗/dt = v′T⃗ + v(∥T⃗′∥ N⃗) = v′T⃗ + v N⃗(κv)

a⃗ = v′T⃗ + κv² N⃗   (4.19)

Tangential Acceleration: a_t(t) = v′(t)   Centripetal Acceleration: a_c(t) = κ(t) v(t)²   (4.20)

Projectile Motion: A projectile is fired with angle of elevation α and initial velocity v⃗₀. Assuming that air resistance is negligible and the only external force is due to gravity, find the 2D position functions x(t) and y(t) of the projectile.

Start with Newton's Second Law:

F⃗ = ma⃗ = −mg ĵ

Canceling out mass yields the acceleration vector:

a⃗(t) = −g ĵ

We know a⃗ = dv⃗/dt, so solving this differential equation with initial condition v⃗(0) = v⃗₀ yields the velocity vector:

v⃗(t) = v⃗₀ − gt ĵ

We also know v⃗ = dr⃗/dt, so solving this differential equation with initial condition r⃗(0) = 0⃗ yields the position vector:

r⃗(t) = v⃗₀ t − (1/2)gt² ĵ

We can rewrite the initial velocity in terms of its components, letting ∥v⃗₀∥ = v₀:

v⃗₀ = v₀ cos α î + v₀ sin α ĵ

Substituting this into the position function and grouping like components yields:

r⃗(t) = [v₀ cos α] t î + [(v₀ sin α)t − (1/2)gt²] ĵ

We can finally break the motion down into two component functions:

x(t) = (v₀ cos α) t   y(t) = (v₀ sin α)t − (1/2)gt²   (4.21)

4.5 Differential Distances:


Cartesian Differential: We have seen the differential distance in this coordinate system when deriving the arc length of a function. We represent the differential position as a vector:

ds⃗ = dx i⃗ + dy j⃗ + dz k⃗   (4.22)

We can obtain the differential distance by taking the magnitude of the differential position. Since ds⃗ · ds⃗ = ∥ds⃗∥² = dx² + dy² + dz²,

ds = ∥ds⃗∥ = √(dx² + dy² + dz²)   (4.23)

Cylindrical Differential: We have seen the conversions for Cartesian to Cylindrical back in chapter 3, so
we can use these figures to represent the differential position in the new coordinate system
Note: x = r cos θ, y = r sin θ, z = z
d⃗s = dx ⃗i + dy ⃗j + dz ⃗k
d⃗s = (cos θdr − r sin θdθ) ⃗i + (sin θdr + r cos θdθ) ⃗j + dz ⃗k
d⃗s = dr(cos θ ⃗i + sin θ ⃗j) + rdθ(− sin θ ⃗i + cos θ ⃗j) + dz ⃗k

d⃗s = dr ⃗er + rdθ ⃗eθ + dz ⃗k (4.24)

Looking back at the last step of the derivation, the unit vectors in this new coordinate system can be defined as


the following
⃗er = cos θ ⃗i + sin θ ⃗j
⃗eθ = − sin θ ⃗i + cos θ ⃗j (4.25)
⃗k = ⃗k

We can obtain the differential distance by taking the magnitude of the differential position. Since ds⃗ · ds⃗ = ∥ds⃗∥² = dr² + r²dθ² + dz²,

ds = ∥ds⃗∥ = √(dr² + r²dθ² + dz²)   (4.26)

Spherical Differential: We have seen the conversions from Cartesian to spherical back in Chapter 3, so we can use them to represent the differential position in this coordinate system.

Note: x = ρ cos θ sin ϕ, y = ρ sin θ sin ϕ, z = ρ cos ϕ

ds⃗ = dx i⃗ + dy j⃗ + dz k⃗

ds⃗ = (cos θ sin ϕ dρ + ρ cos θ cos ϕ dϕ − ρ sin θ sin ϕ dθ) i⃗ + (sin θ sin ϕ dρ + ρ sin θ cos ϕ dϕ + ρ cos θ sin ϕ dθ) j⃗ + (cos ϕ dρ − ρ sin ϕ dϕ) k⃗

ds⃗ = dρ (cos θ sin ϕ i⃗ + sin θ sin ϕ j⃗ + cos ϕ k⃗) + ρ dϕ (cos θ cos ϕ i⃗ + sin θ cos ϕ j⃗ − sin ϕ k⃗) + ρ sin ϕ dθ (−sin θ i⃗ + cos θ j⃗)

ds⃗ = dρ e⃗_ρ + ρ dϕ e⃗_φ + ρ sin ϕ dθ e⃗_θ   (4.27)

Looking back at the last step of the derivation, the unit vectors in this coordinate system can be defined as the following:

e⃗_ρ = cos θ sin ϕ i⃗ + sin θ sin ϕ j⃗ + cos ϕ k⃗
e⃗_φ = cos θ cos ϕ i⃗ + sin θ cos ϕ j⃗ − sin ϕ k⃗   (4.28)
e⃗_θ = −sin θ i⃗ + cos θ j⃗

We can obtain the differential distance by taking the magnitude of the differential position. Since ds⃗ · ds⃗ = ∥ds⃗∥² = dρ² + ρ²dϕ² + ρ² sin²ϕ dθ²,

ds = ∥ds⃗∥ = √(dρ² + ρ²dϕ² + ρ² sin²ϕ dθ²)   (4.29)

In Chapter 5 we will see how to derive these expressions by applying partial derivatives to the coordinate-conversion formulas from Chapter 3.

Chapter 5 Partial Derivatives

5.1 Functions of Several Variables


Two-Variable Function: A function f of two variables is a rule that assigns to each ordered pair of real
numbers (x, y) in a set D a unique real number denoted by f (x, y). The set D is the domain of f and its range
is the set of values that f takes on, that is, {f (x, y) | (x, y) ∈ D}.

Level Curves: The level curves of a function f of two variables are the curves with equations f (x, y) = k,
where k is a constant (in the range of f ).

5.2 Limits and Continuity


Continuity of Multivariable Functions: A function f of two variables is called continuous at (a, b) if
lim f (x, y) = f (a, b) (5.1)
(x,y)→(a,b)

f is continuous on D if f is continuous at every point (a, b) in D.

5.3 Partial Derivatives


First Partial Derivatives: Given a differentiable function f(x, y), its first partial derivatives are defined by

fx(x, y) = lim_{h→0} [f(x + h, y) − f(x, y)]/h = ∂f/∂x
fy(x, y) = lim_{h→0} [f(x, y + h) − f(x, y)]/h = ∂f/∂y   (5.2)

(Note: When differentiating with respect to one variable, e.g. x, every other variable, e.g. y, is treated as a constant.)

More generally, for functions of n variables the first partial derivatives become

fxᵢ(x₁, …, xᵢ, …, xₙ) = lim_{h→0} [f(x₁, …, xᵢ + h, …, xₙ) − f(x₁, …, xᵢ, …, xₙ)]/h = ∂f/∂xᵢ   (5.3)

Second Partial Derivatives: Given a twice differentiable function f(x, y), its second partial derivatives are defined by

fxx = ∂/∂x(∂f/∂x) = ∂²f/∂x²   fyx = ∂/∂x(∂f/∂y) = ∂²f/∂x∂y
fxy = ∂/∂y(∂f/∂x) = ∂²f/∂y∂x   fyy = ∂/∂y(∂f/∂y) = ∂²f/∂y²   (5.4)
Clairaut’s Theorem: Suppose f is defined on a disk D that contains the point (a, b). If the functions fxy
and fyx are both continuous on D, then
fxy (a, b) = fyx (a, b) (5.5)

5.4 Tangent Planes and Linear Approximations


Tangent Plane: Suppose f has continuous partial derivatives. An equation of the tangent plane to the
surface z = f (x, y) at the point P (xo , yo , zo ) is
z − zo = fx (xo , yo )(x − xo ) + fy (xo , yo )(y − yo ) (5.6)

Linear Approximation: We know from 5.6 that an equation of the tangent plane to the graph of a function
f of two variables at the point (a, b, f (a, b)) is
z − f (a, b) = fx (a, b)(x − a) + fy (a, b)(y − b)
Therefore, the linear function whose graph is this tangent plane is the linearization of f at (a, b) defined by
L(x, y) = f (a, b) + fx (a, b)(x − a) + fy (a, b)(y − b) (5.7)

Differentials: For a differentiable function of two variables, z = f (x, y), the differential dz, also called
the total differential, is defined by
∂z ∂z
dz = fx (x, y)dx + fy (x, y)dy = dx + dy (5.8)
∂x ∂y
More generally speaking, when looking at functions with n variables, the differentials become
∂f ∂f ∂f
df = dx1 + dx2 + · · · + dxn (5.9)
∂x1 ∂x2 ∂xn


5.5 The Chain Rule


Chain Rule (Case 1): Suppose that z = f(x, y) is a differentiable function of x and y, where x = g(t) and y = h(t) are both differentiable functions of t. Then

dz/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)   (5.10)

Chain Rule (Case 2): Suppose that z = f(x, y) is a differentiable function of x and y, where x = g(s, t) and y = h(s, t) are both differentiable functions of s and t. Then

∂z/∂s = (∂f/∂x)(∂x/∂s) + (∂f/∂y)(∂y/∂s)   ∂z/∂t = (∂f/∂x)(∂x/∂t) + (∂f/∂y)(∂y/∂t)   (5.11)

Chain Rule (General): Suppose that f is a differentiable function of x₁, x₂, …, xₙ, and each xⱼ is a differentiable function of t₁, t₂, …, tₘ. Then

∂f/∂tᵢ = (∂f/∂x₁)(∂x₁/∂tᵢ) + (∂f/∂x₂)(∂x₂/∂tᵢ) + ⋯ + (∂f/∂xₙ)(∂xₙ/∂tᵢ)   (5.12)

5.6 Directional Derivatives and the Gradient Vector


Directional Derivative: If f is a differentiable function of x and y, then f has a directional derivative in
the direction of any unit vector ⃗u = ⟨a, b⟩ given by
Du f (x, y) = fx (x, y)a + fy (x, y)b (5.13)

Gradient: Notice from 5.13 that the directional derivative of a differentiable function can be written as
the dot product of two vectors
Du f (x, y) = fx (x, y)a + fy (x, y)b
= ⟨fx (x, y), fy (x, y)⟩ · ⟨a, b⟩
= ⟨fx (x, y), fy (x, y)⟩ · ⃗u
If f is a function of two variables x and y, the first vector is considered to be the gradient vector which holds
the first derivatives of f
∂f ⃗ ∂f ⃗
∇f = ⟨fx (x, y), fy (x, y)⟩ = i+ j
∂x ∂y
More commonly working with three variables x, y, and z, the gradient and directional derivative becomes
generalized to the following
∂f ⃗ ∂f ⃗ ∂f ⃗
∇f = ⟨fx , fy , fz ⟩ = i+ j+ k (5.14)
∂x ∂y ∂z

Du f (x, y, z) = ∇f (x, y, z) · ⃗u (5.15)


5.7 Maximum and Minimum Values


Hessian Matrix: Given a differentiable function f of the variables x₁, x₂, …, xₙ, we can store its second partial derivatives in a matrix whose (i, j) entry is ∂²f/∂xᵢ∂xⱼ:

H(f) = [[∂²f/∂x₁², ∂²f/∂x₁∂x₂, …, ∂²f/∂x₁∂xₙ],
        [∂²f/∂x₂∂x₁, ∂²f/∂x₂², …, ∂²f/∂x₂∂xₙ],
        [⋮, ⋮, ⋱, ⋮],
        [∂²f/∂xₙ∂x₁, ∂²f/∂xₙ∂x₂, …, ∂²f/∂xₙ²]]   (5.16)

Second Derivative Test: Suppose the second partial derivatives of f are continuous on a disk with center
(a, b), and suppose that fx (a, b) = 0 and fy (a, b) = 0 [that is, (a, b) is a critical point of f ]. Let

D = det(H(f)) = det [[fxx, fxy], [fyx, fyy]] = fxx·fyy − (fxy)²   (5.17)

1. If D(a, b) < 0, then the point (a, b) is a saddle point


2. If D(a, b) = 0, then the point (a, b) is inconclusive
3. If D(a, b) > 0 and
(a). if fxx (a, b) > 0, then the points (a, b) is a local minimum
(b). if fxx (a, b) < 0, then the points (a, b) is a local maximum

Extreme Value Theorem: If f is continuous on a closed, bounded set D in R2 , then f attains an absolute
maximum value f (x1 , y1 ) and an absolute minimum value f (x2 , y2 ) at some points (x1 , y1 ) and (x2 , y2 ) in D.

Ex: Obtain the local maxima and minima and saddle points of the function f(x, y) = xy e^(−x²−y²).

∇f(x, y) = 0⃗ ⟹ y(1 − 2x²) e^(−x²−y²) = 0 and x(1 − 2y²) e^(−x²−y²) = 0

Solving both equations simultaneously (the exponential factor is never zero) gives

Critical points: (0, 0), (±1/√2, ±1/√2), (±1/√2, ∓1/√2)

The second partial derivatives give

D = e^(−2(x²+y²)) · det [[ −2xy(3 − 2x²), (1 − 2x²)(1 − 2y²) ], [ (1 − 2x²)(1 − 2y²), −2xy(3 − 2y²) ]]

D(0, 0) = e⁰ · det [[0, 1], [1, 0]] = −1 < 0 ⟹ (0, 0) is a saddle point.

D(±1/√2, ±1/√2) = e⁻² · det [[−2, 0], [0, −2]] = 4/e² > 0 and fxx = −2e⁻¹ < 0 ⟹ (±1/√2, ±1/√2) are both local maxima.

D(±1/√2, ∓1/√2) = e⁻² · det [[2, 0], [0, 2]] = 4/e² > 0 and fxx = 2e⁻¹ > 0 ⟹ (±1/√2, ∓1/√2) are both local minima.
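The critical points and their classification can be double-checked with SymPy (an illustrative sketch, not part of the original notes; assumes SymPy). Since the exponential factor is never zero, it is enough to solve the polynomial factors of ∇f:

import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x*y*sp.exp(-x**2 - y**2)

# f_x = y*(1 - 2x^2)*exp(...), f_y = x*(1 - 2y^2)*exp(...); exp(...) != 0
crit = sp.solve([y*(1 - 2*x**2), x*(1 - 2*y**2)], [x, y], dict=True)

H = sp.hessian(f, (x, y))
for pt in crit:
    D = sp.simplify(H.det().subs(pt))
    fxx = sp.simplify(H[0, 0].subs(pt))
    print(pt, D, fxx)   # the signs of D and f_xx classify each point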

5.8 Lagrange Multipliers


One-Constraint Optimization: To find the maximum and minimum values of f(x, y, z) subject to the constraint g(x, y, z) = k [assuming these extreme values exist and ∇g ≠ 0⃗ on the surface g(x, y, z) = k], we simply need to solve the system of equations formed by

∇f(x, y, z) = λ∇g(x, y, z) and g(x, y, z) = k   (5.18)

Ex: A cylindrical container without a lid has a volume of 8π m³. Find the dimensions that minimize its surface area.

A(r, θ, z) = 2πrz + πr² and V(r, θ, z) = πr²z ≡ 8π

∇A(r, θ, z) = λ∇V(r, θ, z)

⟨2πz + 2πr, 0, 2πr⟩ = λ⟨2πrz, 0, πr²⟩

This gives the system:

2πz + 2πr = λ(2πrz) ⟹ z + r = λrz
2πr = λ(πr²) ⟹ 2 = λr
πr²z = 8π ⟹ r²z = 8

⟹ r = z = 2

A_min = A(2, θ, 2) = 2π(2)(2) + π(2)² = 12π
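The same Lagrange system can be handed to SymPy (an illustrative sketch, not part of the original notes; assumes SymPy):

import sympy as sp

r, z, lam = sp.symbols('r z lam', positive=True)
A = 2*sp.pi*r*z + sp.pi*r**2          # surface area of a lidless cylinder
V = sp.pi*r**2*z                      # volume

eqs = [sp.Eq(sp.diff(A, r), lam*sp.diff(V, r)),
       sp.Eq(sp.diff(A, z), lam*sp.diff(V, z)),
       sp.Eq(V, 8*sp.pi)]
sol = sp.solve(eqs, [r, z, lam], dict=True)

print(sol)                  # expected: r = 2, z = 2, lam = 1
print(A.subs(sol[0]))       # expected: 12*pi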

Two-Constraint Optimization: To find the maximum and minimum values of f(x, y, z) subject to the constraints g(x, y, z) = k and h(x, y, z) = c [assuming these extreme values exist, and ∇g ≠ 0⃗ on the surface g(x, y, z) = k and likewise for h], we simply need to solve the system of equations formed by

∇f(x, y, z) = λ∇g(x, y, z) + µ∇h(x, y, z), g(x, y, z) = k, h(x, y, z) = c   (5.19)

Ex: Obtain the system of equations needed to find the maximum surface area of a cylindrical slice whose volume is constrained to be V₀, with a second constraint z² + r² = a, where V₀ and a are both constants.

A(r, θ, z) = θr² + θrz, V(r, θ, z) = (1/2)θr²z ≡ V₀, and C(r, θ, z) = z² + r² ≡ a

∇A(r, θ, z) = λ∇C(r, θ, z) + µ∇V(r, θ, z)

(Note: In cylindrical coordinates ∇ = e⃗ᵣ ∂/∂r + e⃗_θ (1/r) ∂/∂θ + e⃗_z ∂/∂z)

⟨2θr + θz, r + z, θr⟩ = λ⟨2r, 0, 2z⟩ + µ⟨θrz, (1/2)rz, (1/2)θr²⟩

This yields the system:

2θr + θz = λ(2r) + µ(θrz)
r + z = (1/2)µrz
θr = λ(2z) + (1/2)µθr²
z² + r² = a
(1/2)θr²z = V₀

5.9 Gradient and Laplacian in Other Coordinates


Cartesian Gradient and Laplacian: The gradient operator holds the partial derivatives of each variable
x, y, and z, in a vector as such
∂ ⃗ ∂ ⃗ ∂ ⃗
∇= i+ j+ k (5.20)
∂x ∂y ∂z
The Laplacian operator is simply the dot product of the gradient operator with itself given by
∂2 ∂2 ∂2
∇2 = ∇ · ∇ = + + (5.21)
∂x2 ∂y 2 ∂z 2

Cylindrical Gradient and Laplacian: The gradient operator holds the partial derivatives with respect to each variable r, θ, and z in a vector:

∇ = e⃗ᵣ ∂/∂r + e⃗_θ (1/r) ∂/∂θ + k⃗ ∂/∂z   (5.22)

To obtain the Laplacian, we use the chain rule for partial derivatives to express each Cartesian term. Note that r(x, y) = √(x² + y²) and θ(x, y) = tan⁻¹(y/x), so

∂/∂x = (∂r/∂x) ∂/∂r + (∂θ/∂x) ∂/∂θ = (x/√(x² + y²)) ∂/∂r + (−y/(x² + y²)) ∂/∂θ = cos θ ∂/∂r − (sin θ/r) ∂/∂θ

∂/∂y = (∂r/∂y) ∂/∂r + (∂θ/∂y) ∂/∂θ = (y/√(x² + y²)) ∂/∂r + (x/(x² + y²)) ∂/∂θ = sin θ ∂/∂r + (cos θ/r) ∂/∂θ

∂²/∂x² = (cos θ ∂/∂r − (sin θ/r) ∂/∂θ)(cos θ ∂/∂r − (sin θ/r) ∂/∂θ)
= cos²θ ∂²/∂r² + (sin²θ/r) ∂/∂r + (2 sin θ cos θ/r²) ∂/∂θ − (2 sin θ cos θ/r) ∂²/∂r∂θ + (sin²θ/r²) ∂²/∂θ²

∂²/∂y² = (sin θ ∂/∂r + (cos θ/r) ∂/∂θ)(sin θ ∂/∂r + (cos θ/r) ∂/∂θ)
= sin²θ ∂²/∂r² + (cos²θ/r) ∂/∂r − (2 sin θ cos θ/r²) ∂/∂θ + (2 sin θ cos θ/r) ∂²/∂r∂θ + (cos²θ/r²) ∂²/∂θ²

Adding these (the mixed terms cancel) and including ∂²/∂z²:

∇² = (cos²θ + sin²θ) ∂²/∂r² + (1/r)(sin²θ + cos²θ) ∂/∂r + (1/r²)(sin²θ + cos²θ) ∂²/∂θ² + ∂²/∂z²

∇² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ² + ∂²/∂z²   (5.23)
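A quick SymPy check of Equation 5.23: apply the cylindrical form to a function written in (r, θ, z) and compare with the Cartesian Laplacian of the same function (illustrative only, not from the original notes; the test function is an arbitrary example and SymPy is assumed):

import sympy as sp

x, y, z, r, th = sp.symbols('x y z r theta', positive=True)
f_cart = x**2*y + z**3
f_cyl = f_cart.subs({x: r*sp.cos(th), y: r*sp.sin(th)})

lap_cart = sp.diff(f_cart, x, 2) + sp.diff(f_cart, y, 2) + sp.diff(f_cart, z, 2)
lap_cyl = (sp.diff(f_cyl, r, 2) + sp.diff(f_cyl, r)/r
           + sp.diff(f_cyl, th, 2)/r**2 + sp.diff(f_cyl, z, 2))

difference = sp.simplify(lap_cyl - lap_cart.subs({x: r*sp.cos(th), y: r*sp.sin(th)}))
print(difference)   # 0, so the two forms agree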

Spherical Gradient and Laplacian: The gradient operator holds the partial derivatives with respect to each variable ρ, ϕ, and θ in a vector:

∇ = e⃗_ρ ∂/∂ρ + e⃗_φ (1/ρ) ∂/∂ϕ + e⃗_θ (1/(ρ sin ϕ)) ∂/∂θ   (5.24)

To obtain the Laplacian we again use the chain rule for partial derivatives (the derivation is similar to the cylindrical one, but lengthy, and is left for the student to complete):

∇² = ∂²/∂ρ² + (2/ρ) ∂/∂ρ + (cos ϕ/(ρ² sin ϕ)) ∂/∂ϕ + (1/ρ²) ∂²/∂ϕ² + (1/(ρ² sin²ϕ)) ∂²/∂θ²   (5.25)

Chapter 6 Multiple Integrals

6.1 Double Integrals Over Rectangles


Double Integral (Rectangle): If f(x, y) > 0, then the volume V of the solid that lies above the rectangle R and below the surface z = f(x, y) is

V = ∬_R f(x, y) dA = lim_{m,n→∞} Σ_{i=1}^m Σ_{j=1}^n f(xᵢⱼ*, yᵢⱼ*) ΔA   (6.1)

Midpoint Rule: Where x̄ᵢ is the midpoint of [xᵢ₋₁, xᵢ] and ȳⱼ is the midpoint of [yⱼ₋₁, yⱼ],

∬_R f(x, y) dA ≈ Σ_{i=1}^m Σ_{j=1}^n f(x̄ᵢ, ȳⱼ) ΔA   (6.2)

Fubini's Theorem: If f is continuous on the rectangle R = {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d}, then

∬_R f(x, y) dA = ∫_a^b ∫_c^d f(x, y) dy dx = ∫_c^d ∫_a^b f(x, y) dx dy

and, in the special case f(x, y) = g(x)h(y), this equals ∫_a^b g(x) dx · ∫_c^d h(y) dy.   (6.3)

6.2 Double Integrals over General Regions


Double Integral (General): If f is continuous on a type I region D = {(x, y) | a ≤ x ≤ b, g₁(x) ≤ y ≤ g₂(x)} or a type II region D = {(x, y) | c ≤ y ≤ d, h₁(y) ≤ x ≤ h₂(y)}, then

Type I: ∬_D f(x, y) dA = ∫_a^b ∫_{g₁(x)}^{g₂(x)} f(x, y) dy dx
Type II: ∬_D f(x, y) dA = ∫_c^d ∫_{h₁(y)}^{h₂(y)} f(x, y) dx dy   (6.4)

Properties of Double Integrals: For a general region D, f (x, y) and g(x, y) continuous functions, and c

is a scalar, then x x x
1. [f (x, y) + g(x, y)]dA = f (x, y)dA + g(x, y) dA
D D D

x x
2. cf (x, y)dA = c f (x, y) dA
D D

x x x
3. f (x, y)dA = f (x, y)dA + f (x, y) dA
D D1 D2

Area: If f (x, y) = 1, the area of the region D is defined by


x
A(D) = dA (6.5)
D

6.3 Double Integrals in Polar Coordinates


Double Integral (Polar): If f is continuous on a polar region of the form D = {(r, θ) | α ⩽ θ ⩽
β, h1 (θ) ⩽ r ⩽ h2 (θ)},then

x Zβ hZ2 (θ)
f (x, y)dA = f (r cos θ, r sin θ) rdrdθ (6.6)
R α h1 (θ)

6.4 Applications of Double Integrals


Density and Mass: Suppose the lamina occupies a region D of the xy-plane and its density at a point
(x, y) in D is given by ρ(x, y), where ρ is a continuous function on D. This means that
∆m
ρ(x, y) = lim (6.7)
∆A
The total mass of the lamina in D is defined by adding up the density multiplied with differential bits of area in
double integral form
x
m= ρ(x, y) dA (6.8)
D

Moments and Center of Mass: The moment of a lamina that occupies a region D is the product of its


mass and its directed distance from the axis of interest


x
Mx = yρ(x, y) dA
D
x (6.9)
My = xρ(x, y) dA
D

The coordinates (x, y) of the center of mass of a lamina occupying the region D and having density function
ρ(x, y) are
x̄ = My/m = (1/m) ∬_D x ρ(x, y) dA
                                                  (6.10)
ȳ = Mx/m = (1/m) ∬_D y ρ(x, y) dA

Moment of Inertia: The moment of inertia of a particle of mass m about an axis is defined to be mr2 for
a lamina with density function ρ(x, y) and occupying a region D
x
Ix = y 2 ρ(x, y) dA
D
x
Iy = x2 ρ(x, y) dA (6.11)
D
x
Io = (x2 + y 2 )ρ(x, y) dA
D

6.5 Surface Area


Surface Area: The area of the surface with equation z = f (x, y), (x, y) ∈ D, where fx and fy are
continuous, is
s
x  ∂f 2  ∂f 2
A(S) = + + 1 dA (6.12)
∂x ∂y
D

6.6 Triple Integrals


Triple Integral: The triple integral of f over the box B = {(x, y, z) | a ⩽ x ⩽ b, c ⩽ y ⩽ d, r ⩽ z ⩽ s}
is
∭_B f(x, y, z) dV = lim_{l,m,n→∞} Σ_{i=1}^{l} Σ_{j=1}^{m} Σ_{k=1}^{n} f(x_i, y_j, z_k) ΔV    (6.13)

Fubini’s Theorem for Triple Integrals: If f is continuous on the rectangular box B = [a, b] × [c, d] × [r, s], then

∭_B f(x, y, z) dV = ∫_r^s ∫_c^d ∫_a^b f(x, y, z) dx dy dz    (6.14)

Volume: If f (x, y, z) = 1, the volume of the region E is defined by


y
V (E) = dV (6.15)
E

6.7 Triple Integrals in Other Coordinates


Triple Integral (Cylindrical): Suppose that E is a type I region whose projection D onto the xy-plane is
conveniently described in polar coordinates. In particular, suppose that f is continuous and

E = {(x, y, z) | (x, y) ∈ D, u1 (x, y) ⩽ z ⩽ u2 (x, y)}

where D is given in polar coordinates by

D = {(r, θ) | α ⩽ θ ⩽ β, h1 (θ) ⩽ r ⩽ h2 (θ)}

then the triple integral in these coordinates becomes

∭_E f(x, y, z) dV = ∫_α^β ∫_{h₁(θ)}^{h₂(θ)} ∫_{u₁(r cos θ, r sin θ)}^{u₂(r cos θ, r sin θ)} f(r cos θ, r sin θ, z) r dz dr dθ    (6.16)

Triple Integral (Spherical): In the spherical coordinate system the counterpart of a rectangular box is a
spherical wedge E = {(ρ, ϕ, θ) | a < ρ < b, ϕ1 < ϕ < ϕ2 , θ1 < θ < θ2 }

y Zθ2 Zφ2 Zb
f (x, y, z)dV = f (ρ cos θ sin ϕ, ρ sin θ sin ϕ, ρ cos ϕ) ρ2 sin ϕ dρdϕdθ (6.17)
E θ1 φ1 a

6.8 Change of Variables in Multiple Integrals


Jacobian: The Jacobian of the transformation T given by x = g(u, v) and y = h(u, v) is
∂x ∂x
∂(x, y) ∂u ∂v ∂x ∂y ∂x ∂y
= = − (6.18)
∂(u, v) ∂u ∂v ∂v ∂u
∂y ∂y
∂u ∂v


The Jacobian of the transformation T given by x = g(u, v, w), y = h(u, v, w), and z = k(u, v, w) is
∂x ∂x ∂x
∂u ∂v ∂w

∂(x, y, z) ∂y ∂y ∂y
= (6.19)
∂(u, v, w) ∂u ∂v ∂w

∂z ∂z ∂z
∂u ∂v ∂w

Change of Variables: Suppose that T is a C¹ transformation whose Jacobian is nonzero, that f is continuous on R, and that R and S are type I or type II plane regions. Suppose also that T is one-to-one, except perhaps on the boundary of S. Then

∬_R f(x, y) dA = ∬_S f(x(u, v), y(u, v)) |∂(x, y)/∂(u, v)| du dv    (6.20)

The same holds for triple integrals, for a continuous function f on the region V obtained from E by the transformation T:

∭_V f(x, y, z) dV = ∭_E f(x(u, v, w), y(u, v, w), z(u, v, w)) |∂(x, y, z)/∂(u, v, w)| du dv dw    (6.21)

Proof of Cylindrical Jacobian:


∂x ∂x ∂x
∂r ∂θ ∂z cos θ −r sin θ 0

∂(x, y, z) ∂y ∂y ∂y
= = sin θ r cos θ 0
∂(r, θ, z) ∂r ∂θ ∂z

∂z ∂z ∂z 0 0 1
∂r ∂θ ∂z
(6.22)
r cos θ 0 sin θ 0
= cos θ + r sin θ
0 1 0 1
= r cos2 θ + r sin2 θ
= r(cos2 θ + sin2 θ)
=r


Proof of Spherical Jacobian:


∂x ∂x ∂x
∂ρ ∂ϕ ∂θ cos θ sin ϕ ρ cos θ cos ϕ −ρ sin θ sin ϕ

∂(x, y, z) ∂y ∂y ∂y
= = sin θ sin ϕ ρ sin θ cos ϕ ρ cos θ sin ϕ
∂(ρ, ϕ, θ) ∂ρ ∂ϕ ∂θ

∂z ∂z ∂z cos ϕ −ρ sin ϕ 0
∂ρ ∂ϕ ∂θ
ρ sin θ cos ϕ ρ cos θ sin ϕ sin θ sin ϕ ρ cos θ sin ϕ
= cos θ sin ϕ − ρ cos θ cos ϕ
−ρ sin ϕ 0 cos ϕ 0
sin θ sin ϕ ρ sin θ cos ϕ (6.23)
− ρ sin θ sin ϕ
cos ϕ −ρ sin ϕ
= cos θ sin ϕ(0 + ρ2 cos θ sin2 ϕ) − ρ cos θ cos ϕ(0 − ρ cos θ sin ϕ cos ϕ)
− ρ sin θ sin ϕ(−ρ sin θ sin2 ϕ − ρ sin θ cos2 ϕ)
= ρ2 cos2 θ sin3 ϕ + ρ2 cos2 θ sin ϕ cos2 ϕ + ρ2 sin2 θ sin3 ϕ + ρ2 sin2 θ sin ϕ cos2 ϕ
= ρ2 sin ϕ(cos2 θ sin2 ϕ + cos2 θ cos2 ϕ + sin2 θ sin2 ϕ + sin2 θ cos2 ϕ)
= ρ2 sin ϕ(cos2 θ(sin2 ϕ + cos2 ϕ) + sin2 θ(sin2 ϕ + cos2 ϕ))
= ρ2 sin ϕ(cos2 θ + sin2 θ)
= ρ2 sin ϕ
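Both Jacobian determinants can also be double-checked symbolically. The sketch below is an added illustration (not part of the original derivation) using SymPy's Matrix.jacobian; it reproduces r and ρ² sin φ.

```python
import sympy as sp

r, th, z, rho, phi = sp.symbols('r theta z rho phi', positive=True)

# Cylindrical transformation (x, y, z) as functions of (r, theta, z)
cyl = sp.Matrix([r*sp.cos(th), r*sp.sin(th), z])
print(sp.simplify(cyl.jacobian([r, th, z]).det()))      # r

# Spherical transformation (x, y, z) as functions of (rho, phi, theta)
sph = sp.Matrix([rho*sp.cos(th)*sp.sin(phi),
                 rho*sp.sin(th)*sp.sin(phi),
                 rho*sp.cos(phi)])
print(sp.simplify(sph.jacobian([rho, phi, th]).det()))  # rho**2*sin(phi)
```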

Chapter 7 Vector Calculus

7.1 Vector Fields

Vector Fields: Let D be a set in R2 (a plane region). A vector field on R2 is a function F⃗ that assigns to
each point (x, y) in D a two-dimensional vector F⃗ (x, y).
F⃗ (x, y) = P (x, y) ⃗i + Q(x, y) ⃗j (7.1)

In three-dimensions, let E be a subset of R3 . A vector field on R3 is a function F⃗ that assigns to each point
(x, y, z) in E a three-dimensional vector F⃗ (x, y, z).
F⃗ (x, y, z) = P (x, y, z) ⃗i + Q(x, y, z) ⃗j + R(x, y, z) ⃗k (7.2)

Note that by letting an arbitrary position vector ⃗r = x ⃗i + y ⃗j in R2 and ⃗r = x ⃗i + y ⃗j + z ⃗k in R3 allows us to


rewrite vector fields as the following
F⃗ (⃗r) = P (⃗r) ⃗i + Q(⃗r) ⃗j or F⃗ (⃗r) = P (⃗r) ⃗i + Q(⃗r) ⃗j + R(⃗r) ⃗k (7.3)

Conservative Vector Fields: A vector field F⃗ is called a conservative vector field if it is the gradient
of some scalar function, that is, if there exists a function f such that F⃗ = ∇f . In this situation f is called a

potential function for F⃗

Ex: Show that the electric field is a conservative vector field for a potential function V(x, y, z) = kQ/√(x² + y² + z²)

E⃗ = ∇V = (∂V/∂x) ⃗i + (∂V/∂y) ⃗j + (∂V/∂z) ⃗k

E⃗ = −kQx/(x² + y² + z²)^{3/2} ⃗i − kQy/(x² + y² + z²)^{3/2} ⃗j − kQz/(x² + y² + z²)^{3/2} ⃗k

E⃗ = −kQ ⃗r/|⃗r|³ = −(kQ/|⃗r|²) r̂

(Here E⃗ is taken to be ∇V so that V itself serves as a potential function; with the physics sign convention E⃗ = −∇V one obtains E⃗ = +kQ r̂/|⃗r|² instead.)

7.2 Line Integrals


Line Integral (Scalar): If f is defined on a smooth curve C, then the line integral of f along C is
Z Zb r
dx 2 dy dz
f (x, y, z) ds = f (x(t), y(t), z(t)) ( ) + ( )2 + ( )2 dt (7.4)
dt dt dt
C a

In application, this is the mass of a curve C with its mass density function given by f (x, y, z).

Line Integral (Vector): Let F⃗ be a continuous vector field defined on a smooth curve C given by a vector
function ⃗r(t), a ⩽ t ⩽ b. Then the line integral of F⃗ along C is
Z Zb Z
F⃗ (⃗r) · d⃗r = F⃗ (⃗r(t)) · ⃗r ′ (t) dt = F⃗ · T⃗ ds (7.5)
C a C

In application, this is the work done on an object as it travels through a vector field F⃗ along a path C.
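As a quick numerical illustration of (7.5) (this snippet is an addition to the text): for F⃗ = ⟨−y, x⟩ and ⃗r(t) = ⟨cos t, sin t⟩ around the unit circle, F⃗ · ⃗r′(t) = 1, so the work over one loop is 2π.

```python
import numpy as np

t = np.linspace(0, 2*np.pi, 10_000)
x, y = np.cos(t), np.sin(t)             # r(t) traces the unit circle
dxdt, dydt = -np.sin(t), np.cos(t)      # r'(t)
Fx, Fy = -y, x                          # F(x, y) = <-y, x>

work = np.trapz(Fx*dxdt + Fy*dydt, t)   # integral of F(r(t)) . r'(t) dt, eq. (7.5)
print(work)                             # ~ 6.283 = 2*pi
```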

7.3 The Fundamental Theorem for Line Integrals


Fundamental Theorem of Line Integrals: Let C be a smooth curve given by the vector function ⃗r(t),
a ⩽ t ⩽ b. Let f be a differentiable function of two or three variables whose gradient vector ∇f is continuous
on C. Then
Z Zb
∇f · d⃗r = ∇f (⃗r) · ⃗r ′ (t) dt
C a
Zb
∂f dx ∂f dy ∂f dz
= ( + + ) dt
∂x dt ∂y dt ∂z dt
a
Zb
d
= [f (⃗r(t))] dt
dt
a

= f (⃗r(b)) − f (⃗r(a))


Z
∇f · d⃗r = f (⃗r(b)) − f (⃗r(a)) (7.6)
C

R R
Path Independence: F⃗ · d⃗r is independent of path in D if and only if F⃗ · d⃗r = 0 for every closed
C C
path C in D.

Conservative Vector Fields (cont.): Suppose F⃗ is a vector field that is continuous on an open connected
R
region D. If F⃗ · d⃗r is independent of path in D, then F⃗ is a conservative vector field on D; that is, there exists
C
a function f such that ∇f = F⃗ .

Therefore if F⃗ (x, y) = P (x, y)⃗i + Q(x, y)⃗j is a conservative vector field, where P and Q have continuous
first-order partial derivatives on a domain D, then throughout D we have
∂P ∂Q
= (7.7)
∂y ∂x

Ex: Prove the conservation of energy given a continuous force field F⃗ that moves an object along a
path C given by ⃗r(t), a ⩽ t ⩽ b, where ⃗r(a) = A is the initial point and ⃗r(b) = B is the terminal point of C,
and F⃗ = −∇U, where U is the potential energy of the object.

First, using F⃗ = −∇U and the Fundamental Theorem of Line Integrals,

W = ∫_C F⃗ · d⃗r = −∫_C ∇U · d⃗r = −[U(⃗r(b)) − U(⃗r(a))] = U(A) − U(B)

Second, using Newton's second law F⃗(⃗r(t)) = m⃗r″(t),

W = ∫_C F⃗ · d⃗r = ∫_a^b F⃗(⃗r(t)) · ⃗r′(t) dt
  = ∫_a^b m⃗r″(t) · ⃗r′(t) dt
  = (m/2) ∫_a^b d/dt [⃗r′(t) · ⃗r′(t)] dt
  = (m/2) ∫_a^b d/dt [|⃗r′(t)|²] dt = (m/2) [|⃗r′(t)|²]_a^b
  = (m/2)(|⃗r′(b)|² − |⃗r′(a)|²)
  = ½m|⃗v(b)|² − ½m|⃗v(a)|²
  = K(B) − K(A)

Thus, by setting the work equal, the conservation of energy law becomes K(A) + U (A) = K(B) + U (B)


7.4 Green’s Theorem


Green’s Theorem: Let C be a positively oriented, piecewise-smooth, simple closed curve in the plane
and let D be the region bounded by C. If P and Q have continuous partial derivatives on an open region that
contains D, then

∮_C P dx + Q dy = ∬_D (∂Q/∂x − ∂P/∂y) dA    (7.8)

In vector form, writing the left side as a line integral of a vector field F⃗ with position differential d⃗r,

∮_C F⃗ · d⃗r = ∬_D (∂Q/∂x − ∂P/∂y) dA    (7.9)
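A numerical sanity check of (7.8), added here for illustration: for F⃗ = ⟨−y, x⟩ on the unit disk, ∂Q/∂x − ∂P/∂y = 2, so both sides of Green's Theorem equal 2·Area = 2π.

```python
import numpy as np

# Left side: circulation of F = <-y, x> around the unit circle
t = np.linspace(0, 2*np.pi, 20_000)
lhs = np.trapz((-np.sin(t))*(-np.sin(t)) + np.cos(t)*np.cos(t), t)   # P dx + Q dy = dt

# Right side: double integral of (dQ/dx - dP/dy) = 2 over the unit disk, in polar coords
r = np.linspace(0, 1, 2_000)
rhs = np.trapz(2.0*r, r) * 2*np.pi   # integrand independent of theta

print(lhs, rhs)   # both ~ 6.283 = 2*pi
```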

7.5 Curl and Divergence

Curl: If F⃗ = P ⃗i + Q ⃗j + R ⃗k is a vector field on R3 and the partial derivatives of P , Q, and R all exist,
then the curl of F⃗ is the vector field on R3 defined by
∂R ∂Q ⃗ ∂P ∂R ⃗ ∂Q ∂P ⃗
curl(F⃗ ) = ∇ × F⃗ = ( − )i+( − )j+( − )k (7.10)
∂y ∂z ∂z ∂x ∂x ∂y
Notice that if we take the curl of a gradient, say ∇f , and f is continuous with three variables, then
⃗i ⃗j ⃗k
∂ ∂ ∂
curl(∇f ) = ∇ × ∇f = ∂x ∂y ∂z
∂f ∂f ∂f
∂x ∂y ∂z
2
∂ f 2
∂ f ⃗ 2
∂ f ∂2f ⃗ ∂2f ∂2f ⃗
curl(∇f ) = ( − )i+( − )j+( − )k
∂y∂z ∂z∂y ∂z∂x ∂x∂z ∂x∂y ∂y∂x
curl(∇f ) = 0 ⃗i + 0 ⃗j + 0 ⃗k

curl(∇f ) = ⃗0 (7.11)

Therefore if F⃗ is a vector field defined on all of R3 whose component functions have continuous partial deriva-
tives and curl(F⃗ ) = ⃗0, then F⃗ is a conservative vector field.

Divergence: If F⃗ = P ⃗i + Q ⃗j + R ⃗k is a vector field on R³ and the partial derivatives of P, Q, and R all
exist, then the divergence of F⃗ is the scalar function on R³ defined by

div(F⃗) = ∇ · F⃗ = ∂P/∂x + ∂Q/∂y + ∂R/∂z    (7.12)
Notice that if we take the divergence of the curl of any vector field F⃗, then we have

div[curl(F⃗)] = ∇ · (∇ × F⃗)

div[curl(F⃗)] = ∂/∂x (∂R/∂y − ∂Q/∂z) + ∂/∂y (∂P/∂z − ∂R/∂x) + ∂/∂z (∂Q/∂x − ∂P/∂y)

div[curl(F⃗)] = (∂²R/∂x∂y − ∂²Q/∂x∂z) + (∂²P/∂y∂z − ∂²R/∂y∂x) + (∂²Q/∂z∂x − ∂²P/∂z∂y)

div[curl(F⃗)] = 0
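Both identities, curl(∇f) = ⃗0 and div[curl(F⃗)] = 0, can be verified symbolically. The SymPy sketch below is an added illustration using arbitrary (undefined) component functions.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)
P, Q, R = (sp.Function(n)(x, y, z) for n in 'PQR')

grad_f = [sp.diff(f, v) for v in (x, y, z)]
curl_grad_f = [sp.diff(grad_f[2], y) - sp.diff(grad_f[1], z),
               sp.diff(grad_f[0], z) - sp.diff(grad_f[2], x),
               sp.diff(grad_f[1], x) - sp.diff(grad_f[0], y)]

curl_F = [sp.diff(R, y) - sp.diff(Q, z),
          sp.diff(P, z) - sp.diff(R, x),
          sp.diff(Q, x) - sp.diff(P, y)]
div_curl_F = sum(sp.diff(c, v) for c, v in zip(curl_F, (x, y, z)))

print([sp.simplify(c) for c in curl_grad_f])   # [0, 0, 0]
print(sp.simplify(div_curl_F))                 # 0
```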

7.6 Parametric Surfaces and Their Areas


Parametric Equations and Surfaces: Suppose we consider the following vector valued function param-
eterized in two variables u and v
⃗r(u, v) = x(u, v) ⃗i + y(u, v) ⃗j + z(u, v) ⃗k
The set of all points (x, y, z) in R3 such that
x = x(u, v) y = y(u, v) z = z(u, v)
and (u, v) varies throughout D, is called a parametric surface S and the set of equations above are called para-
metric equations of S.

Surface Area: If a smooth parametric surface S is given by the equation


⃗r(u, v) = x(u, v) ⃗i + y(u, v) ⃗j + z(u, v) ⃗k (u, v) ∈ D
and S is covered just once as (u, v) ranges throughout the parameter domain D, then the surface area of S is
x x
A(S) = dS = |⃗ru × ⃗rv | dA (7.13)
S D

∂x ⃗ ∂y ⃗ ∂z ⃗ ∂x ⃗ ∂y ⃗ ∂z ⃗
where ⃗ru = i+ j+ k and ⃗rv = i+ j+ k
∂u ∂u ∂u ∂v ∂v ∂v

Ex: Find the surface area of a sphere with a radius a

x = a cos θ sin ϕ y = a sin θ sin ϕ z = a cos ϕ (1)


⃗i ⃗j ⃗k
⃗rφ × ⃗rθ = a cos θ cos ϕ a sin θ cos ϕ −a sin ϕ (2)
−a sin θ sin ϕ a cos θ sin ϕ 0

⃗r_φ × ⃗r_θ = a² sin²φ cos θ ⃗i + a² sin²φ sin θ ⃗j + a² sin φ cos φ ⃗k

|⃗r_φ × ⃗r_θ| = √(a⁴ sin⁴φ cos²θ + a⁴ sin⁴φ sin²θ + a⁴ sin²φ cos²φ)    (3)
|⃗r_φ × ⃗r_θ| = √(a⁴ sin⁴φ + a⁴ sin²φ cos²φ)
|⃗r_φ × ⃗r_θ| = √(a⁴ sin²φ) = a² sin φ

A(S) = ∬_D |⃗r_u × ⃗r_v| dA = ∫_0^{2π} ∫_0^π a² sin φ dφ dθ
     = a² ∫_0^{2π} dθ ∫_0^π sin φ dφ    (4)
     = a² (θ |_0^{2π}) (−cos φ |_0^π)
     = 4πa²

7.7 Surface Integrals


Surface Integral (Scalar): Suppose that a surface S has a vector equation
⃗r(u, v) = x(u, v) ⃗i + y(u, v) ⃗j + z(u, v) ⃗kwhere (u, v) ∈ D
The surface of the integral can be defined as the following
x x
f (x, y, z) dS = f (⃗r(u, v)) |⃗ru × ⃗rv | dA (7.14)
S D

In application, this is the mass of a surface S with a mass density function given by f (x, y, z).

Surface Integral (Vector): If F⃗ is a continuous vector field defined on an oriented surface S, and if S is
given by a vector function ⃗r(u, v) where D is the parameter domain, then the surface integral of F⃗ over S, is
{ x
F⃗ · dS
⃗= F⃗ · (⃗ru × ⃗rv ) dA (7.15)
S D

In application, this is the amount of a vector field F⃗ that pierces through a surface S, which is more commonly
known as the flux of F⃗ .

7.8 Stokes’ Theorem


Stokes’ Theorem: Let S be an oriented piecewise-smooth surface that is bounded by a simple, closed,
piecewise-smooth boundary curve C with positive orientation. Let F be a vector field whose components have


continuous partial derivatives on an open region in R3 that contains S. Then


∮_C F⃗ · d⃗r = ∮_C P dx + Q dy + R dz
            = ∬_S dP dx + dQ dy + dR dz
x ∂P ∂P ∂Q ∂Q ∂R ∂R
= ( dy + dz) dx + ( dx + dz) dy + ( dx + dy) dz
∂y ∂z ∂x ∂z ∂x ∂y
S
x ∂P ∂P ∂Q ∂Q ∂R ∂R
= dydx + dzdx + dxdy + dzdy + dxdz + dydz
∂y ∂z ∂x ∂z ∂x ∂y
S
x ∂P ∂P ∂Q ∂Q ∂R ∂R
= − dxdy + dzdx + dxdy − dydz − dzdx + dydz
∂y ∂z ∂x ∂z ∂x ∂y
S
x ∂R ∂Q ∂P ∂R ∂Q ∂P
= ( dydz − dydz) + ( dzdx − dzdx) + ( dxdy − dxdy)
∂y ∂z ∂z ∂x ∂x ∂y
S
x
= (∇ × F⃗ )x dAx + (∇ × F⃗ )y dAy + (∇ × F⃗ )z dAz
S
x
= (∇ × F⃗ ) · dS

S
I x
F⃗ · d⃗r = (∇ × F⃗ ) · dS
⃗ (7.16)
C S

7.9 The Divergence Theorem


The Divergence Theorem: Let E be a simple solid region and let S be the boundary surface of E, given
with positive (outward) orientation. Let F⃗ be a vector field whose component functions have continuous partial
derivatives on an open region that contains E. Then
{ {
F⃗ · dS
⃗= P dAx + Q dAy + R dAz
S S
y
= dP dAx + dQ dAy + dR dAz
E
y ∂P ∂Q ∂R
= ( dx) dAx + ( dy) dAy + ( dz) dAz
∂x ∂y ∂z
E
y ∂P ∂Q ∂R
= dV + dV + dV
∂x ∂y ∂z
E
y ∂P ∂Q ∂R
= ( + + )dV
∂x ∂y ∂z
E
y
= ∇ · F⃗ dV
E
{ y
F⃗ · dS
⃗= ∇ · F⃗ dV (7.17)
S E


Ex: Find the gravitational field ⃗g for a source that is radial and uniform, given that ∇ · ⃗g = −4πGρ_mass

Φ_g = ∯_S ⃗g · dS⃗ = g(4πr²)

Φ_g = ∯_S ⃗g · dS⃗ = ∭_V ∇ · ⃗g dV = ∭_V −4πGρ_mass dV = −4πGρ_mass V = −4πGm

g(4πr²) = −4πGm  ⟹  g = −Gm/r²
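A symbolic check of the Divergence Theorem itself (added here, not from the text): for F⃗ = ⟨x, y, z⟩ we have div F⃗ = 3, and both the flux through the unit sphere and the volume integral over the unit ball equal 4π.

```python
import sympy as sp

rho, phi, th = sp.symbols('rho phi theta', nonnegative=True)

# Flux of F = <x, y, z> through the unit sphere: on the sphere F.n = 1,
# so the flux is the surface area, with dS = sin(phi) dphi dtheta.
flux = sp.integrate(sp.sin(phi), (phi, 0, sp.pi), (th, 0, 2*sp.pi))

# Volume integral of div F = 3 over the unit ball, dV = rho^2 sin(phi) drho dphi dtheta
vol_int = sp.integrate(3*rho**2*sp.sin(phi), (rho, 0, 1), (phi, 0, sp.pi), (th, 0, 2*sp.pi))

print(flux, vol_int)   # both 4*pi
```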

7.10 Summary
The main results of this chapter are all higher-dimensional versions of the Fundamental Theorem of Cal-
culus. Notice that in each case we have an integral of a “derivative” over a region on the left side, and the right
side involves the values of the original function only on the boundary of the region.

Chapter 8 Second-Order Differential Equations

8.1 Second-Order Linear Equations


Nonhomogeneous Linear Differential Equation: A nonhomogeneous second-order linear differential
equation has the form
d2 y dy
P (x) 2 + Q(x) + R(x)y = G(x) (8.1)
dx dx

Homogeneous Linear Differential Equation: A homogeneous second-order linear differential equation


has the form
d2 y dy
P (x) 2 + Q(x) + R(x)y = 0 (8.2)
dx dx
If the functions P , Q, and R are all constant functions, we can see that a more general form of the homogeneous
equation becomes
ay ′′ + by ′ + cy = 0

Characteristic Equation: It’s easy to think of functions that follow the form above such as y = erx ,
which can be used to derive the characteristic equation below
y = erx y ′ = rerx y ′′ = r2 erx
a(r2 erx ) + b(rerx ) + c(erx ) = 0

erx (ar2 + br + c) = 0

ar2 + br + c = 0 (8.3)

Complementary Solution: The solutions to this quadratic yields three cases of solutions: 2 real, 1 real,
and 2 imaginary solutions. This yields three different solutions to the homogeneous differential equation, known
as complementary solutions.
I) b2 − 4ac > 0 : yc (x) = eαx (c1 eβx + c2 e−βx )
II) b2 − 4ac = 0 : yc (x) = eαx (c1 + c2 x) (8.4)
III) b2 − 4ac < 0 : yc (x) = eαx (c1 eiβx + c2 e−iβx )
√ √
−b ± b2 − 4ac −b b2 − 4ac
For: r1,2 = where, α = &β =
2a 2a 2a

8.2 Nonhomogeneous Linear Equations


General Solution: In the last section we saw solutions to homogeneous differential equations, known as
complementary solutions. We can extend this idea to find a general solution of any nonhomogeneous linear
equation by using one of several methods to obtain a particular solution to the differential equation. Combining the
complementary and particular solutions yields the general solution, as shown below
y(x) = yc(x) + yp(x)    (8.5)

Method of Undetermined Coefficients: The rules for finding particular solutions can be summarized
as follows
1. If G(x) = ekx P (x), where P is a polynomial of degree n, then try
yp (x) = ekx Q(x)
where Q(x) is an nth-degree polynomial (whose coefficients are determined by substituting in the differ-
ential equation).

2. If G(x) = ekx P (x) cos(mx) or G(x) = ekx P (x) sin(mx), where P is an nth-degree polynomial, then
try
yp (x) = ekx Q(x) cos(mx) + ekx R(x) sin(mx)

where Q and R are nth-degree polynomials.

3. If any term of yp is a solution of the complementary equation, multiply yp by x (or by x2 if necessary)

8.3 Applications of Second-Order Differential Equations


Oscillations (Simple): A basic concept of differential equations comes from simple harmonic motion,
where a mass attached to a spring is allowed to oscillate; assuming there are no forces other than the
restoring force of the spring F_s = −kx,

F_net = F_s = ma
−kx = m d²x/dt²
d²x/dt² + (k/m)x = 0
We can find a general solution to the differential equation by using the characteristic equation
r² + k/m = 0  ⟹  r = ±√(−4(k/m))/2  ⟹  r = ±i√(k/m)

Let: ω = √(k/m), so x(t) = c₁e^{iωt} + c₂e^{−iωt}

Let: A = √(c₁² + c₂²) and φ = tan⁻¹(−v_o/(ωx_o))


x(t) = A cos(ωt + ϕ) (8.6)

Oscillations (Damped): Damped oscillations are an extension of simple oscillations as they include a
dampening force Fd = −γv, so the new differential equation becomes the following
F_net = F_s + F_d = ma
−kx − γ dx/dt = m d²x/dt²
m d²x/dt² + γ dx/dt + kx = 0
We once again can find a general solution to the differential equation by using the auxiliary equation, but we
will end up with three cases: over damping, critical damping, and under damping
mr² + γr + k = 0  ⟹  r = (−γ ± √(γ² − 4mk))/(2m)  ⟹  r = −γ/(2m) ± √(γ² − 4mk)/(2m)

Let: α = −γ/(2m)  &  β = √(γ² − 4mk)/(2m)

I)   γ² − 4mk > 0 (over damping):     x(t) = e^{αt}(c₁e^{βt} + c₂e^{−βt})
II)  γ² − 4mk = 0 (critical damping): x(t) = e^{αt}(c₁ + c₂t)                    (8.7)
III) γ² − 4mk < 0 (under damping):    x(t) = e^{αt}(c₁e^{iβt} + c₂e^{−iβt})

Oscillations (Forced): Forced oscillations are the last addition of the oscillation series of differential
equations. This is a nonhomogeneous differential equation since now there is a applied force Fext = Fo cos(ωo t)
in addition to the other two forces discussed before
Fnet = Fs + Fd + Fext = ma
dx d2 x
−kx − γ + Fo cos(ωo t) = m 2
dt dt
d2 x dx
m +γ + kx = Fo cos(ωo t)
dt2 dt
To find the general solution to this differential equation, we can use the method of undetermined coefficients
to obtain a particular solution and add it to the complementary solution derived before; in the undamped case
(γ = 0, with ω = √(k/m)) this gives

x(t) = x_c(t) + (F_o/(m(ω² − ω_o²))) cos(ω_o t)    (8.8)

8.4 Series Solutions


Series Solution: Instead of the auxiliary equation approach where y = erx , we can try a power series for
y to solve a second order differential equation given by

X
y = f (x) = c n xn = c 0 + c 1 x + c 2 x2 + c 3 x3 + · · · (8.9)
n=0


Ex:Use power series to solve the equation y ′′ + y = 0.



X X∞ X∞ ∞
X
′ ′′
y= cn x n
y = ncn x n−1
y = n(n − 1)cn x n−2
= (n + 2)(n + 1)cn+2 xn
n=0 n=1 n=2 n=0

X ∞
X
(n + 2)(n + 1)cn+2 xn + c n xn = 0
n=0 n=0

X
[(n + 2)(n + 1)cn+2 + cn ] xn = 0
n=0

(n + 2)(n + 1)cn+2 + cn = 0
c_{n+2} = −c_n/((n + 1)(n + 2)),   n = 0, 1, 2, 3, . . .

n = 0:  c₂ = −c₀/(1·2)
n = 1:  c₃ = −c₁/(2·3)
n = 2:  c₄ = −c₂/(3·4) = c₀/(1·2·3·4) = c₀/4!
n = 3:  c₅ = −c₃/(4·5) = c₁/(2·3·4·5) = c₁/5!
n = 5:  c₇ = −c₅/(6·7) = −c₁/(5!·6·7) = −c₁/7!

For the even coefficients, c_{2n} = (−1)ⁿ c₀/(2n)!;  for the odd coefficients, c_{2n+1} = (−1)ⁿ c₁/(2n + 1)!

y = c₀ + c₁x + c₂x² + c₃x³ + c₄x⁴ + c₅x⁵ + · · ·
  = c₀ (1 − x²/2! + x⁴/4! − x⁶/6! + · · · + (−1)ⁿ x^{2n}/(2n)! + · · ·)
  + c₁ (x − x³/3! + x⁵/5! − x⁷/7! + · · · + (−1)ⁿ x^{2n+1}/(2n + 1)! + · · ·)
  = c₀ Σ_{n=0}^{∞} (−1)ⁿ x^{2n}/(2n)! + c₁ Σ_{n=0}^{∞} (−1)ⁿ x^{2n+1}/(2n + 1)!

= c0 cos x + c1 sin x
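The recurrence c_{n+2} = −c_n/((n + 1)(n + 2)) can also be checked numerically. In this added snippet the truncated series reproduces cos x for (c₀, c₁) = (1, 0) and sin x for (c₀, c₁) = (0, 1).

```python
import math

def series_solution(x, c0, c1, terms=25):
    """Evaluate the truncated power-series solution of y'' + y = 0 using c_{n+2} = -c_n/((n+1)(n+2))."""
    c = [c0, c1]
    for n in range(terms - 2):
        c.append(-c[n] / ((n + 1)*(n + 2)))
    return sum(cn * x**n for n, cn in enumerate(c))

x = 1.3
print(series_solution(x, 1, 0), math.cos(x))   # both ~ 0.2675
print(series_solution(x, 0, 1), math.sin(x))   # both ~ 0.9636
```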

Chapter 9 MC Appendix

9.1 Numbers, Inequalities, and Absolute Values


Integers: All positive and negative whole numbers given by
. . . , −3, −2, −1, 0, 1, 2, 3, . . .

Rational Numbers: Ratios r of two integers m and n given by


m
r= where m and n are integers and n ̸= 0
n

Irrational Numbers: Any number that cannot be represented as a ratio of two integers, e.g.,

√2, sin 1°, π, log₇(2)

Set and Elements: A collection of objects called elements


If S is a set, and a ∈ S, then a is an element of S
If S is a set, and a ̸∈ S, then a is not an element of S

Union and Intersection: A set V that has the all the values of two sets S and T is an union of both sets.
A set V that has the common values of two sets S and T is an intersection of both sets.
If V = S ∪ T , then V has all values of S and T
If V = S ∩ T , then V has all common values of S and T

Open and Closed Intervals: A certain set of real numbers between a and b is a set called an open interval.
A certain set of real numbers from a to b is a set called a closed interval.
(a, b) = {x | a < x < b} is an open interval.
[a, b] = {x | a ⩽ x ⩽ b} is a closed interval.

Inequalities: For any inequality, the following rules are followed


1. If a < b, then a + c < b + c.
2. If a < b and c < d, then a + c < b + d.
3. If a < b and c > 0, then ac < bc.
4. If a < b and c < 0, then ac > bc.
5. If 0 < a < b, then 1/a > 1/b.

Absolute Value: The distance from a to 0 on the real number line given by
(
|a| = a if a ⩾ 0
|a| = −a if a < 0

Properties of Absolute Values: Suppose a and b are any real numbers and n is an integer. Then,

1. |ab| = |a||b|    2. |a/b| = |a|/|b| (b ≠ 0)    3. |aⁿ| = |a|ⁿ

For solving equations or inequalities with absolute values, it is often helpful to use the following statements.
Suppose a > 0. Then,
4. |x| = a if and only if x = ±a
5. |x| < a if and only if −a < x < a
6. |x| > a if and only if x > a or x < −a

The Triangle Inequality: If a and b are any real numbers, then


|a + b| ⩽ |a| + |b|

9.2 Coordinate Geometry and Lines


Distance Formula: The distance between the points P1 (x1 , y1 ) and P2 (x2 , y2 ) is given by
p
|P1 P2 | = (x2 − x1 )2 + (y2 − y1 )2

Slope: The slope of a non-vertical (vertical slope is undefined) line that passes through the points P1 (x1 , y1 )
and P2 (x2 , y2 ) is given by
∆y y2 − y1
m= =
∆x x2 − x1

Point Slope Form of a Line: An equation of the line passing through the point P1 (x1 , y1 ) and having
slope m is given by
y − y1 = m(x − x1 )


Slope-Intercept Form of a Line: An equation of the line with slope m and y-intercept b is given by
y = mx + b

Parallel and Perpendicular Lines: For any two particular lines, they can be parallel or perpendicular by
the following
1. Two non-vertical lines are parallel if and only if they have the same slope.
2. Two lines with slopes m1 and m2 are perpendicular if and only if m1 m2 = −1

9.3 Graphs of Second-Degree Equations


Equation of a Circle: An equation of the circle with center (h, k) and radius r is
(x − h)2 + (y − k)2 = r2

Equations of Parabolas: Parabolas can either be oriented vertically or horizontally given by


1. y = ax2 (facing up). 2. y = −ax2 (facing down).
3. x = ay 2 (facing right). 4. x = −ay 2 (facing left).

Equation of an Ellipse: For positive numbers a and b, the equation of an ellipse centered at (h, k) is given
by
(x − h)2 (y − k)2
+ =1
a2 b2

Equation of an Hyperbola: For positive numbers a and b, the equation of an hyperbola centered at (h, k)
is given by
(x − h)2 (y − k)2
− =1
a2 b2

9.4 Trigonometry
Angles: Angles can be measured in degrees or in radians. The angle given by a complete revolution is
360◦ ,which is the same as 2π rad. Therefore, the conversion between the two is given by
π rad = 180◦

Arc Length: For a circle with radius r and a sector with length a given by an angle theta, its proportion
of angle to circumference yields the arc length given by
θ a
= =⇒ a = rθ
2π 2πr

Trigonometric Functions: For an acute angle θ, the six trigonometric functions are defined as ratios
of lengths of sides of a right triangle given by

sin θ = opp/hyp    cos θ = adj/hyp    tan θ = opp/adj
csc θ = hyp/opp    sec θ = hyp/adj    cot θ = adj/opp

This definition cannot be applied to obtuse or negative angles, so we extend our notation. Looking at a point
P(x, y) in standard position, a distance r from the origin, with an angle θ measured from the x-axis to the ray
through the point, the trigonometric functions are given by

sin θ = y/r    cos θ = x/r    tan θ = y/x
csc θ = r/y    sec θ = r/x    cot θ = x/y

Trigonometric Identities: The following are equivalent expressions for a given trigonometric expression
that can be used to simplify or expand

Pythagorean Identities
Sines & Cosines Tangents & Secants Co-tangents & Co-secants

sin2 θ + cos2 θ = 1 tan2 θ + 1 = sec2 θ 1 + cot2 θ = csc2 θ

Even & Odd Identities


Sine Cosine Tangent

sin(−θ) = − sin θ cos(−θ) = cos θ tan(−θ) = − tan θ

Sum & Difference Identities


Sine Cosine Tangent

tan x ± tan y
sin(x ± y) = sin x cos y ± sin y cos x cos(x ± y) = cos x cos y ∓ sin x sin y tan(x ± y) =
1 ∓ tan x tan y

Double & Half Angle Identities

Sine (Double):   sin 2x = 2 sin x cos x
Cosine (Double): cos 2x = cos²x − sin²x = 1 − 2 sin²x = 2 cos²x − 1
Sine (Half):     sin²x = (1 − cos 2x)/2
Cosine (Half):   cos²x = (1 + cos 2x)/2

9.5 Sigma Notation


Sigma Notation: If am , am+1 , . . . , an are real numbers and m and n are integers such that m ⩽ n, then
X n
ai = am + am+1 + · · · + an
i=m


Sigma Notation Properties: If c is any constant (that is, it does not depend on i), then
Pn P n Pn P n P
n P n
1. (ai + bi ) = ai + bi 2. (ai − bi ) = ai − bi
i=m i=m i=m i=m i=m i=m
P
n P
n Pn
3. cai = c ai 4. 1=n
i=m i=m i=1
Pn Pn n(n + 1)
5. c = cn 6. i=
2
i=1 i=1  
Pn n(n + 1)(2n + 1) Pn n(n + 1) 2
7. i2 = 8. 3
i =
i=1 6 i=1 2

Chapter 10 Vectors

10.1 The Geometry and Algebra of Vectors


Vector Addition: Given two vectors ⃗u and ⃗v , then their sum is given by
⃗u + ⃗v = [u1 + v1 , u2 + v2 ] (10.1)

Head to Tail Rule: Geometrically, ⃗u + ⃗v is obtained by placing the tail of ⃗v at the head of ⃗u; the sum is the vector drawn from the tail of ⃗u to the head of ⃗v.

Scalar Multiplication: Given a vector ⃗v and a real number c, then the scalar multiple of ⃗v is given by

c⃗v = c [v1 , v2 ] = [cv1 , cv2 ] (10.2)

Linear Combination: A vector ⃗v is a linear combination of vectors ⃗v1 , ⃗v2 , . . . , ⃗vk if there are scalars
c1 , c2 , . . . , ck such that ⃗v = c1⃗v1 + c2⃗v2 + · · · + ck⃗vk . The scalars c1 , c2 , . . ., ck are called the coefficients of
the linear combination.

10.2 Length and Angle: The Dot Product



  
Dot Products: If ⃗u = [u₁, . . . , u_n]ᵀ and ⃗v = [v₁, . . . , v_n]ᵀ, then the dot product of ⃗u and ⃗v is defined by

⃗u · ⃗v = u₁v₁ + u₂v₂ + · · · + u_nv_n    (10.3)

Length/Norm: The length (or norm) of a vector ⃗v = [v₁, . . . , v_n]ᵀ in Rⁿ is the nonnegative scalar ∥⃗v∥ given by

∥⃗v∥ = √(⃗v · ⃗v) = √(v₁² + v₂² + · · · + v_n²)    (10.4)

Cauchy-Schwarz Inequality: For all vectors ⃗u and ⃗v in Rⁿ,

|⃗u · ⃗v| = |∥⃗u∥∥⃗v∥ cos θ| ≤ ∥⃗u∥∥⃗v∥ |cos θ| ≤ ∥⃗u∥∥⃗v∥,  since |cos θ| ≤ 1

|⃗u · ⃗v| ⩽ ∥⃗u∥∥⃗v∥    (10.5)

Triangle Inequality: For all vectors ⃗u and ⃗v in Rn ,


∥⃗u + ⃗v ∥2 = (⃗u + ⃗v ) · (⃗u + ⃗v )
= ⃗u · ⃗u + 2(⃗u · ⃗v ) + ⃗v · ⃗v
⩽ ∥⃗u∥2 + 2∥⃗u∥∥⃗v ∥ + ∥⃗v ∥2
⩽ (∥⃗u∥ + ∥⃗v ∥)2

∥⃗u + ⃗v ∥ ≤ ∥⃗u∥ + ∥⃗v ∥ (10.6)

Distance: The distance between two vectors ⃗u and ⃗v in Rn is given by


d(⃗u, ⃗v ) = ∥⃗u − ⃗v ∥ (10.7)

Projection: If ⃗u and ⃗v are vectors in Rn and ⃗u ̸= ⃗0, then the projection of ⃗v onto ⃗u is given by
 
⃗v · ⃗u
proju (⃗v ) = ⃗u (10.8)
⃗u · ⃗u

10.3 Lines and Planes

Equations of Lines in R2
Normal General Vector Parametric
x = p1 + td1
⃗n · ⃗x = ⃗n · p⃗ ax + by = c ⃗x = p⃗ + td⃗ y = p2 + td2


Equations of Lines and Planes in R3


Normal General Vector Parametric
⃗n1 · ⃗x = ⃗n1 · p⃗1 a 1 x + b1 y + c 1 z = d 1 x = p1 + td1
Lines ⃗x = p⃗ + td⃗ y = p2 + td2
⃗n2 · ⃗x = ⃗n2 · p⃗2 a 2 x + b2 y + c 2 z = d 2 z = p3 + td3

x = p1 + tu1 + sv1
Planes ⃗n · ⃗x = ⃗n · p⃗ ax+by+cz=d ⃗x = p⃗ + t⃗u + s⃗v + y = p2 + tu2 + sv2
z = p3 + tu3 + sv3

Distance from Point to Line: For a line l in R2 with its general equation given by ax + by = c, the
distance from a point B = (x0 , y0 ) to the line is given by
|ax0 + by0 − c|
d(B, l) = √ (10.9)
a 2 + b2

Distance from Point to Plane: For a plane P in R3 with its general equation given by ax + by + cz = d,
the distance from a point B = (x0 , y0 , z0 ) to the plane is given by
|ax0 + by0 + cz0 − d|
d(B, P) = √ (10.10)
a 2 + b2 + c 2

Chapter 11 Systems of Linear Equations

11.1 Introduction to Systems of Linear Equations


System of Linear Equations: A finite set of linear equations, represented by the following

ax + by = A
cx + dy = B    ⟹  A⃗x = ⃗b,  where  A = [ a  b ; c  d ],  ⃗x = [ x ; y ],  ⃗b = [ A ; B ]    (11.1)

Augmented Matrix: A matrix that represents a system of linear equations together with its right-hand side

If A⃗x = ⃗b, then the augmented matrix is  [ A | ⃗b ] = [ a  b  A ; c  d  B ]    (11.2)

11.2 Direct Methods for Solving Linear Systems


Row Echelon Form: A matrix is considered to be in this form if it satisfies the following two conditions
1. Any row consisting of entirely zeros are at the bottom of the matrix.
2. In each nonzero row, the first nonzero entry (called the leading entry) is in a column to the left of any
leading entries below it.

Elementary Row Operations: The following operations can be applied on a matrix to obtain its row
echelon form
1. Interchange two rows.
2. Multiply a row by a nonzero constant.
3. Add a multiple of a row to another row.

Reduced Row Echelon Form: A matrix is considered to be in this form if it satisfies the following three
conditions
1. It is in row echelon form
2. The leading entry in each nonzero row is 1
3. Each column containing a leading 1 has zeros everywhere else

Ex: Solve the following system of linear equations

  3x + y + z = 3
  x − y − 3z = 2      ⟹  augmented matrix  [ A | ⃗b ] =  [  3   1   1 |  3 ]
  −x − 2y − z = 1                                        [  1  −1  −3 |  2 ]
                                                         [ −1  −2  −1 |  1 ]
[ 3  1  1 | 3 ; 1  −1  −3 | 2 ; −1  −2  −1 | 1 ]
  → (R3 + R2 → R3)      [ 3  1  1 | 3 ; 1  −1  −3 | 2 ; 0  −3  −4 | 3 ]
  → (−3R2 + R1 → R2)    [ 3  1  1 | 3 ; 0  4  10 | −3 ; 0  −3  −4 | 3 ]
  → (4R3 + 3R2 → R3)    [ 3  1  1 | 3 ; 0  4  10 | −3 ; 0  0  14 | 3 ]
  → ((1/14)R3 → R3)     [ 3  1  1 | 3 ; 0  4  10 | −3 ; 0  0  1 | 3/14 ]
  → (R1 − R3 → R1)      [ 3  1  0 | 39/14 ; 0  4  10 | −3 ; 0  0  1 | 3/14 ]
  → (R2 − 10R3 → R2)    [ 3  1  0 | 39/14 ; 0  4  0 | −36/7 ; 0  0  1 | 3/14 ]
  → ((1/4)R2 → R2)      [ 3  1  0 | 39/14 ; 0  1  0 | −9/7 ; 0  0  1 | 3/14 ]
  → (R1 − R2 → R1)      [ 3  0  0 | 57/14 ; 0  1  0 | −9/7 ; 0  0  1 | 3/14 ]
  → ((1/3)R1 → R1)      [ 1  0  0 | 19/14 ; 0  1  0 | −9/7 ; 0  0  1 | 3/14 ]

⟹  x = 19/14,  y = −9/7,  z = 3/14
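A quick verification of the solution with NumPy (added here for illustration):

```python
import numpy as np

A = np.array([[ 3,  1,  1],
              [ 1, -1, -3],
              [-1, -2, -1]], dtype=float)
b = np.array([3, 2, 1], dtype=float)

x = np.linalg.solve(A, b)
print(x)                              # [ 1.3571... -1.2857...  0.2142...]
print(np.array([19, -18, 3]) / 14)    # 19/14, -9/7, 3/14 -- matches
```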

11.3 Spanning Sets and Linear Independence


Span & Spanning Sets: If S = {⃗v1 , ⃗v2 , . . . , ⃗vk } is a set of vectors in Rn , then the set of all linear combi-
nations of ⃗v1 , ⃗v2 , . . . , ⃗vk is called the span of ⃗v1 , ⃗v2 , . . . , ⃗vk and is de-noted by span (⃗v1 , ⃗v2 , . . . , ⃗vk ) or span(S).
If span(S) = Rn , then S is called a spanning set for Rn .

Linear Dependence/Independence: A set of vectors ⃗v1 , ⃗v2 , . . . , ⃗vk is linearly dependent if there are
c1⃗v1 + c2⃗v2 + · · · + ck⃗vk = ⃗0

A set of vectors that is not linearly dependent is called linearly independent.

Chapter 12 Matrices

12.1 Matrix Operations:


Matrix Addition: Given matrices A and B both with m × n entries, we can define matrix addition as the
following
h i
A + B = aij + bij (12.1)

Matrix Scalar Multiplication: Given matrix A with m × n entries and a constant c, we can define matrix
scalar multiplication as the following
h i h i
cA = c aij = caij (12.2)

Matrix Multiplication: If A is an m × n matrix and B is an n × r matrix, then the product C = AB
is an m × r matrix. We can compute each entry in C using the following
c_ij = a_i1 b_1j + a_i2 b_2j + · · · + a_in b_nj    (12.3)

(Note: We can only define matrix multiplication between two matrices if the number of columns of matrix A
is equal to the number of rows in matrix B)

Transpose of a Matrix: Transpose of an m×n matrix A is the n×m matrix AT obtained by interchanging
the rows and columns of A.
(Aij )T = Aji (12.4)

12.2 Matrix Algebra


Properties of Matrix Addition and Scalar Multiplication: If A, B, and C are matrices of the same size
and c and d are scalars, then

1. A + B = B + A 2. (A + B) + C = A + (B + C) 3. A + O = A 4. A + (−A) = O

5. c(A + B) = cA + cB 6. (c + d)A = cA + dA 7. c(dA) = (cd)A 8. 1A = A

Properties of Matrix Multiplication: If A, B, and C are matrices (whose sizes are such that the indicated
operations can be performed), and k is a scalar, then

1. A(BC) = (AB)C    2. A(B + C) = AB + AC    3. (A + B)C = AC + BC

4. k(AB) = (kA)B = A(kB) 5. Im A = A = AIn

Properties of the Transpose: If A and B are matrices (whose sizes are such that the indicated operations
can be performed), and k is a scalar, then

1. (AT )T = A 2. (A + B)T = AT + B T 3. (kA)T = k(AT )

4. (AB)T = B T AT 5. (Ar )T = (AT )r

12.3 The Inverse of a Matrix

Inverse of a Matrix: If A is an n × n matrix, an inverse of A is an n × n matrix A−1 with the property


that
AA−1 = I and A−1 A = I

where I = In is the n × n identity matrix. If such A−1 exists, the A is defined to be invertible.

Using row operations for an augmented matrix, we can obtain an expression for A−1 for a matrix A with 2 × 2
entries. " #
a b
A= =⇒ AA−1 = I =⇒ A−1 = [A|I]
c d
" # " # " #
1 1 b 1 b 1
a b 1 0 1 0 1 0
A−1 =
R 1 & R 2 R −R =R
a
−→c a a 2
−→ 1 2 a a
c d 0 1 1 dc 0 1c 0 dc − ab − a1 1c
" # " # " #
b 1
− 1 0 ad−bc ac
R2 1 ab 1
0 a
R
b 1
a
1 1
0
A = 1 a a −→ a −→ b b
0 ad−bcac − a1 1c 0 1 − ad−bc c a
ad−bc 0 1 − ad−bc c a
ad−bc
" # " # " #
a
1 1
0 a
0 1
+ c
− a b
1 0 1
+ bc
− b
A−1 = b
R −R =R R 1
b 1
−→ 2 1 b b ad−bc ad−bc −→
a a a(ad−bc) ad−bc
0 1 − ad−bc c a
ad−bc 0 1 − c
ad−bc
a
ad−bc 0 1 − c
ad−bc
a
ad−bc
" # " #
ad
1 0 a(ad−bc) − ad−bcb
1 0 ad−bc d
− ad−bc
b h i

A = 1
= = I|A −1
0 1 − ad−bc c a
ad−bc
0 1 − ad−bc c a
ad−bc

" #
1 d −b
A−1 = (12.5)
ad − bc −c a

Determinant of a 2 × 2 Matrix: The determinant of a 2 × 2 matrix A is defined to be the following

det(A) = | a  b |
         | c  d |  = ad − bc    (12.6)


12.4 The LU Factorization


LU Factorization: If A is a square matrix that can be reduced to row echelon form without any row
interchanges, then it has an LU factorization where L is unit lower triangular and U is upper triangular

 
2 2 −1
 
Ex: Obtain the LU factorization of A = 4 0 4 
3 4 4
       
      [ 2  2  −1 ]  R2−2R1→R2  [ 2  2  −1 ]  R3−(3/2)R1→R3  [ 2  2   −1  ]  R3+(1/4)R2→R3  [ 2  2  −1 ]
A  =  [ 4  0   4 ]  -------->  [ 0 −4   6 ]  ------------>  [ 0 −4    6  ]  ------------>  [ 0 −4   6 ]  =  U
      [ 3  4   4 ]             [ 3  4   4 ]                 [ 0  1  11/2 ]                 [ 0  0   7 ]

L₂₁ = 2,  L₃₁ = 3/2,  L₃₂ = −1/4   ⟹   L = [  1     0    0 ]
                                            [  2     1    0 ]
                                            [ 3/2  −1/4   1 ]

              [ 2  2  −1 ]   [  1     0    0 ] [ 2  2  −1 ]
A = LU  ⟹    [ 4  0   4 ] = [  2     1    0 ] [ 0 −4   6 ]
              [ 3  4   4 ]   [ 3/2  −1/4   1 ] [ 0  0   7 ]
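The factorization can be checked numerically (an added sketch). Note that scipy.linalg.lu uses partial pivoting, so its L and U need not match the hand computation; the check is that the products reproduce A.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2, 2, -1],
              [4, 0,  4],
              [3, 4,  4]], dtype=float)

# Hand-computed factors from the example above
L = np.array([[1, 0, 0], [2, 1, 0], [1.5, -0.25, 1]])
U = np.array([[2, 2, -1], [0, -4, 6], [0, 0, 7]])
print(np.allclose(L @ U, A))        # True

P, L2, U2 = lu(A)                   # SciPy's pivoted factorization: A = P L2 U2
print(np.allclose(P @ L2 @ U2, A))  # True
```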

PT LU Factorization: If A is a square matrix that can only be reduced to row echelon form with row
interchanges, then it has a symmetric permutation matrix P , an LU factorization where L is unit lower triangular
and U is upper triangular

 
0 1 4
 
Ex: Obtain the P T LU factorization of A = −1 2 1
1 3 3
         
1 0 0 0 1 4 0 1 4 0 1 0 0 1 4 −1 2 1
         
A = P A = 0 1 0 −1 2 1 = −1 2 1 =⇒ B = P A = 1 0 0 −1 2 1 =  0 1 4
0 0 1 1 3 3 1 3 3 0 0 1 1 3 3 1 3 3
     
−1 2 1 −1 2 1 −1 2 1
  R3 +R1 =R3   R3 −5R2 =R3  
B =  0 1 4 −→  0 1 4 −→ 0 1 4 =U
1 3 3 0 5 4 0 0 −16
 
1 0 0
 
L21 = 0, L31 = −1, L32 = 5 =⇒ L =  0 1 0
−1 5 1

B = LU =⇒ P A = LU =⇒ A = P−1 LU =⇒ A = P T LU
   
0 1 0 1 0 0 −1 2 1
   
A = 1 0 0  0 1 0  0 1 4 
0 0 1 −1 5 1 0 0 −16


12.5 Subspaces, Basis, Dimension, and Rank


Subspace: Any collection S of vectors in Rn such that
1. The zero vector ⃗0 is in S
2. If ⃗u and ⃗v are in S, then ⃗u + ⃗v is in S
3. If ⃗u is in S and c is a scalar, then c⃗u is in S

Row & Column Space: If A is an m × n matrix, then


1. The row space of A is the subspace row(A) of Rn spanned by the rows of A.
2. The column space of A is the subspace col(A) of Rm spanned by the columns of A.
3. The null space of A is the subspace null(A) of Rn consisting of solutions of the homogeneous linear
system A⃗x = ⃗0.

Basis: For a subspace S of Rn , a basis is a set of vectors in S that spans S and is linearly independent

12.6 Introduction to Linear Transformations


Linear Transformation: A transformation T : Rn → Rm is defined to be a linear transformation if
1. T (⃗u + ⃗v ) = T (⃗u) + T (⃗v ) for all ⃗u and ⃗v in Rn
2. T (c⃗v ) = cT (⃗v ) for all ⃗v in Rn and all scalars c

Composition of Linear Transformations: Let T : Rm → Rn and S : Rn → Rp be linear transfor-


mations. Then S ◦ T : Rm → RP is a linear transformation. Moreover, their standard matrices are related
by
[S ◦ T ] = [S][T ]

Chapter 13 Eigenvalues and Eigenvectors

13.1 Introduction to Eigenvalues and Eigenvectors


Eigenvalues and Eigenvectors: Let A be an n × n matrix. A scalar λ is called an eigenvalue of A if there
is a nonzero vector ⃗v such that A⃗v = λ⃗v. Such a vector ⃗v is called an eigenvector of A corresponding to λ.

Eigenspace: Let A be an n × n matrix and let λ be an eigenvalue of A. The collection of all eigenvectors
corresponding to λ, together with the zero vector, is called the eigenspace of λ and is denoted by Eλ

13.2 Determinant
Cofactor of a Matrix: To redefine our computation for determinants of n × n matrices and make it easier
to compute, we define the (i, j)-cofactor of such a matrix A to be
C_ij = (−1)^{i+j} det(A_ij)    (13.1)

Laplace Expansion Theorem: The determinant of an n × n matrix A = [aij ], where n ⩾ 2, can be


computed as
det A = ai1 Ci1 + ai2 Ci2 + · · · + ain Cin
X
n
(13.2)
= aij Cij
j=1

(which is the cofactor expansion along the ith row) and also as

det A = a1j C1j + a2j C2j + · · · + anj Cnj


X
n
(13.3)
= aij Cij
i=1

(the cofactor expansion along the jth column).

However, for larger matrices it is often easier to reduce A to row echelon form using only row-addition
operations (which do not change the determinant); the determinant is then the product of the diagonal entries
of the resulting upper triangular matrix

det(A) = | a  #  . . .  # |
         | 0  b  . . .  # |   = a × b × · · · × n    (13.4)
         | .  .  . . .  . |
         | 0  0  . . .  n |

Properties of Determinants of Matrices: If A and B are real matrices, and E is an elementary matrix
(whose sizes are such that the indicated operations can be performed), then

1. det(EB) = (det E)(det B) 2. det(kA) = k n det A


1
3. det(AB) = (det A)(det B) 4. det(A−1 ) =
det A
5. det A = det AT

Proof of properties (3) and (4) are as follows


det(AB) = det(E1 E2 · · · En B)
= (det E1 )(det E2 ) · · · (det En )(det B)
= det(E1 E2 · · · En ) det B
= (det A)(det B)

AA−1 = I
det(AA−1 ) = det I
(det A)(det A−1 ) = 1
1
det A−1 =
det A

Cramer’s Rule: Let A be an invertible n × n matrix and let ⃗b be a vector in Rn . Then the unique solution
⃗x of the system A⃗x = ⃗b is given by
A = [⃗a1 ⃗a2 · · · ⃗an ] I = [⃗e1 ⃗e2 · · · ⃗en ] Ii (⃗x) = [⃗e1 ⃗e2 · · · ⃗x · · · ⃗en ]
h i
AIi (⃗x) = A [⃗e1 ⃗e2 · · · ⃗x · · · ⃗en ] = [A⃗e1 A⃗e2 · · · A⃗x · · · A⃗en ] = ⃗a1 ⃗a2 · · · ⃗b · · · ⃗an = Ai (⃗b)

AIi (⃗x) = Ai (⃗b)


det(AIi (⃗x)) = det(Ai (⃗b))
(det A)(det Ii (⃗x)) = det(Ai (⃗b))
(det A)(xi ) = det(Ai (⃗b))

det (Ai (b))


xi = for i = 1, . . . , n (13.5)
det A

13.3 Eigenvalues and Eigenvectors of n × n Matrices


Characteristic Equation: The eigenvalues of a square matrix A are precisely the solutions λ of the equa-
tion
det(A − λI) = 0 (13.6)

" #
a b
Ex: Obtain the eigenvalues of the matrix A =
c d
a−λ b
det(A − λI) = = (a − λ)(d − λ) − bc = 0
c d−λ
ad − aλ − dλ + λ2 − bc = 0 =⇒ λ2 − (a + d)λ + (ad − bc) = 0 =⇒ λ2 − (tr A)λ + det A = 0
p
tr A ± (tr A)2 − 4 det A
λ1,2 =
2

" #
2 −3
Ex: Obtain the eigenvectors of the matrix A =
1 0
p
(2 + 0) ± (2 + 0)2 − 4(0 − (−3)) √
det(A − λI) = λ1,2 = = λ1,2 = 1 ± i 2
" 2 # " #
√ √
2 − (1 + i 2) −3 0 1 − i 2 −3 0
⃗v1 = √ = =⇒
1 0 − (1 + i 2) 0 0 0 0
√ √ √
(1 − i√2)x₁ − 3x₂ = 0  ⟹  x₁ = 1 + i√2, x₂ = 1  ⟹  ⃗v₁ = ⟨1 + i√2, 1⟩

⃗v₂:  [ 2 − (1 − i√2)    −3          | 0 ]   =   [ 1 + i√2   −3 | 0 ]   ⟹
     [ 1                −(1 − i√2)  | 0 ]       [    0       0 | 0 ]

(1 + i√2)x₁ − 3x₂ = 0  ⟹  x₁ = 1 − i√2, x₂ = 1  ⟹  ⃗v₂ = ⟨1 − i√2, 1⟩
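The eigenvalues and the hand-computed eigenvector can be confirmed numerically (an added sketch):

```python
import numpy as np

A = np.array([[2, -3],
              [1,  0]], dtype=float)

vals, vecs = np.linalg.eig(A)
print(vals)                            # [1.+1.414j  1.-1.414j], i.e. 1 ± i*sqrt(2)

# Check A v = lambda v for the hand-computed eigenvector <1 + i*sqrt(2), 1>
v = np.array([1 + 1j*np.sqrt(2), 1])
print(np.allclose(A @ v, (1 + 1j*np.sqrt(2)) * v))   # True
```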

13.4 Similarity and Diagonalization


Similarity of Matrices: Let A and B be n × n matrices. We say that A is similar to B if there is an
invertible n × n matrix P such that P−1 AP = B. If A is similar to B, we write A ∼ B.

Ex: Given that A and B are similar, show that (1) A is invertible if and only if B is, (2) det A = det B, (3) A
and B have the same characteristic equation

(1): A = P⁻¹BP ⟹ A⁻¹ = (P⁻¹BP)⁻¹ = P⁻¹B⁻¹P, so A⁻¹ exists exactly when B⁻¹ does (and A⁻¹ ∼ B⁻¹)

(2): A = P⁻¹BP ⟹ det A = det(P⁻¹BP) = (det P⁻¹)(det B)(det P) = (1/det P)(det B)(det P) = det B

(3): det(A − λI) = det(P⁻¹BP − λP⁻¹IP) = det(P⁻¹(B − λI)P) = det(B − λI)

Diagonalization: An n × n matrix A is diagonalizable if there is a diagonal matrix D such that A is


similar to D - that is, if there is an invertible n × n matrix P such that P −1 AP = D.

" #
2 −3
Ex: Diagonalize A = if possible
1 0
p
2± (2)2 − 4(0 − (−3)) √
det(A − λI) = 0 =⇒ λ1,2 = = λ1,2 = 1 ± i 2
2

" # " √ #
λ1 0 1+i 2 0
D= = √
0 λ2 0 1−i 2

Chapter 14 Orthogonality

14.1 Orthogonality in Rn
Orthogonal Set: A set of vectors {⃗v1 , ⃗v2 , . . . , ⃗vk } in Rn is called an orthogonal set if all pairs of distinct
vectors in the set are orthogonal- that is, if
⃗vi · ⃗vj = 0 whenever i ̸= j for i, j = 1, 2, . . . , k (14.1)

Orthonormal Set: A set of orthogonal vectors {⃗q1 , ⃗q2 , . . . , ⃗qk } in Rn that are normalized as
⃗v1 ⃗v2 ⃗vk
⃗q1 = , ⃗q2 = , . . . , ⃗qk = (14.2)
∥⃗v1 ∥ ∥⃗v2 ∥ ∥⃗vk ∥

Orthogonal Matrix: An n × n matrix Q whose columns form an orthonormal set and have the following
properties
Q = [⃗q1 ⃗q2 · · · ⃗qk ] (14.3)

1. Q −1 is orthogonal and Q −1 = QT 2. ∥Q⃗x∥ = ∥⃗x∥ for every ⃗x in Rn


3. Q⃗x · Q⃗y = ⃗x · ⃗y for every ⃗x and ⃗y in Rn 4. det Q = ±1
5. If λ is an eigenvalue of Q, then |λ| = 1 6. If Q1 and Q2 are orthogonal n × n matrices, then so is Q1 Q2
Proving properties (1), (4), (5), and (6) are as follows
(1) Q −1 Q = I = QT Q =⇒ (QT Q)T = (Q −1 Q)T
=⇒ QQT = (Q −1 )T QT = (Q −1 )T (Q −1 )
=⇒ Q −1 is orthogonal.

(4) QT Q = I =⇒ det(QT Q) = det(I)


=⇒ det QT det Q = 1
=⇒ (det Q)2 = 1
=⇒ det Q = ±1

(5) Q⃗v = λ⃗v =⇒ ∥Q⃗v ∥ = ∥λ⃗v ∥


=⇒ ∥⃗v ∥ = |λ|∥⃗v ∥
=⇒ |λ| = 1

(6) (Q₁Q₂)ᵀ = Q₂ᵀQ₁ᵀ = Q₂⁻¹Q₁⁻¹ = (Q₁Q₂)⁻¹

    ⟹ Q₁Q₂ is orthogonal.

14.2 Orthogonal Complements and Orthogonal Projections


Orthogonal Projection: Let W be a subspace of Rⁿ and let {⃗u₁, . . . , ⃗u_k} be an orthogonal basis for W.
For any vector ⃗v in Rⁿ, the orthogonal projection of ⃗v onto W is defined as

proj_W(⃗v) = (⃗u₁ · ⃗v / ⃗u₁ · ⃗u₁) ⃗u₁ + · · · + (⃗u_k · ⃗v / ⃗u_k · ⃗u_k) ⃗u_k    (14.4)

The component of ⃗v orthogonal to W is

perpW (⃗v ) = ⃗v − projW (⃗v ) (14.5)

Orthogonal Decomposition: Let W be a subspace of Rn and let ⃗v be a vector in Rn . Then there are
unique vectors w ⃗ ⊥ in W ⊥ such that
⃗ in W and w
w ⃗ ⊥ = projW (⃗v ) + perpW (⃗v ) = projW (⃗v ) + (⃗v − projW (⃗v )) = ⃗v
⃗ +w

⃗v = w ⃗⊥
⃗ +w (14.6)

14.3 The Gram-Schmidt Process and the QR Factorization


The Gram-Schmidt Process: Let {⃗x1 , . . . , ⃗xk } be a basis for a subspace W of Rn and define the fol-
lowing
⃗v1 = ⃗x1
 
⃗x2 · ⃗v1
⃗v2 = ⃗x2 − ⃗v1
⃗v1 · ⃗v1
   
⃗x3 · ⃗v1 ⃗x3 · ⃗v2
⃗v3 = ⃗x3 − ⃗v1 − ⃗v2
⃗v1 · ⃗v1 ⃗v2 · ⃗v2 (14.7)
..
.
k−1 
X 
⃗xk · ⃗vn
⃗vk = ⃗xk − ⃗vn
⃗vn · ⃗vn
n=1
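A direct NumPy translation of the process is sketched below (an addition to the text; it uses the normalized ⃗q at each step, which is mathematically equivalent). It is applied here to the columns of the matrix used in the QR example that follows.

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X via the Gram-Schmidt process (14.7)."""
    Q = []
    for x in X.T:
        v = x.astype(float)
        for q in Q:
            v = v - (x @ q) * q          # subtract the projection onto each previous direction
        Q.append(v / np.linalg.norm(v))
    return np.column_stack(Q)

X = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
Q = gram_schmidt(X)
print(np.round(Q, 4))
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: columns are orthonormal
```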

The QR Factorization: Let A be an m × n matrix with linearly independent columns. Then A can be
factored as A = QR, where Q is an m × n matrix with orthonormal columns and R is an invertible upper right
triangular matrix

 
0 1 1
 
Ex: Find the QR factorization of A = 1 0 1
1 1 0
     
0 1 1
     
⃗x1 = 1 ⃗x2 = 0 ⃗x3 = 1
1 1 0
       
0   1   0 2
  ⃗x2 · ⃗v1   0+0+1   1 
(1) ⃗v1 = ⃗x1 = 1 (2) ⃗v2 = ⃗x2 − ⃗v1 = 0 − 1 = −1
⃗v1 · ⃗v1 0+1+1 2
1 1 1 1
       
    1   0 ! 1 1
⃗x3 · ⃗v1 ⃗x3 · ⃗v2   0+1+0   1 − 2 + 0  1 2  
1
(3) ⃗v3 = ⃗x3 − ⃗v1 − ⃗v2 = 1 −  1 − − 2  =  1 
⃗v1 · ⃗v1 ⃗v2 · ⃗v2 0+1+1 1 + 14 + 14 3
0 1 1
2 −1
  

 0

 ⃗v1 1  

 √
(1) ⃗q1 = ∥⃗v1 ∥ = 2 1




 1

    

 √2 √1

 2 0
⃗v2 1    1 6 3 
(2) ⃗q2 = = √ −1 =⇒ Q = [⃗q1 ⃗q2 ⃗q3 ] =  √
 2 − √1 √1 
3 

 ∥⃗v2 ∥ 6 6

 1 √1 √1 − √1



   2 6 3




1

 ⃗v3 1  

 (3) ⃗q3 = = √ 1

 ∥⃗v3 ∥ 3
−1

A = QR =⇒ Q −1 A = Q −1 QR =⇒ QT A = IR =⇒ R = QT A
   √ 
0 √1 √1 0 1 1 2 √1 √1
 2 2 2 
  
2 2
R= √
 6 − √1
6
√1  1 0 1 =  0
6  
√3
6
√1 
6
√1
3
√1
3
− 3
√1 1 1 0 0 0 √2
3
    √ 
0 1 1 0 √2 √1 2 √1 √1
  √1
6 3  2 2

A = QR =⇒ 1 0 1 =   2 − √1
6
√1   0
3 
√3
6
√1 
6
1 1 0 √1
2
√1
6
− √13 0 0 √2
3
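numpy.linalg.qr can be used to confirm the factorization (an added sketch). Its factors may differ from the hand computation by signs, so the check is that QR = A and that |R| agrees.

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

Q, R = np.linalg.qr(A)
print(np.allclose(Q @ R, A))   # True
print(np.round(R, 4))          # up to signs: [[sqrt(2), 1/sqrt(2), 1/sqrt(2)],
                               #               [0, 3/sqrt(6), 1/sqrt(6)],
                               #               [0, 0, 2/sqrt(3)]]
```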

14.4 Orthogonal Diagonalization of Symmetric Matrices


Orthogonal Diagonalization: A square matrix A is orthogonally diagonalizable if there exists an orthog-
onal matrix Q and a diagonal matrix D such that QT AQ = D.

Spectral Decomposition: Let A be an n × n matrix. Then A is symmetric if and only if it is orthogonally
diagonalizable, in which case

A = λ₁⃗q₁⃗q₁ᵀ + λ₂⃗q₂⃗q₂ᵀ + · · · + λ_n⃗q_n⃗q_nᵀ    (14.8)


Quadratic Form of Functions: A quadratic form in n variables, where A is a symmetric n × n matrix


and ⃗x is in Rn , is a function f : Rn −→ R of the form
f (⃗x) = ⃗x T A⃗x (14.9)

The Principal Axes Theorem: If we have a symmetric n × n matrix A associated with the quadratic form
f (⃗x) = ⃗x T A⃗x, then we can change its variables by doing the following
⃗x T A⃗x = (Q⃗y )T A(Q⃗y )
= ⃗y T QT AQ⃗y
= ⃗y T D⃗y

f (⃗y ) = ⃗y T D⃗y (14.10)

" #
4 1
Ex: Given the matrix A = : (1) Diagonalize it by finding an orthogonal matrix Q and a matrix D
1 4
such that D = QT AQ, (2) Obtain the spectral decomposition of A, (3) Find the quadratic form associated with
A, and then change its variables with its diagonalized matrix
" # p
4 1 8 ± 82 − 4(1)(15)
(1): A = =⇒ λ1,2 = =⇒ λ1,2 = 5, 3
1 4 2
 " # " # " #

 4 − 5 1 0 − 1 1 0 1

 ⃗v = = =⇒ ⃗v1 =
 1 1 4−5 0 1 −1 0 1
" # " # " #

 4−3 1 0 1 1 0 1


⃗v2 = = =⇒ ⃗v2 =
1 4−3 0 1 1 0 −1
 " #

 ⃗v1 1 1

 ⃗q1 = =√ " 1 #
 ∥⃗v1 ∥ 2 1 √ √1
" # =⇒ Q = [⃗q1 ⃗q2 ] = 12 2

 1 √ − √1


⃗v 2
=√
1 2 2
⃗q2 =
∥⃗v2 ∥ 2 −1
" 1 #" #" 1 # " #
√ √1 4 1 √ √1 5 0
D = QT AQ = 12 2 2 2 =

2
− √1
2
1 4 √1
2
− √1
2
0 3


(2): A = λ₁⃗q₁⃗q₁ᵀ + λ₂⃗q₂⃗q₂ᵀ
       = 5 [ 1/√2 ; 1/√2 ][ 1/√2  1/√2 ] + 3 [ 1/√2 ; −1/√2 ][ 1/√2  −1/√2 ]
       = [ 5/2  5/2 ; 5/2  5/2 ] + [ 3/2  −3/2 ; −3/2  3/2 ]
       = [ 4  1 ; 1  4 ]

(3): f(x₁, x₂) = ⃗xᵀA⃗x = [ x₁  x₂ ][ 4  1 ; 1  4 ][ x₁ ; x₂ ] = 4x₁² + 2x₁x₂ + 4x₂²

⟹  f(y₁, y₂) = ⃗yᵀD⃗y = [ y₁  y₂ ][ 5  0 ; 0  3 ][ y₁ ; y₂ ] = 5y₁² + 3y₂²

Chapter 15 Vector Spaces

15.1 Vector Spaces and Subspaces


Vector Spaces: Let V be a set on which two operations, called addition and scalar multiplication, have
been defined. If the following axioms hold for all ⃗u, ⃗v , and w
⃗ in V and for all scalars c and d, then V is called a
vector space and its elements are called vectors
1. ⃗u + ⃗v is in V 2. ⃗u + ⃗v = ⃗v + ⃗u
3. (⃗u + ⃗v ) + w
⃗ = ⃗u + (⃗v + w)
⃗ 4. ⃗u + ⃗0 = ⃗u
5. ⃗u + (−⃗u) = ⃗0 6. c⃗u is in V
7. c(⃗u + ⃗v ) = c⃗u + c⃗v 8. (c + d)⃗u = c⃗u + d⃗u
9. c(d⃗u) = (cd)⃗u 10. 1⃗u = ⃗u

15.2 Linear Independence, Basis, and Dimension


Coordinate Vectors: Let B = {⃗v1 , ⃗v2 , . . . , ⃗vn } be a basis for a vector space V . Let ⃗v be a vector in V ,
and write ⃗v = c1⃗v1 + c2⃗v2 + · · · + cn⃗vn . Then c1 , c2 , . . . , cn are called the coordinates of ⃗v given by
 
c1
 
 c2 
[ ⃗v ]B =   .. 
 (15.1)
.
cn

Ex: Find the coordinate vector [p(x)]B of p(x) = 2 − 3x + 5x2 with respect to the standard basis B =

1, x, x2 of P 2
p(x) = 2 − 3x + 5x2 =⇒ p(x) is a linear combination of 1, x, and x2 .
Let: ⃗v1 = 1, ⃗v2 = x, ⃗v3 = x2
     
2 1 2
     
p(x) = −3 ·  x  =⇒ [p(x)]B = −3
5 x2 5

15.3 Change of Basis


Change of Basis Matrix: Let B = {⃗u1 , . . . , ⃗un } and C = {⃗v1 , . . . , ⃗vn } be bases for a vector space V .
The n × n matrix whose columns are the coordinate vectors [⃗u1 ]C , . . . , [⃗un ]C of the vectors in B with respect
to C is denoted by PC←-B and is called the change-of-basis matrix from B to C.
PC←-B = [ [⃗u1 ]C . . . [⃗un ]C ] (15.2)

Ex: Let C = {î, ĵ} be the basis for cartesian coordinates and P = {êr , êθ } be the basis for polar

coordinates. Obtain the change-of-basis matrix PP←-C .


(
x = r cos θ
Let: ⃗r = xî + y ĵ =⇒ =⇒ Notice: We want d⃗r = dxî + dy ĵ = drêr + dθêθ
y = r sin θ

∂⃗r ∂⃗r 

d⃗r = dx + dy 

∂x
 ∂x    


 (
∂x ∂x ∂y ∂y  ⃗er = cos θî + sin θĵ
= + î + + ĵ
∂r ∂θ ∂r ∂θ =⇒

 ⃗eθ = − sin θî + cos θĵ

= cos θ dr î − r sin θ dθ î + sin θ dr ĵ + r cos θ dθ ĵ 




= (cos θî + sin θĵ) dr + (− sin θî + cos θĵ) r dθ 
" # " # " #" # " #
⃗er cos θî + sin θĵ cos θ sin θ î cos θ sin θ
Therefore: = = =⇒ PP←-C =
⃗eθ − sin θî + cos θĵ − sin θ cos θ ĵ − sin θ cos θ

15.4 Linear Transformations


Linear Transformation: A transformation from a vector space V to a vector space W is a mapping
T : V → W is defined to be a linear transformation for all ⃗u and ⃗v in V and for all scalars c if
T (⃗u + ⃗v ) = T (⃗u) + T (⃗v ) & T (c⃗v ) = cT (⃗v ) (15.3)

Combining both of the properties of such linear transformations yields the following result

T (c1⃗v1 + c2⃗v2 + · · · + cn⃗vn ) = c1 T (⃗v1 ) + c2 T (⃗v2 ) + · · · + cn T (⃗vn ) (15.4)

" # " # " #


1 2 −1
Ex: Let T : R2 → P2 such that T = 2 − 3x + x2 and T = 1 − x2 . Find T .
1 3 2
(" # " #) " # " # " #
1 2 1 2 −1
Since B = , is a basis for R2 =⇒ c1 + c2 = =⇒ c1 = −7 & c2 = 3
1 3 1 3 2
" # " # " #!
−1 1 2
T = T −7 +3
2 1 3
" # " #
1 2
= −7T + 3T
1 3
= −7(2 − 3x + x2 ) + 3(1 − x2 )
= −11 + 21x − 10x2

Composition of Linear Transformations: If T : U → V and S : V → W are linear transformations,


then the composition of S with T is the mapping S ◦ T , defined by
(S ◦ T )(⃗u) = S(T (⃗u)) (15.5)

" #
a
Ex: Let T : R → P1 and S : P1 → P2 be the linear transformations defined by T
2 = a + (a + b)x
b
" #
3
and S(p(x)) = xp(x). Find (S ◦ T )
−2
" # " #
3 3
(S ◦ T ) = S(T ( )) = S(3 + (3 − 2)x) = S(3 + x) = 3x + x2
−2 −2

15.5 The Kernel and Range of a Linear Transformation


Kernel: Let T : V → W be a linear transformation. The kernel of T , denoted ker(T ), is the set of all
vectors in V that are mapped by T to 0 in W . That is,
ker(T ) = {⃗v in V : T (⃗v ) = ⃗0} (15.6)

Range: Let T : V → W be a linear transformation. The range of T , denoted range (T ), is the set of all
vectors in W that are images of vectors in V under T . That is,
range(T ) = {T (⃗v ) : ⃗v in V }
(15.7)
= {w ⃗ = T (⃗v ) for some ⃗v in V }
⃗ in W : w
Let A be an n × n matrix and let T = TA be the corresponding matrix transformation from Rn to Rm defined
by T (⃗v ) = A⃗v . Then the kernel and range are as follows

ker(T ) = null(A)
(15.8)
range(T ) = col(A)

Ex: Let S : P₁ → R be the linear transformation defined by S(p(x)) = ∫₀¹ p(x) dx. Find the kernel and
range of S.

Z1
S(a + bx) = (a + bx) dx
0
 
b 2 1
= ax + x
2
  0
b b
= a+ −0=a+
2 2

ker(S) = {a + bx : S(a + bx) = 0}


 
b
= a + bx : a + = 0
2
 
b
= a + bx : a = −
2
 
b
= − + bx
2


Z1
range(S) is all of R because a dx = [ax]10 =⇒ S(a) = a
0

The Rank Theorem: Let T : V → W be a linear transformation from a finite-dimensional vector space
V into a vector space W . Then,
rank(T ) + nullity(T ) = dim V (15.9)

Ex: Let W be the vector space of all symmetric 2×2 matrices. Define a linear transformation T : W → P₂
by T([ a  b ; b  c ]) = (a − b) + (b − c)x + (c − a)x². Find the rank and nullity of T.
(" # " # )
a b a b
ker(T ) = :T =0
c d c d
(" # )
a b
= : (a − b) + (b − c)x + (c − a)x2 = 0
c d
(" # )
a b
= : (a − b) = (b − c) = (c − a) = 0
c d
(" # )
a b
= :a=b=c
c d
(" #) " #!
a a 1 1
= = span
a a 1 1
Therefore: nullity(T ) = dim(ker(T )) = 1 =⇒
rank(T ) = dim W − nullity(T ) = 3 − 1 = 2

15.6 The Matrix of a Linear Transformation


Transformation Matrix: Let V and W be two finite-dimensional vector spaces with bases B and C,
respectively, where B = {⃗v1 , . . . , ⃗vn }. If T : V → W is a linear transformation, then for every vector v in V
the m × n matrix A defined by
 
A = [ T (⃗v1 ) ]C [ T (⃗v2 ) ]C · · · [T (⃗vn ) ]C = [ T ]C (15.10)

Ex: Let T : P2 → P2 be the linear transformation defined by T (p(x)) = p(2x − 1) with respect to
E = {1, x, x2 }. (1) Obtain the transformation matrix in the given basis, and then (2) show that L : P2 → P2 ,


2 ) with respect to E = {1, x,


a linear transformation defined by L(p(x)) = p( x+1 x2 }, is the inverse of T .
  
T (1) = 1 
 1 −1 1

 
1. T (x) = 2x − 1 =⇒ [ T ]E = 0 2 − 4


 0 0 4
T (x2 ) = (2x − 1)2 = 4x2 − 4x + 1


L(1) = 1 
  


x+1 
 1 21 1
4
 1
2. L(x) = 2 =⇒ [ L ]E = 0 12 2
 2 

x+1 x2 + 2x + 1 

 0 0 1
L(x2 ) = =  4
2 4

3. L−1 L = I =⇒ If: L−1 = T Then: T L = I


    
1 −1 1 1 21 14 1 0 0
    
Check: 0 2 −4 0 12 12  = 0 1 0 =⇒ Therefore: T = L−1
0 0 4 0 0 14 0 0 1

Chapter 16 Distance and Approximation

16.1 Inner Product Spaces


Inner Product: An inner product on a vector space V is an operation that assigns to every pair of vectors
⃗u and ⃗v in V a real number ⟨⃗u, ⃗v⟩ such that the following properties hold for all vectors ⃗u, ⃗v, and w⃗ in V and all
scalars c.

1. ⟨⃗u, ⃗v⟩ = ⟨⃗v, ⃗u⟩    2. ⟨⃗u, ⃗v + w⃗⟩ = ⟨⃗u, ⃗v⟩ + ⟨⃗u, w⃗⟩
                                                                 (16.1)
3. ⟨c⃗u, ⃗v⟩ = c⟨⃗u, ⃗v⟩    4. ⟨⃗u, ⃗u⟩ ≥ 0, and ⟨⃗u, ⃗u⟩ = 0 if and only if ⃗u = ⃗0

If a vector space contains an inner product, then it is called an inner product space.

Length, Distance, and Orthogonality: Let ⃗u and ⃗v be vectors in an inner product space V .
p
1. The length (or the norm) of ⃗v is ∥⃗v ∥ = ⟨⃗v , ⃗v ⟩
2. The distance between ⃗u and ⃗v is d(⃗u, ⃗v ) = ∥⃗u − ⃗v ∥
3. ⃗u and ⃗v are orthogonal is ⟨⃗u, ⃗v ⟩ = 0

R1
Ex: Construct an orthonormal basis W for P2 with respect to the inner product ⟨f, g⟩ = f (x)g(x)dx
−1
by applying the Gram-Schmidt process to the basis {1, x, x2 } (These are known as the Legendre Polynomials).


 ⃗v1 = ⃗x1 = 1



 R1



 (x)(1) dx

 ⟨⃗x2 , ⃗v1 ⟩ −
 
 ⃗v2 = ⃗x2 − ⃗v1 = x − 1
1
 
 (1) = x
 ⃗x = 1
 1 
 ⟨⃗v1 , ⃗v1 ⟩ R
(1)(1) dx
⃗x2 = x =⇒ −

 

1
 
 R1 2 R1 2
⃗x3 = x2 


 (x )(x) dx (x )(1) dx

 ⟨⃗x3 , ⃗v2 ⟩ ⟨⃗x3 , ⃗v1 ⟩ − − 1

 ⃗v3 = ⃗x3 − ⃗v2 − ⃗v1 = x − 1
2 1
(x) − 1
1
(1) = x2 −

 R R

 ⟨⃗v ,
2 2 ⃗
v ⟩ ⟨⃗v ,
1 1⃗
v ⟩ 3

 (x)(x) dx (1)(1) dx
−1 −1

 ⃗v1 ⃗v1 1 1

 ⃗q1 = =p =s =√

 ∥⃗v1 ∥ ⟨⃗v1 , ⃗v1 ⟩ R1 2



 (1)(1) dx

 −1

 √



 ⃗v2 ⃗v2 x 6
⃗q2 = =p =s = x
∥⃗v2 ∥ ⟨⃗v2 , ⃗v2 ⟩ R1 2

 (x)(x) dx

 −

 1

 √

 ⃗v ⃗v x 2 10

 ⃗q3 =
3
=p
3
=s = (3x2 − 1)

 ∥⃗ v ∥ ⟨⃗v3 , ⃗v3 ⟩ R 4

 3 1

 (x2 )(x2 ) dx
−1
( √ √ )
1 6 10
=⇒ W = √ , x, (3x2 − 1)
2 2 4

   
u1 v1
 .  .
Complex Dot Product: If ⃗u =  .  .
 .  and  .  are vectors in C , then the complex dot product of ⃗u and
n

un vn
⃗v is defined by
⃗u · ⃗v = ū1 v1 + ū2 v2 + · · · + ūn vn (16.2)

Conjugate Matrix Transpose: If A is a complex matrix, then the conjugate transpose of A is the matrix
A∗ defined by A∗ = ĀT .

Hermitian: A square complex matrix A is called Hermitian if A∗ = A. That is, if it is equal to its own
conjugate transpose.

Unitary: A square complex matrix U is called unitary if U −1 = U ∗ .

Unitary Diagonalization: A square complex matrix A is called unitarily diagonalizable if there exists a
unitary matrix U and a diagonal matrix D such that U ∗ AU = D if an only if A∗ A = AA∗

16.2 Norms and Distance Functions


Norm: A norm on a vector space V is a mapping that associates with each vector ⃗v a real number ∥⃗v∥,
called the norm of ⃗v, such that the following properties are satisfied for all vectors ⃗u and ⃗v and all scalars c.
1. ∥⃗v∥ ⩾ 0, and ∥⃗v∥ = 0 if and only if ⃗v = ⃗0
2. ∥c⃗v∥ = |c|∥⃗v∥
3. ∥⃗u + ⃗v∥ ⩽ ∥⃗u∥ + ∥⃗v∥
(16.3)

Matrix Norm: A matrix norm on Mnn is a mapping that associates with each n × n matrix A a real number
∥A∥, called the norm of A, such that the following properties are satisfied for all n × n matrices A and B and
all scalars c.
1. ∥A∥ ⩾ 0, and ∥A∥ = 0 if and only if A = O
2. ∥cA∥ = |c|∥A∥
3. ∥A + B∥ ⩽ ∥A∥ + ∥B∥
4. ∥AB∥ ⩽ ∥A∥∥B∥
(16.4)
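As a quick numerical illustration (not part of the text), the Frobenius norm supplied by NumPy satisfies these axioms; the random matrices below are just placeholders.

# Checking the matrix-norm properties numerically for the Frobenius norm (a sketch, assuming NumPy).
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
c = -2.5

fro = np.linalg.norm                              # for 2-D arrays the default is the Frobenius norm
print(np.isclose(fro(c * A), abs(c) * fro(A)))    # property 2
print(fro(A + B) <= fro(A) + fro(B))              # property 3
print(fro(A @ B) <= fro(A) * fro(B))              # property 4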


16.3 Least Squares Approximation


Best Approximation: If W is a subspace of a normed linear space V and if ⃗v is a vector in V, then the best approximation to ⃗v in W is the vector v̄ in W such that
∥⃗v − v̄∥ < ∥⃗v − w⃗∥ (16.5)
for every vector w⃗ in W different from v̄.

The Best Approximation Theorem: If W is a finite-dimensional subspace of an inner product space V
and if ⃗v is a vector in V, then projW(⃗v) is the best approximation to ⃗v in W. To prove this, let w⃗ be a vector in
W different from projW(⃗v). Then projW(⃗v) − w⃗ is also in W, so ⃗v − projW(⃗v) = perpW(⃗v) is orthogonal to
projW(⃗v) − w⃗, and by the Pythagorean theorem

∥⃗v − projW(⃗v)∥² + ∥projW(⃗v) − w⃗∥² = ∥(⃗v − projW(⃗v)) + (projW(⃗v) − w⃗)∥² = ∥⃗v − w⃗∥²

However, ∥projW(⃗v) − w⃗∥² > 0, since w⃗ ≠ projW(⃗v), so

∥⃗v − projW(⃗v)∥² < ∥⃗v − projW(⃗v)∥² + ∥projW(⃗v) − w⃗∥² = ∥⃗v − w⃗∥²
∥⃗v − projW(⃗v)∥ < ∥⃗v − w⃗∥


     
Ex: Given ⃗u1 = (1, 2, −1), ⃗u2 = (5, −2, 1), and ⃗v = (3, 2, 5), find (1) the best approximation to ⃗v in the plane
W = span(⃗u1, ⃗u2), and (2) the Euclidean distance from ⃗v to W.

1. projW(⃗v) = (⟨⃗v, ⃗u1⟩/⟨⃗u1, ⃗u1⟩)⃗u1 + (⟨⃗v, ⃗u2⟩/⟨⃗u2, ⃗u2⟩)⃗u2
   = [(3 + 4 − 5)/(1 + 4 + 1)](1, 2, −1) + [(15 − 4 + 5)/(25 + 4 + 1)](5, −2, 1)
   = (1/3, 2/3, −1/3) + (8/3, −16/15, 8/15)
   = (3, −2/5, 1/5)

2. ∥⃗v − projW(⃗v)∥ = ∥(3, 2, 5) − (3, −2/5, 1/5)∥ = ∥(0, 12/5, 24/5)∥ = √(0² + (12/5)² + (24/5)²) = 12/√5
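A small sketch (not from the text) reproducing this projection with NumPy; it relies on ⃗u1 and ⃗u2 being orthogonal, which holds for the vectors above.

# Best approximation in span(u1, u2) and distance to the plane (a sketch, assuming NumPy).
import numpy as np

u1 = np.array([1.0, 2.0, -1.0])
u2 = np.array([5.0, -2.0, 1.0])
v  = np.array([3.0, 2.0, 5.0])

# u1 . u2 = 0 here, so the projection is the sum of the projections onto each vector
proj = (v @ u1) / (u1 @ u1) * u1 + (v @ u2) / (u2 @ u2) * u2
print(proj)                         # [ 3.  -0.4  0.2]
print(np.linalg.norm(v - proj))     # about 5.3666 = 12/sqrt(5)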

Least Squares Solution: If A is an m × n matrix and ⃗b is in Rᵐ, a least squares solution of A⃗x = ⃗b is a
vector x̄ in Rⁿ such that
∥⃗b − Ax̄∥ ⩽ ∥⃗b − A⃗x∥ (16.6)
for all ⃗x in Rⁿ.


Least Squares Error: If the errors from a set of n points to the line of best fit y = a + bx are
given by E1, . . . , En, then the corresponding error vector and least squares error are given by
⃗e = (E1, . . . , En) ⟹ ∥⃗e∥ = √(E1² + · · · + En²) (16.7)
In matrix form, with x̄ a least squares solution of A⃗x = ⃗b, the same quantity is the least squares error
∥⃗e∥ = ∥⃗b − Ax̄∥ (16.8)

The Least Squares Theorem: Let A be an m × n matrix and let ⃗b be in Rᵐ. Then A⃗x = ⃗b always has at
least one least squares solution x̄.

1. x̄ is a least squares solution of A⃗x = ⃗b if and only if x̄ is a solution of the normal equations AᵀAx̄ = Aᵀ⃗b.
2. A has linearly independent columns if and only if AᵀA is invertible. In this case, the least squares solution
of A⃗x = ⃗b is unique and is given by
x̄ = (AᵀA)⁻¹Aᵀ⃗b (16.9)

Pseudoinverse Matrix: If A is a matrix with linearly independent columns, then the pseudoinverse of A
is the matrix A⁺ defined by
A⁺ = (AᵀA)⁻¹Aᵀ (16.10)

Ex: Use the least squares theorem to find (1) the least squares line and (2) the least squares error for the
points (1, 1), (2, 2), (3, 2), (4, 3).

1. The points give the system

   a + (1)b = 1,  a + (2)b = 2,  a + (3)b = 2,  a + (4)b = 3,

   so A is the 4 × 2 matrix with rows (1, 1), (1, 2), (1, 3), (1, 4), ⃗b = (1, 2, 2, 3), and x̄ = (a, b) = (AᵀA)⁻¹Aᵀ⃗b. Here

   AᵀA = ⎡ 4  10 ⎤ ,   Aᵀ⃗b = (8, 23),   (AᵀA)⁻¹ = (1/20) ⎡ 30  −10 ⎤
         ⎣ 10 30 ⎦                                         ⎣ −10   4 ⎦

   so (a, b) = (1/20)(30·8 − 10·23, −10·8 + 4·23) = (1/20)(10, 12) = (1/2, 3/5) ⟹ y = 1/2 + (3/5)x


     
2. ∥⃗e∥ = ∥⃗b − Ax̄∥ = ∥(1, 2, 2, 3) − (11/10, 17/10, 23/10, 29/10)∥ = ∥(−1/10, 3/10, −3/10, 1/10)∥
      = √((−1/10)² + (3/10)² + (−3/10)² + (1/10)²) = 1/√5
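The same fit can be reproduced numerically; below is a sketch (not from the text) that solves the normal equations with NumPy and compares the result against np.linalg.lstsq.

# Least squares line through (1,1), (2,2), (3,2), (4,3) (a sketch, assuming NumPy).
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([1.0, 2.0, 2.0, 3.0])

x_bar = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations: (A^T A) x = A^T b
print(x_bar)                                # [0.5 0.6]  ->  y = 1/2 + (3/5) x
print(np.linalg.norm(b - A @ x_bar))        # about 0.4472 = 1/sqrt(5)

x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_bar, x_lstsq))          # True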

16.4 The Singular Value Decomposition


Singular Values: If A is an m × n matrix, the singular values of A are the square roots of the eigenvalues of
AᵀA and are denoted by σ1, . . . , σn. It is conventional to arrange the singular values so that σ1 ≥ σ2 ≥ · · · ≥ σn.

The Singular Value Decomposition: Let A be an m × n matrix with singular values σ1 ≥ σ2 ≥ · · · ≥
σr > 0 and σr+1 = σr+2 = · · · = σn = 0. Then there exist an m × m orthogonal matrix U, an n × n
orthogonal matrix V, and an m × n matrix Σ whose leading diagonal entries are σ1, . . . , σr (and whose other entries are zero) such that
A = UΣVᵀ (16.11)

The Outer Product Form of the SVD: Let A be an m × n matrix with singular values σ1 ≥ σ2 ≥ · · · ≥
σr > 0 and σr+1 = σr+2 = · · · = σn = 0. Let ⃗u1, . . . , ⃗ur be left singular vectors and let ⃗v1, . . . , ⃗vr be right
singular vectors of A corresponding to these singular values. Then
A = σ1⃗u1⃗v1ᵀ + · · · + σr⃗ur⃗vrᵀ (16.12)
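A brief numerical sketch (not from the text) of both forms of the decomposition with NumPy, reusing the 4 × 2 matrix from the least squares example above.

# SVD and its outer-product form (a sketch, assuming NumPy).
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])

U, s, Vt = np.linalg.svd(A)                 # s holds sigma_1 >= sigma_2 >= ...
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))       # A = U Sigma V^T

# A = sigma_1 u_1 v_1^T + ... + sigma_r u_r v_r^T
A_outer = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
print(np.allclose(A, A_outer))              # True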

16.5 Applications
Approximation of Non-Polynomial Functions: Given a continuous function f on an interval [a, b] and a
subspace W of C[a, b], the best approximation to f in W with an orthogonal basis V = {⃗v1, . . . , ⃗vn} is given
by
projW(f) = (⟨f, ⃗v1⟩/⟨⃗v1, ⃗v1⟩)⃗v1 + · · · + (⟨f, ⃗vn⟩/⟨⃗vn, ⃗vn⟩)⃗vn (16.13)

The given function f lives in the vector space C[a, b] of continuous functions on the interval [a, b]. This is an
inner product space with the inner product
⟨f, g⟩ = ∫_a^b f(x)g(x) dx

Ex: Find the best linear approximation to f(x) = eˣ on the interval [−1, 1]. (Note that linear approximations lie in P1, with orthogonal basis V = {1, x}.)

g(x) = projW(eˣ) = (⟨eˣ, 1⟩/⟨1, 1⟩)(1) + (⟨eˣ, x⟩/⟨x, x⟩)(x)
     = [∫_{−1}^{1} eˣ dx / ∫_{−1}^{1} (1)(1) dx] + [∫_{−1}^{1} x eˣ dx / ∫_{−1}^{1} (x)(x) dx] x
     = (e − e⁻¹)/2 + [2e⁻¹/(2/3)] x
     = (e − e⁻¹)/2 + 3e⁻¹ x
     ≈ 1.18 + 1.10x
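The two coefficients can also be obtained by numerical integration; the following sketch (not from the text) assumes SciPy's quad routine is available.

# Projection of e^x onto span{1, x} over [-1, 1] by numerical integration (a sketch).
import numpy as np
from scipy.integrate import quad

c0 = quad(np.exp, -1, 1)[0] / quad(lambda t: 1.0, -1, 1)[0]                      # <e^x, 1> / <1, 1>
c1 = quad(lambda t: t * np.exp(t), -1, 1)[0] / quad(lambda t: t * t, -1, 1)[0]   # <e^x, x> / <x, x>
print(c0, c1)   # about 1.1752 and 1.1036, i.e. g(x) ≈ 1.18 + 1.10x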

Fourier Approximation: The best approximation to a function f in C[−π, π] by a trigonometric polynomial
of order n is projW(f), where W = span(B) and B = {1, cos x, . . . , cos nx, sin x, . . . , sin nx} (this
works because B is an orthogonal set and hence a basis for W). We define this approximation by
projW(f) = a0 + a1 cos x + · · · + an cos nx + b1 sin x + · · · + bn sin nx (16.14)


a0 = ⟨f, 1⟩/⟨1, 1⟩ = (1/2π) ∫_{−π}^{π} f(x) dx

ak = ⟨f, cos kx⟩/⟨cos kx, cos kx⟩ = (1/π) ∫_{−π}^{π} f(x) cos kx dx (16.15)

bk = ⟨f, sin kx⟩/⟨sin kx, sin kx⟩ = (1/π) ∫_{−π}^{π} f(x) sin kx dx

Ex: Obtain the 2nd order Fourier approximation to f(x) = x² on the interval [−π, π].

a0 = ⟨f, 1⟩/⟨1, 1⟩ = (1/2π) ∫_{−π}^{π} x² dx = (1/2π)[x³/3]_{−π}^{π} = π²/3

ak = ⟨f, cos kx⟩/⟨cos kx, cos kx⟩ = (1/π) ∫_{−π}^{π} x² cos kx dx = (2/π) ∫_0^{π} x² cos kx dx
   = (2/π)[x² sin kx/k + 2x cos kx/k² − 2 sin kx/k³]_0^{π}
   = (2/π)[π² sin kπ/k + 2π cos kπ/k² − 2 sin kπ/k³]
   = 4 cos kπ/k² ⟹ a1 = −4, a2 = 1

bk = ⟨f, sin kx⟩/⟨sin kx, sin kx⟩ = (1/π) ∫_{−π}^{π} x² sin kx dx
   = (1/π)[−x² cos kx/k + 2x sin kx/k² + 2 cos kx/k³]_{−π}^{π}
   = 0, since the bracketed antiderivative is even in x and its values at ±π cancel

⟹ f2(x) = π²/3 − 4 cos x + cos 2x
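The coefficients can be confirmed numerically; this sketch (not from the text) assumes SciPy's quad routine is available.

# Fourier coefficients of f(x) = x^2 on [-pi, pi], computed numerically (a sketch).
import numpy as np
from scipy.integrate import quad

f = lambda t: t**2
a0 = quad(f, -np.pi, np.pi)[0] / (2 * np.pi)
a = [quad(lambda t: f(t) * np.cos(k * t), -np.pi, np.pi)[0] / np.pi for k in (1, 2)]
b = [quad(lambda t: f(t) * np.sin(k * t), -np.pi, np.pi)[0] / np.pi for k in (1, 2)]
print(a0, a, b)   # about 3.2899 (= pi^2/3), [-4.0, 1.0], [0.0, 0.0]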

Chapter 17 LA Appendix

17.1 Mathematical Notation and Methods of Proof


Set: A set is a collection of objects, called the elements of the set. The empty set is the set with no elements; it is denoted by ∅.

Intersection: The intersection of sets A and B is denoted by A ∩ B and consists of the elements that A
and B have in common. That is,
A ∩ B = {x : x ∈ A and x ∈ B}

Union: The union of A and B is denoted by A ∪ B and consists of the elements that are in either A or B
(or both). That is,
A ∪ B = {x : x ∈ A or x ∈ B}

Summation Notation: We can abbreviate a sum of the form a1 + a2 + · · · + an as

a1 + a2 + · · · + an = Σ_{k=1}^{n} a_k

Ex: Write the sum 1 + 3 + 5 + · · · + 99 using summation notation.

1 + 3 + 5 + · · · + 99 = (2 · 0 + 1) + (2 · 1 + 1) + (2 · 2 + 1) + · · · + (2 · 49 + 1) = Σ_{k=0}^{49} (2k + 1)

17.2 Mathematical Induction


First Principle of Mathematical Induction: Let S(n) be a statement about the positive integer n. If
1. S(1) is true, and
2. for all k ≥ 1, the truth of S(k) implies the truth of S(k + 1),
then S(n) is true for all n ≥ 1.

Second Principle of Mathematical Induction: Let S(n) be a statement about the positive integer n. If
1. S(1) is true, and
2. the truth of S(1), S(2), . . . , S(k) implies the truth of S(k + 1),
then S(n) is true for all n ≥ 1.

17.3 Complex Numbers


Complex Conjugate: The conjugate of z = a + bi is the complex number
z̄ = a − bi

Absolute Value of Complex Numbers: The absolute value |z| of a complex number z = a + bi is its
distance from the origin
|z| = |a + bi| = √(a² + b²)

Polar Form: Using polar coordinates, the point (a, b) becomes (r, θ), where r = |z| and θ is the angle the segment from the origin to (a, b) makes with the positive real axis. Thus z = a + bi can be rewritten in polar form as
z = r(cos θ + i sin θ)

De Moivre’s Theorem: If z = r(cos θ + i sin θ) and n is a positive integer, then


z n = rn (cos nθ + i sin nθ)

Ex: Find (1 + i)⁶.

1 + i = √2 (cos π/4 + i sin π/4)
(1 + i)⁶ = (√2)⁶ (cos 6π/4 + i sin 6π/4)
         = 8 (cos 3π/2 + i sin 3π/2)
         = 8(0 + i(−1)) = −8i
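A one-off check (not from the text) using Python's built-in complex numbers and the standard cmath module confirms the value.

# De Moivre's theorem applied to (1 + i)^6 (a sketch using the standard library).
import cmath, math

z = 1 + 1j
r, theta = abs(z), cmath.phase(z)                 # r = sqrt(2), theta = pi/4
via_theorem = r**6 * (math.cos(6 * theta) + 1j * math.sin(6 * theta))
print(z**6, via_theorem)                          # both equal -8i (up to rounding)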

Euler’s Formula: For any real number x,

e^{ix} = cos x + i sin x

Ex: Write e^{2+iπ/4} in the form a + bi.

e^{2+iπ/4} = e² e^{iπ/4} = e² (cos π/4 + i sin π/4) = e² (√2/2 + i √2/2)
           = (e²√2)/2 + (e²√2)/2 i

17.4 Polynomials
Polynomial: A function p of a single variable x that can be written in the form
p(x) = a0 + a1x + a2x² + · · · + anxⁿ = Σ_{k=0}^{n} a_k x^k
The integer n is called the degree of p, which is denoted by writing deg p = n. A polynomial of degree zero is
called a constant polynomial.

The Rational Roots Theorem: Let f(x) = a0 + a1x + · · · + anxⁿ be a polynomial with integer coefficients
and let a/b be a rational number written in lowest terms. If a/b is a zero of f, then a0 is a multiple of a and an
is a multiple of b.

Ex: Find all the rational roots of the equation 6x³ + 13x² − 4 = 0.

a ∈ {±1, ±2, ±4} and b ∈ {±1, ±2, ±3, ±6}, so the candidates are
±1, ±2, ±4, ±1/2, ±1/3, ±2/3, ±4/3, ±1/6
Testing these candidates shows that the rational roots are x = 1/2, −2/3, and −2.
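A brute-force sketch (not from the text) that enumerates the candidates a/b from the theorem and tests them exactly with Python's Fraction type.

# Rational-root search for 6x^3 + 13x^2 - 4 (a sketch using the standard library).
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

a0, an = -4, 6                      # constant and leading coefficients
candidates = {s * Fraction(p, q) for p in divisors(a0) for q in divisors(an) for s in (1, -1)}
roots = sorted(r for r in candidates if 6*r**3 + 13*r**2 - 4 == 0)
print(roots)                        # [Fraction(-2, 1), Fraction(-2, 3), Fraction(1, 2)]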
Quadratic Formula: Suppose we have a second degree polynomial p2(x) = ax² + bx + c. Its roots are found by completing the square:

ax² + bx + c = 0
a(x² + (b/a)x + b²/(4a²)) = b²/(4a) − c
(x + b/(2a))² = (b² − 4ac)/(4a²)
x + b/(2a) = ±√(b² − 4ac)/(2a)
x = (−b ± √(b² − 4ac))/(2a)

Ex: Find the roots of 6x² + x − 2.

x = (−1 ± √(1² − 4(6)(−2)))/(2 · 6) = (−1 ± √49)/12 = (−1 ± 7)/12 = 6/12, −8/12
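The same roots fall out of a direct implementation of the formula; a small sketch (not from the text), assuming a nonzero leading coefficient and a nonnegative discriminant.

# Quadratic formula as a function, applied to 6x^2 + x - 2 (a sketch).
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 (assumes a != 0 and b^2 - 4ac >= 0)."""
    disc = math.sqrt(b*b - 4*a*c)
    return (-b + disc) / (2*a), (-b - disc) / (2*a)

print(quadratic_roots(6, 1, -2))   # (0.5, -0.666...), i.e. 1/2 and -2/3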

Fundamental Theorem of Algebra: Every polynomial of degree n with real or complex coefficients has
exactly n zeros (counting multiplicities) in C.

Descartes’ Rule of Signs: Let p be a polynomial with real coefficients that has k sign changes. Then the
number of positive zeros of p (counting multiplicities) is at most k.
