Numerical Analysis Solution
$$A = LU = \begin{pmatrix} 1 & & & \\ l_{21} & 1 & & \\ \vdots & \vdots & \ddots & \\ l_{n1} & l_{n2} & \cdots & 1 \end{pmatrix} \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ & u_{22} & \cdots & u_{2n} \\ & & \ddots & \vdots \\ & & & u_{nn} \end{pmatrix}$$
Doolittle algorithm: the entries are computed in the order
$$(u_{11}, u_{12}, \dots, u_{1n}),\ (l_{21}, l_{31}, \dots, l_{n1}),\ (u_{22}, u_{23}, \dots, u_{2n}),\ (l_{32}, l_{42}, \dots, l_{n2}),\ \dots$$
that is, a row of U followed by the corresponding column of L:
1. For $j = 1, \dots, n$ Do:
2.   For $i = j, \dots, n$ Do:
3.     $u_{ji} = a_{ji} - \sum_{k=1}^{j-1} l_{jk} u_{ki}$
4.   EndDo
5.   For $i = j+1, \dots, n$ Do:
6.     $l_{ij} = \big(a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj}\big)/u_{jj}$
7.   EndDo
8. EndDo
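A direct transcription of steps 1-8 in Python (a sketch using numpy; the function name `doolittle` is illustrative, and the test matrix is the one from Exercise 29 below):

```python
import numpy as np

def doolittle(A):
    """Doolittle factorization A = LU with L unit lower triangular.

    A sketch of the algorithm above; it assumes every pivot u_jj is
    nonzero, since no pivoting is performed.
    """
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for j in range(n):
        # Row j of U: u_ji = a_ji - sum_{k<j} l_jk u_ki   (steps 2-4)
        for i in range(j, n):
            U[j, i] = A[j, i] - L[j, :j] @ U[:j, i]
        # Column j of L: l_ij = (a_ij - sum_{k<j} l_ik u_kj) / u_jj   (steps 5-7)
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ U[:j, j]) / U[j, j]
    return L, U

A = np.array([[2., 6., 4.], [6., 17., 17.], [4., 17., 20.]])
L, U = doolittle(A)
assert np.allclose(L @ U, A)
```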
P. 134. 8. By use of the equation $UU^{-1} = I$, obtain an algorithm for finding the inverse of an upper triangular matrix.
Solution. Let X be the inverse of the upper triangular matrix U; then X must be upper triangular and $UX = I$, which has the component form
$$\sum_{k=1}^{n} u_{ik} x_{kj} = \delta_{ij}$$
Since U and X are upper triangular, this reduces to
$$\sum_{k=i}^{j} u_{ik} x_{kj} = \delta_{ij}$$
So
$$x_{ij} = \Big(\delta_{ij} - \sum_{k=i+1}^{j} u_{ik} x_{kj}\Big)\Big/u_{ii}$$
1. For $j = 1, \dots, n$ Do:
2.   For $i = j, j-1, \dots, 1$ Do:
3.     $x_{ij} = \big(\delta_{ij} - \sum_{k=i+1}^{j} u_{ik} x_{kj}\big)/u_{ii}$
4.   EndDo
5. EndDo
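A transcription of this column-by-column back substitution (a sketch; the helper name `upper_inverse` and the random test matrix are illustrative):

```python
import numpy as np

def upper_inverse(U):
    """Invert an upper triangular matrix column by column (steps 1-5 above).

    A sketch; it assumes all diagonal entries u_ii are nonzero.
    """
    n = U.shape[0]
    X = np.zeros((n, n))
    for j in range(n):
        # March upward: x_ij = (delta_ij - sum_{k=i+1..j} u_ik x_kj) / u_ii
        for i in range(j, -1, -1):
            delta = 1.0 if i == j else 0.0
            X[i, j] = (delta - U[i, i+1:j+1] @ X[i+1:j+1, j]) / U[i, i]
    return X

U = np.triu(np.random.rand(4, 4) + np.eye(4))  # well-conditioned test matrix
assert np.allclose(upper_inverse(U) @ U, np.eye(4))
```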
Writing
$$\begin{pmatrix} 0 & a \\ 0 & b \end{pmatrix} = \begin{pmatrix} l_{11} & 0 \\ l_{21} & l_{22} \end{pmatrix}\begin{pmatrix} u_{11} & u_{12} \\ 0 & u_{22} \end{pmatrix}$$
gives the component equations $l_{11}u_{11} = 0$, $l_{11}u_{12} = a$, $l_{21}u_{11} = 0$, $l_{21}u_{12} + l_{22}u_{22} = b$. Taking $l_{11} = 1$, $u_{11} = 0$, $u_{12} = a$, $l_{21} = 0$, $l_{22} = 1$, $u_{22} = b$ shows that such a singular matrix still admits an LU factorization. A concrete instance:
$$\begin{pmatrix} 1 & 5 \\ 3 & 15 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}\begin{pmatrix} 1 & 5 \\ 0 & 0 \end{pmatrix}$$
P. 134. 16. If A is invertible and has an LU decomposition, then all principal minors of A are nonsingular.
Solution. If A is invertible and has an LU decomposition, then L and U are also invertible. From $A = LU$ we find that $A_k = L_kU_k$, where $A_k$ (respectively $L_k$, $U_k$) is the leading principal submatrix of A (respectively L, U) formed from the first k rows and first k columns. Since the triangular matrices L and U are nonsingular, their diagonal entries are nonzero, so $L_k$ and $U_k$ are also nonsingular; thus $A_k = L_kU_k$ is nonsingular.
P. 134. 19. Prove or disprove: If A has an LU-factorization in which L is unit lower triangular, then it has an LU-factorization in which U is unit upper triangular.
Solution. Suppose $A = LU$ with L unit lower triangular, and suppose A is nonsingular, so that U is nonsingular and its diagonal entries are nonzero. Define $D = \mathrm{diag}(u_{11}, \dots, u_{nn})$; then $A = (LD)(D^{-1}U)$, where $LD$ is lower triangular and $D^{-1}U$ is unit upper triangular.
P. 134. 22. Use the Cholesky Theorem to prove that these two properties of a symmetric matrix A are equivalent: (a) A is positive definite; (b) there exist linearly independent vectors $x^{(1)}, x^{(2)}, \dots, x^{(n)}$ in $\mathbb{R}^n$ such that $A_{ij} = (x^{(i)})^Tx^{(j)}$.
Solution. By the Cholesky Theorem, a symmetric positive definite matrix A has the Cholesky decomposition $A = XX^T$, where X is lower triangular with positive diagonal entries. Let $x^{(i)}$ be the ith row of X; then $A = XX^T$ is equivalent to $A_{ij} = (x^{(i)})^Tx^{(j)}$ for $1 \le i, j \le n$, and the rows of X form a linearly independent set since X is nonsingular. Conversely, if $A_{ij} = (x^{(i)})^Tx^{(j)}$ with the $x^{(i)}$ linearly independent, then $A = XX^T$ where the nonsingular matrix X has the $x^{(i)}$ as its rows, so $v^TAv = \|X^Tv\|_2^2 > 0$ for every $v \ne 0$.
P. 134. 24. Prove that if all the leading principal minors of A are nonsingular, then A has a factorization LDU in which L is unit lower triangular, U is unit upper triangular, and D is diagonal.
Solution. By the LU factorization theorem, $A = LU_1$, where L is unit lower triangular and $U_1$ is upper triangular with nonzero diagonal entries. Write $U_1 = DU$, where the diagonal matrix D is the diagonal part of $U_1$, so that $U = D^{-1}U_1$ is unit upper triangular. This gives the desired decomposition $A = LDU$.
P. 134. (Continuation) If A is a symmetric matrix whose leading principal minors are nonsingular, then A has a factorization $LDL^T$ in which L is unit lower triangular and D is diagonal.
Solution. By the previous exercise, A has a factorization $A = LDU$ in which L is unit lower triangular, U is unit upper triangular, and D is diagonal. Since $A = A^T$, we have $L(DU) = U^T(DL^T)$, where L and $U^T$ are unit lower triangular and $DU$ and $DL^T$ are upper triangular. By the uniqueness of the LU-decomposition, $L = U^T$. Thus $A = LDL^T$.
P. 134. 29. Consider $A = \begin{pmatrix} 2 & 6 & 4 \\ 6 & 17 & 17 \\ 4 & 17 & 20 \end{pmatrix}$. Determine directly the factorization $A = LDL^T$ where D is diagonal and L is unit lower triangular; that is, do not use Gaussian elimination.
Solution. We use the Doolittle (compact) scheme, storing the multipliers in place:
$$A^{(1)} = \begin{pmatrix} 2 & 6 & 4 \\ 6 & 17 & 17 \\ 4 & 17 & 20 \end{pmatrix} \to \begin{pmatrix} 2 & 6 & 4 \\ 3 & -1 & 5 \\ 2 & 5 & 12 \end{pmatrix} \to \begin{pmatrix} 2 & 6 & 4 \\ 3 & -1 & 5 \\ 2 & -5 & 37 \end{pmatrix}$$
Thus
$$\begin{pmatrix} 2 & 6 & 4 \\ 6 & 17 & 17 \\ 4 & 17 & 20 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 2 & -5 & 1 \end{pmatrix}\begin{pmatrix} 2 & 6 & 4 \\ 0 & -1 & 5 \\ 0 & 0 & 37 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 2 & -5 & 1 \end{pmatrix}\begin{pmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 37 \end{pmatrix}\begin{pmatrix} 1 & 3 & 2 \\ 0 & 1 & -5 \\ 0 & 0 & 1 \end{pmatrix}$$
P. 134. 31. Find the LU-factorization of the matrix $A = \begin{pmatrix} 3 & 0 & 1 \\ 0 & 1 & 3 \\ -1 & 3 & 0 \end{pmatrix}$.
Solution. Compact elimination, storing the multipliers in place:
$$\begin{pmatrix} 3 & 0 & 1 \\ 0 & 1 & 3 \\ -1 & 3 & 0 \end{pmatrix} \to \begin{pmatrix} 3 & 0 & 1 \\ 0 & 1 & 3 \\ -1/3 & 3 & 1/3 \end{pmatrix} \to \begin{pmatrix} 3 & 0 & 1 \\ 0 & 1 & 3 \\ -1/3 & 3 & -26/3 \end{pmatrix}$$
Hence
$$\begin{pmatrix} 3 & 0 & 1 \\ 0 & 1 & 3 \\ -1 & 3 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1/3 & 3 & 1 \end{pmatrix}\begin{pmatrix} 3 & 0 & 1 \\ 0 & 1 & 3 \\ 0 & 0 & -26/3 \end{pmatrix}$$
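A quick numerical check of this factorization, with the signs as reconstructed above:

```python
import numpy as np

A = np.array([[3., 0., 1.], [0., 1., 3.], [-1., 3., 0.]])
L = np.array([[1., 0., 0.], [0., 1., 0.], [-1/3, 3., 1.]])
U = np.array([[3., 0., 1.], [0., 1., 3.], [0., 0., -26/3]])
assert np.allclose(L @ U, A)   # the factorization A = LU holds
```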
P. 155. 1. Solve the following linear systems twice. First, use Gaussian elimination and give the factorization
A = LU . Second, use Gaussian elimination with scaled row pivoting and determine the factorization of the form
P A = LU . (c)
Solution. The system is
$$\begin{pmatrix} 1 & 1 & 0 & 3 \\ 1 & 0 & 3 & 1 \\ 0 & 1 & 1 & 1 \\ 3 & 0 & 1 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 4 \\ 0 \\ 3 \\ 1 \end{pmatrix}$$
First, Gaussian elimination without pivoting, storing the multipliers in place:
$$\begin{pmatrix} 1 & 1 & 0 & 3 \\ 1 & 0 & 3 & 1 \\ 0 & 1 & 1 & 1 \\ 3 & 0 & 1 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0 & 3 \\ 1 & -1 & 3 & -2 \\ 0 & 1 & 1 & 1 \\ 3 & -3 & 1 & -7 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0 & 3 \\ 1 & -1 & 3 & -2 \\ 0 & -1 & 4 & -1 \\ 3 & 3 & -8 & -1 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0 & 3 \\ 1 & -1 & 3 & -2 \\ 0 & -1 & 4 & -1 \\ 3 & 3 & -2 & -3 \end{pmatrix}$$
which gives the factorization
$$\begin{pmatrix} 1 & 1 & 0 & 3 \\ 1 & 0 & 3 & 1 \\ 0 & 1 & 1 & 1 \\ 3 & 0 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 3 & 3 & -2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 & 0 & 3 \\ 0 & -1 & 3 & -2 \\ 0 & 0 & 4 & -1 \\ 0 & 0 & 0 & -3 \end{pmatrix}$$
Forward and back substitution then give $x = (1/6,\ 17/6,\ -1/6,\ 1/3)^T$.

Second, Gaussian elimination with scaled row pivoting. The scale vector is $s = (3, 3, 1, 3)$. The ratios $|a_{i1}|/s_i = (1/3, 1/3, 0, 1)$ select row 4 as the first pivot row; the column-2 ratios then select row 3, and the column-3 ratios select row 2, so $p = (4, 3, 2, 1)$. Storing the multipliers in place:
$$\begin{pmatrix} 1 & 1 & 0 & 3 \\ 1 & 0 & 3 & 1 \\ 0 & 1 & 1 & 1 \\ 3 & 0 & 1 & 2 \end{pmatrix} \to \begin{pmatrix} 1/3 & 1 & -1/3 & 7/3 \\ 1/3 & 0 & 8/3 & 1/3 \\ 0 & 1 & 1 & 1 \\ 3 & 0 & 1 & 2 \end{pmatrix} \to \begin{pmatrix} 1/3 & 1 & -4/3 & 4/3 \\ 1/3 & 0 & 8/3 & 1/3 \\ 0 & 1 & 1 & 1 \\ 3 & 0 & 1 & 2 \end{pmatrix} \to \begin{pmatrix} 1/3 & 1 & -1/2 & 3/2 \\ 1/3 & 0 & 8/3 & 1/3 \\ 0 & 1 & 1 & 1 \\ 3 & 0 & 1 & 2 \end{pmatrix} = B$$
where $p_1 = 4$, $p_2 = 3$, $p_3 = 2$, $p_4 = 1$. With $(P)_{ij} = \delta_{p_i j}$,
$$P = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}, \qquad u_{ij} = B_{p_i j}\ (i \le j):\quad U = \begin{pmatrix} 3 & 0 & 1 & 2 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 8/3 & 1/3 \\ 0 & 0 & 0 & 3/2 \end{pmatrix},$$
$$l_{ij} = B_{p_i j}\ (i > j),\ l_{ii} = 1:\quad L = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1/3 & 0 & 1 & 0 \\ 1/3 & 1 & -1/2 & 1 \end{pmatrix}$$
We have $PA = LU$.
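Both factorizations and the solution can be verified numerically (a sketch; numpy only, with the values reconstructed above):

```python
import numpy as np

A = np.array([[1., 1., 0., 3.],
              [1., 0., 3., 1.],
              [0., 1., 1., 1.],
              [3., 0., 1., 2.]])
b = np.array([4., 0., 3., 1.])

# Factorization without pivoting, as computed above.
L = np.array([[1., 0., 0., 0.],
              [1., 1., 0., 0.],
              [0., -1., 1., 0.],
              [3., 3., -2., 1.]])
U = np.array([[1., 1., 0., 3.],
              [0., -1., 3., -2.],
              [0., 0., 4., -1.],
              [0., 0., 0., -3.]])
assert np.allclose(L @ U, A)

# Factorization with scaled row pivoting, p = (4, 3, 2, 1).
P = np.eye(4)[[3, 2, 1, 0]]            # row i of P is e_{p_i}
Lp = np.array([[1., 0., 0., 0.],
               [0., 1., 0., 0.],
               [1/3, 0., 1., 0.],
               [1/3, 1., -1/2, 1.]])
Up = np.array([[3., 0., 1., 2.],
               [0., 1., 1., 1.],
               [0., 0., 8/3, 1/3],
               [0., 0., 0., 3/2]])
assert np.allclose(P @ A, Lp @ Up)

print(np.linalg.solve(A, b))           # x = (1/6, 17/6, -1/6, 1/3)
```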
P. 155. 3. Let $(p_1, p_2, \dots, p_n)$ be a permutation of $(1, 2, \dots, n)$ and define the matrix P by $P_{ij} = \delta_{p_i,j}$. Let A be an arbitrary $n \times n$ matrix. Describe $PA$, $AP$, $P^{-1}$, and $PAP^{-1}$.
Solution. The ith row of PA is the $p_i$th row of A:
$$(PA)_{ij} = \sum_{k=1}^{n} P_{ik}A_{kj} = \sum_{k=1}^{n} \delta_{p_i,k}A_{kj} = A_{p_i,j}$$
The $p_k$th column of AP is the kth column of A:
$$(AP)_{ij} = \sum_{k=1}^{n} A_{ik}P_{kj} = \sum_{k=1}^{n} A_{ik}\delta_{p_k,j} = A_{ik}, \qquad \text{where } p_k = j$$
Moreover $P^{-1} = P^T$, since
$$\delta_{ij} = (PP^{-1})_{ij} = \sum_{k=1}^{n} \delta_{p_i,k}(P^{-1})_{kj} = (P^{-1})_{p_i,j}$$
forces $(P^{-1})_{kj} = \delta_{p_j,k} = (P^T)_{kj}$. Finally,
$$(PAP^{-1})_{ij} = \sum_{k=1}^{n} (PA)_{ik}(P^{-1})_{kj} = \sum_{k=1}^{n} A_{p_i,k}\,\delta_{p_j,k} = A_{p_i,p_j}$$
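These four identities are easy to check numerically (a sketch, using the 0-based convention that row i of P is $e_{p_i}$; the permutation and test matrix are arbitrary):

```python
import numpy as np

p = np.array([2, 0, 3, 1])            # a permutation of (0, 1, 2, 3), 0-based
P = np.eye(4)[p]                      # P_ij = delta_{p_i, j}
A = np.arange(16.).reshape(4, 4)

assert np.allclose(P @ A, A[p, :])                  # row i of PA is row p_i of A
assert np.allclose((A @ P)[:, p], A)                # column p_k of AP is column k of A
assert np.allclose(P @ P.T, np.eye(4))              # P^{-1} = P^T
assert np.allclose(P @ A @ P.T, A[np.ix_(p, p)])    # (PAP^{-1})_{ij} = A_{p_i, p_j}
```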
P. 155. 4. Gaussian elimination with full pivoting treats both rows and columns in an order different from the
natural order. Thus, in the first step, the pivot element aij is chosen, so that |aij | is the largest in the entire matrix.
This determines that row i will be the pivot row and column j will be the pivot column. Zeros are created in column
j by subtracting multiples of row i from the other rows.
Solution. Let $p_1, p_2, \dots, p_n$ be the indices of the rows in the order in which they become pivot rows, and let $q_1, q_2, \dots, q_n$ be the indices of the columns in the order in which they become pivot columns. The ith pivot element is located at $(p_i, q_i)$ $(1 \le i \le n)$. Let $A^{(1)} = A$, and define $A^{(2)}, \dots, A^{(n)}$ recursively by the formula
$$a^{(k+1)}_{p_i,q_j} = \begin{cases} a^{(k)}_{p_i,q_j} & \text{if } i \le k \text{ or } i > k > j \\[2pt] a^{(k)}_{p_i,q_k}\big/a^{(k)}_{p_k,q_k} & \text{if } i > k \text{ and } j = k \\[2pt] a^{(k)}_{p_i,q_j} - \big(a^{(k)}_{p_i,q_k}\big/a^{(k)}_{p_k,q_k}\big)\,a^{(k)}_{p_k,q_j} & \text{if } i > k \text{ and } j > k \end{cases}$$
Define a permutation matrix P whose elements are $P_{ij} = \delta_{p_i,j}$ and define $Q_{ij} = \delta_{i,q_j}$. Define an upper triangular matrix U whose elements are $u_{ij} = a^{(n)}_{p_i,q_j}$ if $j \ge i$, and a unit lower triangular matrix L whose elements are $l_{ij} = a^{(n)}_{p_i,q_j}$ if $j < i$. Then $PAQ = LU$.
Proof. From the recursive formula,
$$u_{kj} = a^{(n)}_{p_k,q_j} = a^{(k)}_{p_k,q_j}, \qquad k \le j,$$
because the $p_k$th row does not change during the elimination steps from $A^{(k)}$ to $A^{(n)}$. Likewise,
$$l_{ik} = a^{(n)}_{p_i,q_k} = a^{(k+1)}_{p_i,q_k} = a^{(k)}_{p_i,q_k}\big/a^{(k)}_{p_k,q_k}, \qquad k < i,$$
because the $q_k$th column does not change during the elimination steps from $A^{(k+1)}$ to $A^{(n)}$. For $i \le j$, the sum telescopes:
$$(LU)_{ij} = \sum_{k=1}^{i} l_{ik}u_{kj} = \sum_{k=1}^{i-1} \big(a^{(k)}_{p_i,q_k}\big/a^{(k)}_{p_k,q_k}\big)\,a^{(k)}_{p_k,q_j} + a^{(i)}_{p_i,q_j} = \sum_{k=1}^{i-1} \big(a^{(k)}_{p_i,q_j} - a^{(k+1)}_{p_i,q_j}\big) + a^{(i)}_{p_i,q_j} = a^{(1)}_{p_i,q_j} = A_{p_i,q_j} = (PAQ)_{ij}$$
and similarly for $i > j$.
$$A = \begin{pmatrix} 2 & -2 & 4 \\ 1 & 1 & 1 \\ 3 & 7 & -5 \end{pmatrix}$$
Solution. The scale vector is $s = (4, 1, 7)$. The ratios $|a_{i1}|/s_i = (1/2, 1, 3/7)$ select row 2 as the first pivot row; the column-2 ratios $(4/4, 4/7)$ then select row 1, so $p = (2, 1, 3)$. Storing the multipliers in place:
$$\begin{pmatrix} 2 & -2 & 4 \\ 1 & 1 & 1 \\ 3 & 7 & -5 \end{pmatrix} \to \begin{pmatrix} 2 & -4 & 2 \\ 1 & 1 & 1 \\ 3 & 4 & -8 \end{pmatrix} \to \begin{pmatrix} 2 & -4 & 2 \\ 1 & 1 & 1 \\ 3 & -1 & -6 \end{pmatrix} = B$$
Hence $p_1 = 2$, $p_2 = 1$, $p_3 = 3$, and
$$(P)_{ij} = \delta_{p_i j}:\quad P = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad u_{ij} = B_{p_i j}\ (i \le j):\quad U = \begin{pmatrix} 1 & 1 & 1 \\ 0 & -4 & 2 \\ 0 & 0 & -6 \end{pmatrix}, \qquad L = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & -1 & 1 \end{pmatrix}$$
We have $PA = LU$.
P. 155. 12. Assume that A is tridiagonal. Define $c_0 = 0$ and $a_n = 0$. Show that if A is columnwise diagonally dominant,
$$|d_i| > |a_i| + |c_{i-1}| \qquad (1 \le i \le n),$$
then the algorithm for tridiagonal systems will, in theory, be successful, since no zero pivot entries will be encountered.
Solution. Here
$$A = \begin{pmatrix} d_1 & c_1 & & & \\ a_1 & d_2 & c_2 & & \\ & \ddots & \ddots & \ddots & \\ & & a_{n-2} & d_{n-1} & c_{n-1} \\ & & & a_{n-1} & d_n \end{pmatrix}$$
The algorithm replaces $d_1$ by $\hat d_1 = d_1$ and, for $i = 2, \dots, n$, replaces $d_i$ by $\hat d_i = d_i - (a_{i-1}/\hat d_{i-1})c_{i-1}$. We show by induction that $|\hat d_i| > |a_i|$. For $i = 1$, $|\hat d_1| = |d_1| > |a_1| + |c_0| = |a_1|$. If $|\hat d_{i-1}| > |a_{i-1}|$, then $|a_{i-1}|/|\hat d_{i-1}| < 1$ and
$$|\hat d_i| \ge |d_i| - \frac{|a_{i-1}|}{|\hat d_{i-1}|}\,|c_{i-1}| > |d_i| - |c_{i-1}| > |a_i|$$
Since $|d_i| > |a_i| + |c_{i-1}| \ge |c_{i-1}|$, this also gives $|\hat d_i| > 0$, so no zero pivot is encountered.
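For reference, a sketch of the tridiagonal algorithm in question (the Thomas algorithm); the function name and the columnwise dominant test matrix are illustrative:

```python
import numpy as np

def thomas(a, d, c, b):
    """Solve a tridiagonal system: a = subdiagonal, d = diagonal,
    c = superdiagonal, b = right-hand side.

    A sketch of the standard algorithm; by the argument above, columnwise
    diagonal dominance guarantees the updated pivots d[i] never vanish.
    """
    n = len(d)
    d, b = d.astype(float).copy(), b.astype(float).copy()
    for i in range(1, n):                 # forward elimination
        m = a[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    x = np.zeros(n)                       # back substitution
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

# Example: tridiag(-1, 3, -1) satisfies |d_i| = 3 > |a_i| + |c_{i-1}| = 2.
n = 6
a = -np.ones(n - 1); c = -np.ones(n - 1); d = 3 * np.ones(n); b = np.ones(n)
A = np.diag(d) + np.diag(a, -1) + np.diag(c, 1)
assert np.allclose(A @ thomas(a, d, c, b), b)
```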
P. 178. Prove that if $I - A$ is invertible, then $\|(I - A)^{-1}\| \ge \dfrac{1}{1 + \|A\|}$.
Solution.
$$\|(I - A)^{-1}\|(1 + \|A\|) \ge \|(I - A)^{-1}\|\,\|I - A\| \ge \|(I - A)^{-1}(I - A)\| = 1$$
P. 178. 4. Prove that if A is invertible and $\|A - B\| < \|A^{-1}\|^{-1}$, then
$$\|A^{-1} - B^{-1}\| \le \frac{\|A^{-1}\|\,\|I - A^{-1}B\|}{1 - \|I - A^{-1}B\|}$$
Solution. Let $C = I - A^{-1}B$; then $\|C\| \le \|A^{-1}\|\,\|A - B\| < 1$. Since $B = A(I - C)$, we have $B^{-1} = (I - C)^{-1}A^{-1} = \big(\sum_{k=0}^{\infty} C^k\big)A^{-1}$, and
$$\Big\|\sum_{k=0}^{\infty} C^k - I\Big\| = \Big\|\sum_{k=1}^{\infty} C^k\Big\| \le \|C\|\,\Big\|\sum_{k=0}^{\infty} C^k\Big\| \le \frac{\|C\|}{1 - \|C\|}$$
Therefore
$$\|A^{-1} - B^{-1}\| = \Big\|\Big(I - \sum_{k=0}^{\infty} C^k\Big)A^{-1}\Big\| \le \frac{\|A^{-1}\|\,\|C\|}{1 - \|C\|}$$
P. 178. 6. Prove that if A is invertible, then for any B,
$$\|B - A^{-1}\| \ge \frac{\|I - AB\|}{\|A\|}$$
Solution. Since $I - AB = A(A^{-1} - B)$,
$$\|I - AB\| \le \|A\|\,\|A^{-1} - B\| = \|A\|\,\|B - A^{-1}\|$$
P. 178. 7. Prove or disprove: If $1 = \|A\| > \|B\|$, then $A - B$ is invertible.
Solution. False. Choose a singular A with $\|A\| = 1$ (for instance $A = \mathrm{diag}(1, 0)$) and $B = \frac{1}{2}A$; then $A - B = \frac{1}{2}A$ is also singular.
P. 178. 9. Prove or disprove: If $\|AB - I\| < 1$, then $\|BA - I\| < 1$.
Solution. False (using the $\infty$-norm). Choose
$$A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & 0 \\ 0 & b_{22} \end{pmatrix}$$
Then
$$AB - I = \begin{pmatrix} b_{11} - 1 & 0 \\ b_{11} & b_{22} - 1 \end{pmatrix}, \qquad BA - I = \begin{pmatrix} b_{11} - 1 & 0 \\ b_{22} & b_{22} - 1 \end{pmatrix}$$
Taking $b_{22} = 1$ and $b_{11}$ slightly less than 1 gives $\|AB - I\|_\infty = b_{11} < 1$, while $\|BA - I\|_\infty = 1$.
P. 178. 11. Prove that if A is invertible and $\|B - A\| < \|A^{-1}\|^{-1}$, then
$$B^{-1} = A^{-1}\sum_{k=0}^{\infty} (I - BA^{-1})^k$$
Solution. Let $C = I - BA^{-1}$; then $\|C\| = \|(A - B)A^{-1}\| \le \|A - B\|\,\|A^{-1}\| < 1$, so $BA^{-1} = I - C$ is invertible with $(BA^{-1})^{-1} = \sum_{k=0}^{\infty} C^k$. Hence
$$B^{-1} = A^{-1}(BA^{-1})^{-1} = A^{-1}\sum_{k=0}^{\infty} (I - BA^{-1})^k$$
A related estimate: for $\|E\| < 1$,
$$\|(I - E)^{-1} - (I + E)\| = \Big\|\sum_{k=2}^{\infty} E^k\Big\| \le \frac{\|E\|^2}{1 - \|E\|}$$
P. 178. Show that the sequence of functions $x_n(t) = t^n$ on $[0, 1]$ has the properties $\|x_n\|_\infty = 1$ and $\|x_n\|_1 \to 0$ as $n \to \infty$. Thus these norms lead to different concepts of convergence.
Solution. Indeed $\|x_n\|_\infty = \sup_{0 \le t \le 1} t^n = 1$, while $\|x_n\|_1 = \int_0^1 t^n\,dt = 1/(n+1) \to 0$. This is possible because norms on an infinite-dimensional space need not be equivalent.
P. 178. 21. Prove that if $\|AB - I\| < 1$, then $2B - BAB$ is a better approximate inverse for A than B, in the sense that $A(2B - BAB)$ is closer to I.
Solution. We want to prove that
$$\|I - A(2B - BAB)\| \le \|I - AB\|$$
Since $I - A(2B - BAB) = I - 2AB + (AB)^2 = (I - AB)^2$,
$$\|I - A(2B - BAB)\| \le \|I - AB\|^2 \le \|I - AB\|,$$
with strict improvement whenever $AB \ne I$.
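A numerical illustration of this quadratic refinement, iterating $B \leftarrow 2B - BAB$ (a sketch; the matrix, seed, and number of steps are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
B = np.eye(4)                      # rough guess with ||AB - I|| < 1
for _ in range(6):
    # residual squares at each step: ||I - AB_{k+1}|| <= ||I - AB_k||^2
    print(np.linalg.norm(np.eye(4) - A @ B, 2))
    B = 2 * B - B @ A @ B
```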
P. 178. 25. Give a series that represents $A^{-1}$ under the assumption that $\|I - \theta A\| < 1$ for some known scalar $\theta$.
Solution. Since $\|I - \theta A\| < 1$, we have
$$(\theta A)^{-1} = \sum_{k=0}^{\infty} (I - \theta A)^k, \qquad \text{i.e.,} \qquad A^{-1} = \theta\sum_{k=0}^{\infty} (I - \theta A)^k$$
P. 178. 27. Prove that if A is ill conditioned, then there is a singular matrix near A. In fact, there is a singular matrix within distance $\|A\|/\kappa(A)$ of A.
P. 178. 31. Prove that if there is a polynomial p without constant term such that
$$\|I - p(A)\| < 1$$
then A is invertible.
Solution. Since $\|I - p(A)\| < 1$, the matrix $p(A)$ is invertible. Because p has no constant term, $p(A) = Aq(A)$ for some polynomial q; thus A is also invertible.
P. 178. 32. Prove that if p is a polynomial with constant term $c_0$ and if $|c_0| + \|I - p(A)\| < 1$, then A is invertible.
Solution. Let $q = p - c_0$, which has no constant term. Then $\|I - q(A)\| = \|I - p(A) + c_0I\| \le \|I - p(A)\| + |c_0| < 1$, and the result of Ex. 31 applies.
P. 201. 1. Prove that if A is diagonally dominant and if Q is chosen as in the Jacobi method, then
$$\rho(I - Q^{-1}A) < 1$$
Solution. Let $\lambda$ be an eigenvalue of $I - Q^{-1}A$ with corresponding eigenvector x, normalized so that $\|x\|_\infty = 1$. We have
$$(I - Q^{-1}A)x = \lambda x, \qquad \text{or} \qquad Qx - Ax = \lambda Qx$$
With $Q = \mathrm{diag}(a_{11}, \dots, a_{nn})$, this reads componentwise
$$-\sum_{j \ne i} a_{ij}x_j = \lambda a_{ii}x_i, \qquad 1 \le i \le n$$
Choosing i with $|x_i| = 1$ and using the diagonal dominance,
$$|\lambda|\,|a_{ii}| = \Big|\sum_{j \ne i} a_{ij}x_j\Big| \le \sum_{j \ne i} |a_{ij}| < |a_{ii}|,$$
so $|\lambda| < 1$.
P. 201. 2. Prove that if A has this property (unit row diagonal dominance)
$$a_{ii} = 1 > \sum_{j \ne i} |a_{ij}| \qquad (1 \le i \le n)$$
then the Jacobi iteration converges.
Solution. Here Q = I. Let $\lambda$ be an eigenvalue of $I - Q^{-1}A = I - A$ with corresponding left eigenvector x, normalized so that $\|x\|_1 = 1$. We have
$$x^T(I - A) = \lambda x^T, \qquad \text{i.e.,} \qquad -\sum_{i \ne j} a_{ij}x_i = \lambda x_j, \qquad 1 \le j \le n$$
Summing the absolute values over j,
$$|\lambda| = \sum_{j=1}^{n} |\lambda x_j| \le \sum_{j=1}^{n}\sum_{i \ne j} |a_{ij}|\,|x_i| = \sum_{i=1}^{n} |x_i|\sum_{j \ne i} |a_{ij}| < \sum_{i=1}^{n} |x_i| = 1$$
P. 201. 5. Let $\|\cdot\|$ be a norm on $\mathbb{R}^n$, and let S be an $n \times n$ nonsingular matrix. Define $\|x\|' = \|Sx\|$, and prove that $\|\cdot\|'$ is a norm.
Solution. Positivity: $\|x\|' = \|Sx\| \ge 0$, with equality iff $Sx = 0$ iff $x = 0$, since S is nonsingular. Homogeneity and the triangle inequality are inherited directly from $\|\cdot\|$.
P. 201. 6. (Continuation) Let $\|\cdot\|$ be a subordinate matrix norm, and let S be a nonsingular matrix. Define $\|A\|' = \|SAS^{-1}\|$, and show that $\|\cdot\|'$ is a subordinate matrix norm.
Solution. Substituting $x = Sy$,
$$\|A\|' = \|SAS^{-1}\| = \sup_{x \ne 0} \frac{\|SAS^{-1}x\|}{\|x\|} = \sup_{y \ne 0} \frac{\|SAy\|}{\|Sy\|} = \sup_{y \ne 0} \frac{\|Ay\|'}{\|y\|'}$$
so $\|\cdot\|'$ is subordinate to the vector norm $\|x\|' = \|Sx\|$.
P. 201. Show that $\lim_{k\to\infty} A^kx = 0$ for every vector x if and only if $\rho(A) < 1$.
Solution. If $\rho(A) < 1$, we can find a subordinate matrix norm $\|\cdot\|$ such that $\|A\| < 1$, and then $\|A^kx\| \le \|A\|^k\|x\| \to 0$ for every x. Conversely, if $\lim_{k\to\infty} A^kx = 0$ for every x, then $\lim_{k\to\infty} A^k = 0$, and it remains to show that $B^k \to 0$ forces $\rho(B) < 1$. Write the Jordan decomposition
$$P^{-1}BP = J = \begin{pmatrix} J_1 & & & \\ & J_2 & & \\ & & \ddots & \\ & & & J_r \end{pmatrix}, \qquad J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix} = \lambda_iI + E_{n_i} \quad (n_i \times n_i),$$
where $\sum_{i=1}^{r} n_i = n$ and $E_{n_i}$ is the nilpotent shift matrix, $E_{n_i}^{n_i} = 0$. Then $B^k = PJ^kP^{-1}$ with
$$J^k = \begin{pmatrix} J_1^k & & \\ & \ddots & \\ & & J_r^k \end{pmatrix}, \qquad k = 1, 2, \dots,$$
so
$$B^k \to 0 \iff J^k \to 0 \iff J_i^k \to 0 \quad (i = 1, \dots, r)$$
For $k \ge n_i - 1$,
$$J_i^k = (\lambda_iI + E_{n_i})^k = \sum_{j=0}^{n_i-1}\binom{k}{j}\lambda_i^{k-j}E_{n_i}^j = \begin{pmatrix} \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} & \cdots & \binom{k}{n_i-1}\lambda_i^{k-n_i+1} \\ & \lambda_i^k & \ddots & \vdots \\ & & \ddots & \binom{k}{1}\lambda_i^{k-1} \\ & & & \lambda_i^k \end{pmatrix},$$
and $\binom{k}{j}\lambda_i^{k-j} \to 0$ as $k \to \infty$ exactly when $|\lambda_i| < 1$. Hence $B^k \to 0$ if and only if $|\lambda_i| < 1$ for all i, that is, $\rho(B) < 1$.
P. 201. 10. Which of the norm axioms are satisfied by the spectral radius function $\rho$ and which are not? Give proofs and examples, as appropriate.
Solution. Two axioms hold:
$$\rho(A) \ge 0, \qquad \rho(cA) = |c|\,\rho(A), \quad c \in \mathbb{R}$$
But $\rho(A) = 0$ does not imply $A = 0$: take the nilpotent matrix $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. The triangle inequality also fails: with $B = A^T$, $\rho(A) + \rho(B) = 0$ while $\rho(A + B) = 1$.
P. 201. 15. Let A be diagonally dominant, and let Q be the lower triangular part of A, as in the Gauss-Seidel method. Prove that $\rho(I - Q^{-1}A)$ is no greater than the largest of the ratios
$$r_i = \frac{\sum_{j=i+1}^{n} |a_{ij}|}{|a_{ii}| - \sum_{j=1}^{i-1} |a_{ij}|}$$
Solution. See the proof of the convergence of the Gauss-Seidel iteration in the lecture notes.
P. 201. 19. Is there a matrix A such that $\rho(A) < \|A\|$ for all subordinate matrix norms?
Solution. Yes. First, $\rho(A) \le \|A\|$ always: if $Ax = \lambda x$ with $\|x\| = 1$ and $|\lambda| = \rho(A)$, then $|\lambda| = \|Ax\| \le \|A\|$. For a nonzero nilpotent matrix such as $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, we have $\rho(A) = 0$ while $\|A\| > 0$ in every norm, so the inequality is strict for all subordinate matrix norms.
P. 201. 20. Prove that if $\rho(A) < 1$, then $I - A$ is invertible and $(I - A)^{-1} = \sum_{k=0}^{\infty} A^k$.
Solution. If $\rho(A) < 1$, then there exists a subordinate matrix norm $\|\cdot\|$ such that $\|A\| < 1$. If $I - A$ were singular, there would exist a nonzero vector x such that $(I - A)x = 0$, whence
$$\|x\| = \|Ax\| \le \|A\|\,\|x\| < \|x\|,$$
a contradiction. Moreover, since $\|A\| < 1$ the series $\sum_{k=0}^{\infty} A^k$ converges, and from $(I - A)\sum_{k=0}^{N} A^k = I - A^{N+1} \to I$ we conclude $(I - A)^{-1} = \sum_{k=0}^{\infty} A^k$.
P. 201. 21. Is the inequality $\rho(AB) \le \rho(A)\rho(B)$ true for all pairs of $n \times n$ matrices? Is your answer the same when A and B are upper triangular?
Solution. The inequality fails in general: for $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $B = A^T$ we have $\rho(A) = \rho(B) = 0$ but $\rho(AB) = 1$. If A and B are upper triangular, however, the eigenvalues of A, B, and AB are their diagonal entries, so
$$\rho(AB) = \max_i |a_{ii}b_{ii}| \le \max_i |a_{ii}|\,\max_i |b_{ii}| = \rho(A)\rho(B)$$
P. 201. 25. Show that for nonsingular matrices A and B, $\rho(AB) = \rho(BA)$.
Solution. $BA = A^{-1}(AB)A$, so AB and BA are similar and have the same eigenvalues; in particular $\rho(AB) = \rho(BA)$.
P. 201. 30. Show that these matrices
$$\begin{aligned} R &= I - A \\ J &= I - D^{-1}A \\ G &= I - (D - C_L)^{-1}A \\ L_\omega &= I - \omega(D - \omega C_L)^{-1}A \\ U_\omega &= I - \omega(D - \omega C_U)^{-1}A \\ S &= I - \omega(2 - \omega)(D - \omega C_U)^{-1}D(D - \omega C_L)^{-1}A \end{aligned}$$
are the iteration matrices for the Richardson, Jacobi, Gauss-Seidel, forward SOR, backward SOR, and SSOR methods, respectively. Then show that the splitting matrices Q and iteration matrices G given in this section are correct.
Solution. It is simple to prove.
P. 201. 31. Find the explicit form for the iteration matrix $I - Q^{-1}A$ in the Gauss-Seidel method when
$$A = \begin{pmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{pmatrix}$$
Solution. Here Q is the lower triangular part of A, and
$$Q^{-1} = \begin{pmatrix} 1/2 & & & \\ 1/4 & 1/2 & & \\ \vdots & & \ddots & \\ 1/2^n & 1/2^{n-1} & \cdots & 1/2 \end{pmatrix}, \qquad (Q^{-1})_{ij} = \frac{1}{2^{i-j+1}} \quad (i \ge j)$$
Therefore
$$(I - Q^{-1}A)_{ij} = \delta_{ij} - \sum_{k \le i} \frac{A_{kj}}{2^{i-k+1}}$$
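As a sanity check, one can form $I - Q^{-1}A$ numerically (a sketch; the size n = 5 is arbitrary):

```python
import numpy as np

n = 5
A = 2 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
Q = np.tril(A)                             # Gauss-Seidel splitting matrix
G = np.eye(n) - np.linalg.solve(Q, A)      # iteration matrix I - Q^{-1}A

# First column of Q^{-1} is (1/2, 1/4, 1/8, ...), as derived above.
print(np.linalg.inv(Q)[:, 0])
print(max(abs(np.linalg.eigvals(G))))      # spectral radius, < 1
```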
P. 201. 33. Give an example of a matrix A that is not diagonally dominant, yet the Gauss-Seidel method applied to Ax = b converges.
Solution. If A is irreducible and weakly diagonally dominant, the Gauss-Seidel method applied to Ax = b converges. For example,
$$A = \begin{pmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{pmatrix}$$
P. 201. 35. Prove that if the number $\delta = \|I - Q^{-1}A\|$ is less than 1, then
$$\|x^{(k)} - x\| \le \frac{\delta}{1 - \delta}\,\|x^{(k)} - x^{(k-1)}\|$$
Solution. We have
$$\|x^{(k+1)} - x^{(k)}\| \le \delta\,\|x^{(k)} - x^{(k-1)}\|, \qquad \|x^{(k+1)} - x\| \le \delta\,\|x^{(k)} - x\|$$
Therefore
$$\|x^{(k+1)} - x^{(k)}\| \ge \|x^{(k)} - x\| - \|x^{(k+1)} - x\| \ge (1 - \delta)\,\|x^{(k)} - x\|,$$
and hence
$$\|x^{(k)} - x\| \le \frac{1}{1 - \delta}\,\|x^{(k+1)} - x^{(k)}\| \le \frac{\delta}{1 - \delta}\,\|x^{(k)} - x^{(k-1)}\|$$
P. 234. 1. Let A be an $n \times n$ matrix that has a linearly independent set of n eigenvectors $\{u^{(1)}, \dots, u^{(n)}\}$. Let $Au^{(i)} = \lambda_iu^{(i)}$, and let P be the matrix whose columns are the vectors $u^{(1)}, \dots, u^{(n)}$. What is $P^{-1}AP$?
Solution. $P^{-1}AP = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$.
P. 234. 2. Show that if the normalized and unnormalized versions of the power method are started at the same initial vector, then the values of r in the two algorithms will be the same.
Solution. The normalized power method computes $x^{(k)} = Ay^{(k-1)}$ with $y^{(k-1)} = x^{(k-1)}/\|x^{(k-1)}\|$, so
$$x^{(k)} = Ay^{(k-1)} = \frac{Ax^{(k-1)}}{\|x^{(k-1)}\|} = \frac{A^2x^{(k-2)}}{\|Ax^{(k-2)}\|} = \cdots = \frac{A^kx^{(0)}}{\|A^{k-1}x^{(0)}\|} = \frac{\lambda_1^k\big[a_1u^{(1)} + \epsilon^{(k)}\big]}{\big\|\lambda_1^{k-1}\big[a_1u^{(1)} + \epsilon^{(k-1)}\big]\big\|}$$
Thus each $x^{(k)}$ is a positive multiple of $A^kx^{(0)}$, and since the functional $\varphi$ is linear,
$$r = \frac{\varphi(x^{(k)})}{\varphi(y^{(k-1)})} = \frac{\varphi(Ax^{(k-1)})}{\varphi(x^{(k-1)})},$$
which is exactly the ratio produced by the unnormalized power method; both have the same limit $\lambda_1$.
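A compact implementation of the normalized power method (a sketch; the choice $\varphi(x) = x_1$ for the linear functional and the test matrix from Exercise 29 on p. 242 are illustrative):

```python
import numpy as np

def power_method(A, x0, iters=50):
    """Normalized power method; returns the eigenvalue estimate r and the
    normalized iterate y. A sketch using phi(x) = x[0]."""
    y = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        x = A @ y
        r = x[0] / y[0]               # r = phi(A y) / phi(y) -> lambda_1
        y = x / np.linalg.norm(x)
    return r, y

A = np.array([[6., 2., 1.], [1., 5., 0.], [2., 1., 4.]])
r, y = power_method(A, np.ones(3))
print(r, max(np.linalg.eigvals(A).real))   # both approximate lambda_1
```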
P. 234. 3. In the power method, let $r_k = \varphi(x^{(k+1)})/\varphi(x^{(k)})$. We know that $\lim_{k\to\infty} r_k = \lambda_1$. Show that the relative errors obey
$$\frac{r_k - \lambda_1}{\lambda_1} = \left(\frac{\lambda_2}{\lambda_1}\right)^k c_k,$$
where the numbers $c_k$ form a convergent (and hence bounded) sequence.
Solution. Write $x^{(k)} = \lambda_1^k\big[a_1u^{(1)} + \epsilon^{(k)}\big]$ with
$$\epsilon^{(k)} = \sum_{i=2}^{n} a_i(\lambda_i/\lambda_1)^ku^{(i)} = (\lambda_2/\lambda_1)^k\big(a_2u^{(2)} + o(1)\big) \to 0$$
Then
$$r_k = \lambda_1\,\frac{a_1\varphi(u^{(1)}) + \varphi(\epsilon^{(k+1)})}{a_1\varphi(u^{(1)}) + \varphi(\epsilon^{(k)})}$$
and
$$\frac{r_k - \lambda_1}{\lambda_1} = \frac{\varphi(\epsilon^{(k+1)}) - \varphi(\epsilon^{(k)})}{a_1\varphi(u^{(1)}) + \varphi(\epsilon^{(k)})} = \left(\frac{\lambda_2}{\lambda_1}\right)^k c_k, \qquad c_k = \frac{(\lambda_1/\lambda_2)^k\big[\varphi(\epsilon^{(k+1)}) - \varphi(\epsilon^{(k)})\big]}{a_1\varphi(u^{(1)}) + \varphi(\epsilon^{(k)})},$$
which has a limit as $k \to \infty$, namely $(\lambda_2/\lambda_1 - 1)\,a_2\varphi(u^{(2)})\big/\big(a_1\varphi(u^{(1)})\big)$.
P. 234. 7. In the normalized power method, show that if $\lambda_1 > 0$ then the vectors $x^{(k)}$ converge to an eigenvector.
Solution. See the proof of Ex. 2.
P. 234. 8. Devise a simple modification of the power method to handle the following case: $\lambda_1 = -\lambda_2$ and $|\lambda_1| > |\lambda_3| \ge |\lambda_4| \ge \cdots \ge |\lambda_n|$.
Solution. We still form the power iterates
$$x^{(k)} = Ax^{(k-1)}, \qquad k = 1, 2, \dots,$$
so that
$$x^{(k)} = A^kx^{(0)} = \sum_{i=1}^{n} a_i\lambda_i^ku^{(i)} = \lambda_1^k\big[a_1u^{(1)} + (-1)^ka_2u^{(2)} + \epsilon^{(k)}\big],$$
where
$$\epsilon^{(k)} = \sum_{i=3}^{n} a_i(\lambda_i/\lambda_1)^ku^{(i)} \to 0$$
The one-step ratios
$$\frac{\varphi(x^{(k+1)})}{\varphi(x^{(k)})} = \lambda_1\,\frac{a_1\varphi(u^{(1)}) + (-1)^{k+1}a_2\varphi(u^{(2)}) + \varphi(\epsilon^{(k+1)})}{a_1\varphi(u^{(1)}) + (-1)^ka_2\varphi(u^{(2)}) + \varphi(\epsilon^{(k)})}$$
do not converge, but the two-step ratios satisfy $\varphi(x^{(k+2)})/\varphi(x^{(k)}) \to \lambda_1^2$. The modification is therefore to use $r_k = \varphi(x^{(k+2)})/\varphi(x^{(k)})$ and take $\lambda_1 = -\lambda_2 = \sqrt{\lim_k r_k}$.
P. 234. 10. Let the eigenvalues of A satisfy $\lambda_1 > \lambda_2 > \cdots > \lambda_n$ (all real, but not necessarily positive). What value of the parameter $\mu$ should be used in order for the power method to converge most rapidly to $\lambda_1 - \mu$ when applied to $A - \mu I$?
Solution. Convergence requires $|\lambda_1 - \mu| > |\lambda_i - \mu|$ for $i = 2, \dots, n$, i.e. $\mu < (\lambda_1 + \lambda_n)/2$. The convergence factor is
$$\max_{2 \le i \le n} \frac{|\lambda_i - \mu|}{|\lambda_1 - \mu|} = \frac{\max(|\lambda_2 - \mu|,\ |\lambda_n - \mu|)}{\lambda_1 - \mu},$$
which is minimized by $\mu = (\lambda_2 + \lambda_n)/2$.
P. 234. 11. Prove that $I - AB$ has the same eigenvalues as $I - BA$, if either A or B is nonsingular.
Solution. If A is nonsingular, $I - BA = A^{-1}(I - AB)A$, so the two matrices are similar.
P. 234. 12. If the power method is applied to a real matrix with a real starting vector, what will happen if a dominant eigenvalue is complex? Does the theory outlined in the text apply?
Solution. The theory does not apply. If $\lambda_1$ is complex (non-real), then $\bar\lambda_1$ is also an eigenvalue of the real matrix A with $|\bar\lambda_1| = |\lambda_1|$, so the hypothesis $|\lambda_1| > |\lambda_2|$ fails. The iterates $x^{(k)}$ remain real, and the real ratios
$$r_k = \frac{\varphi(x^{(k+1)})}{\varphi(x^{(k)})}$$
cannot converge to the complex number $\lambda_1$; in practice they oscillate.
P. 242. Prove that the eigenvalues of A lie in the set
$$E = \bigcup_{i=1}^{n}\Big\{z \in \mathbb{C} : |z - a_{ii}| \le \sum_{j \ne i} |a_{ji}|\Big\}$$
Solution. The proof that the eigenvalues of A lie in E is similar to the proof that the eigenvalues of A lie in the row-based Gershgorin set D. The only modification is that the eigenvector is taken to be a left eigenvector.
P. 242. 3. Prove that if $\lambda$ is an eigenvalue of A, then there is a nonzero vector x such that $x^TA = \lambda x^T$. (Here $x^T$ denotes a row vector.)
Solution. Since $\lambda$ is an eigenvalue of A, $\lambda I - A$ is singular; thus $(\lambda I - A)^T$ is also singular, and there exists a nonzero vector x such that $(\lambda I - A)^Tx = 0$. Therefore $x^TA = \lambda x^T$.
P. 242. 4. Prove that if A is Hermitian, then the deflation technique in the text will produce a Hermitian matrix.
Solution. According to the proof of the Schur Theorem, if A is Hermitian then $U^*AU$ is also Hermitian, and thus the deflated matrix, obtained by deleting the first row and column of $U^*AU$, is Hermitian as well.
P. 242. 11. Prove or disprove: if $\{x_1, x_2, \dots, x_k\}$ and $\{y_1, y_2, \dots, y_k\}$ are orthonormal sets in $\mathbb{C}^n$, then there is a unitary matrix U such that $Ux_i = y_i$ for $1 \le i \le k$.
Solution. True. First let k = n. The conditions $Ux_i = y_i$ $(1 \le i \le n)$ read $UX = Y$, where the ith column of X (respectively Y) is $x_i$ (respectively $y_i$); X and Y are unitary, so $U = YX^{-1} = YX^*$ is unitary. For $k < n$, extend both orthonormal sets to orthonormal bases of $\mathbb{C}^n$ and apply the same argument.
P. 242. 12. Prove that if $(I - vv^*)x = y$ for some triple of vectors v, x, y, then $(x, y)$ is real.
Solution. $y^*x = x^*(I - vv^*)x = \|x\|_2^2 - |v^*x|^2$, which is real.
P. 242. 13. Find the precise conditions on a pair of vectors u and v in order that $I - uv^*$ be unitary.
Solution.
$$I = (I - uv^*)^*(I - uv^*) = (I - vu^*)(I - uv^*) = I - vu^* - uv^* + (u^*u)\,vv^*$$
i.e.
$$(u^*u)\,vv^* = vu^* + uv^*$$
Applying both sides to u shows that v must be a multiple of u, say $v = cu$. If u is normalized so that $u^*u = 1$, the condition becomes $|c|^2 = 2\,\mathrm{Re}\,c$ (for example $c = 2$, giving the reflection $I - 2uu^*$).
P. 242. 16. Prove that for any square matrix A, $\|A\|_2^2 \le \|A^*A\|_2$.
Solution. This is because
$$\|A\|_2^2 = \rho(A^*A) \le \|A^*A\|_2$$
P. 242. 17. Let $A_j$ denote the jth column of A. Prove that $\|A_j\|_2 \le \|A\|_2$. Is this true for all subordinate matrix norms?
Solution.
$$\|A\|_2 \ge \frac{\|Ae_j\|_2}{\|e_j\|_2} = \|A_j\|_2$$
The same argument works for any subordinate matrix norm whose underlying vector norm satisfies $\|e_j\| = 1$ (as do all the p-norms): $\|A_j\| = \|Ae_j\| \le \|A\|\,\|e_j\| = \|A\|$.
P. 242. 25. Let A be $n \times n$, let B be $m \times m$, and let C be $n \times m$. Prove that if C has rank m, and if $AC = CB$, then
$$\mathrm{sp}(B) \subseteq \mathrm{sp}(A)$$
Solution. If $Bx = \lambda x$ with $x \ne 0$, then
$$A(Cx) = CBx = \lambda(Cx),$$
and $Cx \ne 0$ because C has full column rank; hence $\lambda \in \mathrm{sp}(A)$.
P. 242. 26. If $x^*x = 2$, what is $(I - xx^*)^{-1}$?
Solution. The matrix is its own inverse:
$$(I - xx^*)(I - xx^*) = I - 2xx^* + x(x^*x)x^* = I,$$
so $(I - xx^*)^{-1} = I - xx^*$.
P. 242. 27. Let $x^*x = 1$ and determine whether $I - xx^*$ is invertible.
Solution. $I - xx^*$ is not invertible: $(I - xx^*)x = x - x(x^*x) = 0$ with $x \ne 0$.
P. 242. 28. Prove or disprove: If A is a square matrix, then there is a unitary Hermitian matrix U such that $U^*AU$ is triangular.
Solution.
P. 242. 29. Without computing them, prove that the eigenvalues of the matrix
$$A = \begin{pmatrix} 6 & 2 & 1 \\ 1 & 5 & 0 \\ 2 & 1 & 4 \end{pmatrix}$$
satisfy the inequality $1 \le |\lambda| \le 9$.
Solution. By Gershgorin's Theorem the eigenvalues lie in the union of the disks $|z - 6| \le 3$, $|z - 5| \le 1$, and $|z - 4| \le 3$, which is contained in the annulus $1 \le |z| \le 9$.
P. 242. 32. Prove that $I - xx^*$ is singular if and only if $x^*x = 1$, and find the inverse in all nonsingular cases.
Solution. If $I - xx^*$ is singular, there exists a nonzero vector y such that $(I - xx^*)y = 0$, i.e. $y = (x^*y)x$, and $x^*y \ne 0$ (otherwise y = 0). Applying $x^*$ gives $x^*y = (x^*x)(x^*y)$, whence $x^*x = 1$. Conversely, if $x^*x = 1$, then $I - xx^*$ annihilates the nonzero vector x. When $x^*x \ne 1$, one checks directly that
$$(I - xx^*)^{-1} = I + \frac{xx^*}{1 - x^*x}$$
P. 255. 1. Prove that if $x \ne y$ and $(x, y)$ is real, then a unitary matrix U satisfying $Ux = y$ is given by $U = I - vu^*$, with $v = x - y$ and $u = 2v/\|v\|_2^2$. Explain why this is a better method for constructing the Householder transformations.
Solution. The construction of U can be found in my slides.
P. 255. 9. For fixed u and x, what value of t makes the expression $\|u - tx\|_2$ a minimum?
Solution. The minimum is attained where
$$\frac{d}{dt}\|u - tx\|_2^2 = 0,$$
i.e., $t = (u^*x + x^*u)/(2x^*x) = \mathrm{Re}(x^*u)/(x^*x)$.
P. 255. 10. Prove that the matrix having elements $(x_i, y_j)$ is unitary if $\{x_1, x_2, \dots, x_n\}$ and $\{y_1, y_2, \dots, y_n\}$ are orthonormal bases in $\mathbb{C}^n$.
Solution. Define
$$X = (x_1, \dots, x_n), \qquad Y = (y_1, \dots, y_n), \qquad Q = \big((x_i, y_j)\big) = X^*Y$$
Since X and Y are unitary,
$$Q^*Q = (X^*Y)^*(X^*Y) = Y^*XX^*Y = I$$
P. 255. 16. Use Householder's algorithm to find the QR factorization of
$$A = \begin{pmatrix} 0 & 4 \\ 0 & 0 \\ 5 & 2 \end{pmatrix}$$
Solution. Two reflections suffice:
$$H_1 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \qquad H_1A = \begin{pmatrix} 5 & 2 \\ 0 & 0 \\ 0 & 4 \end{pmatrix}, \qquad H_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix},$$
$$H_2H_1A = \begin{pmatrix} 5 & 2 \\ 0 & 4 \\ 0 & 0 \end{pmatrix} = R, \qquad H_2H_1 = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \qquad Q = (H_2H_1)^{-1} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$$
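Since both reflectors here are permutation-like Householder matrices, the factorization is easy to verify numerically (a sketch):

```python
import numpy as np

A = np.array([[0., 4.], [0., 0.], [5., 2.]])
H1 = np.array([[0., 0., 1.], [0., 1., 0.], [1., 0., 0.]])
H2 = np.array([[1., 0., 0.], [0., 0., 1.], [0., 1., 0.]])
R = H2 @ H1 @ A
Q = (H2 @ H1).T          # Q = (H2 H1)^{-1}, since the reflectors are orthogonal
assert np.allclose(Q @ R, A)
print(R)                 # upper triangular: [[5, 2], [0, 4], [0, 0]]
```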
P. 255. 19. Let A be an $m \times n$ matrix, b an m-vector, and $\lambda > 0$. Using the Euclidean norm, define
$$F(x) = \|Ax - b\|_2^2 + \lambda\|x\|_2^2$$
Prove that F(x) is a minimum when x is a solution of the equation
$$(A^TA + \lambda I)x = A^Tb$$
Prove that when x is so defined,
$$F(x + h) = F(x) + (Ah)^TAh + \lambda h^Th$$
Solution. For a fixed vector h, consider $g(\varepsilon) = F(x + \varepsilon h)$. If F attains its minimum at x, then for every h
$$0 = g'(0) = 2h^T\big[(A^TA + \lambda I)x - A^Tb\big],$$
which forces $(A^TA + \lambda I)x = A^Tb$. Conversely, when x satisfies this equation, expanding directly gives
$$F(x + h) = F(x) + 2h^T\big[(A^TA + \lambda I)x - A^Tb\big] + (Ah)^TAh + \lambda h^Th = F(x) + (Ah)^TAh + \lambda h^Th \ge F(x)$$
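A quick numerical check of both claims (a sketch; dimensions, seed, and the value $\lambda = 0.5$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((6, 3)), rng.standard_normal(6), 0.5

# Minimizer from the regularized normal equations.
x = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)

# F(x + h) - F(x) = ||Ah||^2 + lam*||h||^2 for any perturbation h.
F = lambda z: np.linalg.norm(A @ z - b)**2 + lam * np.linalg.norm(z)**2
h = rng.standard_normal(3)
assert np.isclose(F(x + h) - F(x), np.linalg.norm(A @ h)**2 + lam * (h @ h))
```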
P. 255. 33. Find the least-squares solution of the system
$$(x, y)\begin{pmatrix} 3 & 2 & 1 \\ 2 & 3 & 2 \end{pmatrix} = (3, 0, 1)$$
Solution. The system is equivalent to
$$\begin{pmatrix} 3 & 2 \\ 2 & 3 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix}$$
Its normal equations are
$$\begin{pmatrix} 14 & 14 \\ 14 & 17 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 & 2 & 1 \\ 2 & 3 & 2 \end{pmatrix}\begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 10 \\ 8 \end{pmatrix},$$
with solution $x = 29/21$, $y = -2/3$.
P. 255. 35. Let A be an $(n+1) \times n$ matrix of rank n, and let z be a nonzero vector orthogonal to the columns of A. Show that the equation $Ax + \lambda z = b$ has a solution in x and $\lambda$. Show that the x-vector obtained in this way is the least-squares solution of the equation $Ax = b$.
Solution. The $(n+1) \times (n+1)$ matrix $(A, z)$ is nonsingular: the columns of A are linearly independent, and z, being nonzero and orthogonal to them, is not in their span. Thus $Ax + \lambda z = b$ has a solution in x and $\lambda$. This solution satisfies
$$A^Tb = A^TAx + \lambda A^Tz = A^TAx,$$
which is exactly the normal equation; thus the x-vector is the least-squares solution of $Ax = b$.
Find the QR-factorization of the matrix
$$A = \begin{pmatrix} 3 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$$
Solution. Apply a rotation matrix (an orthogonal matrix)
$$G = \begin{pmatrix} c & s \\ -s & c \end{pmatrix}, \qquad c^2 + s^2 = 1,$$
chosen so that
$$\begin{pmatrix} c & s \\ -s & c \end{pmatrix}\begin{pmatrix} 3 \\ 4 \end{pmatrix} = \begin{pmatrix} 3c + 4s \\ -3s + 4c \end{pmatrix} = \begin{pmatrix} 5 \\ 0 \end{pmatrix}$$
From $-3s + 4c = 0$ and $c^2 + s^2 = 1$ we get $c = 3/5$, $s = 4/5$. Then
$$GA = \begin{pmatrix} 3/5 & 4/5 \\ -4/5 & 3/5 \end{pmatrix}\begin{pmatrix} 3 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} = \begin{pmatrix} 5 & 26/5 & 33/5 \\ 0 & 7/5 & 6/5 \end{pmatrix} = R,$$
so $A = QR$ with $Q = G^T = \begin{pmatrix} 3/5 & -4/5 \\ 4/5 & 3/5 \end{pmatrix}$.
P. 276. 1. Let A be an $n \times n$ upper Hessenberg matrix having a 0 in position $A_{k,k-1}$. Show that the spectrum of A is the union of the spectra of the two submatrices $A_{ij}$ $(1 \le i, j < k)$ and $A_{ij}$ $(k \le i, j \le n)$.
Solution. The matrix can be written in block upper triangular form
$$A = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix},$$
where $A_{11} = (A_{ij})_{1 \le i,j \le k-1}$ and $A_{22} = (A_{ij})_{k \le i,j \le n}$ are upper Hessenberg. Thus $\det(\lambda I - A) = \det(\lambda I - A_{11})\det(\lambda I - A_{22})$, and the spectrum of A is the union of the spectra of $A_{11}$ and $A_{22}$.
P. 276. 2. Show that in the QR algorithm we have $A_{k+1} = Q_k^*A_kQ_k$. From this, prove that the QR-factorization of $A^k$ is
$$(Q_1Q_2\cdots Q_k)(R_k\cdots R_2R_1) = A^k$$
Solution. Write $\widetilde{Q}_k = Q_1Q_2\cdots Q_k$ and $\widetilde{R}_k = R_k\cdots R_2R_1$. Since $A_k = Q_kR_k$ and $A_{k+1} = R_kQ_k$, we have $A_{k+1} = Q_k^*A_kQ_k$, and hence
$$A_{k+1} = (Q_1Q_2\cdots Q_k)^*A_1(Q_1Q_2\cdots Q_k) = \widetilde{Q}_k^*A\widetilde{Q}_k$$
We prove $\widetilde{Q}_k\widetilde{R}_k = A^k$ by induction. The result is valid for k = 1. Assume $A^{k-1} = \widetilde{Q}_{k-1}\widetilde{R}_{k-1}$. Then
$$\widetilde{Q}_k\widetilde{R}_k = \widetilde{Q}_{k-1}(Q_kR_k)\widetilde{R}_{k-1} = \widetilde{Q}_{k-1}A_k\widetilde{R}_{k-1} = \widetilde{Q}_{k-1}\widetilde{Q}_{k-1}^*A\widetilde{Q}_{k-1}\widetilde{R}_{k-1} = A\,\widetilde{Q}_{k-1}\widetilde{R}_{k-1} = A\,A^{k-1} = A^k$$
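The identity above underlies the practical QR algorithm. A minimal sketch (the function name is illustrative; the symmetric matrix from Exercise 29 on p. 134 is reused so that all eigenvalues are real and the unshifted iteration converges):

```python
import numpy as np

def qr_algorithm(A, iters=100):
    """Unshifted QR iteration: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k.

    A sketch; for a symmetric matrix with eigenvalues of distinct modulus
    the iterates converge to a diagonal matrix of eigenvalues.
    """
    Ak = A.copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                 # = Q^T A_k Q, unitarily similar to A_k
    return Ak

A = np.array([[2., 6., 4.], [6., 17., 17.], [4., 17., 20.]])
print(np.round(qr_algorithm(A), 6))     # approximately diagonal
print(np.linalg.eigvalsh(A))            # reference eigenvalues
```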
P. 276. 6. Find the eigenvalues of the matrix
$$A = \begin{pmatrix} 1 & 4 & 1 \\ 1 & 2 & 5 \\ 5 & 4 & 3 \end{pmatrix}$$
P. 276. 7. Prove that in the shifted QR-algorithm, $A_{k+1}$ is unitarily similar to $A_k$.
Solution. The shifted step is
$$A_k - s_kI = Q_kR_k, \qquad A_{k+1} = R_kQ_k + s_kI$$
Hence
$$A_{k+1} = Q_k^*(A_k - s_kI)Q_k + s_kI = Q_k^*A_kQ_k$$
P. 276. 11. Let A be a real matrix having the upper triangular block structure
$$A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ & A_{22} & \cdots & A_{2n} \\ & & \ddots & \vdots \\ & & & A_{nn} \end{pmatrix}$$
in which each $A_{ii}$ is a $2 \times 2$ matrix. Give a simple procedure for computing the eigenvalues of A, including proofs.
Solution. The spectrum of A is the union of the spectra of the diagonal blocks $A_{kk}$, $1 \le k \le n$, because $\det(\lambda I - A) = \prod_{k=1}^{n}\det(\lambda I - A_{kk})$. The two eigenvalues of each $2 \times 2$ block $A_{kk}$ are then computed from the quadratic formula.
P. 276. 12. Prove or disprove: If U is unitary, R is upper triangular, and UR is upper Hessenberg, then U is upper Hessenberg.
Solution. Let A = UR be upper Hessenberg and assume R is invertible. Then $U = AR^{-1}$, and $R^{-1}$ is upper triangular. The product of an upper Hessenberg matrix and an upper triangular matrix is upper Hessenberg, so U is upper Hessenberg.
The following questions come from Heinrich Dinkel. Thanks!

Page 85, PB 3.4, Exercise 1: I don't have any idea which upper bound for C is meant, since F is not a given function.
Answer: Use ...

Exercise 10: I can apply Newton's formula to the term F(x), but it results in an iteration in which the $x_n$ terms just cancel out, which I guess is totally wrong.
Answer: Find the zero of f(x) = F(x) - x.

Exercise 23: How do I compute the power q? I can guess values like 2, 3, 4 for q, which all fit the constraint, but I haven't seen any computation in the book; they only explain it using an arbitrary F(s) and just say that if $F'(s) = 0$ but $F''(s) \ne 0$, then q = 2?