Chapter 2 Matrix Algebra

Chapter 2 discusses matrix algebra, defining matrices as rectangular arrays of real numbers and explaining their components, types, and operations. It emphasizes the importance of matrices in organizing data and introduces various matrix types such as vector, square, null, identity, scalar, and diagonal matrices. The chapter also covers matrix operations including addition, subtraction, and multiplication, detailing the properties and laws governing these operations.


CHAPTER 2

MATRIX ALGEBRA AND ITS APPLICATIONS

Algebra is the part of mathematics that deals with operations (+, -, x, ÷).


Matrix is A RECTANGULAR ARRAY OF REAL NUMBERS ARRANGED IN M
ROWS AND N COLUMNS. Like sets, it is symbolized by a BOLD FACE CAPITAL
LETTER enclosed by brackets or parentheses as:
      | a11  a12  ...  a1n |
  A = | a21  a22  ...  a2n |   in which the aij are real numbers.
      | ...  ...       ... |
      | am1  am2  ...  amn |

Each number appearing in the array is said to be an ELEMENT, or COMPONENT, of the matrix. Elements of a matrix are designated using A LOWERCASE FORM OF THE SAME LETTER USED TO SYMBOLIZE THE MATRIX ITSELF. These letters are subscripted, as aij, to give the row and column location of the element within the array. The first subscript always refers to the row location of the element; the second subscript always refers to its column location. Thus, component aij is the component located at the intersection of the ith row and the jth column.

The number of rows, m, and the number of columns, n, of the array give its ORDER, or its DIMENSIONS, m x n (read "m by n"); we write A(mxn) or [aij](mxn).
Example: The following are examples of matrices

1 7
 
 
A  5 3  This is a 3 x 2 matrix
 
4 2
 
ELEMENT
a12= 7
a21 = 5
a32 = 2
a23 = X - Because is a 3 x 2 matrix.
1 5 9 15
2 6 10 20
  This is a 4 x 4 matrix Elements X44 = 45 x32 = 7
 3 7 11 30
 
4 8 12 45

IMPORTANCE OF MATRICES

Matrices provide a most convenient vehicle for organizing and storing large quantities of data. Because the basic idea is to organize the data, we cannot overemphasize the importance of the location of each number within the matrix. It is not simply a matter of putting numbers into rows and columns; each row-column location within each matrix carries with it a special interpretation. A matrix is, in essence, a tool for organizing vast quantities of data. Matrices are used to represent complex systems and operations by compact entities.

Possible matrix representations include:

- Transportation matrix
- Distance matrix
- Cost matrix
- Brand-switching matrix

TYPES OF MATRICES

1. VECTOR MATRIX - is a matrix which consists of either one row or one column. That is, it is an m x 1 or a 1 x n matrix.
1.1. Row Vector - is a 1 x n matrix
  E.g. W = [-1, 0, 6]
1.2. Column Vector - is an m x 1 matrix
            | 2 |
  E.g. B =  | 5 |
            | 7 |
            | 0 |
The TRANSPOSE of an m x n matrix A, denoted At, is the n x m matrix whose rows are the columns of A (in the same order) and whose columns are the rows of A (in the same order).

         | 1  2  3  10 |              | 1   4   7  |
  If A = | 4  5  6  11 |   then  At = | 2   5   8  |
         | 7  8  9  12 |              | 3   6   9  |
                                      | 10  11  12 |

Note that the (i, j) entry of At is aji.

The transpose of a row vector is a column vector and the transpose of a column
vector is a row vector.
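The transpose can be sketched in Python (a minimal illustration; the list-of-rows representation and the function name transpose are our own, not from the text):

```python
def transpose(A):
    """Return the transpose of matrix A, held as a list of rows."""
    # Row j of the transpose is column j of A.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3, 10],
     [4, 5, 6, 11],
     [7, 8, 9, 12]]
print(transpose(A))  # [[1, 4, 7], [2, 5, 8], [3, 6, 9], [10, 11, 12]]
```

Transposing twice returns the original matrix, and transposing a one-row matrix yields a one-column matrix, matching the row/column vector remark above.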

2. Square Matrix - is a matrix that has the same number of rows and columns. It is also
called an nth order matrix.
1 0
E.g. 2x2, A   .
0 1
 

3. NULL (ZERO) MATRIX - is a matrix that has zero for every entry. It is generally denoted by Omn. In matrix operations it is used in much the same way that the number zero is used in regular algebra. Thus, the sum of a zero matrix and any matrix gives that given matrix, while the product of a zero matrix and any (conformable) matrix is a zero matrix.
4. IDENTITY MATRIX - a square matrix in which all of the primary diagonal entries
are ones and all of the off diagonal entries are zeros. Generally it is denoted as I n.
Primary diagonal represents: a11, a22, a33, a44, --- ann entries.

1 0 0 0
1 0 0 1 0 0
I2 = A    , I4 = A   
0 1 0 0 1 0
   
0 0 0 1

The product of any given matrix and the identity matrix is the given matrix itself. That is, A x I = A and I.A = A. Thus, the identity matrix behaves in matrix multiplication like the number 1 in ordinary arithmetic.
5. SCALAR MATRIX - is a square matrix where the elements on the primary diagonal are all the same number and the rest are zeros.
NB: An identity matrix is a scalar matrix, but a scalar matrix may not be an identity matrix.
6. DIAGONAL MATRIX - a square matrix where the elements on the primary diagonal may take any values (not necessarily equal) and all other elements are zeros.
7. EQUAL MATRICES -Two matrices A & B, are said to be equal only if they are of the
same dimensions and if each element in A is identical to its corresponding element
in B; that is, if and only if aij = bij for every pair of subscripts i and j. If A = B, then B =
A; or if A≠B, then B ≠A.

1 2 1 2
A  is equal to B = A   
3 4 3 4
   

1 2 4 2
However; A    is not equal to C = A   
3 4 3 1
   

Even though they contain the same set of numerical values, A and C are not equal because their corresponding elements are not equal; that is, a11 ≠ c11 and so on.

MATRIX OPERATIONS (ADDITION, SUBTRACTION, and


MULTIPLICATION)

Matrix Addition (subtraction)


Two matrices of the same dimensions are said to be CONFORMABLE FOR ADDITION. The addition is performed by adding corresponding elements from the two matrices and entering the result in the same row-column position of a new matrix [element-wise addition].

If A and B are two matrices, each of size m x n, then the SUM of A and B is the m x n matrix C whose elements are:
  cij = aij + bij  for i = 1, 2, ..., m;  j = 1, 2, ..., n.

Laws of Matrix Addition


The operation of adding two matrices that are conformable for addition has these
two basic properties:
1. A + B = B + A ---- The commutative law of matrix addition.
2. (A+B) +C = A+ (B+C) -------- the associative law of matrix addition.

  cij = aij + bij for i = 1, 2, ..., m;  j = 1, 2, ..., n.
  Thus c11 = a11 + b11, c22 = a22 + b22, c12 = a12 + b12, etc.

  E.g.  | 1  3 |   | 7   9  |   | 8   12 |
        | 2  4 | + | 8  -10 | = | 10  -6 |

  E.g.  | -2  7  2 |   | 8  7 |
        |  4  6  9 | + | 6  4 |    These two matrices aren't conformable for
                                   addition because they aren't of the same
                                   dimension.

Given that two matrices do have the same dimension, the way we subtract a matrix
from another matrix is the same as the way we add two matrices.
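The element-wise rule above can be sketched in Python (a minimal sketch; the function name mat_add is ours, not from the text):

```python
def mat_add(A, B):
    """Element-wise sum of two matrices held as lists of rows."""
    # Matrices must be conformable for addition: same dimensions.
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices are not conformable for addition")
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

print(mat_add([[1, 3], [2, 4]], [[7, 9], [8, -10]]))  # [[8, 12], [10, -6]]
```

Subtraction works the same way with `-` in place of `+`, as the text notes.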

Matrix Multiplication

A. Matrix Multiplication by a Constant (Scalar Multiplication)

A matrix can be multiplied by a constant by multiplying each component in the
matrix by a constant. The result is a new matrix of the same dimensions as the
original matrix.

If K is any real number and A is an m x n matrix, then the product KA is defined to be the matrix whose components are given by K times the corresponding components of A; that is,
  KA = [Kaij](mxn).

E.g. If X = [6 5 7], then 2X = [(2x6) (2x5) (2x7)]


2X = [12 10 14]
Laws of Scalar Multiplication
The operation of multiplying a matrix by a constant (a SCALAR) has the following
basic properties. If x and y are real numbers and A and B are mxn matrices,
conformable for addition, then:
1. XA = AX
2. (X+Y)A = XA+YA
3. X (A+B) = XA + XB
4. X (YA) = (XY) A
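Scalar multiplication can be sketched in Python (a minimal sketch; the function name scalar_mul is ours):

```python
def scalar_mul(k, A):
    """Multiply every component of matrix A by the scalar k."""
    return [[k * a for a in row] for row in A]

# The text's example: 2X where X = [6 5 7].
print(scalar_mul(2, [[6, 5, 7]]))  # [[12, 10, 14]]
```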

B. Vector-by-Vector Multiplication
In multiplying two vectors, a row vector is always written in the first position and the column vector in the second position. Each component of the row vector is multiplied by the corresponding component of the column vector to obtain a result known as a PARTIAL PRODUCT. The sum of all partial products is called the INNER/DOT PRODUCT of the two vectors, and this is a number, not a vector. In other words, vector-by-vector multiplication results in a real number rather than a matrix.

E.g. Consider the product (AB) of the following row and column vectors.

                              | 2 |
  A = [3  4  -2  6] ,    B =  | 5 |
                              | 7 |
                              | 0 |

   3 x 2 =   6  \
   4 x 5 =  20   |  partial products
  -2 x 7 = -14   |
   6 x 0 =   0  /
  ------------
            12      Inner/Dot Product
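The partial-product computation above can be sketched in Python (a minimal sketch; the function name dot is ours):

```python
def dot(row, col):
    """Inner (dot) product: the sum of the partial products of two vectors."""
    if len(row) != len(col):
        raise ValueError("vectors must have the same number of components")
    # Each partial product pairs a row component with a column component.
    return sum(r * c for r, c in zip(row, col))

print(dot([3, 4, -2, 6], [2, 5, 7, 0]))  # 12
```

Note the result is a plain number, not a matrix, matching the text.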

C. Matrix by Matrix Multiplication


If A and B are two matrices, the product AB is defined if and only if the number of columns in A is equal to the number of rows in B; i.e., if A is an m x n matrix, B should be an n x p matrix. If this requirement is met, A is said to be CONFORMABLE to B FOR MULTIPLICATION. The matrix resulting from the multiplication has dimensions equivalent to the number of rows in A and the number of columns in B.

Matrix by matrix multiplication indicates a row by column multiplication, where the entry in the ith row and jth column of the product AB is obtained by multiplying the entries in the ith row of A by the corresponding entries in the jth column of B and then adding the results. That is, to obtain the entry in the ith row and jth column of the product AB, use the ith row of A and the jth column of B in the following form:

The first element in the row is multiplied by the first element in the column; the second element in the row is multiplied by the second element in the column, and so on, until the nth row element is multiplied by the nth column element. These products are then summed to obtain the single number that is the product of the two vectors.

If A is a matrix of dimension n x m (which has m columns) and B is a matrix of dimension p x q (which has p rows) and m is different from p, the product AB is not defined. That is, multiplication of matrices is possible only if the number of columns of the first equals the number of rows of the second.

If A is of dimension n x m and if B is of dimension m x p, then the product A.B is of


dimension n x p.

Example
       | 2  3  4 |        | -1  7 |
  A =  | 6  9  7 |    B = |  0  8 |
                          |  5  1 |

  A.B = | (2 x -1) + (3 x 0) + (4 x 5)    (2 x 7) + (3 x 8) + (4 x 1) |
        | (6 x -1) + (9 x 0) + (7 x 5)    (6 x 7) + (9 x 8) + (7 x 1) |

       | 18   42 |
  AB = | 29  121 |

Special Properties of Matrix Multiplication

1. The Associative and distributive laws of ordinary algebra apply to matrix


multiplication. Given three matrices A, B and C, which are conformable for
multiplication,
- A (BC) = (AB) C -------------------- Associative law, not C (AB).
- A (B+C) = AB + AC ----------------- Distributive law
- (A+B) C = AC + BC ----------------- Distributive law

2. The commutative law of multiplication does not apply to matrix multiplication. For
any two real numbers X and Y, the product XY is always identical to the product YX.
But for two matrices A and B, it is not generally true that AB equals BA. (In the
product AB, we say that B is pre multiplied by A and that A is post multiplied by B).
In many instances for two matrices A and B, the product AB may be defined while
the product BA is not defined, or vice versa.

In some special cases, AB does equal BA. In such special cases A and B are said to
Commute.

3. The product of two matrices can be the zero matrix even though neither of the two
matrices themselves is zero matrix! We cannot conclude from the result AB = 0 that
at least one of the matrices A or B is a zero matrix.
3 0 0 0 0 0   0 0 0
A = 2 0 0 , B = 7 10 4 , AB = 0 0 0
   
1 0 0  8 3 2  0 0 0

4. We cannot, in matrix algebra, necessarily conclude from the result AB = AC that B = C, even if matrix A is not equal to a zero matrix. Thus the CANCELLATION LAW does not hold, in general, in matrix multiplication.

1 3  4  1 1 2
A , B  , C 3 
2  6 2 5  2 4
     

10 14 
AB = AC =   but B ≠ C.
20  28
 

The Multiplicative Inverse of a Matrix

If A is a square matrix of order n, then a square matrix A-1 of the same order n is said to be the inverse of A if and only if AA-1 = I = A-1A.
Two square matrices are inverses of each other if their product is the identity matrix: I = AA-1 = A-1A.

Not all matrices have an inverse. In order for a matrix to have an inverse, the matrix must, first of all, be a square matrix. Still, not all square matrices have inverses. If a matrix has an inverse, it is said to be INVERTIBLE or NON-SINGULAR. A matrix that doesn't have an inverse is said to be SINGULAR. An invertible matrix will have only one inverse; that is, if a matrix does have an inverse, that inverse is unique.
In short:
- The inverse of a matrix is defined only for square matrices.
- If B is an inverse of A, then A is also an inverse of B.
- The inverse of a matrix is unique.
- If matrix A has an inverse, A is said to be invertible; not all square matrices are invertible.

             | 1  1 |
  E.g.  A =  | 1  1 |   is a square matrix that has no inverse (it is singular).
 
Finding the Inverse of a Matrix

Let us begin by considering a tabular format where the square matrix A is augmented with an identity matrix of the same order, as [A/I]. This process is called ADJOINING.
Now, if the inverse matrix A-1 were known, we could multiply the matrices on each side of the vertical line by A-1, as [AA-1/A-1I].

Then, because AA-1 = I and A-1I = A-1, we would have [I/A-1]. We do not follow this procedure, because the inverse is not known at this juncture; we are trying to determine the inverse. We instead employ a set of permissible row operations on the augmented matrix [A/I] to transform A on the left side of the vertical line into an identity matrix (I). As the identity matrix is formed on the left of the vertical line, the inverse of A is formed on the right side. The allowable manipulations are called ELEMENTARY ROW OPERATIONS; they are operations permitted on the rows of a matrix.

In matrix algebra there are 3 types of row operations.

i. Any pair of rows in a matrix may be interchanged /Exchange operation/. Interchanging rows.
ii. A row can be multiplied by any non-zero real number /Multiple operation/. The multiplication of any row by a non-zero number.
iii. A multiple of any row can be added to any other row /Add-A-Multiple operation/. The addition/subtraction of (a multiple of) one row to/from another row.

4 3 2  2 6 7
E.g. 1. A , B 4 3 2  = interchanging rows
2  
6 7  
 

4 3 2  8 6 4
2. A  B = A  = multiplying the first row by 2.
2 6 7 2 6 7
   

4 3 2  4 3 2 
3. A  B=   = Multiplying the first row by 2
2 6 7  6 12 11
   
and add to 2nd row.

Theorem on row operations


A row operation performed on the product of two matrices is equivalent to the same row operation performed on the pre-factor (the first matrix).

Consider the following: AB = C

       | 1  2  3 |        | 1  2 |        | 9   13 |
  A =  | 2  3  4 | ,  B = | 1  1 | ,  C = | 13  19 |
                          | 2  3 |

Interchange R1 with R2 of A:

       | 2  3  4 |        | 1  2 |        | 13  19 |
  A =  | 1  2  3 | ,  B = | 1  1 | ,  C = | 9   13 |
                          | 2  3 |

Basic Procedures to Find the Inverse of a Square Matrix

1. To get ones first in a column and next zeros (within a given column)
2. To get zeros first in a matrix and next ones.

Ones First: Try to get ones first in a column and then the zeros of the same column. Go from left to right.
Zeros First: Find the off-diagonal zeros first, and following this obtain ones on the main diagonal. This can simplify the work involved in hand calculation by avoiding fractions until the last step.
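The adjoin-and-reduce procedure can be sketched in Python (a minimal sketch of the "ones first" Gauss-Jordan approach; the function name inverse and the list-of-rows representation are ours):

```python
def inverse(A):
    """Invert a square matrix by adjoining I and applying elementary row
    operations (Gauss-Jordan, 'ones first'). Returns None if A is singular."""
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [[float(A[i][j]) for j in range(n)]
         + [1.0 if i == j else 0.0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Exchange operation: find a row with a nonzero pivot in this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # a singular matrix has no inverse
        M[col], M[pivot] = M[pivot], M[col]
        # Multiple operation: scale the pivot row so the pivot becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Add-a-multiple operations: clear the rest of the column.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half of [I | A-1] is the inverse.
    return [row[n:] for row in M]

print(inverse([[1, 2], [3, 4]]))  # [[-2.0, 1.0], [1.5, -0.5]]
print(inverse([[1, 1], [1, 1]]))  # None (singular)
```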

MATRIX APPLICATIONS

Solving Systems of Linear Equations

1. n by n systems
Systems of linear equations can be solved using different methods. Some are:
 Elimination method for 2 variable problems (equations).
 Matrix method
i. Inverse method
ii. Gaussian Method.

Inverse Method
To solve systems of linear equations using the inverse method the coefficient matrix should be invertible, and it involves the following steps:
1. Put all equations in a matrix form (square coefficient matrix).
2. Find the inverse of the coefficient matrix.
3. Multiply the inverse with the right-hand-side values (vector of constants).

E.g.   x + y = 2
      2x + 2y = 4
Here the coefficient matrix is singular, so the inverse method cannot be applied.

The inverse method provides us with a unique solution, or it fails without separating the no-solution case from the infinite-solution case.
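The three inverse-method steps can be sketched for a 2 x 2 system in Python (a minimal sketch; the function name is ours, and it uses the standard 2 x 2 inverse formula with determinant ad - bc):

```python
def solve_2x2_by_inverse(A, b):
    """Solve Ax = b for a 2 x 2 coefficient matrix using A's inverse.
    Returns None when A is singular (determinant zero)."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    if det == 0:
        return None  # singular: the inverse method fails
    # Inverse of a 2 x 2 matrix: (1/det) * [[a22, -a12], [-a21, a11]].
    inv = [[a22 / det, -a12 / det], [-a21 / det, a11 / det]]
    # Multiply the inverse by the vector of constants.
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

print(solve_2x2_by_inverse([[2, 3], [1, 2]], [4, 2]))  # [2.0, 0.0]
print(solve_2x2_by_inverse([[1, 1], [2, 2]], [2, 4]))  # None
```

The second call is the singular system x + y = 2, 2x + 2y = 4, where the method fails.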

Gaussian Method: developed by Carl F. Gauss (1777-1855)

Solving systems of linear equations using the Gaussian method involves the
following steps:
1. Write all equations in a matrix form.
2. Change the coefficient matrix into an identity matrix and apply the same elementary row operations on the vector of constants.
3. The resulting value (of the RHS vector) will be the solution.

Ax = B
Ix = C
x=C

The Gaussian Method helps us to obtain:

- Unique solution
- No solution
- Infinite solution
E.g. 1. 2x + 3y = 4      2. x + y = 2        3. x + y = 5
        x + 2y = 2          2x + 2y = 4          x + y = 9

  For example 1 the augmented matrix is:
  | 2  3 | 4 |
  | 1  2 | 2 |
  IX = C
  X = C

Therefore, the Gaussian method makes a distinction between no solution and infinite solution, unlike the inverse method.

Summarizing our results for solving an "n" by "n" system, we start with the matrix (A/B) and attempt to transform it into the matrix (I/C).
One of the three things will result:

1. An identity matrix on the left of the vertical line, giving the unique solution; e.g.

  | 1  0  0 | 10 |
  | 0  1  0 | 5  |
  | 0  0  1 | 3  |

2. A row that is all zeros except in the constant column, indicating that there are no solutions; e.g.

  | 1  0  0 | 3 |
  | 0  1  0 | 5 |
  | 0  0  0 | 7 |

3. A matrix in a form different from (1) and (2), indicating that there are an unlimited number of solutions. Note that for an n by n system, this case occurs when there is a row with all zeros, including the constant column; e.g.

  | 1  0  2 | 5 |
  | 0  1  3 | 3 |
  | 0  0  0 | 0 |

“EVERY SYSTEM OF LINEAR EQUATIONS HAS NO SOLUTION, EXACTLY
ONE SOLUTION OR INFINITELY MANY SOLUTIONS.”
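The three outcomes can be sketched in Python (a minimal sketch of Gauss-Jordan reduction on the augmented matrix; the function name and the ('unique'/'none'/'infinite') return convention are ours):

```python
def gauss_solve(A, b):
    """Reduce the augmented matrix [A | b] toward [I | c] with elementary
    row operations; report 'unique', 'none', or 'infinite' as in the text."""
    n = len(A)
    M = [[float(v) for v in row] + [float(c)] for row, c in zip(A, b)]
    rank = 0
    for col in range(n):
        # Exchange operation: find a usable pivot at or below the current row.
        pivot = next((r for r in range(rank, n) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        # Multiple operation: make the pivot 1.
        p = M[rank][col]
        M[rank] = [x / p for x in M[rank]]
        # Add-a-multiple operations: clear the column elsewhere.
        for r in range(n):
            if r != rank and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[rank])]
        rank += 1
    # A zero row with a nonzero constant means no solution;
    # a completely zero row means infinitely many solutions.
    for r in range(rank, n):
        if M[r][n] != 0:
            return "none", None
    if rank < n:
        return "infinite", None
    return "unique", [M[r][n] for r in range(n)]

print(gauss_solve([[2, 3], [1, 2]], [4, 2]))  # ('unique', [2.0, 0.0])
print(gauss_solve([[1, 1], [2, 2]], [2, 4]))  # ('infinite', None)
print(gauss_solve([[1, 1], [1, 1]], [5, 9]))  # ('none', None)
```

The three calls correspond to the three example systems given earlier in this section.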

WORD PROBLEMS

Steps

1. Represent one of the unknown quantities by a letter, usually x, and express the other unknown quantities, if there are any, in terms of the same letter.
2. Translate the quantities from the statement of the problem into algebraic form and set up an equation.
3. Solve the equation (or equations) for the unknown that is represented by the letter and find the other unknowns from the solution.
4. Check the findings against the statement of the problem.

MARKOV CHAINS

Concept, Model and Solutions

This model is a forecasting model. The model is used when the state (outcome,
condition) of the system in any particular time period cannot be determined with
certainty. Therefore, a set of transition probabilities is used to describe the manner
in which the system makes transition from one period to the next. Hence, we can
predict the probability of the system being in a particular state at a given time
period. We can also talk about the long run/equilibrium, steady state.

System - the thing we want to study (a machine, a person, etc.).
State/outcome/condition - the system can have various numbers of outcomes.
Transition probabilities - the set of input data, assumed to be constant.
Long run/steady state - the state in which the system does not change any more.

The necessary assumptions of the chain are:

1. The system has a finite number of states - the outcomes of the system should be finite.
2. The system's condition (outcome, state) in any given period depends on its state in the preceding period and on the transition probabilities.
3. The transition probabilities are constant over time.
4. Changes in the system will occur once and only once each period.

Information flow in the Analysis

The Markov model is based on two sets of input data:

- The set of transition probabilities.
- The existing or initial or current conditions or states.

The Markov process, therefore, describes the movement of a system from a certain state in the current state/time period to one of n possible states in the next stage. The system moves in an uncertain environment; all that is known is the probability associated with any possible move or transition. This probability is known as a transition probability, symbolized by Pij. It is the likelihood that the system, which is currently in state i, will move to state j in the next period.

From these inputs the model makes two predictions usually expressed as vectors:
1. The probabilities of the system being in any state at any given future time period.
2. The long run / equilibrium, steady state probabilities.

The set of transition probabilities are necessary for both predictions (time period n,
and steady state), but the initial state is needed for only the first prediction.

Input data                            Predictions/outcomes

Set of transition probabilities  -->  Steady state / long run state
(about the past)

Current/initial state            -->  The probability of the system being
(about today)                         in any state at any given time

Markov chain analysis is used, among other things, in market share analysis. The example below shows this.

1. Currently it is known that 80% of customers shop at store 1 and 20% shop at store 2. In reviewing past data, suppose we find that of all customers who shopped at store 1 in a given week, 90% remain loyal for the next week (store 1 again) and 10% switch to store 2. Of all customers who shopped at store 2 in a given week, 80% remain loyal for the next week (store 2 again) and 20% switch to store 1. What will be the proportion of customers shopping at stores 1 and 2
a) in each of the next two weeks?
b) in the long run?

Let's denote store 1 by 1 and store 2 by 2.

V12(0) = (.8  .2) - initial state/current state probability matrix.

                 To next week's shopping period
  From one week      S1     S2
       S1           0.9    0.1
       S2           0.2    0.8

- The sum of each row of the transition matrix should be one.
- We have to be consistent in writing the elements.

P11, P22, P33, P44, ..., Pnn, the entries that represent the primary diagonal, show loyalty; the others show switching.

Markov Chain Formula

nth state of a Markov chain:

  Vij(n) = Vij(n-1) x P,   or   Vij(n) = Vij(0) x P^n

Where: P = the transition matrix
       Vij(n) = the state vector for period n
       Vij(n-1) = the state vector for period n-1

V12(0) = (.8  .2)

V12(1) = V12(0) x P
                    | .9  .1 |
       = (.8  .2) x | .2  .8 |

       = ((.8 x .9) + (.2 x .2)    (.8 x .1) + (.2 x .8))
       = (.72 + .04                .08 + .16)

V12(1) = (.76  .24)

V12(2) = V12(1) x P
                      | .9  .1 |
       = (.76  .24) x | .2  .8 |  = (.732  .268)

b. In the long run: (V1 V2) x P = (V1 V2)

              | .9  .1 |
  (V1  V2) x  | .2  .8 | = (V1  V2)

  .9V1 + .2V2 = V1
  .1V1 + .8V2 = V2
  V1 + V2 = 1

  -.1V1 + .2V2 = 0 \
                      one is the negative of the other.
   .1V1 - .2V2 = 0 /

  Substituting V2 = 1 - V1:
  .9V1 + .2(1 - V1) = V1
  .9V1 + .2 - .2V1 = V1
  .7V1 + .2 = V1
  .2 = .3V1
  V1 = 2/3
  V2 = 1 - V1 = 1 - 2/3 = 1/3

In short, the switching probability over the sum of the switching probabilities gives us the long run state.

             To
         S1    S2
  From S1  .9    .1
       S2  .2    .8

  V1 = (switch to state 1) / (switch to state 1 + switch to state 2)
     = .2 / (.2 + .1) = 2/3

  V2 = (switch to state 2) / (switch to state 1 + switch to state 2)
     = .1 / (.2 + .1) = 1/3

  (V1  V2) = (2/3  1/3)

In the long run 67% of the customers will shop in store 1 and 33% in store 2.
Prediction: Long run - only the transition matrix.
At specified time - the transition matrix and state vector.
Hence, unless the transition matrix is affected, the long run state will not be affected.
Moreover, we cannot know the number of years, weeks, or periods it takes to attain the long run state, but we can know the share.
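The switching-ratio shortcut for a two-state chain can be sketched in Python (a minimal sketch; the function name is ours and it applies only to 2 x 2 transition matrices):

```python
def steady_state_2(P):
    """Long-run shares of a two-state Markov chain: each state's share is
    its inflow switching probability over the sum of both switching
    probabilities."""
    p12, p21 = P[0][1], P[1][0]   # switching out of state 1, out of state 2
    total = p12 + p21
    return [p21 / total, p12 / total]

P = [[0.9, 0.1],
     [0.2, 0.8]]
print(steady_state_2(P))  # roughly [0.6667, 0.3333], i.e. (2/3, 1/3)
```

Note the initial state vector plays no role here, matching the remark that the long run depends only on the transition matrix.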

Absorbing Markov Chain


It is a special type of Markov chain in which at least one of the states eventually
doesn’t lose members. We call such a state absorbing because it can absorb
members from other states, but doesn’t give up any of its members.

For example, if we take the above example and change the transition matrix

        S1   S2
  S1     1    0
  S2    .2   .8

The state S1 (store 1) is absorbing.

In short:
Consider a Markov chain with n different states {S1, S2, S3, ..., Sn}. The ith state Si is called absorbing if Pii = 1. Moreover, the Markov chain is called absorbing if it has at least one absorbing state, and it is possible for a member of the population to move from any non-absorbing state to an absorbing one in a finite number of transitions.

Remark: Note that for an absorbing state Si, the entry on the main diagonal of P must be Pii = 1, and all other entries in the ith row must be 0.

E.g. a.
                 To
            S1    S2    S3
        S1  0.4    0   0.6
  From  S2   0     1    0      An absorbing Markov chain.
        S3   0    0.5  0.5

E.g. b.
                 To
            S1    S2    S3
        S1  0.4    0   0.6
  From  S2  0.5   0.5   0      Has no absorbing states.
        S3   0    0.5  0.5

E.g. c.
                 To
            S1    S2    S3    S4
        S1  0.5   0.5    0     0
  From  S2   0     1     0     0     The second state is absorbing.
        S3   0     0    0.4   0.6
        S4   0     0    0.5   0.5

However, the corresponding Markov chain in (c) is not absorbing, because there is no way to move from state 3 or state 4 to state 2.

A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to go to an absorbing state (not necessarily in one step).

Exercises

1. A division of the ministry of public health has conducted a sample survey on the
public attitudes towards the use of condoms. From the results of the survey the
department concluded that currently only 20% of the population uses condoms and

18
every month 10% of non-users become users, where as 5% of users discontinue
using.
Required

a. Write the current transition matrices.


b. What will be the percentage of users from total population just after two months?
c. What will be the proportion of the non users and users in the long run?

Solution

Let U stand for users and N stand for non-users.

1. Initial state: VUN(0) = (0.2  0.8)

   The transition matrix:
                        To the next month
   From one month     Users (U)   Non-users (N)
   Users (U)             .95          .05
   Non-users (N)         .10          .90

2. V(1)UN = V(0)UN x P
                         | .95  .05 |
           = (0.2  0.8) x | .10  .90 |
           = (0.27  0.73)

   V(2)UN = V(1)UN x P
                           | .95  .05 |
           = (0.27  0.73) x | .10  .90 |
           = (.3295  .6705)

3. In the long run:

   VU = (switch to U) / (switch to U + switch to N) = .10 / (.10 + .05) = 0.67
   VN = (switch to N) / (switch to U + switch to N) = .05 / (.10 + .05) = 0.33

   (VU  VN) = (0.67  0.33)

2. A city has two suburbs: suburb x and suburb y. Over the past several years, the city
has experienced a population shift from the city to the suburbs, as shown in the table
below.
                        To the next year
  From one year      City (C)  Suburb x (X)  Suburb y (Y)
     City (C)          .85        .07           .08
     Suburb x (X)      .01        .96           .03
     Suburb y (Y)      .01        .02           .97

In 20X0, the city had a population of 120,000, suburb x had a population of 80,000, and suburb y had a population of 50,000. Assuming that the population in the metropolitan area remains constant at 250,000 people,
a. How many people will live in each of the three areas in 20X2?
b. How many people will live in each of the three areas in the long run?

Solution

Let C stand for the city, X for suburb x, and Y for suburb y.

  C: 120,000 --> 120,000/250,000 = 0.48
  X:  80,000 -->  80,000/250,000 = 0.32
  Y:  50,000 -->  50,000/250,000 = 0.20
     250,000                       1.00

Initial state: V(0)cxy = (0.48  0.32  0.20)

The transition matrix:

         C    X    Y
     C  .85  .07  .08
P =  X  .01  .96  .03
     Y  .01  .02  .97

                                          | .85  .07  .08 |
V(1)cxy = V(0)cxy x P = (.48  .32  .20) x | .01  .96  .03 |
                                          | .01  .02  .97 |

V(1)cxy = (.4132  .3448  .2420)

                                  | .85  .07  .08 |
V(2)cxy = (.4132  .3448  .2420) x | .01  .96  .03 |
                                  | .01  .02  .97 |

V(2)cxy = (.3571  .3648  .2781)

Thus, in 20X2, 89,275, 91,200 and 69,525 people will live in the city, suburb x and
suburb y respectively.

b. In the long run: (Vc Vx Vy) x P = (Vc Vx Vy)

               | .85  .07  .08 |
  (Vc Vx Vy) x | .01  .96  .03 | = (Vc Vx Vy)
               | .01  .02  .97 |

  .85c + .01x + .01y = c
  .07c + .96x + .02y = x
  .08c + .03x + .97y = y
  c + x + y = 1

  Rearranging the first three equations:
  -.15c + .01x + .01y = 0
   .07c - .04x + .02y = 0
   .08c + .03x - .03y = 0

  From the first, using x + y = 1 - c:
  -.15c + .01(1 - c) = 0
  -.16c + .01 = 0
  c = .01/.16 = 0.0625

  Substituting c = .0625 and x = (1 - c) - y = .9375 - y into the second:
  .07(.0625) - .04(.9375 - y) + .02y = 0
  .004375 - .0375 + .04y + .02y = 0
  .06y = .033125
  y = 0.5521

  c + x + y = 1
  .0625 + x + .5521 = 1
  x = 1 - 0.6146 = 0.3854

  (Vc  Vx  Vy) = (.0625  .3854  .5521)

In the long run 15,625, 96,350 and 138,025 people will live in the city, suburb x and suburb y respectively.
