
Chapter 1

Linear Programming and Simplex algorithm

1.1 The problem


A linear program (LP) over variables x1 , x2 , . . . , xn consists of the following:

1. a linear function, called the objective function: c1 x1 + c2 x2 + · · · + cn xn .

2. a set of linear constraints, each of the form a1 x1 + a2 x2 + · · · + an xn ≤ b, a1 x1 + a2 x2 + · · · + an xn ≥ b, or a1 x1 + a2 x2 + · · · + an xn = b.

The minimizing linear programming problem (similarly, maximizing) is to find x1 , x2 , . . . , xn ∈ R which satisfy all the linear constraints and at which the objective function is minimized (similarly, maximized). A linear programming problem with any kind of constraints is called an LP in general form. Here are two interesting special forms of LP.

Definition 1.1. A linear programming problem is in standard form if the linear constraints are as follows:

Ax ≤ b
x ≥ 0

Definition 1.2. A linear programming problem is in canonical form if the linear constraints are as follows:

Ax = b
x ≥ 0
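To make the preceding definitions concrete, here is a small Python sketch (the two-variable LP and all its numbers are made up for illustration) that checks whether a candidate point satisfies a list of general-form constraints and evaluates the objective function at it:

```python
# A tiny general-form LP over x1, x2 (illustrative data):
#   minimize 3*x1 + 2*x2
#   subject to  x1 + x2 >= 4,  x1 - x2 <= 2,  x1 >= 0,  x2 >= 0

objective = [3, 2]  # c1, c2

# Each constraint: (coefficients a, relation, right-hand side b)
constraints = [
    ([1, 1], ">=", 4),
    ([1, -1], "<=", 2),
    ([1, 0], ">=", 0),
    ([0, 1], ">=", 0),
]

def is_feasible(x, constraints):
    """Check every linear constraint at the point x."""
    for a, rel, b in constraints:
        lhs = sum(ai * xi for ai, xi in zip(a, x))
        if rel == "<=" and not lhs <= b:
            return False
        if rel == ">=" and not lhs >= b:
            return False
        if rel == "==" and lhs != b:
            return False
    return True

def objective_value(x, c):
    """Evaluate c1*x1 + ... + cn*xn at the point x."""
    return sum(ci * xi for ci, xi in zip(c, x))

x = [1, 3]
print(is_feasible(x, constraints))    # True: (1, 3) satisfies every constraint
print(objective_value(x, objective))  # 9 = 3*1 + 2*3
```

Solving the LP, rather than merely checking a point, is exactly what the Simplex algorithm of this chapter does.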


For this lecture we will assume (unless stated otherwise) that the objective is to minimize the objective function. That is, we want to minimize cT x.

We denote a column vector by a = (a1 , a2 , . . . , an )T , and a row vector by aT = (a1 , a2 , . . . , an ). We denote by R the reals and by Rn the set of all vectors a where aT = (a1 , a2 , . . . , an ) with each ai ∈ R. Usually vectors will be denoted by a, b or c. The exception is: for a real number m ∈ R we denote by m ∈ Rn the vector mT = (m, m, . . . , m) of all m's. The rows of a matrix A are denoted a1 T , a2 T , . . . , am T , where ai T is the ith row vector. The columns of A are denoted A1 , A2 , . . . , An , where Aj is the jth column vector. The jth element of the ith row vector is aij , which is also the ith element of the column Aj . The identity matrix is denoted by I.

1.2 Math background - Convex sets


We say that z ∈ Rn is a convex combination of vectors x, y ∈ Rn if there
exists a λ ∈ [0, 1] such that z = λx + (1 − λ)y.
Definition 1.3. A set X ⊆ Rn is a convex set if for all x, y ∈ X, and all
λ ∈ [0, 1] we have that λx + (1 − λ)y ∈ X.
Here are some examples of convex sets (see also Figures 1.1 and 1.2).
1. A line is a convex set.


Figure 1.1: Convex sets

Figure 1.2: Not convex sets

2. A line segment

3. {x ∈ Rn | Ax = b}

4. {x ∈ Rn | Ax = b and x ≥ 0}

Here are some properties of convex sets (try proving them).

Claim 1.1. The following properties hold for convex sets.

1. Convex sets are closed under intersections.

2. A vector space is a convex set.

A vector x ∈ X is an extreme point of the convex set X if x cannot be represented as a convex combination of two other points in X.
The convex hull of a set X is the least convex set Y which contains X (that is, X ⊆ Y ). For example, the convex hull of two points x, y ∈ R2 is the line segment joining x and y. Figures 1.3 and 1.4 show the convex hulls of point sets X.


Figure 1.3: Convex Hull of the red points.

Figure 1.4: Convex Hull
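Definition 1.3 and example 4 above can be checked numerically. The sketch below (hypothetical data: a single equality constraint x1 + x2 + x3 = 1 with x ≥ 0) verifies that convex combinations of two points of the set stay in the set:

```python
# Two points in {x in R^3 | x1 + x2 + x3 = 1 and x >= 0} (illustrative data)
x = [0.5, 0.5, 0.0]
y = [0.0, 0.2, 0.8]

def convex_combination(x, y, lam):
    """Return lam*x + (1-lam)*y, componentwise."""
    return [lam * xi + (1 - lam) * yi for xi, yi in zip(x, y)]

def in_set(z):
    """Membership test for {z | z1 + z2 + z3 = 1 and z >= 0}."""
    return abs(sum(z) - 1.0) < 1e-9 and all(zi >= -1e-9 for zi in z)

# Every point on the segment between x and y stays in the set,
# exactly as convexity demands.
for k in range(11):
    lam = k / 10
    assert in_set(convex_combination(x, y, lam))
print("all convex combinations stayed in the set")
```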

1.3 Standard form LP - geometric view

1.4 Canonical form LP - algebraic view


Let A′ be an m × (n − m) matrix; that is, the matrix has m rows and n − m columns. Let c′ ∈ Rn−m be a real vector. Here we have n ≥ m.
Consider the LP in standard form: find an x′ ∈ Rn−m which solves

minimize c′T x′
A′ x′ ≤ b
x′ ≥ 0

This can be converted into an LP in canonical form as follows. For the ith inequality constraint a′i T x′ ≤ bi , we introduce a new variable xn−m+i and add the following constraints:

a′i T x′ + xn−m+i = bi
xn−m+i ≥ 0


This gives us the following LP in canonical form

minimize cT x
Ax = b
x≥0

Here c ∈ Rn and cT = (c′T , 0, 0, . . . , 0) (this is an abuse of notation which means that the last m elements of the vector c are 0s and the first n − m elements correspond to those of c′). The vector x ∈ Rn , and the matrix A has the following block form:

A = [ A′  I ]
The set of all feasible points (called the feasible set) of this LP in the
canonical form is

FC = {x ∈ Rn | Ax = b and x ≥ 0}

This set is closely related to the feasible set FS of the LP in standard form. We now describe a mapping f : FS → FC . Let x′ ∈ FS . Consider the vector x ∈ Rn where xj , for j ≤ n, is given by

xj = x′j            if j ≤ n − m
xj = bi − a′i T x′  if j = n − m + i and i ≤ m

The first claim is

Claim 1.2. The above vector x ∈ Rn belongs to FC .

Proof. This is an exercise.

Therefore, the above mapping takes every vector x′ ∈ FS to a vector x ∈ FC . Let us represent this mapping by f : FS → FC .
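The mapping f can be written out directly. The sketch below (illustrative 2 × 2 data A′, b; the helper names are ours) appends the slack values bi − a′i T x′ to a standard-form feasible point and verifies that the result satisfies Ax = b and x ≥ 0:

```python
# Standard-form data (illustrative): A' is 2 x 2, so m = 2 and n = 4.
A_prime = [[1, 2],
           [3, 1]]
b = [8, 9]

def f(x_prime):
    """Map a standard-form feasible point x' to the canonical-form
    point x = (x', slacks), where slack_i = b_i - a'_i . x'."""
    slacks = [bi - sum(a * xp for a, xp in zip(row, x_prime))
              for row, bi in zip(A_prime, b)]
    return x_prime + slacks

def check_canonical(x):
    """Verify A x = b and x >= 0 for A = [A' | I]."""
    for i, (row, bi) in enumerate(zip(A_prime, b)):
        lhs = sum(a * xj for a, xj in zip(row, x)) + x[len(row) + i]
        if lhs != bi:
            return False
    return all(xj >= 0 for xj in x)

x_prime = [2, 2]            # feasible: 1*2+2*2 = 6 <= 8 and 3*2+1*2 = 8 <= 9
x = f(x_prime)
print(x)                    # [2, 2, 2, 1]: slacks 8-6 = 2 and 9-8 = 1
print(check_canonical(x))   # True
```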

Claim 1.3. Let V ⊆ FS be the set of extreme points in FS . Then f (V ) is the set of extreme points in FC .

Proof. This is an exercise.


We will now give an alternate (algebraic) definition of the extreme points of FC . First, a few definitions. For a vector x ∈ Rn and a set B ⊆ {1, 2, . . . , n} we write xB = 0 to mean xj = 0 for all j ∉ B. For example, consider xT = (2, 3, 0, 0) ∈ R4 . Then x{1,2} = 0 since x3 = x4 = 0.
We say that x ∈ FC is a basic feasible point if there exists a set B ⊆ {1, 2, . . . , n} such that the set of column vectors {Aj | j ∈ B} forms a basis for the column space of A (denoted by C(A)) and xB = 0. In other words, the vector b is represented as a non-negative linear combination of a basis of A. The set of all basic feasible points is called the basic feasible set (BFS for short):

BFS = {x ∈ FC | x is a basic feasible point}
The following assumption will be in force henceforth.
Assumption I: The rows of A are independent. (Recall that the number of rows is m.)

Exercise 1.1. Show that a basis for the column space of A has m vectors.

Exercise 1.2. Given a matrix whose rows are dependent, give an equivalent LP with independent rows.

From the above assumption, we note that the number of non-zeros in a vector x ∈ BFS is at most m. The reason it is at most m and not exactly m is that the coefficient of a basis vector can also be 0. Therefore the number of zeros is at least n − m.
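For a small instance the BFS can be enumerated exactly as the definition suggests. The sketch below (illustrative 2 × 4 data, exact rational arithmetic) tries every m-subset of columns, solves AB xB = b when the columns form a basis, and keeps the non-negative solutions:

```python
from fractions import Fraction
from itertools import combinations

# Canonical-form data (illustrative): A is 2 x 4, so m = 2 and n = 4.
A = [[1, 2, 1, 0],
     [3, 1, 0, 1]]
b = [8, 9]
n, m = 4, 2

def basic_points():
    """Enumerate m-subsets of columns; when the subset is a basis,
    solve A_B x_B = b by Cramer's rule (exact arithmetic) and keep
    the solution if it is non-negative, i.e. a basic feasible point."""
    for B in combinations(range(n), m):
        u = [Fraction(A[0][B[0]]), Fraction(A[1][B[0]])]
        v = [Fraction(A[0][B[1]]), Fraction(A[1][B[1]])]
        det = u[0] * v[1] - u[1] * v[0]
        if det == 0:
            continue                  # columns are dependent: not a basis
        x = [Fraction(0)] * n
        x[B[0]] = (b[0] * v[1] - b[1] * v[0]) / det
        x[B[1]] = (u[0] * b[1] - u[1] * b[0]) / det
        if all(xj >= 0 for xj in x):
            yield B, x

for B, x in basic_points():
    print(B, [str(xj) for xj in x])
```

For this instance four of the six column pairs yield a basic feasible point; the remaining two solve to a vector with a negative entry and are discarded.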
The next lemma is important.
Lemma 1.4. The BFS form the extreme points of FC .
Proof. We need to show both directions.
First, we show that an extreme point of FC is a BFS.
We now show the other direction of the lemma; that is, that a BFS is an extreme point of FC . This is left as an exercise.

BFS are the most important points in the LP. We will soon see that the optimum occurs at one of the BFS.
Special Case: The feasible set FC is bounded.
We say that a set X ⊆ Rn is bounded if there exists an M ∈ R such that for all x ∈ X we have x ≤ M, where M ∈ Rn is the vector of all M's. Convex sets which are closed and bounded satisfy a nice property.

Claim 1.5. Let X be a closed bounded convex set. Then X is equal to the convex hull of its extreme points.


Therefore, if FC is bounded, the convex hull of BFS gives us FC . The following claim captures the importance of BFS in bounded sets. It says that if there is an optimal solution to the LP, then it occurs at one of the points in BFS.

Claim 1.6. Let the feasible set be bounded. If there is an optimal solu-
tion, then it occurs at a BFS.

Proof. Let x be an optimal solution. That is, cT x = min{cT y | y ∈ FC }. In other words,

∀y ∈ FC , cT y ≥ cT x    (1.1)

We show that there exists an x∗ ∈ BFS such that cT x∗ = cT x. Let us assume x ∉ BFS. From Lemma 1.4 we know that x is not an extreme point and hence can be written as a convex combination of vectors zi ∈ BFS (using Claim 1.5). That is, x = λ1 z1 + λ2 z2 + · · · + λk zk where λ1 + λ2 + · · · + λk = 1. Let us assume, without loss of generality, that cT z1 ≤ cT zi for all i ≤ k. Therefore

cT x ≥ (λ1 + λ2 + · · · + λk ) cT z1 = cT z1    (1.2)

From Equations 1.1 and 1.2 we have that cT z1 = cT x, and therefore z1 ∈ BFS is also an optimum point.

1.5 Simplex algorithm


With the intuition that the optimum occurs at one of the BFS, a first algorithm would be to check all the BFS and return the optimal value. There is a better algorithm which, although in the worst case as bad as this naive one, is very good in practice. The algorithm is called Simplex, and the fundamental idea is to start from a BFS and keep moving to a neighbouring vertex which is better in the objective function. Before we formally see the algorithm, let us define what neighbouring BFS are.
We say that two bases B, B′ ⊆ {A1 , A2 , . . . , An } are neighbours if |B ∩ B′| = m − 1. That is, they share exactly m − 1 columns; each has one column which is not in the other. We say that x, y ∈ BFS are neighbours if there exist two bases B, B′ which are neighbours such that the bases associated with x and y are B and B′ respectively. In other words, there are at least n − m − 1 locations where both x and y are zero.
The Simplex algorithm is given in Algorithm 1.


Algorithm 1 Simplex algorithm


Find an initial BFS x
while ∃y which is a neighbour of x and cT y < cT x do
x=y
end while
return x

1.5.1 Neighbouring BFS


Our first algorithmic challenge is to identify, given an x ∈ BFS, a neighbouring y ∈ BFS with cT y < cT x; that is, a neighbour whose cost is less than the cost at x. Let B be the basis associated with the vector x and N the rest of the column vectors of A. We can always re-arrange the columns so that the first m columns form B. We also denote the various components of A, x, c in block form:

A = [ AB  AN ] ,  x = (xB , xN ) ,  c = (cB , cN )

Therefore, we can write Ax = b equivalently as AB xB + AN xN = b, and the objective function as cT x = cTB xB + cTN xN . Also note that xN = 0n−m .
Let x be a BFS corresponding to the basis B. Our plan is to find a basis B′ which is a neighbour of B. Suppose we want to include the column Aj in B′. This means we will have to remove some other column from B. Without loss of generality, let us assume B = {A1 , A2 , . . . , Am } and j > m. Since B forms a basis, there exist αi 's such that

Aj = α1 A1 + α2 A2 + · · · + αm Am    (1.3)

We also have that

b = x1 A1 + x2 A2 + · · · + xm Am    (1.4)

Assumption: We will assume that xi > 0 for all i ≤ m. The tricky degenerate case, where some xi = 0, is skipped in these lecture notes.


Consider the following equation (we fix θ later): Eq. 1.4 − θ · Eq. 1.3.

b = (x1 − θα1 )A1 + (x2 − θα2 )A2 + · · · + (xm − θαm )Am + θAj

The neighbouring BFS to x is now given by y, where

yi = xi − θαi   if i ≤ m
yi = θ          if i = j
yi = 0          otherwise

There are two cases to be considered now.

Case I: There is at least one αi > 0. In this case, take θ as follows:

θ = min{ xi / αi | αi > 0 }

Say θ = xk / αk . Then yk = 0 and all other yi ≥ 0 (for αi ≤ 0 we have yi ≥ xi > 0, and for αi > 0 the choice of θ guarantees yi ≥ 0). Moreover, y is a BFS with basis B′ = (B \ {Ak }) ∪ {Aj }. We can now check whether cT y < cT x and return y as a "good" neighbour if the cost decreases. Otherwise we find another neighbour and continue this procedure.
Case II: All αi ≤ 0. In this case every yi increases (or stays the same) as we increase θ, so we can make the yi as large as we want; note that this cannot happen when FC is bounded. Now look at the cost function cT y as θ increases: it either increases or decreases. If it is increasing, we are not interested; we skip this choice of Aj and go on to find another neighbour of x. If it is decreasing, we can decrease cT y as much as we want, and therefore there is no optimum value.
This gives an algorithm to find a neighbouring BFS.
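The choice of θ in Case I, and the detection of Case II, amount to a min-ratio computation. Here is a sketch with hypothetical numbers:

```python
def choose_theta(x_basic, alpha):
    """Min-ratio test: given the basic values x_i > 0 and the
    coefficients alpha_i of Eq. 1.3, return (theta, k) where k is the
    index of the leaving column (Case I), or None if all alpha_i <= 0
    (Case II: theta can grow without bound)."""
    candidates = [(xi / ai, i)
                  for i, (xi, ai) in enumerate(zip(x_basic, alpha))
                  if ai > 0]
    if not candidates:
        return None          # Case II
    return min(candidates)   # Case I: smallest ratio, with its index

# Case I example: theta = min(6/2, 4/1, 3/3) = 1, leaving index 2
print(choose_theta([6, 4, 3], [2, 1, 3]))    # (1.0, 2)

# Case II example: no positive alpha
print(choose_theta([6, 4, 3], [-1, 0, -2]))  # None
```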

1.5.2 Finding an initial BFS


1.5.3 Correctness of Simplex algorithm
Showing the following theorem gives us the correctness of the Simplex algorithm.

Theorem 1.7. Let x be a BFS such that for all neighbours y ∈ BFS we have
cT y ≥ cT x. Then x is an optimal solution.

Proof. First we understand the relationship between x and its neighbours. Let B be the basis of x and let z be a neighbour with basis B′ = (B \ {Ak }) ∪ {Aj }. We will use the matrix notation given below.

A = [ AB  AN ] ,  x = (xB , xN ) ,  c = (cB , cN )

Since xN = 0, we have that AB xB = b. Since Aj is in the basis for z, we have that AB zB + zj Aj = b. Subtracting, we have AB xB − AB zB − zj Aj = 0. Therefore,

xB − zB = zj A−1B Aj

Now let us calculate the cost difference at the two points:

0 ≥ cT x − cT z = cTB (xB − zB ) − cj zj = zj cTB A−1B Aj − cj zj

Since zj > 0, we have the following inequality for all j ∉ B:

cj − cTB A−1B Aj ≥ 0    (1.5)

Now we show that x is an optimal point. Assume not, and let y have a better objective value. That is, cTB yB + cTN yN = cT y < cT x = cTB xB . Therefore,

cTB (yB − xB ) + Σj∉B cj yj < 0    (1.6)

We know that 0 = A(y − x) = AB (yB − xB ) + Σj∉B yj Aj . Therefore,

yB − xB = − Σj∉B yj A−1B Aj    (1.7)

Substituting Equation 1.7 into Equation 1.6 gives us

Σj∉B (cj − cTB A−1B Aj ) yj < 0

This contradicts Equation 1.5: every yj ≥ 0, and by Equation 1.5 every coefficient cj − cTB A−1B Aj ≥ 0, so each term of the sum is non-negative and the sum cannot be negative.
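The optimality criterion of Equation 1.5 is easy to evaluate numerically. Below is a minimal sketch (illustrative data; AB is taken to be the identity so that its inverse is trivial) computing reduced costs for two hypothetical non-basic columns:

```python
# Illustrative data: A_B = [[1, 0], [0, 1]], so A_B^{-1} = I, and c_B = [2, 3].
AB_inv = [[1, 0],
          [0, 1]]
cB = [2, 3]

def reduced_cost(cj, Aj):
    """Compute c_j - c_B^T A_B^{-1} A_j for a non-basic column A_j."""
    ABinv_Aj = [sum(AB_inv[i][k] * Aj[k] for k in range(2)) for i in range(2)]
    return cj - sum(cB[i] * ABinv_Aj[i] for i in range(2))

# Two hypothetical non-basic columns with their costs:
print(reduced_cost(6, [1, 1]))  # 6 - (2+3) = 1: non-negative, so x stays optimal
print(reduced_cost(4, [1, 1]))  # 4 - (2+3) = -1: bringing this column in improves the cost
```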


1.6 Matching using LP


A bipartite graph G is represented by G = (U, V, E), where U ∪ V is the set of vertices and the edges are given by E ⊆ U × V . A matching M ⊆ E of a bipartite graph G is a set of edges such that for any vertex v of G, at most one edge incident on v is in M . In the maximum matching problem, given a bipartite graph, we are interested in finding a matching of highest cardinality. We reduce the problem to an LP as follows. Without loss of generality we assume that the cardinality of U is equal to the cardinality of V (otherwise add some vertices with no edges); call this common cardinality n.
We introduce variables xij for all i ∈ U and j ∈ V . We want a solution to the LP where xij ∈ {0, 1} for all i and j. We can view any assignment of the xij 's to {0, 1} as a set of edges X = {(i, j) ∈ E | xij = 1}. Consider the following LP.

maximize Σ(i,j)∈E xij

∀i ∈ U : Σj∈V xij = 1
∀j ∈ V : Σi∈U xij = 1
∀i, j : 0 ≤ xij ≤ 1

Unfortunately, an optimal solution to this LP may not have integer values for the xij . The next claim shows that there is always an integer optimal solution associated with a maximum matching.
Claim 1.8. If M ⊆ E is a maximum matching, then there is an optimal solution with integer values in which xij = 1 for all (i, j) ∈ M .
Proof. We give the assignment to the xij 's as follows. Set xij = 1 for every (i, j) ∈ M . Let {u1 , u2 , . . . , uk } ⊆ U and {v1 , v2 , . . . , vk } ⊆ V be the sets of vertices not matched in M . Set xui vi = 1 for each i ≤ k, and set all remaining variables to 0. Note that there is no edge between ui and vi ; otherwise it could have been added to M , contradicting maximality.
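The assignment from the proof can be checked programmatically on a small example (illustrative graph; the helper name claim_assignment is ours):

```python
# Illustrative bipartite graph with U = V = {0, 1, 2} (n = 3);
# vertex 2 on each side is isolated.
n = 3
E = {(0, 0), (0, 1), (1, 1)}
M = {(0, 0), (1, 1)}          # a maximum matching of size 2

def claim_assignment(M):
    """Build the integer point from the proof of Claim 1.8:
    x_ij = 1 on M, then pair up the unmatched vertices of U with the
    unmatched vertices of V (these pairs need not be edges of E)."""
    x = {(i, j): 0 for i in range(n) for j in range(n)}
    for (i, j) in M:
        x[(i, j)] = 1
    unmatched_U = [i for i in range(n) if all(u != i for (u, v) in M)]
    unmatched_V = [j for j in range(n) if all(v != j for (u, v) in M)]
    for i, j in zip(unmatched_U, unmatched_V):
        x[(i, j)] = 1
    return x

x = claim_assignment(M)
# Feasibility: every row sum and every column sum equals 1.
assert all(sum(x[(i, j)] for j in range(n)) == 1 for i in range(n))
assert all(sum(x[(i, j)] for i in range(n)) == 1 for j in range(n))
# The objective counts only real edges, giving |M|.
print(sum(x[e] for e in E))   # 2
```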
It is also easy to see that if we have an integer optimal solution, then we have a maximum matching. We now deal with the problem of non-integer solutions: we show that from a non-integer optimal solution we can extract an integer one. In practice this won't be necessary: due to some properties of the linear constraints in the matching LP, the optimal solution is always integral (we skip this linear-algebraic argument here). We give a method to extract an integer solution of the same optimal value from a rational optimal solution.
Let there be K fractional values in the solution. We reduce the number of fractional values by at least one without changing the optimal value. Let u ∈ U be a vertex with an edge of fractional value, say (u, v). Take that edge. Since the values at v sum to 1, v must have another fractional edge; we take that edge, leading to another vertex with another fractional edge. We continue doing this until we hit a cycle. Let us concentrate on the cycle, and assume it starts from u ∈ U . We subtract θ (a small value, fixed below) from xuv (we assume (u, v) is an edge of the cycle) and add θ to the other cycle edge at v. This is continued alternately: for every cycle edge traversed from U to V we subtract θ, and for every cycle edge traversed from V to U we add θ. Note that θ is chosen smaller than all the xij 's in the cycle.

Claim 1.9. The optimal value of the LP does not change under this new assignment.

Proof. Suppose the optimal value increases. This is a contradiction, since x is an optimal solution and the value cannot be increased. So suppose the optimal value decreases. Then consider the alternate assignment in which we replace θ by −θ. By the previous reasoning, the value would then increase, which is again a contradiction.
We can therefore choose θ such that one of the xij 's becomes an integer (either 0 or 1) while the rest remain inside the feasible region. This reduces the number of fractional values by at least one. Continuing the process leads to an optimal solution with all integer values.
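The alternating ±θ update can be illustrated on a 4-cycle with fractional values (hypothetical numbers, exact arithmetic via fractions):

```python
from fractions import Fraction as F

# A fractional solution restricted to the 4-cycle u1 - v1 - u2 - v2 - u1
# (hypothetical values).
x = {("u1", "v1"): F(3, 10), ("u2", "v1"): F(7, 10),
     ("u2", "v2"): F(3, 10), ("u1", "v2"): F(7, 10)}

def adjust(x, theta):
    """Alternate -theta / +theta around the cycle: edges traversed
    U -> V lose theta, edges traversed V -> U gain theta."""
    y = dict(x)
    y[("u1", "v1")] -= theta   # U -> V
    y[("u2", "v1")] += theta   # V -> U
    y[("u2", "v2")] -= theta   # U -> V
    y[("u1", "v2")] += theta   # V -> U
    return y

theta = F(3, 10)               # the smallest value on a "minus" edge
y = adjust(x, theta)

# The sum of values at every vertex is unchanged (feasibility holds),
# and so is the total, i.e. the objective value.
for v in ("u1", "u2", "v1", "v2"):
    assert (sum(val for e, val in x.items() if v in e)
            == sum(val for e, val in y.items() if v in e))
assert sum(x.values()) == sum(y.values())
# With this theta, the cycle's variables all became integral:
assert y[("u1", "v1")] == 0 and y[("u2", "v1")] == 1
print("cycle adjustment preserved feasibility and the objective")
```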

