09 - APS - Greedy Method
Greedy method
Damjan Strnad
Greedy method
● the greedy method (or strategy) is typically used for problems that require selecting a subset of n inputs that satisfies given constraints
● each such subset represents a feasible solution
● we are often interested in an optimal solution, i.e., a feasible solution that minimizes or maximizes a given objective function
● the greedy strategy constructs the solution incrementally: in each step we find the candidate element that contributes the most to the objective function value and accept it if the solution extended by that element remains feasible
● a spanning tree with minimal weight is called a minimum spanning tree (MST)
● finding the MST is a common practical problem (e.g., connecting cities with a road, electrical, or water network of minimal total length)
Prim's algorithm
● Prim's algorithm is a greedy algorithm for finding the minimum spanning tree
● the solution is built incrementally by adding one edge of the minimum spanning tree in each step*:
– the first added edge is the edge (k,l) with minimal weight in G
– for each node j not yet in the MST we maintain a value r_j, the index of the closest node already in the MST (for nodes in the MST we set r_j = 0)
– in each iteration we add to the tree the edge to the node j for which:
● r_j ≠ 0 (i.e., node j is not already in the MST)
● j = argmin_i {w(i, r_i)} (i.e., node j is the node closest to any of the nodes already in the MST)
Prim's algorithm
PRIM(G=(V,E),w,n,v,T)
  select edge (k,l) ∈ E with minimal weight
  T ← {(k,l)}               % add edge (k,l) to MST
  v ← w(k,l)                % weight of MST
  for i ← 1 to n do         % set the closer of k or l as closest to each vertex
    if w(i,l) < w(i,k) then
      r[i] ← l
    else
      r[i] ← k
  r[k] ← r[l] ← 0           % vertices k and l are already included
  for p ← 1 to n-2 do
    find j such that r[j] ≠ 0, j = argmin_i{w(i,r[i])} and w(j,r[j]) < ∞
    T ← T ∪ {(j,r[j])}      % add edge (j,r[j]) to MST
    v ← v + w(j,r[j])       % update weight of MST
    r[j] ← 0                % mark that node j was added to MST
    for h ← 1 to n do       % update indices of closest node in MST
      if r[h] ≠ 0 and w(h,r[h]) > w(h,j) then
        r[h] ← j
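The pseudocode translates almost line for line into Python. The sketch below is my own (0-based indices, with None playing the role of r_j = 0); it runs Prim's algorithm on the 5-node example weight matrix shown in these slides.

```python
import math

INF = math.inf

def prim(W, n):
    """Prim's algorithm on an n x n symmetric weight matrix W.
    Returns (list of MST edges, total MST weight)."""
    # select the edge (k, l) with minimal weight
    k, l = min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda e: W[e[0]][e[1]])
    T, total = [(k, l)], W[k][l]
    # r[i] = index of the tree node closest to i (None once i is in the tree)
    r = [l if W[i][l] < W[i][k] else k for i in range(n)]
    r[k] = r[l] = None
    for _ in range(n - 2):
        # greedy step: node j outside the tree that is closest to the tree
        j = min((i for i in range(n) if r[i] is not None),
                key=lambda i: W[i][r[i]])
        T.append((j, r[j]))
        total += W[j][r[j]]
        r[j] = None
        for h in range(n):  # update closest-in-tree indices
            if r[h] is not None and W[h][r[h]] > W[h][j]:
                r[h] = j
    return T, total

# the example weight matrix from these slides (INF = no edge)
W = [[0, 30, 15, 6, INF],
     [30, 0, 25, INF, 4],
     [15, 25, 0, 14, 20],
     [6, INF, 14, 0, 12],
     [INF, 4, 20, 12, 0]]
T, total = prim(W, 5)
print(T, total)  # total MST weight is 36
```

The O(n²) running time of this matrix-based version matches the pseudocode; with adjacency lists and a heap it can be reduced to O(|E| log |V|).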
[Figure: example graph on 5 nodes with weight matrix]

      [  0  30  15   6   ∞ ]
      [ 30   0  25   ∞   4 ]
W  =  [ 15  25   0  14  20 ]
      [  6   ∞  14   0  12 ]
      [  ∞   4  20  12   0 ]
[Figure: steps of Prim's algorithm on the example graph. After selecting the first edge (2,5), the closest-node indices are:]

i    1  2  3  4  5
r_i  2  0  5  5  0
Shortest-path problem
● in the shortest-path problem we search for shortest paths between nodes in a weighted directed graph G=(V,E)
● edge weights are defined by a weight function w: E → ℝ and represent distances, costs, times, ...
● the weight of a path p = 〈v_0, v_1, ..., v_k〉 is the sum of its edge weights:

  w(p) = Σ_{i=1}^{k} w(v_{i−1}, v_i)

● the shortest-path weight is defined by:

  δ(u,v) = min { w(p) : p is a path from u to v },  if a path from u to v exists
  δ(u,v) = ∞,  otherwise

● a shortest path from node u to node v is any path p with weight w(p) = δ(u,v)
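The path-weight sum can be illustrated directly; the edge-weight dictionary below is a made-up example, not a graph from these slides.

```python
def path_weight(w, p):
    """w(p) = sum of w(v[i-1], v[i]) for i = 1..k along path p = <v0,...,vk>."""
    return sum(w[(p[i - 1], p[i])] for i in range(1, len(p)))

# hypothetical edge weights: the detour through x is shorter than the direct edge
w = {("u", "x"): 3, ("x", "v"): 4, ("u", "v"): 10}
print(path_weight(w, ["u", "x", "v"]))  # 7
print(path_weight(w, ["u", "v"]))       # 10
```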
Shortest-path problem
● the shortest-path problem on graphs in which all edges have equal weight can be solved by breadth-first search
● here we consider the single-source shortest paths problem: we search for shortest paths from a source node s to all other graph nodes
● other problem variants are:
– single-destination shortest paths (exchange the source and destination and invert the edge directions)
– single-pair shortest path (no known algorithm is asymptotically faster than the algorithms for the single-source problem)
– all-pairs shortest paths (can be solved by running a single-source algorithm from every node, but faster variants exist)
Shortest-path problem
● edge weights can be negative
● if graph G=(V,E) contains no negative-weight cycle reachable from the source s, then the shortest-path weights δ(s,v) to nodes v∈V are well defined
● if some path from s to v contains a negative-weight cycle, the shortest path is undefined and we set δ(s,v) = −∞ (the path can always be "shortened" by traversing the cycle once more)
● a shortest path also cannot contain a positive-weight cycle, because removing the cycle would yield a shorter path
● in a graph with |V| nodes a shortest path can therefore contain at most |V|−1 edges
Shortest-path problem
● optimal-substructure property: a shortest path between two nodes contains other shortest paths within it:
  If 〈v_1, v_2, ..., v_k〉 is a shortest path from node v_1 to node v_k, then every subpath 〈v_i, ..., v_j〉 (1 ≤ i ≤ j ≤ k) is a shortest path from node v_i to node v_j.
● the optimal-substructure property is exploited by various shortest-paths algorithms (both greedy methods and the dynamic programming strategy)
Shortest-path problem
● during the search for shortest paths from source s, we maintain for each node v∈V:
– the shortest-path estimate d[v], an upper bound on the weight of the shortest path from s to v
– its predecessor parent[v]: after the algorithm terminates, we can reconstruct the shortest path from s to v by following the chain of predecessors backwards from v
● the estimates and predecessors are initialized by the procedure INITIALIZATION(G,s):

INITIALIZATION(G,s)
  for each node v ∈ V do
    d[v] ← ∞
    parent[v] ← NIL
  d[s] ← 0
Shortest-path problem
● relaxation is a technique by which we iteratively lower the upper bound on a path weight until it equals the shortest-path weight
● by relaxing the edge (u,v) we check whether we can improve the shortest path from s to v by going through u; if so, we update d[v] and parent[v]

RELAXATION(u,v,w)
  if d[v] > d[u] + w(u,v) then
    d[v] ← d[u] + w(u,v)
    parent[v] ← u

● examples of a "successful" and an "unsuccessful" relaxation of edge (u,v):
[Figure: with d[u] = 10 and w(u,v) = 3, the estimate d[v] = 15 is lowered to 13 (successful), while d[v] = 11 remains unchanged (unsuccessful)]
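The two cases can be replayed in a few lines of Python; the sketch below is my own, with weights stored in a dictionary keyed by edge.

```python
def relaxation(u, v, w, d, parent):
    """Relax edge (u, v): try to improve the path to v by going through u."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        parent[v] = u

# "successful" relaxation: d[v] drops from 15 to 10 + 3 = 13
d, parent = {"u": 10, "v": 15}, {"u": None, "v": None}
relaxation("u", "v", {("u", "v"): 3}, d, parent)
print(d["v"], parent["v"])  # 13 u

# "unsuccessful" relaxation: 11 <= 10 + 3, so nothing changes
d, parent = {"u": 10, "v": 11}, {"u": None, "v": None}
relaxation("u", "v", {("u", "v"): 3}, d, parent)
print(d["v"], parent["v"])  # 11 None
```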
Shortest-path problem
● we will describe two algorithms for finding single-source shortest paths in a directed weighted graph; both first perform initialization and then repeatedly execute edge relaxations:
– Dijkstra's algorithm uses a greedy strategy and relaxes each edge only once; the weights of all edges must be non-negative
– the Bellman-Ford algorithm does not use a greedy strategy and relaxes each edge multiple times; edge weights can be negative
Dijkstra's algorithm
● the algorithm maintains a set S of nodes whose shortest path is already determined, i.e., d[v] = δ(s,v) for each v∈S
● in each step the algorithm:
– selects the node u∈V−S with minimal value d[u] (the greedy move)
– adds u to S and relaxes all edges starting in u
● the set V−S, ordered by d, is maintained in a priority queue Q
● the implementation assumes that the graph is described by adjacency lists

DIJKSTRA(G,w,s)
  INITIALIZATION(G,s)
  S ← ∅
  Q ← V
  while Q ≠ ∅ do
    u ← EXTRACT-MINIMUM(Q)
    S ← S ∪ {u}
    for each v ∈ Adj[u] do
      RELAXATION(u,v,w)
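A Python sketch of the pseudocode, using a binary heap (heapq) with lazy deletion of stale entries in place of an explicit EXTRACT-MINIMUM over V−S. The adjacency lists in the example are my own and not necessarily the graph drawn in these slides.

```python
import heapq
import math

def dijkstra(adj, s):
    """Dijkstra's algorithm; adj[u] is a list of (v, weight) pairs with
    non-negative weights. Returns the dicts d and parent."""
    d = {v: math.inf for v in adj}
    parent = {v: None for v in adj}
    d[s] = 0
    Q = [(0, s)]          # priority queue ordered by d
    S = set()             # nodes with a final shortest-path weight
    while Q:
        _, u = heapq.heappop(Q)    # EXTRACT-MINIMUM
        if u in S:
            continue               # stale queue entry, skip it
        S.add(u)
        for v, wuv in adj[u]:      # relax all edges leaving u
            if d[v] > d[u] + wuv:
                d[v] = d[u] + wuv
                parent[v] = u
                heapq.heappush(Q, (d[v], v))
    return d, parent

# hypothetical example graph with 5 nodes
adj = {1: [(2, 9), (5, 4)], 2: [(3, 6), (4, 4)],
       3: [], 4: [(3, 3)], 5: [(2, 1), (4, 7)]}
d, parent = dijkstra(adj, 1)
print(d)  # {1: 0, 2: 5, 3: 11, 4: 9, 5: 4}
```

Because stale entries are skipped rather than decreased in place, each node may appear in the heap several times, giving O(|E| log |V|) time overall.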
[Figure: Dijkstra's algorithm on an example graph with 5 nodes; initially d[1] = 0 for the source and d = ∞ for all other nodes]
● d[5] > d[1] + w(1,5) (∞ > 0 + 4), therefore we set d[5] = d[1] + w(1,5) = 0 + 4 = 4 and parent[5] = 1*
[Figure: node 5 is added to S and its outgoing edges are relaxed; the estimate d[2] drops from 9 to 5]
● d[4] > d[2] + w(2,4) (∞ > 5 + 4), therefore we set d[4] = d[2] + w(2,4) = 5 + 4 = 9 and parent[4] = 2*
[Figure: final state of Dijkstra's algorithm on the example graph; the resulting shortest-path estimates are d[1] = 0, d[2] = 5, d[3] = 10, d[4] = 9, d[5] = 4]
Bellman-Ford algorithm
● the Bellman-Ford algorithm solves the single-source shortest paths problem in the general case, when weights can be negative
● for a given weighted directed graph G=(V,E) with source s and weight function w: E → ℝ, the Bellman-Ford algorithm returns a Boolean value which signals whether a cycle with negative weight is reachable from the source:
– if such a cycle exists, the algorithm terminates without a solution
– if no such cycle exists, the algorithm finds the shortest paths and their weights
Bellman-Ford algorithm
● basic idea: because a shortest path in the graph contains at most |V|−1 edges, relax every edge that many times
● the algorithm uses relaxation to gradually lower the upper bounds on the path weights to all nodes v until they reach the shortest-path weights δ(s,v)

BELLMAN-FORD(G,w,s)
  INITIALIZATION(G,s)
  for i ← 1 to |V|-1 do
    for each edge (u,v) ∈ E do
      RELAXATION(u,v,w)
  for each edge (u,v) ∈ E do
    if d[v] > d[u] + w(u,v) then
      return FALSE   % a negative-weight cycle is reachable from s
  return TRUE        % the shortest paths are well defined
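The pseudocode can be sketched in Python as follows; the example graph is my own (chosen to contain negative edges but no negative cycle, so the algorithm returns True).

```python
import math

def bellman_ford(V, E, s):
    """Bellman-Ford; E is a list of (u, v, weight) triples. Returns
    (d, parent, True), or (None, None, False) if a negative-weight
    cycle is reachable from s."""
    d = {v: math.inf for v in V}
    parent = {v: None for v in V}
    d[s] = 0
    for _ in range(len(V) - 1):          # |V|-1 passes over all edges
        for u, v, w in E:
            if d[u] + w < d[v]:          # RELAXATION(u, v, w)
                d[v] = d[u] + w
                parent[v] = u
    for u, v, w in E:                    # check for a reachable negative cycle
        if d[u] + w < d[v]:
            return None, None, False
    return d, parent, True

# hypothetical graph: negative edges, but no negative cycle
V = [1, 2, 3, 4, 5]
E = [(1, 2, 6), (1, 4, 7), (2, 3, 5), (2, 4, 8), (2, 5, -4),
     (3, 2, -2), (4, 3, -3), (4, 5, 9), (5, 1, 2), (5, 3, 7)]
d, parent, ok = bellman_ford(V, E, 1)
print(ok, d)  # True {1: 0, 2: 2, 3: 4, 4: 7, 5: -2}
```

Each pass relaxes all |E| edges, so the running time is O(|V|·|E|).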
[Figures: step-by-step execution of the Bellman-Ford algorithm on an example 5-node graph with negative edge weights; the estimates start at d[s] = 0 and ∞ elsewhere and converge to the shortest-path weights after |V|−1 passes]