09 - APS - Greedy Method

The document discusses the greedy method, particularly in the context of solving optimization problems like the simple knapsack problem and finding a minimum spanning tree (MST) using Prim's algorithm. It explains how the greedy strategy selects elements incrementally to maximize or minimize an objective function while ensuring feasibility. The document also provides examples and algorithms for implementing these concepts.


Algorithms & data structures

Greedy method
Damjan Strnad
2

Greedy method

greedy method (or strategy) is typically used for problems
that require selecting a subset of n inputs that satisfies given
constraints

each such subset represents a feasible solution

we are often interested in an optimal solution, i.e., a feasible
solution that minimizes or maximizes a given objective
function

the greedy strategy constructs the solution incrementally:
in each step we find the next element of the solution that
contributes the most to the objective function value and
accept it if the solution extended by that element remains
feasible
3

Simple knapsack problem



given are n objects we want to pack into a knapsack of
volume V

for each object i we know its volume vi (0<vi≤V) and
value ci (ci>0)

we can arbitrarily cut the objects, such that xi (0≤xi≤1)
denotes the fraction of object i in the knapsack

we want to fill the knapsack such that its content will
have maximum value, i.e., we want to maximize the
expression:
∑i=1..n ci⋅xi   with restriction   ∑i=1..n vi⋅xi ≤ V
4

Simple knapsack problem



greedy method: the knapsack content will have maximum
value if we put the objects into it in the order of decreasing
relative value per volume unit, i.e., the ratio ci/vi
we sort the objects so it holds: ci/vi ≥ ci+1/vi+1 for i=1,…,n−1

while possible, we put whole objects into the knapsack

we cut the last object that cannot be added as a whole
SIMPLE-KNAPSACK(V,n,v,c,x) % assume objects are already sorted by c/v
for i ← 1 to n do
x[i] ← 0 % initialize empty knapsack
y ← V % y is the free space in the knapsack
for i ← 1 to n do
if v[i] ≤ y then % the whole object fits in the knapsack
x[i] ← 1
y ← y – v[i]
else
x[i] ← y / v[i] % cut fraction of the object that fits
exit
5

Simple knapsack problem



example: we have 3 objects with values c = [10, 14, 20] and
volumes v = [4, 7, 5]. The knapsack volume is V=8.
– we calculate relative object values: c/v=[2.5, 2, 4]
– reorder objects: c = [20, 10, 14] and v = [5, 4, 7]
– initialization: x1=x2=x3=0, the knapsack free space is y=8
– the first object fits whole in the knapsack (v1<y, 5<8): x1=1, the
remaining free space is y=8-5=3
– the second object does not fit whole (v2>y, 4>3), so we cut it in
fraction x2=y/v2=3/4=0.75
– the problem solution is vector x=[1, 0.75, 0] (we need to
remember the object order has changed)

algorithm time complexity is T(n)=O(n) if we do not consider
sorting
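The SIMPLE-KNAPSACK pseudocode, together with the sorting step it assumes, can be sketched in Python (the function name and 0-based indexing are mine, not from the slides):

```python
def simple_knapsack(V, volumes, values):
    """Greedy fractional knapsack: return the fractions x that
    maximize the total value under the volume limit V."""
    n = len(volumes)
    # sort object indices by decreasing value-per-volume ratio c/v
    order = sorted(range(n), key=lambda i: values[i] / volumes[i], reverse=True)
    x = [0.0] * n
    free = V                            # free space in the knapsack
    for i in order:
        if volumes[i] <= free:          # the whole object fits
            x[i] = 1.0
            free -= volumes[i]
        else:                           # cut the fraction that fits and stop
            x[i] = free / volumes[i]
            break
    return x

# example from the slides: c = [10, 14, 20], v = [4, 7, 5], V = 8
x = simple_knapsack(8, [4, 7, 5], [10, 14, 20])
print(x)        # fractions in the original object order: [0.75, 0.0, 1.0]
print(sum(c * xi for c, xi in zip([10, 14, 20], x)))    # total value 27.5
```

Sorting indices instead of reordering the objects themselves keeps the result in the original object order, so the caveat about the changed order disappears.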
6

Minimum spanning tree



a spanning tree of a given undirected graph G=(V,E) is
any acyclic subgraph G'=(V,E') with E'⊆E that connects
all nodes of G

example of a graph G and some of its spanning trees

G G'1 G'2 G'3


7

Minimum spanning tree



if each edge (u,v)∈E has weight w(u,v), we can calculate
the weight of a spanning tree T as:
w(T) = ∑(u,v)∈T w(u,v)


a spanning tree with minimal weight is called minimum
spanning tree – MST

finding MST is a common practical problem (e.g.,
connecting cities with the shortest road, electrical or
water network)
8

Prim's algorithm

Prim's algorithm is a greedy algorithm for finding the
minimum spanning tree

the solution is built incrementally by adding one edge of
the minimum spanning tree in each step*:
– the first added edge is edge (k,l) with minimal weight in G
– for each node j that is not yet in MST we maintain value rj
which represents the index of closest node already in MST
(for nodes in MST we set rj=0)
– in each iteration we add to the tree an edge to node j for
which the following is true:

rj ≠ 0 (i.e., node j is not already in MST)
● j = argmin {w(i,ri)} (i.e., node j is the node that is closest
i
to any of the nodes already in MST)
9

Prim's algorithm
PRIM(G=(V,E),W,n,v,T)
select edge (k,l) ∈ E with minimal weight
T ← {(k,l)} % add edge (k,l) to MST
v ← w(k,l) % weight of MST
for i ← 1 to n do % set closest of k or l to other vertices
if w(i,l) < w(i,k) then
r[i] ← l
else
r[i] ← k
r[k] ← r[l] ← 0 % vertices k and l are already included
for p ← 1 to n-2 do
find j, such that r[j]≠0, j=argmini{w(i,r[i])} and w(j,r[j])<∞
T ← T ∪ {(j,r[j])} % add edge (j,r[j]) to MST
v ← v + w(j,r[j]) % update weight of MST
r[j] ← 0 % mark that node j was added to MST
for h ← 1 to n do % update indices of closest in MST
if r[h] ≠ 0 and w(h,r[h]) > w(h,j) then
r[h] ← j
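A runnable sketch of the PRIM pseudocode on a weight matrix (0-based node indices and math.inf instead of ∞ are my adaptations; the marker r[i]=0 from the slides becomes r[i]=None so that node 0 can also be used):

```python
import math

def prim(W):
    """O(n^2) Prim's algorithm on a weight matrix W (math.inf = no edge).
    Returns (list of MST edges, total MST weight)."""
    n = len(W)
    # select the edge (k, l) with minimal weight
    k, l = min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda e: W[e[0]][e[1]])
    T = [(k, l)]
    total = W[k][l]
    # r[i] = closer of k, l to node i; None marks nodes already in the MST
    r = [l if W[i][l] < W[i][k] else k for i in range(n)]
    r[k] = r[l] = None
    for _ in range(n - 2):
        # greedy move: node j outside the tree closest to the tree
        j = min((i for i in range(n) if r[i] is not None),
                key=lambda i: W[i][r[i]])
        T.append((j, r[j]))
        total += W[j][r[j]]
        r[j] = None
        for h in range(n):              # update closest-in-tree indices
            if r[h] is not None and W[h][r[h]] > W[h][j]:
                r[h] = j
    return T, total

# weight matrix from the example in the slides (nodes 1..5 become 0..4)
inf = math.inf
W = [[0, 30, 15, 6, inf],
     [30, 0, 25, inf, 4],
     [15, 25, 0, 14, 20],
     [6, inf, 14, 0, 12],
     [inf, 4, 20, 12, 0]]
T, total = prim(W)
print(total)    # 36, matching the slides
```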
10

Prim's algorithm – example

let’s find an MST for the graph in the image, given by the
weight matrix:

W = [  0  30  15   6   ∞
      30   0  25   ∞   4
      15  25   0  14  20
       6   ∞  14   0  12
       ∞   4  20  12   0 ]
11

Prim's algorithm – example

let’s find an MST for the graph in the image:
– edge (2,5) with smallest weight w(2,5)=4 is the first one
added to T
– set r2=r5=0, assign to other nodes the index of the closer
node between 2 and 5: r1=2, r3=5, r4=5
– compute w(1,r1)=w(1,2)=30, w(3,r3)=w(3,5)=20,
w(4,r4)=w(4,5)=12 and add edge (4,5) to T; w(T)=16
– set r4=0 and recompute r1=4 (because w(1,4)<w(1,2), 6<30)
and r3=4 (because w(3,4)<w(3,5), 14<20)
– because w(1,r1)=6 < w(3,r3)=14, add (1,4) as the next edge
to T; w(T)=22
– set r1=0; r3 does not change (because w(3,4)<w(3,1), 14<15)
– add edge (3,4), connecting the last node 3, to T; w(T)=36
– set r3=0 and the algorithm terminates
19

Prim's algorithm – example

algorithm operation can be summarized in a table:

iteration  added node j  included edge  tree weight  r1 r2 r3 r4 r5
    1           –            (2,5)           4        2  0  5  5  0
    2           4            (4,5)          16        4  0  4  0  0
    3           1            (1,4)          22        0  0  4  0  0
    4           3            (3,4)          36        0  0  0  0  0

(the emphasized edges (2,5), (4,5), (1,4) and (3,4) form the MST)
20

Time complexity of Prim’s algorithm

implementation using matrix W:
– the first for loop is executed n-times
– the second for loop is executed (n-2)-times
– the third for loop is executed n-times for each iteration of
the second for loop into which it is nested
– total time complexity is T(n) = Θ(n) + Θ(n²) = Θ(n²)

the algorithm can also be implemented using adjacency lists
and a priority queue, which contains nodes not yet in MST,
ordered by non-decreasing weight of edge to the closest
neighbor already in MST ⇒ time complexity is O(|E|·log2|V|)
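The adjacency-list variant mentioned above can be sketched with Python's heapq; since heapq has no decrease-key operation, this version uses lazy deletion (stale heap entries are skipped), which keeps the same O(|E|·log2|V|) bound:

```python
import heapq

def prim_heap(adj, start=0):
    """Prim's algorithm with a binary heap over adjacency lists.
    adj[u] is a list of (v, weight) pairs; runs in O(|E| log |V|)."""
    n = len(adj)
    in_mst = [False] * n
    edges, total = [], 0
    heap = [(0, start, -1)]             # (edge weight, node, parent; -1 = root)
    while heap:
        w, u, p = heapq.heappop(heap)
        if in_mst[u]:
            continue                    # stale entry: u was added earlier
        in_mst[u] = True
        if p != -1:
            edges.append((p, u))
            total += w
        for v, wv in adj[u]:
            if not in_mst[v]:
                heapq.heappush(heap, (wv, v, u))
    return edges, total

# adjacency lists for the graph from the example (nodes 1..5 -> 0..4)
adj = [[(1, 30), (2, 15), (3, 6)],
       [(0, 30), (2, 25), (4, 4)],
       [(0, 15), (1, 25), (3, 14), (4, 20)],
       [(0, 6), (2, 14), (4, 12)],
       [(1, 4), (2, 20), (3, 12)]]
edges, total = prim_heap(adj)
print(total)                            # 36, the same MST weight as before
```

The edges are discovered in a different order than in the matrix version (the start node is fixed instead of the globally lightest edge), but the resulting MST weight is the same.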
21

Shortest-path problem

in the shortest-path problem we are searching for shortest
paths between nodes in a weighted directed graph G=(V,E)

edge weights are defined by a weight function w: E → ℝ and
represent distances, costs, time, ...
● the weight of a path p = 〈v0, v1, ..., vk〉 is the sum of its edge
weights:
w(p) = ∑i=1..k w(vi−1, vi)

shortest-path weight is defined by:

δ(u,v) = min {w(p): p is a path from u to v}   if a path from u to v exists,
δ(u,v) = ∞                                     otherwise

the shortest path from node u to node v is any path p with
weight w(p) = δ(u,v)
22

Shortest-path problem

the shortest-path problem on graphs with equal weights
of all edges can be solved by breadth-first search

here we consider the single-source shortest paths
problem – we are searching for shortest paths from
source node s to all other graph nodes

other problem variants are:
– single-destination shortest paths problem (we only
exchange source and destination and invert edge
directions)
– single-pair shortest path problem (there is no algorithm
that would be asymptotically faster than the algorithms for
single-source shortest paths problem)
– all-pairs shortest paths problem (can be solved using the
algorithms for single-source shortest paths problem, but
there are faster variants)
23

Shortest-path problem

edge weights can be negative

if graph G=(V,E) does not contain cycles with negative
weight reachable from source s, then the shortest-path
weights δ(s,v) to nodes v∈V are well defined

if any path from s to v has a cycle with negative weight,
then the shortest path is undefined and we set δ(s,v)=−∞
(the path can always be „shortened“ by going one more
cycle)

the shortest path also cannot contain a positive cycle
because it can be shortened by removing the cycle

in a graph with |V| nodes the shortest path can therefore
contain at most |V|-1 edges
24

Shortest-path problem – example

the lengths of shortest paths from s will be written next to the nodes

there is only path 〈s,a〉 from s to a, so δ(s,a) = w(s,a) = 6

there is only path 〈s,a,b〉 from s to b, so δ(s,b) = w(s,a) + w(a,b) = 4

there are infinitely many paths from s to c (〈s,c〉, 〈s,c,d,c〉, …) ⇒
because the cycle 〈c,d,c〉 has positive weight 5 + (−4) = 1 > 0, the
shortest path is 〈s,c〉 with weight δ(s,c) = 3

similarly, the shortest path from s to d is 〈s,c,d〉 with weight
δ(s,d) = w(s,c) + w(c,d) = 8

there are infinitely many paths from
s to e (〈s,e〉, 〈s,e,f,e〉, 〈s,e,f,e,f,e〉, …) ⇒
because the cycle 〈e,f,e〉 has weight
2 + (−5) = −3 < 0, the shortest path
from s to e does not exist, so we set
δ(s,e) = −∞

for the same reason we set
δ(s,f) = −∞ and δ(s,g) = −∞
25

Shortest-path problem

optimal-substructure property – the shortest path
between two nodes contains other shortest paths:
If 〈v1, v2,..., vk〉 is the shortest path from node v1 to node vk,
then every subpath 〈vi, ..., vj〉 (1 ≤ i ≤ j ≤ k) is the shortest path
from node vi to node vj.

the optimal-substructure property is exploited by different
shortest-paths algorithms (both greedy methods and
dynamic programming strategy)
26

Shortest-path problem

during the search for shortest paths from source s, we
maintain for each node v∈V:
– the shortest-path estimate d[v], which represents the
upper bound of weight for the shortest path from s to v
– its predecessor parent[v]: after algorithm termination, we
can reconstruct the shortest path from s to v by following
the chain of predecessors from v backwards

the shortest path bounds and predecessors are initialized
by procedure INITIALIZATION(G,s)
INITIALIZATION(G,s)
for each node v ∈ V do
d[v] ← ∞
parent[v] ← NIL
d[s] ← 0
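The predecessor chain described above can be followed with a short helper (the function name is my own; the sample parent links correspond to the shortest-path tree produced in the Dijkstra example later in the slides):

```python
def reconstruct_path(parent, s, v):
    """Follow parent links from v back to s; returns the path s..v."""
    path = [v]
    while v != s:
        v = parent[v]
        if v is None:
            return None                 # v is not reachable from s
        path.append(v)
    return path[::-1]                   # reverse: the chain was collected backwards

# parent links of the shortest-path tree rooted in node 1
parent = {1: None, 5: 1, 2: 5, 3: 2, 4: 2}
print(reconstruct_path(parent, 1, 3))   # [1, 5, 2, 3]
```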
27

Shortest-path problem

relaxation is a technique with which we iteratively lower
the upper bound of path weight until it becomes equal to
the shortest path weight

by relaxing the edge (u,v), we check if we can improve the
shortest path from s to v by going through u; if we
succeed, we update d[v] and parent[v]
RELAXATION(u,v,w)
if d[v] > d[u] + w(u,v) then
d[v] ← d[u] + w(u,v)
parent[v] ← u

examples of „successful“ and „unsuccessful“ relaxation of
edge (u,v) with w(u,v)=3: if d[u]=10 and d[v]=15, relaxation
lowers d[v] to 13; if d[u]=10 and d[v]=11, d[v] stays unchanged
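The RELAXATION procedure and both cases from the example can be sketched in Python (the dictionary-based d, parent and w containers are my own framing):

```python
def relax(u, v, w, d, parent):
    """RELAXATION(u,v,w): lower d[v] via u if that gives a shorter path."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        parent[v] = u
        return True                     # successful relaxation
    return False                        # unsuccessful: no improvement

w = {('u', 'v'): 3}
d, parent = {'u': 10, 'v': 15}, {'v': None}
relax('u', 'v', w, d, parent)           # successful: 10 + 3 < 15
print(d['v'])                           # 13
d, parent = {'u': 10, 'v': 11}, {'v': None}
relax('u', 'v', w, d, parent)           # unsuccessful: 10 + 3 > 11
print(d['v'])                           # 11
```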
28

Shortest-path problem

we will describe two algorithms for finding single-source
shortest paths in a directed weighted graph,
which first perform initialization and then repeatedly
execute edge relaxations:
– Dijkstra's algorithm uses greedy strategy and
relaxes each edge only once; the weights of all
edges must be non-negative
– Bellman-Ford algorithm does not use greedy
strategy and relaxes each edge multiple times; the
edge weights can be negative
29

Dijkstra's algorithm

the algorithm maintains a set S of nodes with already
determined shortest path, i.e., d[v]=δ(s,v) for each v∈S

in each step the algorithm executes the following steps:
– selects node u∈V−S with minimal value d[u] (greedy
move)
– adds u to S and relaxes all edges starting in u

the list V–S, ordered by d, is maintained using a priority
queue Q

the implementation assumes that the graph is described by
adjacency lists

DIJKSTRA(G,w,s)
INITIALIZATION(G,s)
S ← ∅
Q ← V
while Q ≠ ∅ do
u ← EXTRACT-MINIMUM(Q)
S ← S ∪ {u}
for each v ∈ Adj[u] do
RELAXATION(u,v,w)
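A heapq-based sketch of DIJKSTRA (0-based indices; instead of EXTRACT-MINIMUM on a decrease-key queue it re-inserts nodes and skips stale entries, which changes constant factors but not the results):

```python
import heapq, math

def dijkstra(adj, s):
    """Dijkstra with a binary heap and lazy deletion; adj[u] = [(v, w), ...].
    All edge weights must be non-negative. Returns (d, parent)."""
    n = len(adj)
    d = [math.inf] * n
    parent = [None] * n
    d[s] = 0
    heap = [(0, s)]
    done = [False] * n                  # the set S from the slides
    while heap:
        du, u = heapq.heappop(heap)     # greedy move: smallest estimate
        if done[u]:
            continue                    # stale heap entry, u already final
        done[u] = True
        for v, w in adj[u]:             # relax all edges leaving u
            if d[v] > du + w:
                d[v] = du + w
                parent[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, parent

# directed graph from the example in the slides (nodes 1..5 -> 0..4)
adj = [[(1, 9), (4, 4)],                # edges (1,2)=9, (1,5)=4
       [(2, 5), (3, 4)],                # edges (2,3)=5, (2,4)=4
       [(1, 3), (3, 2)],                # edges (3,2)=3, (3,4)=2
       [(0, 6), (4, 7)],                # edges (4,1)=6, (4,5)=7
       [(1, 1)]]                        # edge  (5,2)=1
d, parent = dijkstra(adj, 0)
print(d)        # [0, 5, 10, 9, 4], as in the slides
```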
30

Dijkstra’s algorithm – example

we are given the graph in the picture:
– bound values d are written next to the nodes
– source is node 1
– the directed edges are (1,2)=9, (1,5)=4, (5,2)=1, (2,3)=5,
(2,4)=4, (3,2)=3, (3,4)=2, (4,1)=6, (4,5)=7
– emphasized edges will be used to mark the parents of
nodes (the parent is the node at the arrow's tail)
– nodes in S will be marked black, nodes in Q=V−S will be
marked white
– the gray node is the one that will be selected in the next
iteration of the while loop (i.e., node u)
31

Dijkstra’s algorithm – example

initialization:
– d[1]=0, d[2]=d[3]=d[4]=d[5]=∞
– parent[1]=parent[2]=parent[3]=parent[4]=parent[5]=NIL
– S={}, Q={1,2,3,4,5}
32

Dijkstra’s algorithm – example

1. iteration of while:
– select node 1
– S={1}, Q={5,2,3,4}
– relax edges (1,2) and (1,5):
● d[2]>d[1]+w(1,2) (∞>0+9), therefore set
d[2]=d[1]+w(1,2)=0+9=9 and parent[2]=1
● d[5]>d[1]+w(1,5) (∞>0+4), therefore set
d[5]=d[1]+w(1,5)=0+4=4 and parent[5]=1
34

Dijkstra’s algorithm – example

2. iteration of while:
– select node 5
– S={1,5}, Q={2,3,4}
– relax edge (5,2):
● d[2]>d[5]+w(5,2) (9>4+1), therefore set
d[2]=d[5]+w(5,2)=4+1=5 and parent[2]=5
36

Dijkstra’s algorithm – example

3. iteration of while:
– select node 2
– S={1,5,2}, Q={4,3}
– relax edges (2,3) and (2,4):
● d[3]>d[2]+w(2,3) (∞>5+5), therefore set
d[3]=d[2]+w(2,3)=5+5=10 and parent[3]=2
● d[4]>d[2]+w(2,4) (∞>5+4), therefore set
d[4]=d[2]+w(2,4)=5+4=9 and parent[4]=2
38

Dijkstra’s algorithm – example

4. iteration of while:
– select node 4
– S={1,5,2,4}, Q={3}
– relax edges (4,1) and (4,5):
● d[1]≤d[4]+w(4,1) (0≤9+6), therefore no change
● d[5]≤d[4]+w(4,5) (4≤9+7), therefore no change
40

Dijkstra’s algorithm – example

5. iteration of while:
– select node 3
– S={1,5,2,4,3}, Q={}
– relax edges (3,2) and (3,4):
● d[2]≤d[3]+w(3,2) (5≤10+3), therefore no change
● d[4]≤d[3]+w(3,4) (9≤10+2), therefore no change

the priority queue is empty, so the algorithm terminates
42

Dijkstra’s algorithm – example

if we only draw the emphasized edges, we obtain the tree of
shortest paths from node 1 to all other nodes in the graph:
the tree edges are (1,5), (5,2), (2,3) and (2,4)
43

Dijkstra’s algorithm – example

algorithm operation can be summarized in a table:

iteration       selected node u  parent[u]  shortest path to u  d[1] d[2] d[3] d[4] d[5]
initialization         /             /               /            0    ∞    ∞    ∞    ∞
1                      1            NIL            〈1〉            /    9    ∞    ∞    4
2                      5             1           〈1, 5〉           /    5    ∞    ∞    /
3                      2             5          〈1, 5, 2〉         /    /   10    9    /
4                      4             2         〈1, 5, 2, 4〉       /    /    /    /    /
5                      3             2         〈1, 5, 2, 3〉       /    /    /    /    /
44

Dijkstra’s algorithm – analysis



algorithm performs three operations on priority queue:
– element insertion is performed |V|-times (line 3)
– minimum extraction is performed |V|-times (line 5)
– access to elements when relaxing within the for loop is
performed once per edge, in total |E|-times (line 8)
DIJKSTRA(G,w,s)
1: INITIALIZATION(G,s)
2: S ← ∅
3: Q ← V
4: while Q ≠ ∅ do
5: u ← EXTRACT-MINIMUM(Q)
6: S ← S ∪ {u}
7: for each v ∈ Adj[u] do
8: RELAXATION(u,v,w)
45

Dijkstra’s algorithm – analysis



the time complexity of Dijkstra's algorithm depends on the
implementation of the priority queue:
– implementation with an array, where the i-th array element
belongs to node i:
● insertion and access require O(1) time
● each minimum extraction requires O(|V|) time, in total
O(|V|²)
● total algorithm time is T(n) = O(|V| + |V|² + |E|) = O(|V|²),
since |E| ≤ |V|²
46

Dijkstra’s algorithm – analysis



the time complexity of Dijkstra's algorithm depends on the
implementation of the priority queue:
– implementation with a heap:
● heap construction requires O(|V|) time
● element access requires O(log2|V|) time
● each minimum extraction requires O(log2|V|) time
● total time is T(n) = O((|V|+|E|)∙log2|V|) = O(|E|∙log2|V|),
better than O(|V|²) for sparse graphs (|E|≪|V|²)
47

Dijkstra’s algorithm – analysis



the time complexity of Dijkstra's algorithm depends on the
implementation of the priority queue:
– implementation with a Fibonacci heap has an even better
time complexity T(n) = O(|V|∙log2|V|+|E|)

Fibonacci heap is a special heap variant which is
efficient in cases when the number of element deletions
is small compared to other operations
48

Bellman-Ford algorithm

Bellman-Ford algorithm solves the single-source
shortest paths problem in general case when the weights
can be negative

for a given weighted directed graph G=(V,E) with source s
and weight function w: E → ℝ, the Bellman-Ford algorithm
returns a Boolean value which signals if a cycle with
negative weight is reachable from the source:
– if such cycle exists, the algorithm terminates without
solution
– if such cycle does not exist, the algorithm finds shortest
paths and their weight
49

Bellman-Ford algorithm

basic idea: because a shortest path in a graph contains
at most |V|-1 edges, relax every edge that many times

the algorithm uses relaxation to gradually lower the upper
weight bounds for paths to all nodes v, until it reaches the
weight of the shortest path δ(s,v)
BELLMAN-FORD(G,w,s)
INITIALIZATION(G,s)
for i ← 1 to |V|-1 do
for each edge (u,v) ∈ E do
RELAXATION(u,v,w)
for each edge (u,v) ∈ E do
if d[v] > d[u] + w(u,v) then
return FALSE % a negative cycle is reachable from s
return TRUE % no negative cycle is reachable from s
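A direct Python transcription of BELLMAN-FORD over an edge list (0-based indices; the function returns the d and parent arrays together with the Boolean from the pseudocode):

```python
import math

def bellman_ford(n, edges, s):
    """Bellman-Ford over an edge list [(u, v, w), ...] with n nodes.
    Returns (d, parent, True), or (None, None, False) if a negative
    cycle is reachable from s."""
    d = [math.inf] * n
    parent = [None] * n
    d[s] = 0
    for _ in range(n - 1):              # a shortest path has at most n-1 edges
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                parent[v] = u
    for u, v, w in edges:               # one more pass detects negative cycles
        if d[u] + w < d[v]:
            return None, None, False
    return d, parent, True

# directed graph from the example in the slides (nodes 1..5 -> 0..4),
# in the relaxation order assumed there; source node 5 -> index 4
edges = [(0, 1, 5), (0, 2, 8), (0, 3, -4), (1, 0, -2), (2, 1, -3),
         (2, 3, 9), (3, 1, 7), (3, 4, 2), (4, 0, 6), (4, 2, 7)]
d, parent, ok = bellman_ford(5, edges, 4)
print(ok, d)    # True [2, 4, 7, -2, 0], as in the slides
```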
50

Bellman-Ford alg. – example

we are given the graph in the picture:
– bound values d are written next to the nodes
– source is node 5
– the directed edges are (1,2)=5, (1,3)=8, (1,4)=−4, (2,1)=−2,
(3,2)=−3, (3,4)=9, (4,2)=7, (4,5)=2, (5,1)=6, (5,3)=7
– emphasized edges will be used to mark the parents of
nodes (the parent is the node at the arrow's tail)
– let's assume that in each iteration of the first for loop the
edges are relaxed in the following order:
(1,2), (1,3), (1,4), (2,1), (3,2), (3,4), (4,2), (4,5), (5,1), (5,3)
51

Bellman-Ford alg. – example

initialization:
– d[5]=0, d[1]=d[2]=d[3]=d[4]=∞
– parent[1]=parent[2]=parent[3]=parent[4]=parent[5]=NIL
52

Bellman-Ford alg. – example

1. iteration of the first for loop:
– relax edges (1,2), (1,3), (1,4), (2,1), (3,2), (3,4), (4,2), (4,5),
(5,1), (5,3)
– the first change happens with edge (5,1) (0+6<∞), where we
set d[1]=6 and parent[1]=5
– the second change happens with edge (5,3) (0+7<∞), where
we set d[3]=7 and parent[3]=5
56

Bellman-Ford alg. – example

2. iteration of the first for loop:
– relax edges (1,2), (1,3), (1,4), (2,1), (3,2), (3,4), (4,2), (4,5),
(5,1), (5,3)
– the first change happens with edge (1,2) (6+5<∞), where we
set d[2]=11 and parent[2]=1
– the second change happens with edge (1,4) (6-4<∞), where
we set d[4]=2 and parent[4]=1
– the third change happens with edge (3,2) (7-3<11), where we
set d[2]=4 and parent[2]=3
62

Bellman-Ford alg. – example

3. iteration of the first for loop:
– relax edges (1,2), (1,3), (1,4), (2,1), (3,2), (3,4), (4,2), (4,5),
(5,1), (5,3)
– the only change happens with edge (2,1) (4-2<6), where we
set d[1]=2 and parent[1]=2

4. iteration of the first for loop:
– relax edges (1,2), (1,3), (1,4), (2,1), (3,2), (3,4), (4,2), (4,5),
(5,1), (5,3)
– the only change happens with edge (1,4) (2-4<2), where we
set d[4]=-2 and parent[4]=1
66

Bellman-Ford alg. – example

in the second for loop, no edge (u,v) satisfies d[v]>d[u]+w(u,v),
therefore the algorithm returns TRUE

validation: none of the six cycles in the graph has negative
weight:
– cycle 〈1,2,1〉 has weight 5+(−2)=3
– cycle 〈1,4,2,1〉 has weight (−4)+7+(−2)=1
– cycle 〈1,4,5,1〉 has weight (−4)+2+6=4
– cycle 〈1,4,5,3,2,1〉 has weight (−4)+2+7+(−3)+(−2)=0
– cycle 〈1,3,2,1〉 has weight 8+(−3)+(−2)=3
– cycle 〈1,3,4,2,1〉 has weight 8+9+7+(−2)=22
67

Bellman-Ford alg. – example

if we only draw the emphasized edges, we obtain the tree of
shortest paths from node 5 to all other nodes in the graph:
the tree edges are (5,3), (3,2), (2,1) and (1,4)
68

Bellman-Ford alg. – example

algorithm operation can be summarized in a table:

iteration       parent[1] parent[2] parent[3] parent[4] parent[5]  d[1] d[2] d[3] d[4] d[5]
initialization     NIL       NIL       NIL       NIL       NIL       ∞    ∞    ∞    ∞    0
1                   5        NIL        5        NIL       NIL       6    ∞    7    ∞    0
2                   5         3         5         1        NIL       6    4    7    2    0
3                   2         3         5         1        NIL       2    4    7    2    0
4                   2         3         5         1        NIL       2    4    7   -2    0
69

Bellman-Ford algorithm – analysis



initialization requires O(|V|) time

the first for loop is executed (|V|−1)-times and the inner
for loop is executed |E|-times; their total time complexity is
O(|V|·|E|)

the last for loop is executed |E|-times

the total time complexity of Bellman-Ford algorithm is
T(n) = O(|V|) + O(|V|·|E|) + O(|E|) = O(|V|·|E|)
BELLMAN-FORD(G,w,s)
INITIALIZATION(G,s)
for i ← 1 to |V|-1 do
for each edge (u,v) ∈ E do
RELAXATION(u,v,w)
for each edge (u,v) ∈ E do
if d[v] > d[u] + w(u,v) then
return FALSE
return TRUE
