
Unit III

Greedy Method
Greedy Method
The greedy method is a straightforward design technique applicable to a variety of applications.
The greedy approach suggests constructing a solution through a sequence of steps, each expanding the partially constructed solution obtained so far, until a complete solution to the problem is reached. At each step the choice made must be:
• feasible, i.e., it has to satisfy the problem’s constraints
• locally optimal, i.e., it has to be the best local choice among all feasible choices available at that step
• irrevocable, i.e., once made, it cannot be changed on subsequent steps of the algorithm
As a rule, greedy algorithms are both intuitively appealing and simple.
Given an optimization problem, it is usually easy to figure out how to proceed in a
greedy manner, possibly after considering a few small instances of the problem. What is
usually more difficult is to prove that a greedy algorithm yields an optimal solution
(when it does).
Greedy General Method
Algorithm Greedy(a, n)
// a[1..n] contains the n inputs
{
    solution := ø;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
Greedy General Method
Greedy method consists of 3 functions (steps).
1) Select: selects an input from the array a[] (the candidate set) and puts it in the variable x.
2) Feasible: a Boolean function that checks whether the selected input meets the constraints or not.
3) Union: if the selected input x keeps the solution feasible, then x is included in the solution and the objective function is updated.
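The three functions above can be sketched in Python. This is an illustrative harness, not the textbook's code: Select, Feasible, and Union are passed in as problem-specific functions, and the names simply follow the pseudocode.

```python
def greedy(candidates, select, feasible, union):
    """Generic greedy skeleton: repeatedly pick a candidate, keep it
    only if the partial solution stays feasible. Decisions are never revisited."""
    solution = []
    remaining = list(candidates)
    while remaining:
        x = select(remaining)              # locally optimal choice
        remaining.remove(x)
        if feasible(solution, x):          # does x satisfy the constraints?
            solution = union(solution, x)  # irrevocably add x to the solution
    return solution

# Toy use: pick numbers (largest first) whose running sum stays <= 10
result = greedy(
    [7, 5, 4, 2],
    select=max,
    feasible=lambda sol, x: sum(sol) + x <= 10,
    union=lambda sol, x: sol + [x],
)
print(result)  # [7, 2]
```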

Characteristics of Greedy:
1) These algorithms are simple and straightforward and easy to implement.
2) They make decisions on the basis of the information at hand, without worrying about the
effect these decisions may have in the future.
3) They work in stages and never reconsider any decision.
Greedy General Method
Applications
 Change making problem
 Minimum Spanning Tree
 Single source shortest path
 Knapsack problem
 Job Sequencing with deadlines
 Optimal storage on tapes
 Huffman codes
Change making problem
• Suppose we want to make change for an amount A using the fewest number of currency notes. Assume the available denominations are Rs 1, 2, 5, 10, 20, 50, 100, 500, 1000.
• To make change for A = Rs 28 with the minimum number of notes, one would greedily choose notes of denominations Rs 20, 5, 2 and 1 (20 + 5 + 2 + 1 = 28, four notes).
Change making problem
• In the greedy method a problem has n inputs, called the candidate set, from which a subset is selected to form a solution to the given problem. Any subset that satisfies the given constraints is called a feasible solution. We need to find a feasible solution that maximizes or minimizes an objective function; such a solution is called an optimal solution.
• The multiset of currency notes {1000, …, 1000, 500, …, 500, 100, …, 100, 50, …, 50, 20, …, 20, 10, …, 10, 5, …, 5, 2, …, 2, 1, …, 1} is the candidate set.
• The constraint is that the solution must make up exactly the target amount of cash. Hence, in any feasible solution the sum of the selected notes equals the target amount.
• The objective function is the number of currency notes, which should be as small as possible. An optimal solution is a feasible solution that optimizes the objective function; there can be more than one optimal solution.
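A minimal sketch of this greedy strategy in Python, assuming the denomination list above. Note that the greedy choice happens to be optimal for this canonical note system, though not for arbitrary denomination sets.

```python
def make_change(amount, denominations=(1000, 500, 100, 50, 20, 10, 5, 2, 1)):
    """Greedy change-making: always take the largest note that still fits.
    Denominations must be given in decreasing order."""
    notes = []
    for d in denominations:
        while amount >= d:     # take note d as many times as it fits
            notes.append(d)
            amount -= d
    return notes

print(make_change(28))  # [20, 5, 2, 1]
```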
Minimum Spanning Trees
Spanning Trees
• Given a (connected) graph G(V, E), a spanning tree T(V’, E’):
• is a subgraph of G; that is, V’ ⊆ V, E’ ⊆ E
• spans the graph (V’ = V)
• forms a tree (no cycle)
• so E’ has |V| − 1 edges

31/10/2024 CSE 373 AU 04 - Minimum Spanning Trees 9


Minimum Spanning Trees
• Given: Connected, undirected, weighted graph, G
• Find: a minimum-weight spanning tree, T
• Example:

An acyclic subset of edges E that connects all vertices of G.
[Figure: a weighted graph on vertices a–f with edge weights 7, 5, 1, 3, −3, 11, 0, 2, and below it the minimum spanning tree it yields.]
Comp 122, Fall 2003


Minimum Spanning Trees
• Edges are weighted: find minimum cost spanning tree
• Applications
• Find cheapest way to wire your house
• Find minimum cost to send a message on the Internet



Strategy for Minimum Spanning Tree
• For any spanning tree T, inserting an edge e_new not in T creates a cycle
• But removing any edge e_old from that cycle gives back a spanning tree
• If e_new has a lower cost than e_old, we have progressed!



Strategy
• Strategy for construction:
• Add an edge of minimum cost that does not create a cycle (greedy algorithm)
• Repeat |V| -1 times
• Correct since if we could replace an edge with one of lower cost, the algorithm
would have picked it up



Two Algorithms
• Prim (build the tree incrementally):
• Pick the lowest-cost edge connected to the known (incomplete) spanning tree that does not create a cycle, and expand the tree to include it
• Kruskal (build a forest that will finish as a tree):
• Pick the lowest-cost edge not yet in a tree that does not create a cycle, then expand the set of included edges to include it (it will be somewhere in the forest)



Prim’s algorithm
Starting from an empty tree T, choose a vertex at random and initialize
V = {1}, E’ = {}
[Figure: the example weighted graph on vertices 1–6; only vertex 1 is in the tree so far.]



Prim’s algorithm
Choose the vertex u not in V such that the edge weight from u to a vertex in V is minimal (greedy!)
V = {1, 3}, E’ = {(1,3)}
[Figure: the example graph; edge (1,3) of weight 1 has been added to the tree.]



Prim’s algorithm
Repeat until all vertices have been chosen: choose the vertex u not in V such that the edge weight from u to a vertex in V is minimal (greedy!)
V = {1,3,4}, E’ = {(1,3),(3,4)}
V = {1,3,4,5}, E’ = {(1,3),(3,4),(4,5)}
…
V = {1,3,4,5,2,6}, E’ = {(1,3),(3,4),(4,5),(5,2),(2,6)}
[Figure: the example graph with the tree edges chosen so far highlighted.]



Prim’s algorithm
All vertices have now been chosen:
V = {1,3,4,5,2,6}, E’ = {(1,3),(3,4),(4,5),(5,2),(2,6)}
Final cost: 1 + 3 + 4 + 1 + 1 = 10
[Figure: the example graph with the final spanning tree highlighted.]



Prim’s Algorithm Implementation
• Assume an adjacency list representation
Initialize the connection cost of each node to ∞ and “unmark” them
Choose one node, say v, and set cost[v] = 0 and prev[v] = 0
While there are unmarked nodes
    Select the unmarked node u with minimum cost; mark it
    For each unmarked node w adjacent to u
        if cost(u,w) < cost(w) then cost(w) := cost(u,w); prev[w] := u



Prim’s Algorithm Implementation
PrimMST(Graph G, Node start_node):
    for each node v in G:
        cost[v] := ∞          // initialize the cost of each node to infinity
        prev[v] := NULL       // initialize the previous-node array
        mark[v] := false      // mark all nodes as unvisited

    cost[start_node] := 0     // start from start_node

    while there exists an unmarked node:
        u := unmarked node with minimum cost[u]   // select the unmarked node with minimum cost
        mark[u] := true                           // mark node u as visited
        for each neighbor w of u:
            if not mark[w] and weight(u, w) < cost[w]:
                cost[w] := weight(u, w)   // update cost of w
                prev[w] := u              // set u as the predecessor of w
    // The prev[] array now contains the Minimum Spanning Tree (MST) edges
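The pseudocode above can be made executable. The graph below is assumed for illustration: its edge weights are chosen to be consistent with the worked example (MST of cost 10), since not every weight in the slide figure is recoverable.

```python
import math

def prim_mst(graph, start):
    """Prim's algorithm over an adjacency-list graph {v: [(neighbor, weight), ...]}.
    Returns (total_cost, prev) where prev[w] is w's parent in the MST."""
    cost = {v: math.inf for v in graph}
    prev = {v: None for v in graph}
    marked = set()
    cost[start] = 0
    while len(marked) < len(graph):
        # select the unmarked node with minimum connection cost
        u = min((v for v in graph if v not in marked), key=cost.get)
        marked.add(u)
        for w, weight in graph[u]:
            if w not in marked and weight < cost[w]:
                cost[w] = weight   # cheaper way to connect w to the tree
                prev[w] = u
    return sum(cost.values()), prev

# Illustrative graph, consistent with the slides' MST of cost 10
edges = [(1, 3, 1), (3, 4, 3), (4, 5, 4), (2, 5, 1), (2, 6, 1), (5, 6, 2), (1, 2, 10)]
graph = {v: [] for v in range(1, 7)}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

total, prev = prim_mst(graph, start=1)
print(total)  # 10
```

Starting from vertex 1, the tree grows exactly as in the slides: (1,3), (3,4), (4,5), (5,2), (2,6).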



Prim’s algorithm Analysis
• If “select the unmarked node u with minimum cost” is done with a binary heap, the running time is O((n + m) log n)



Kruskal’s Algorithm
• Select edges in order of increasing cost
• Accept an edge to expand tree or forest only if it does not cause a
cycle
• Implementation using adjacency list, priority queues and disjoint sets



Kruskal’s Algorithm
Initialize a forest of trees, each tree being a single node
Build a priority queue of edges, with priority being lowest cost
Repeat until |V| − 1 edges have been accepted {
    DeleteMin edge from the priority queue
    If it forms a cycle then discard it
    else accept the edge – it joins two existing trees, yielding a larger tree and reducing the forest by one tree
}
The accepted edges form the minimum spanning tree
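A runnable sketch of these steps, with a minimal union-find standing in for the disjoint-set ADT and a sorted edge list playing the role of the DeleteMin priority queue. The graph is the same illustrative one used for the Prim example (assumed weights, consistent with the slides' total cost of 10).

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm. edges: list of (weight, u, v) with vertices 1..n.
    Returns (total_cost, accepted_edges)."""
    parent = list(range(n + 1))   # each vertex starts as its own single-node tree

    def find(x):                  # root of x's tree, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    accepted, total = [], 0
    for weight, u, v in sorted(edges):   # edges in increasing cost order
        ru, rv = find(u), find(v)
        if ru != rv:                     # different trees: accepting (u,v) forms no cycle
            parent[ru] = rv              # Union the two trees
            accepted.append((u, v))
            total += weight
            if len(accepted) == n - 1:   # |V| - 1 edges accepted: done
                break
    return total, accepted

edges = [(1, 1, 3), (3, 3, 4), (4, 4, 5), (1, 2, 5), (1, 2, 6), (2, 5, 6), (10, 1, 2)]
total, tree = kruskal_mst(6, edges)
print(total)  # 10
```

Edge (5,6) of weight 2 is examined but discarded, exactly as in Step 4 of the worked example below.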



Detecting Cycles
• If the edge to be added (u,v) is such that vertices u and v belong to the same tree, then by adding (u,v) you would form a cycle
• Therefore, to check, compute Find(u) and Find(v). If they are the same, discard (u,v)
• If they are different, Union(Find(u), Find(v))



Properties of trees in K’s algorithm
• Vertices in different trees are disjoint
• True at initialization and Union won’t modify the fact for remaining trees
• Trees form equivalence classes under the relation “is connected to”
• u connected to u (reflexivity)
• u connected to v implies v connected to u (symmetry)
• u connected to v and v connected to w implies a path from u to w so u
connected to w (transitivity)



K’s Algorithm Data Structures
• Adjacency list for the graph
• To perform the initialization of the data structures below
• Disjoint Set ADT’s for the trees (recall Up tree implementation of
Union-Find)
• Binary heap for edges



Example

[Figure: the example weighted graph on vertices 1–6 used in the following Kruskal steps.]






Initialization
Initially, a forest of 6 trees:
F = {{1},{2},{3},{4},{5},{6}}
Edges are in a heap (not shown)
[Figure: the six isolated vertices.]



Step 1

Select the edge with lowest cost: (2,5)
Find(2) = 2, Find(5) = 5
Union(2,5)
F = {{1},{2,5},{3},{4},{6}}
1 edge accepted



Step 2

Select the edge with lowest cost: (2,6)
Find(2) = 2, Find(6) = 6
Union(2,6)
F = {{1},{2,5,6},{3},{4}}
2 edges accepted



Step 3

Select the edge with lowest cost: (1,3)
Find(1) = 1, Find(3) = 3
Union(1,3)
F = {{1,3},{2,5,6},{4}}
3 edges accepted



Step 4

Select the edge with lowest cost: (5,6)
Find(5) = 2, Find(6) = 2, so 5 and 6 are in the same tree and adding (5,6) would form a cycle
Do nothing
F = {{1,3},{2,5,6},{4}}
3 edges accepted



Step 5
Select the edge with lowest cost: (3,4)
Find(3) = 1, Find(4) = 4
Union(1,4)
F = {{1,3,4},{2,5,6}}
4 edges accepted



Step 6
Select the edge with lowest cost: (4,5)
Find(4) = 1, Find(5) = 2
Union(1,2)
F = {{1,3,4,2,5,6}}
5 edges accepted: end
Total cost = 10
Although there is a unique minimum spanning tree in this example, this is not generally the case



Kruskal’s Algorithm Analysis
• Initialize forest O(n)
• Initialize heap O(m), m = |E|
• Loop performed m times
• In the loop one DeleteMin O(log m)
• Two Find, each O(log n)
• One Union (at most) O(1)
• So worst case O(m log m) = O(m log n)



Time Complexity Summary
• Recall that m = |E| = O(|V|²) = O(n²)
• Prim’s runs in O((n + m) log n)
• Kruskal runs in O(m log m) = O(m log n)
• In practice, Kruskal tends to run faster, since graphs are often not dense and not all edges need to be examined by the DeleteMin operations



Solve using both Prim’s and Kruskal’s algorithms
Knapsack problem
• A thief robbing a store finds n items; item i is worth vi rupees and weighs wi grams, where vi and wi are positive numbers. He wants to take as valuable a load as possible, but he can carry at most W grams in his knapsack (bag). Which items should he take?
Knapsack problem
There are two types of knapsack problem:
1) 0/1 knapsack problem: here the items may not be broken into smaller pieces, so the thief must decide either to take an item or to leave it (a binary choice). It cannot be solved efficiently by a greedy algorithm.
2) Fractional (general) knapsack problem: here the thief can take fractions of items, meaning that the items can be broken into smaller pieces, so the thief may carry a fraction xi of item i. This can be solved easily by a greedy algorithm.
If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the knapsack, then a profit of vi xi is earned. The objective is to obtain a filling of the knapsack that maximizes the total profit earned. Since the knapsack capacity is m, we require the total weight of all chosen objects to be at most m.
Formally the problem can be stated as:
Maximize ∑ vi xi ..........(1)
Subject to ∑ wi xi ≤ m ..........(2)
and 0 ≤ xi ≤ 1, 1 ≤ i ≤ n ..........(3)
The profits and weights are positive numbers. A feasible solution is any set (x1, x2, ..., xn) satisfying (2) and (3). An optimal solution is a feasible solution for which (1) is maximized.
Problem Definition: Knapsack Problem
Given n items:
• weights: w1, w2, ..., wn
• values: v1, v2, ..., vn
• a knapsack of capacity W
Find the most valuable subset of the items that fits into the knapsack.
Exhaustive-search efficiency: O(2^n)
Techniques: Knapsack Problem
No   Technique           Complexity
1    Brute force         O(n · 2^n)
2    Greedy algorithm    O(n log n)
Brute Force Approaches - General
A straightforward approach, usually based directly on the problem’s statement and the definitions of the concepts involved (generate and test).

Example: knapsack capacity W = 16

item   weight   value
1      2        ₹20
2      5        ₹30
3      10       ₹50
4      5        ₹10
Knapsack Problem by Exhaustive Search
Each subset can be represented by a binary string. Efficiency: Θ(2^n)

Subset      Total weight   Total value
{1}         2              ₹20
{2}         5              ₹30
{3}         10             ₹50
{4}         5              ₹10
{1,2}       7              ₹50
{1,3}       12             ₹70
{1,4}       7              ₹30
{2,3}       15             ₹80
{2,4}       10             ₹40
{3,4}       15             ₹60
{1,2,3}     17             not feasible
{1,2,4}     12             ₹60
{1,3,4}     17             not feasible
{2,3,4}     20             not feasible
{1,2,3,4}   22             not feasible

The optimal subset is {2,3}, with total weight 15 and total value ₹80.
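The table above can be generated mechanically. A brute-force sketch in Python that tries all 2^n subsets of the four items (W = 16):

```python
from itertools import combinations

items = {1: (2, 20), 2: (5, 30), 3: (10, 50), 4: (5, 10)}  # item: (weight, value)
W = 16

best_value, best_subset = 0, ()
for r in range(1, len(items) + 1):
    for subset in combinations(items, r):          # each subset = one binary choice
        weight = sum(items[i][0] for i in subset)
        value = sum(items[i][1] for i in subset)
        if weight <= W and value > best_value:     # feasible and better than best so far
            best_value, best_subset = value, subset

print(best_subset, best_value)  # (2, 3) 80
```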
Greedy Algorithms: Knapsack
There are three greedy strategies to solve the knapsack problem:
1. Max value (profit)
2. Min weight
3. Ratio between the values and weights (vi / wi)
Greedy Algorithms: Knapsack
1 - Max value (profit), W = 10

Given items:
item   weight   value
1      7        42
2      3        12
3      4        40
4      5        25

Sorted by value (nonincreasing):
item   weight   value
1      7        42
3      4        40
4      5        25
2      3        12

Greedy takes item 1 (weight 7); items 3 and 4 no longer fit; item 2 fits (7 + 3 = 10).
Solution: x1 = 1, x2 = 1, x3 = 0, x4 = 0; ∑ wi xi = 7 + 3 = 10; ∑ vi xi = 42 + 12 = 54
Greedy Algorithms: Knapsack
2 - Min weight, W = 10

Sorted by weight (nondecreasing):
item   weight   value
2      3        12
3      4        40
4      5        25
1      7        42

Greedy takes item 2 (weight 3) and item 3 (3 + 4 = 7); items 4 and 1 no longer fit.
Solution: x1 = 0, x2 = 1, x3 = 1, x4 = 0; ∑ wi xi = 3 + 4 = 7 ≤ 10; ∑ vi xi = 12 + 40 = 52
Greedy Algorithms: Knapsack
3 - Ratio between the values and weights (vi / wi), W = 10

Sorted by ratio (nonincreasing):
ratio v/w   item   weight   value
10          3      4        40
6           1      7        42
5           4      5        25
4           2      3        12

Greedy takes item 3 (weight 4); item 1 no longer fits; item 4 fits (4 + 5 = 9); item 2 no longer fits.
Solution: x1 = 0, x2 = 0, x3 = 1, x4 = 1; ∑ wi xi = 4 + 5 = 9 ≤ 10; ∑ vi xi = 40 + 25 = 65


Greedy Algorithms: Knapsack
• Step 1: Compute the value-to-weight ratios ri = vi/wi, i = 1, ..., n, for the items given.
• Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1. (Ties can be broken arbitrarily.)
• Step 3: Repeat the following operation until no item is left in the sorted list: if the current item on the list fits into the knapsack, place it in the knapsack and proceed to the next item; otherwise, just proceed to the next item.
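Steps 1–3 can be sketched as a 0/1 greedy in Python. This is an illustrative sketch; the item data is taken from the preceding W = 10 example.

```python
def greedy_knapsack_01(items, capacity):
    """0/1 greedy by value-to-weight ratio. items: list of (value, weight).
    Returns (total_value, chosen items in the order taken)."""
    # Steps 1-2: compute ratios and sort nonincreasingly
    order = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total_value, total_weight, chosen = 0, 0, []
    # Step 3: take each item that still fits; otherwise skip it
    for value, weight in order:
        if total_weight + weight <= capacity:
            chosen.append((value, weight))
            total_value += value
            total_weight += weight
    return total_value, chosen

items = [(42, 7), (12, 3), (40, 4), (25, 5)]  # (value, weight) pairs, W = 10
print(greedy_knapsack_01(items, 10))  # (65, [(40, 4), (25, 5)])
```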
Greedy Algorithms: Knapsack
Does this greedy algorithm always yield an optimal solution for the 0/1 problem? The answer, of course, is no: if it did, we would have a polynomial-time algorithm for an NP-hard problem. In fact, no finite upper bound on the accuracy of its approximate solutions can be given either.

The Fractional Knapsack Problem
• Given: a set S of n items, with each item i having
  • vi: a positive benefit (value)
  • wi: a positive weight
• Goal: choose items with maximum total benefit (value) but with weight at most W.
• If we are allowed to take fractional amounts, then this is the fractional knapsack problem.
• In this case, we let xi denote the amount we take of item i
• Objective: maximize ∑_{i∈S} vi (xi / wi)
• Constraint: ∑_{i∈S} xi ≤ W, with 0 ≤ xi ≤ wi
The Fractional Knapsack Algorithm
• Greedy choice: keep taking the item with the highest benefit-to-weight ratio, since
  ∑_{i∈S} vi (xi / wi) = ∑_{i∈S} (vi / wi) xi

Algorithm fractionalKnapsack(S, W)
Input: set S of items with benefit vi and weight wi; max. weight W
Output: amount xi of each item i, maximizing the benefit with total weight at most W

for each item i in S
    xi ← 0
    ri ← vi / wi          {value-to-weight ratio}
w ← 0                     {total weight}
while w < W
    remove the item i with highest ri
    xi ← min{wi, W − w}
    w ← w + xi
The Fractional Knapsack Algorithm
• Running time: given a collection S of n items, such that each item i has a benefit vi and weight wi, we can construct a maximum-benefit subset of S, allowing for fractional amounts, with total weight at most W in O(n log n) time.
  • Use a heap-based priority queue to store S
  • Removing the item with the highest ratio takes O(log n) time
  • In the worst case, we need to remove all items
• Correctness: suppose there is a better solution than the greedy one
  • Then there is an item i with xi < wi and a chosen item j with xj > 0 such that vi/wi > vj/wj
  • If we substitute some of j with i, namely an amount min{wi − xi, xj}, we get a better solution
  • So any solution that deviates from the greedy choice can be improved; thus there is no better solution than the greedy one
Fractional Knapsack Example
Number of objects (n): 7; knapsack capacity (W): 15

Object   X1   X2   X3   X4   X5   X6   X7
Weight   2    3    5    7    1    4    1
Profit   10   5    15   7    6    18   3

Step 1: Calculate the profit-to-weight ratio for each object.

Object          X1   X2     X3   X4   X5   X6    X7
Profit/Weight   5    1.67   3    1    6    4.5   3

Step 2: Sort the objects by profit-to-weight ratio in descending order.

Object          X5   X1   X6    X3   X7   X2     X4
Weight          1    2    4     5    1    3      7
Profit          6    10   18    15   3    5      7
Profit/Weight   6    5    4.5   3    3    1.67   1

Step 3: Fill the knapsack with the sorted items, taking fractions if necessary (capacity = 15).
1. Take X5 (weight = 1, profit = 6): remaining capacity = 15 − 1 = 14; total profit = 6
2. Take X1 (weight = 2, profit = 10): remaining capacity = 14 − 2 = 12; total profit = 6 + 10 = 16
3. Take X6 (weight = 4, profit = 18): remaining capacity = 12 − 4 = 8; total profit = 16 + 18 = 34
4. Take X3 (weight = 5, profit = 15): remaining capacity = 8 − 5 = 3; total profit = 34 + 15 = 49
5. Take X7 (weight = 1, profit = 3): remaining capacity = 3 − 1 = 2; total profit = 49 + 3 = 52
6. Now we reach X2 (weight = 3, profit = 5). The remaining capacity is 2, so we can only take a fraction of X2:
   • fraction to take = 2/3
   • profit = (2/3) × 5 ≈ 3.33
   • total profit = 52 + 3.33 ≈ 55.33

Step 4: Final result. The total profit from the fractional knapsack solution is ≈ 55.33.
Solve the following:

Problem 1 (W = 50):
Object   X1   X2   X3    X4
Weight   10   20   30    5
Profit   60   100  120   40

Problem 2 (W = 15):
Object   X1   X2   X3   X4   X5
Weight   3    4    5    6    2
Profit   25   40   50   35   20
