Design and Analysis of Algorithms
CHAPTER THREE: GREEDY ALGORITHMS
Greedy algorithms
 A greedy algorithm is a simple, intuitive algorithm that is used in optimization
problems.
 The algorithm makes the locally optimal choice at each step as it attempts to find the
overall optimal way to solve the entire problem.
 Greedy algorithms are quite successful in some problems, such as Huffman
encoding, which is used to compress data, or Dijkstra's algorithm, which is used to
find the shortest path through a graph.
 However, in many problems a greedy strategy does not produce an optimal
solution. For example, consider a greedy algorithm that seeks the path with the
largest sum through a tree of numbers by selecting the largest available number at
each step.
Cont…
 The greedy algorithm fails to find the largest sum, however, because it makes
decisions based only on the information available at each step, without regard to the
overall problem.
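The failure described above can be demonstrated in a short Python sketch (the tree values here are made up for illustration and do not come from the slides): a greedy walker that always steps to the larger child misses the best root-to-leaf sum that exhaustive search finds.

```python
# Tree of values in level order: level i holds 2**i numbers.
# Each root-to-leaf path picks one child (left or right) per level.
tree = [[7], [3, 12], [99, 2, 4, 6]]

def greedy_path_sum(tree):
    """Pick the larger child at every step (locally optimal choice)."""
    idx, total = 0, tree[0][0]
    for level in tree[1:]:
        left, right = level[2 * idx], level[2 * idx + 1]
        idx = 2 * idx if left >= right else 2 * idx + 1
        total += max(left, right)
    return total

def best_path_sum(tree, level=0, idx=0):
    """Exhaustively check every root-to-leaf path (globally optimal)."""
    value = tree[level][idx]
    if level == len(tree) - 1:
        return value
    return value + max(best_path_sum(tree, level + 1, 2 * idx),
                       best_path_sum(tree, level + 1, 2 * idx + 1))

print(greedy_path_sum(tree))  # 25: greedy takes 7 -> 12 -> 6
print(best_path_sum(tree))    # 109: the optimum is 7 -> 3 -> 99
```

The greedy walker commits to 12 at the second level and never sees the 99 hiding behind the smaller 3.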
Elements of Greedy Strategy
1. How to develop a greedy algorithm?
• Determine the optimal substructure of the problem.
• Develop a recursive solution.
• Show that if we make the greedy choice, then only one subproblem remains.
• Prove that it is always safe to make the greedy choice.
• Develop a recursive algorithm that implements the greedy strategy, then convert it to an iterative one.
2. Greedy-choice property
• In a greedy algorithm, when we are considering which choice to make, we make the choice that looks best in the
current problem, without considering the results from subproblems.
• A greedy strategy usually progresses in a top-down fashion, making one greedy choice after another, reducing
each given problem to a smaller one.
3. Optimal Substructure
• A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal
solutions to subproblems. This property is the key ingredient for assessing the applicability of dynamic
programming as well as greedy algorithms.
Activity selection problem
 Also known as activity scheduling.
 Activity scheduling is a very simple scheduling problem. We are given a set S = {1,
2, ......, n} of n activities that are to be scheduled to use some resource, where each
activity i must start at a given start time si and end at a given finish time fi.
 Because there is only one resource, and some start and finish times may overlap
(just as two lectures cannot be given in the same room at the same time), not all the
requests can be honored.
 We say that two activities i and j are non-interfering if their start–finish intervals do
not overlap; more formally, [si, fi) ∩ [sj, fj) = ∅. (Note that, by making the intervals
half-open, two consecutive activities are not considered to interfere.)
 The activity selection problem is to select a maximum-size set of mutually non-
interfering activities for use of the resource.
Cont…
Greedy_Activity_Selector(struct_arr[])
1. Sort struct_arr by fi in ascending order
2. n ← length[struct_arr]
3. listOfAct ← {struct_arr[0]}
4. j ← 0
5. for i ← 1 to n − 1
6.     if struct_arr[j].fi <= struct_arr[i].si
7.         then listOfAct ← listOfAct ∪ {struct_arr[i]}
8.             j ← i
9. return listOfAct
 What is the time complexity of greedy activity selection algorithm?
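As a minimal Python sketch of the pseudocode above (the activity list is a made-up example, not from the slides): sorting by finish time dominates, so the running time is O(n log n), after which the scan itself is O(n).

```python
def greedy_activity_selector(activities):
    """activities: list of (start, finish) pairs.
    Returns a maximum-size set of mutually non-interfering activities."""
    # The greedy choice: always take the compatible activity that finishes first.
    acts = sorted(activities, key=lambda a: a[1])
    selected = [acts[0]]
    last_finish = acts[0][1]
    for start, finish in acts[1:]:
        if start >= last_finish:  # half-open intervals: touching endpoints are OK
            selected.append((start, finish))
            last_finish = finish
    return selected

chosen = greedy_activity_selector(
    [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)])
print(chosen)  # [(1, 4), (5, 7), (8, 11)]
```

Here greedy is actually optimal: picking the earliest-finishing activity always leaves the most room for the rest.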
Knapsack problem
 The knapsack problem is the problem of finding an optimal subset of objects from a finite set
of objects. It is usually framed in terms of a thief robbing a store.
 The thief has a knapsack (bag) with some capacity, and the knapsack cannot hold more
weight than that capacity. So it is the thief's choice which items to put in the knapsack to fit
the capacity.
 Various situations arise when selecting items: the thief may end up with the knapsack filled
with costly items or cheap items, or the knapsack may have some space left vacant. We want
to solve this problem greedily.
 Mainly these problems are of two types:
1. 0 – 1 knapsack problem
2. Fractional knapsack problem
0-1 Knapsack problem
 The classical 0 – 1 knapsack problem is a famous optimization problem.
 A thief is robbing a store and finds n items which can be taken. The ith item is worth vi dollars
and weighs wi pounds, where vi and wi are integers. He wants to take as valuable a load as
possible, but has a knapsack that can only carry W total pounds.
 Which items should he take?
 (The reason this is called 0 – 1 knapsack is that each item must be left (0) or taken
entirely (1). It is not possible to take a fraction of an item or multiple copies of an item.)
 This optimization problem arises in industrial packing applications. For example, the 0 – 1
knapsack problem is applicable if you want to ship some subset of items on a truck of
limited capacity.
Cont…
 Steps to solve this problem greedily:
 Compute: the value-by-weight ratio for every item.
 Sort: the items based on these ratios in descending order.
 Add: the item with the highest value-by-weight ratio into the knapsack, as long as it fits.

 Note that the 0-1 knapsack problem cannot be solved optimally using the greedy strategy.
That is, we cannot guarantee the optimal solution for the 0-1 knapsack problem using the
greedy algorithmic strategy.
 Example: We have 3 items and a knapsack that can hold 50 pounds. Item 1 weighs 10 pounds and is
worth 60 dollars. Item 2 weighs 20 pounds and is worth 100 dollars. Item 3 weighs 30 pounds and is
worth 120 dollars. Determine the maximum profit we can make by taking the items in 0-1 knapsack
fashion.
Cont…
classicalKnapsack(knapCap, struct Item arr[], int n)
1. Sort the Item array on the basis of value/weight ratio, descending
2. curWeight ← 0
3. maxValue ← 0.0
4. for i ← 0 to n − 1:
5.     if (curWeight + arr[i].weight <= knapCap):
6.         curWeight += arr[i].weight
7.         maxValue += arr[i].value
8. return maxValue
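A small Python sketch of the greedy routine above, run on the 3-item example from the previous slide. A brute-force check over all subsets shows the greedy answer (160) misses the true optimum (220), which is exactly why greedy is not safe for 0-1 knapsack.

```python
from itertools import combinations

def greedy_01_knapsack(capacity, items):
    """items: list of (value, weight). Greedy by value/weight ratio."""
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    cur_weight, max_value = 0, 0
    for value, weight in items:
        if cur_weight + weight <= capacity:  # take the whole item or skip it
            cur_weight += weight
            max_value += value
    return max_value

def brute_force_01_knapsack(capacity, items):
    """Check every subset (exponential, but fine for 3 items)."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(v for v, _ in subset))
    return best

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight) from the slide
print(greedy_01_knapsack(50, items))       # 160: takes items 1 and 2, item 3 no longer fits
print(brute_force_01_knapsack(50, items))  # 220: items 2 and 3 are the real optimum
```

Greedy commits to item 1 (ratio 6) and item 2 (ratio 5), leaving only 20 pounds of space, so the 30-pound item 3 is locked out.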
Fractional Knapsack
 In contrast, in the fractional knapsack problem the setup is exactly the same as that of the 0 – 1
knapsack problem, but the thief is allowed to take any fraction of an item for the same fraction of the
weight and the same fraction of the value.
 So, you might think of each object as being a sack of gold, which you can partially empty out
before taking. As in the case of other greedy algorithms we have seen, the idea is to find the
right order in which to process items.
 Intuitively, it is good to have high value and bad to have high weight. This suggests that we
first sort the items according to some function that increases with value and decreases with
weight.
 There are a few choices that you might try here, but only one works. Let ρi = vi/wi denote the
value-per-pound ratio of item i. We sort the items in decreasing order of ρi, and add them in
this order.
 If the item fits, we take it all. At some point there may be an item that does not fit in the remaining
space. We take as much of this item as possible, thus filling the knapsack entirely.
Cont…
 Steps to solve this problem greedily:
 Compute: the value-by-weight ratio for every item.
 Sort: the items based on these ratios in descending order.
 Add: whole items, in order of highest value-by-weight ratio, until the knapsack can no longer
carry the next item as a whole.
 Fraction: lastly, add as much of the next item as the knapsack can carry by taking a fraction of
that item.
Cont…
fractionalKnapsack(knapCap, struct Item arr[], int n)
1. Sort the Item array on the basis of value/weight ratio, descending
2. curWeight ← 0
3. maxValue ← 0.0
4. for i ← 0 to n − 1:
5.     if (curWeight + arr[i].weight <= knapCap):
6.         curWeight += arr[i].weight
7.         maxValue += arr[i].value
8.     else:
9.         remainCap ← knapCap − curWeight
10.        maxValue += arr[i].value * remainCap / arr[i].weight
11.        break
12. return maxValue
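The same 3-item example, run through a minimal Python version of the fractional pseudocode above: taking two-thirds of item 3 fills the sack exactly and yields 240, better than any 0-1 selection. For the fractional variant this greedy strategy is optimal.

```python
def fractional_knapsack(capacity, items):
    """items: list of (value, weight). Greedy by ratio is optimal here."""
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    cur_weight, max_value = 0, 0.0
    for value, weight in items:
        if cur_weight + weight <= capacity:
            cur_weight += weight                  # whole item fits: take it all
            max_value += value
        else:
            remain = capacity - cur_weight
            max_value += value * remain / weight  # take a fraction; sack is now full
            break
    return max_value

print(fractional_knapsack(50, [(60, 10), (100, 20), (120, 30)]))  # 240.0
```

Items 1 and 2 go in whole (30 pounds, $160), then 20 of item 3's 30 pounds add 120 × 20/30 = $80 more.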
Minimum Spanning Tree
 In short, MST.
 Before defining what an MST is, we must be aware of what a spanning tree is!
 Given an undirected graph G = (V, E), a spanning tree of G is an acyclic subgraph that contains
all the vertices of G, but not necessarily all of its edges.
 A single graph can have many different spanning trees. A minimum spanning tree (MST), or
minimum-weight spanning tree, of a weighted, connected, undirected graph is a spanning tree with a
weight less than or equal to the weight of every other spanning tree.
 The weight of a spanning tree is the sum of the weights assigned to each edge of the spanning tree.
 While considering the minimum spanning tree, we first have to think of an edge-weighted graph. In
an edge-weighted graph G = (V, E) we associate a weight or cost with each edge through a weight
function w : E → R+.
 If there are n vertices in the graph, then the MST has (n − 1) edges.
Cont…
 Given an undirected graph G = (V, E), determine the list of all possible spanning trees.

 There are basically two greedy algorithmic methods by which we can find the MST of a
given weighted undirected graph. These are:
1. Kruskal’s MST algorithm
2. Prim’s MST algorithm
Kruskal’s MST algorithm
 Kruskal’s algorithm, in graph theory, finds a minimum spanning tree of a connected,
weighted, undirected graph.
 This means it finds a subset of the edges that forms a tree including every vertex, where the
total weight of all edges in the tree is minimized. If the graph is not connected, it finds a
minimum spanning forest.
 Steps in Kruskal’s MST algorithm:
1. Sort: all the edges in non-decreasing order of their weight.
2. Pick: the smallest edge and check whether this edge creates a cycle with the spanning tree
constructed so far. Discard the edge if it creates a cycle; otherwise add the edge to the spanning
tree list.
3. Repeat: step 2 until the spanning tree reaches (n − 1) edges, where n is the number of
vertices in the undirected graph G.
Kruskal’s MST algorithm

MST-KRUSKAL(G, w)
1. MST = Ø
2. for each u ∈ V(G):
3.     MAKE-SET(u)
4. Sort all edges in E in non-decreasing order by weight w
5. for each edge (u, v) ∈ E, taken in that order:
6.     if FIND-SET(u) ≠ FIND-SET(v)
7.         MST = MST ∪ {(u, v)}
8.         UNION(FIND-SET(u), FIND-SET(v))
9. return MST

What is the time complexity of this algorithm?

• It uses a disjoint-set data structure to maintain several disjoint sets of elements. Each set contains the vertices
in one tree of the current forest.
• The operation FIND-SET(u) returns a representative element from the set that contains u. Thus, we can
determine whether two vertices u and v belong to the same tree by testing whether FIND-SET(u) equals
FIND-SET(v).
• To combine trees, Kruskal’s algorithm calls the UNION procedure.
Cont…
 Kruskal’s algorithm above is designed on top of the disjoint-set data structure. A disjoint-set
data structure maintains a collection S = {S1, S2, …, Sk} of disjoint dynamic sets.
 This is because some applications, like Kruskal’s MST, involve grouping n distinct elements
into a collection of disjoint sets. We identify each set by a representative, which is some
member of the set.
 Letting x denote an object, we wish to support the following operations:
1. MAKE-SET(x): creates a new set whose only member (and thus representative) is
x; we require that x not already belong to any other set.
2. UNION(x, y): unites the dynamic sets that contain x and y, say Sx and Sy, into a new set
that is the union of these two sets. We assume that the two sets are disjoint prior to the
operation.
3. FIND-SET(x): returns a pointer to the representative of the (unique) set containing x.
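A compact Python sketch of MST-KRUSKAL built on the three disjoint-set operations just listed (the 4-vertex edge list is a made-up example; path compression and union by rank are the standard efficiency tricks):

```python
parent = {}
rank = {}

def make_set(x):
    parent[x] = x  # each element starts as its own representative
    rank[x] = 0

def find_set(x):
    if parent[x] != x:
        parent[x] = find_set(parent[x])  # path compression
    return parent[x]

def union(x, y):
    rx, ry = find_set(x), find_set(y)
    if rank[rx] < rank[ry]:              # union by rank: attach shorter under taller
        rx, ry = ry, rx
    parent[ry] = rx
    if rank[rx] == rank[ry]:
        rank[rx] += 1

def mst_kruskal(vertices, edges):
    """edges: list of (weight, u, v). Returns the list of MST edges."""
    for v in vertices:
        make_set(v)
    mst = []
    for w, u, v in sorted(edges):        # non-decreasing order by weight
        if find_set(u) != find_set(v):   # different trees: no cycle, safe to add
            mst.append((u, v, w))
            union(u, v)
    return mst

edges = [(1, 'a', 'b'), (4, 'a', 'c'), (3, 'b', 'c'), (2, 'c', 'd'), (5, 'b', 'd')]
print(mst_kruskal(['a', 'b', 'c', 'd'], edges))
# [('a', 'b', 1), ('c', 'd', 2), ('b', 'c', 3)]
```

With union by rank and path compression, the m FIND-SET/UNION calls cost nearly O(m), so sorting the edges, O(E log E), dominates the running time.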
Prim’s MST algorithm
 Prim’s algorithm has the property that the edges in the set A always form a single tree. The
tree starts from an arbitrary root vertex r and grows until it spans all the vertices in V.
Each step adds to the tree A a light edge that connects A to an isolated vertex, one on
which no edge of A is incident.
 This strategy qualifies as greedy since at each step it adds to the tree an edge that
contributes the minimum amount possible to the tree’s weight.
 In order to implement Prim’s algorithm efficiently, we need a fast way to select a new edge
to add to the tree formed by the edges in A. In the pseudocode below, the connected graph G
and the root r of the minimum spanning tree to be grown are inputs to the algorithm.
 During execution of the algorithm, all vertices that are not in the tree reside in a min-
priority queue PQ based on a key attribute. For each vertex v, the attribute v.key is the
minimum weight of any edge connecting v to a vertex in the tree; by convention, v.key = ∞
if there is no such edge. The attribute v.π names the parent of v in the tree.
Prim’s MST algorithm

MST-PRIM(G, w, r)
1. for each u ∈ G.V
2.     u.key = ∞
3.     u.π = NIL
4. r.key = 0
5. for each v ∈ G.V
6.     ENQUEUE(PQ, v)
7. while PQ ≠ Ø
8.     u = EXTRACT-MIN(PQ)
9.     for each v ∈ G.Adj[u]
10.        if v ∈ PQ and w(u, v) < v.key
11.            v.π = u
12.            v.key = w(u, v)

What is the time complexity of this algorithm?
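A minimal Python sketch of MST-PRIM using `heapq` as the min-priority queue. This is a lazy-deletion variant: instead of DECREASE-KEY, we push duplicate entries and skip vertices already in the tree. The graph here is a made-up example, not from the slides.

```python
import heapq

def mst_prim(graph, root):
    """graph: dict vertex -> list of (neighbor, weight), undirected.
    Returns dict v -> parent of v in the MST (the pi attribute)."""
    parent = {root: None}
    in_tree = set()
    pq = [(0, root, None)]  # (key, vertex, parent candidate)
    while pq:
        key, u, pi = heapq.heappop(pq)  # EXTRACT-MIN
        if u in in_tree:
            continue                    # stale entry: u was already extracted
        in_tree.add(u)
        parent[u] = pi
        for v, w in graph[u]:
            if v not in in_tree:
                heapq.heappush(pq, (w, v, u))  # lazy stand-in for DECREASE-KEY
    return parent

graph = {
    'a': [('b', 1), ('c', 4)],
    'b': [('a', 1), ('c', 3), ('d', 5)],
    'c': [('a', 4), ('b', 3), ('d', 2)],
    'd': [('b', 5), ('c', 2)],
}
print(mst_prim(graph, 'a'))  # {'a': None, 'b': 'a', 'c': 'b', 'd': 'c'}
```

With a binary heap the running time is O(E log V), matching the textbook analysis of Prim with a binary min-heap.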
Shortest path
 In a shortest-paths problem, we are given a weighted, directed graph G = (V, E) with weight function
w: E → R mapping edges to real-valued weights. The weight w(p) of path p = ⟨v0, v1, …, vk⟩ is the
sum of the weights of its constituent edges:

w(p) = ∑ i=1..k w(vi−1, vi)

 A shortest path from vertex u to vertex v is then defined as any path p with weight w(p) = δ(u, v),
where δ(u, v) denotes the minimum weight over all paths from u to v.
Variants of Shortest path
1. Single-source shortest-paths problem: given a graph G = (V, E), we want to find a shortest path
from a given source vertex s ∈ V to every vertex v ∈ V.
2. Single-destination shortest-paths problem: find a shortest path to a given destination vertex t from
each vertex v. By reversing the direction of each edge in the graph, we can reduce this problem to a
single-source problem.
3. Single-pair shortest-path problem: find a shortest path from u to v for given vertices u and v. If we
solve the single-source problem with source vertex u, we solve this problem also.
4. All-pairs shortest-paths problem: find a shortest path from u to v for every pair of vertices u and v.
Although we can solve this problem by running a single-source algorithm once from each vertex,
we usually can solve it faster.
Dijkstra’s Shortest path
 Dijkstra’s algorithm solves the single-source shortest-paths problem on a weighted, directed graph G =
(V, E) for the case in which all edge weights are non-negative.
 Dijkstra’s algorithm maintains a set S of vertices whose final shortest-path weights from the source s
have already been determined. The algorithm repeatedly selects the vertex u ∈ V with the minimum
shortest-path estimate, adds u to S, and relaxes all edges leaving u. In the following implementation, we
use a min-priority queue Q of vertices, keyed by their d values.
 The process of relaxing an edge (u, v) consists of testing whether we can improve the shortest path to
v found so far by going through u and, if so, updating v.d and v.π.

Dijkstra_Shortest_path(G, w, s)
1. INITIALIZE-SINGLE-SOURCE(G, s)
2. S = Ø, Q = G.V
3. while Q ≠ Ø
4.     u = EXTRACT-MIN(Q)
5.     S = S ∪ {u}
6.     for each vertex v ∈ G.Adj[u]
7.         RELAX(u, v, w)

RELAX(u, v, w)
1. if v.d > u.d + w(u, v)
2.     v.d = u.d + w(u, v)
3.     v.π = u
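A compact Python sketch of the pseudocode above, again using `heapq` with lazy deletion in place of EXTRACT-MIN and DECREASE-KEY (the weighted digraph is a made-up example):

```python
import heapq

def dijkstra(graph, s):
    """graph: dict u -> list of (v, weight), all weights non-negative.
    Returns the dict of final shortest-path weights d from source s."""
    d = {u: float('inf') for u in graph}  # INITIALIZE-SINGLE-SOURCE
    d[s] = 0
    done = set()                          # the set S of finished vertices
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)         # EXTRACT-MIN
        if u in done:
            continue                      # stale entry from an earlier relaxation
        done.add(u)
        for v, w in graph[u]:             # relax every edge leaving u
            if d[v] > du + w:             # RELAX(u, v, w)
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

graph = {
    's': [('t', 10), ('y', 5)],
    't': [('x', 1), ('y', 2)],
    'y': [('t', 3), ('x', 9), ('z', 2)],
    'x': [('z', 4)],
    'z': [('s', 7), ('x', 6)],
}
print(dijkstra(graph, 's'))  # {'s': 0, 't': 8, 'y': 5, 'x': 9, 'z': 7}
```

Note how t is first relaxed to 10 via the direct edge, then improved to 8 through y once y is extracted: exactly the RELAX step at work.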
Example: Dijkstra SP