Greedy Approach and Dynamic Programming

The document describes algorithms and techniques for dynamic programming, including coin changing, binomial coefficients, knapsack problems, optimal binary search trees, and Floyd's algorithm for all-pairs shortest paths. It explains the principle of optimality and how dynamic programming solves problems by breaking them down into overlapping subproblems that are solved only once and storing the results in tables.


OBJECTIVES

• To understand the concepts of algorithms and their efficiency
• To understand and apply algorithm analysis techniques
• To critically analyze the efficiency of alternative algorithmic solutions for the same problem
• To understand different algorithm design techniques
• To understand the limitations of algorithmic power
UNIT III
Dynamic programming – Principle of optimality – Coin changing problem – Computing a Binomial Coefficient – Floyd's algorithm – Multistage graph – Optimal Binary Search Trees – Knapsack Problem and Memory functions. Greedy Technique – Container loading problem – Prim's algorithm and Kruskal's Algorithm – 0/1 Knapsack problem – Huffman Trees
Dynamic programming
• Dynamic programming is a technique for
solving problems with overlapping
subproblems. Typically, these subproblems
arise from a recurrence relating a given
problem’s solution to solutions of its smaller
subproblems. Rather than solving overlapping
subproblems again and again, dynamic
programming suggests solving each of the
smaller subproblems only once and recording
the results in a table from which a solution to
the original problem can then be obtained
Principle of optimality

• An optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.
Coin-row problem
• Coin-row problem There is a row of n coins
whose values are some positive integers c1,
c2, . . . , cn, not necessarily distinct. The goal is
to pick up the maximum amount of money
subject to the constraint that no two coins
adjacent in the initial row can be picked up.
Coin-row problem
• Let F(n) be the maximum amount that can be picked
up from the row of n coins.
• To derive a recurrence for F(n), we partition all the
allowed coin selections into two groups:
– Those that include the last coin and those without
it.
• The largest amount we can get from the first group is
equal to cn + F(n − 2)—the value of the nth coin plus
the maximum amount we can pick up from the first n
− 2 coins.
• The maximum amount we can get from the second
group is equal to F(n − 1) by the definition of F(n).
• Thus F(n) = max(cn + F(n − 2), F(n − 1)) for n > 1, with F(0) = 0 and F(1) = c1.

F(2) = max(c2 + F(0), F(1))
F(3) = max(c3 + F(1), F(2))
F(4) = max(c4 + F(2), F(3))
F(5) = max(c5 + F(3), F(4))
F(6) = max(c6 + F(4), F(5))

• To find the coins composing an optimal solution, we need to backtrace the computations to see which of the two possibilities, cn + F(n − 2) or F(n − 1), produced the maximum.
• In the last application of the formula, it was the sum
c6 + F(4), which means that the coin c6 = 2 is a part of
an optimal solution.
• Moving to computing F(4), the maximum was produced
by the sum c4 + F(2), which means that the coin c4 = 10
is a part of an optimal solution as well.
• Finally, the maximum in computing F(2) was produced by F(1), implying that the coin c2 is not part of an optimal solution and the coin c1 = 5 is.
• Thus, the optimal solution is {c1, c4, c6}.
Analysis
• Using the CoinRow algorithm to find F(n), the largest amount of money that can be picked up, as well as the coins composing an optimal set, clearly takes O(n) time and O(n) space.
• This is by far superior to the alternative:
solving the problem by exhaustive search
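The recurrence F(n) = max(cn + F(n − 2), F(n − 1)) and the back-tracing step can be sketched in Python (a minimal illustration, not from the slides):

```python
def coin_row(coins):
    """Maximum amount pickable from a row of coins with no two
    adjacent coins taken, plus the coins of one optimal solution."""
    n = len(coins)
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = coins[0]
    for i in range(2, n + 1):
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    # Back-trace to recover an optimal set of coins.
    picked, i = [], n
    while i >= 1:
        if i == 1 or coins[i - 1] + F[i - 2] > F[i - 1]:
            picked.append(coins[i - 1])
            i -= 2
        else:
            i -= 1
    return F[n], picked[::-1]
```

On the lecture's instance with c1 = 5, c4 = 10, and c6 = 2 among coins (5, 1, 2, 10, 6, 2), this returns the total 17 and the set {c1, c4, c6}.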
Change-making problem
• Give change for amount n using the minimum number of coins of denominations d1 < d2 < . . . < dm, where d1 = 1.
• Let F(n) be the minimum number of coins whose values
add up to n; it is convenient to define F(0) = 0. The amount
n can only be obtained by adding one coin of
denomination dj to the amount n − dj for j = 1, 2, . . . , m
such that n ≥ dj .
• Therefore, we can consider all such denominations and
select the one minimizing F(n − dj ) + 1. Since 1 is a
constant, we can, of course, find the smallest F(n − dj ) first
and then add 1 to it.
Change-making problem
• The application of the algorithm to amount n = 6
and denominations 1, 3, 4
• The answer it yields is two coins. The time and
space efficiencies of the algorithm are obviously
O(nm) and O(n),
• To find the coins of an optimal solution, we need to
backtrace the computations to see which of the
denominations produced the minima
• For the instance considered, the last application of
the formula (for n = 6), the minimum was produced
by d2 = 3. The second minimum (for n = 6 − 3) was
also produced for a coin of that denomination. Thus,
the minimum-coin set for n = 6 is two 3’s.
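A Python sketch of the change-making DP, including the back-tracing that recovers the coins:

```python
def change_making(n, denominations):
    """Minimum number of coins adding up to n (d1 = 1 is assumed, so
    a solution always exists), plus one optimal list of coins."""
    INF = float("inf")
    F = [0] + [INF] * n
    for amount in range(1, n + 1):
        F[amount] = 1 + min(F[amount - d] for d in denominations if d <= amount)
    # Back-trace: at each amount, find a denomination achieving the minimum.
    coins, amount = [], n
    while amount > 0:
        for d in denominations:
            if d <= amount and F[amount - d] + 1 == F[amount]:
                coins.append(d)
                amount -= d
                break
    return F[n], coins
```

For the instance in the slides, change_making(6, [1, 3, 4]) yields two coins, both of denomination 3.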
Computing a Binomial Coefficient
• Computing a binomial coefficient is a non-optimization problem, but it can be solved using dynamic programming.
• Binomial coefficients are denoted C(n, k) and appear as the coefficients of a binomial expansion:
(a + b)^n = C(n, 0)a^n + ... + C(n, k)a^(n-k)b^k + ... + C(n, n)b^n
• The recursive relation:
• C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0
• C(n, 0) = C(n, n) = 1 ( Initial Condition)
Computing a Binomial Coefficient
• The dynamic programming algorithm constructs a table with rows 0, . . . , n and columns 0, . . . , k, with the first column and the diagonal filled out using the Initial Condition.
Construct the table:
Computing a Binomial Coefficient
• The table is then filled out iteratively, row by
row using the recursive relation.
Computing a Binomial Coefficient
• The cost of the algorithm is that of filling out the table, with addition as the basic operation.
• Because k ≤ n, the operation count splits into two parts: for rows i < k only the part of the row up to the diagonal needs to be filled out, while the remaining rows are filled out across the entire row (all k + 1 columns).
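The table-filling scheme can be sketched in Python:

```python
def binomial(n, k):
    """C(n, k) via the DP table C(i, j) = C(i-1, j-1) + C(i-1, j),
    filled row by row with C(i, 0) = C(i, i) = 1."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        # Row i has entries only up to column min(i, k).
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1          # initial conditions
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```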
The Knapsack Problem and Memory Functions
• Given n items of known weights w1, . . . , wn and values v1,
. . . , vn and a knapsack of capacity W, find the most
valuable subset of the items that fit into the knapsack.
• To design a dynamic programming algorithm, we need to
derive a recurrence relation that expresses a solution to an
instance of the knapsack problem in terms of solutions to
its smaller subinstances.
• Let us consider an instance defined by the first i items, 1≤ i
≤ n, with weights w1, . . . , wi, values v1, . . . , vi , and
knapsack capacity j, 1 ≤ j ≤ W. Let F(i, j) be the value of an
optimal solution to this instance, i.e., the value of the most
valuable subset of the first i items that fit into the knapsack
of capacity j
All subsets of the first i items that fit into the knapsack of capacity j can be divided into two categories:
1. Among the subsets that do not include the ith item, the
value of an optimal subset is, by definition, F(i − 1, j).
2. Among the subsets that do include the ith item
(hence, j − wi ≥ 0), an optimal subset is made up of this
item and an optimal subset of the first i − 1 items that
fits into the knapsack of capacity j − wi . The value of
such an optimal subset is vi+ F(i − 1, j − wi).
Thus F(i, j) = max(F(i − 1, j), vi + F(i − 1, j − wi)) if j − wi ≥ 0, and F(i, j) = F(i − 1, j) otherwise, with F(0, j) = 0 and F(i, 0) = 0.
The time efficiency and space efficiency of this algorithm are both in Θ(nW).
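A bottom-up Python sketch of this recurrence (the sample instance in the test is illustrative, not from the slides):

```python
def knapsack(weights, values, W):
    """Bottom-up DP: F[i][j] = best value using the first i items
    with capacity j; runs in Theta(nW) time and space."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            F[i][j] = F[i - 1][j]                      # skip item i
            if weights[i - 1] <= j:                    # or take it
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    return F[n][W]
```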
Memory Functions
• The direct top-down approach to finding a solution to
such a recurrence leads to an algorithm that solves
common subproblems more than once and hence is
very inefficient
• The classic dynamic programming approach, on the
other hand, works bottom up: it fills a table with
solutions to all smaller subproblems, but each of them
is solved only once.
• The goal is to get a method that solves only
subproblems that are necessary and does so only
once. Such a method exists; it is based on using
memory functions.
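A minimal Python sketch of the memory-function (top-down, memoized) version of the knapsack recurrence, where −1 marks a not-yet-computed table entry:

```python
def knapsack_memo(weights, values, W):
    """Top-down knapsack with a memory function: each subproblem
    F(i, j) that is actually needed is computed once and cached."""
    F = [[-1] * (W + 1) for _ in range(len(weights) + 1)]

    def mf(i, j):
        if i == 0 or j == 0:
            return 0
        if F[i][j] < 0:                      # not yet computed
            without = mf(i - 1, j)
            if weights[i - 1] <= j:
                with_i = values[i - 1] + mf(i - 1, j - weights[i - 1])
                F[i][j] = max(without, with_i)
            else:
                F[i][j] = without
        return F[i][j]

    return mf(len(weights), W)
```

It returns the same values as the bottom-up version while solving only the subproblems the recursion actually reaches.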
Optimal Binary Search Trees
• A binary search tree is one of the most
important data structures in computer
science.
• One of its principal applications is to
implement a dictionary, a set of elements
with the operations of searching, insertion,
and deletion.
• If probabilities of searching for elements of a
set are known, an optimal binary search tree
for which the average number of comparisons
in a search is the smallest can be constructed.
• Consider four keys A, B, C, and D to be searched for
with probabilities 0.1, 0.2, 0.4, and 0.3, respectively.
• The average number of comparisons in a successful search in the first of these trees is
0.1·1 + 0.2·2 + 0.4·3 + 0.3·4 = 2.9,
• and for the second one it is
0.1·2 + 0.2·1 + 0.4·2 + 0.3·3 = 2.1.
• So let a1, . . . , an be distinct keys ordered from
the smallest to the largest and let p1, . . . , pn be
the probabilities of searching for them. Let C(i, j)
be the smallest average number of comparisons
made in a successful search in a binary search
tree Tij made up of keys ai, . . . , aj , where i, j are
some integer indices, 1≤ i ≤ j ≤ n.
• Following the classic dynamic programming
approach, we will find values of C(i, j) for all
smaller instances of the problem, and then we
find C(1, n).
Dynamic programming
• Consider all possible ways to choose a root ak among the keys ai, . . . , aj. For such a binary search tree, the root contains the key ak, the left subtree T(i, k−1) contains the keys ai, . . . , ak−1 optimally arranged, and the right subtree T(k+1, j) contains the keys ak+1, . . . , aj, also optimally arranged.
Dynamic programming

• C(i, i − 1) = 0 for 1 ≤ i ≤ n + 1, which can be interpreted as the number of comparisons in the empty tree.
• C(i, i) = pi for 1 ≤ i ≤ n, as it should be for a one-node binary search tree containing ai.
[Figure: the optimal binary search tree for this instance has root C, a left subtree with root B and left child A, and right child D.]
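A Python sketch of the classic DP for C(i, j), assuming the standard recurrence that the base cases above plug into: C(i, j) = min over roots k in i..j of [C(i, k−1) + C(k+1, j)] plus the sum of pi, . . . , pj:

```python
def optimal_bst(p):
    """Smallest average number of comparisons in a successful search
    for a BST on keys a1..an with search probabilities p[0..n-1]."""
    n = len(p)
    probs = [0.0] + list(p)                    # probs[1..n], 1-based
    # Rows 1..n+1 and columns 0..n so that C[i][i-1] = 0 is representable.
    C = [[0.0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = probs[i]                     # one-node tree
    for d in range(1, n):                      # subproblem size j - i = d
        for i in range(1, n - d + 1):
            j = i + d
            total = sum(probs[i:j + 1])
            C[i][j] = min(C[i][k - 1] + C[k + 1][j]
                          for k in range(i, j + 1)) + total
    return C[1][n]
```

For the four keys above with probabilities 0.1, 0.2, 0.4, 0.3, the optimal average is 1.7 comparisons.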
Floyd’s Algorithms- All-pairs shortest-paths
problem
• It is convenient to record the lengths of shortest paths
in an n × n matrix D called the distance matrix: the
element dij in the ith row and the jth column of this
matrix indicates the length of the shortest path from
the ith vertex to the jth vertex.
• Floyd’s algorithm computes the distance matrix of a
weighted graph with n vertices through a series of n × n
matrices:
Floyd’s Algorithms
The element d(k)ij in the ith row and the jth column of matrix D(k) (i, j = 1, 2, . . . , n; k = 0, 1, . . . , n) is equal to the length of the shortest path among all paths from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
• D(0) is simply the weight matrix of the graph.
• The last matrix in the series, D(n), contains the lengths of the shortest paths among all paths that can use all n vertices as intermediates.
• We can compute D(k) from D(k−1) using the recurrence
d(k)ij = min{ d(k−1)ij , d(k−1)ik + d(k−1)kj }.
Floyd’s Algorithms
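A compact Python version of the algorithm, applying the update d(k)ij = min(d(k−1)ij, d(k−1)ik + d(k−1)kj) in place; absent edges are represented by float('inf'):

```python
def floyd(W):
    """All-pairs shortest paths.  W is the n x n weight matrix with
    float('inf') for missing edges; returns the distance matrix D(n)."""
    n = len(W)
    D = [row[:] for row in W]        # D(0) is the weight matrix
    for k in range(n):               # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```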
Greedy Technique
• A greedy algorithm constructs a solution to a problem by making a sequence of choices. At each decision point, the algorithm makes the choice that seems best at the moment. This heuristic strategy does not always produce an optimal solution.
Greedy-choice property
• Greedy-choice property: we can assemble a globally optimal solution by making locally optimal (greedy) choices. In other words, when we are considering which choice to make, we make the choice that looks best in the current problem, without considering results from subproblems.
Huffman Trees and Codes
• Encoding a text that comprises symbols from
some n-symbol alphabet by assigning to each
of the text’s symbols some sequence of bits
called the codeword.
• For example, we can use a fixed-length
encoding that assigns to each symbol a bit
string of the same length m (m ≥ log2 n). This is
exactly what the standard ASCII code does.
• One way of getting a coding scheme that yields
a shorter bit string on the average is based on
the old idea of assigning shorter codewords to
more frequent symbols and longer codewords
to less frequent symbols.
Problem with Variable-Length Encoding
• How can we tell how many bits of an encoded text represent the first symbol?
Solution: use a prefix-free code (prefix code).
• In a prefix code, no codeword is a prefix of a codeword
of another symbol.
• Hence, with such an encoding,
– we can simply scan a bit string until we get the first
group of bits that is a codeword for some symbol,
– replace these bits by this symbol,
– and repeat this operation until the bit string’s end is
reached.
Huffman’s algorithm
• Step 1 : Initialize n one-node trees and label them
with the symbols of the alphabet given. Record the
frequency of each symbol in its tree’s root to
indicate the tree’s weight.
• Step 2 : Repeat the following operation until a
single tree is obtained. Find two trees with the
smallest weight. Make them the left and right
subtree of a new tree and record the sum of their
weights in the root of the new tree as its weight.
• Step 3: All the left edges are labeled by 0 and all the
right edges are labeled by 1. The codeword of a symbol
can then be obtained by recording the labels on the
simple path from the root to the symbol’s leaf.
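The three steps can be sketched in Python with a min-heap of trees; the frequencies used in the test below are illustrative sample values, not taken from the slides:

```python
import heapq

def huffman_codes(frequencies):
    """Build a Huffman tree from {symbol: frequency} and return the
    codeword table.  Repeatedly merge the two lightest trees."""
    # Heap entries: (weight, tie-breaker, tree); a tree is a symbol
    # (leaf) or a (left, right) pair (internal node).
    heap = [(w, i, sym)
            for i, (sym, w) in enumerate(sorted(frequencies.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")   # left edges labeled 0
            walk(tree[1], prefix + "1")   # right edges labeled 1
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes
```

Tie-breaking between equal weights may vary, but the codeword lengths and the prefix-free property are always as Huffman's algorithm guarantees.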
Huffman Tree and Code (example)

Encode: CAB → 0011100
Decode: 011010011100 → D _ C A B
Prim’s Algorithm
• A spanning tree of an undirected connected
graph is its connected acyclic subgraph that
contains all the vertices of the graph.
• If such a graph has weights assigned to its
edges, a minimum spanning tree is its
spanning tree of the smallest weight, where the
weight of a tree is defined as the sum of the
weights on all its edges.
• The minimum spanning tree problem is the
problem of finding a minimum spanning tree for
a given weighted connected graph.
Algorithm
Analysis
• If a graph is represented by its adjacency lists and
the priority queue is implemented as a min-heap, the
running time of the algorithm is in O(|E| log |V |).
• This is because the algorithm performs |V| − 1
deletions of the smallest element and makes |E|
verifications and, possibly, changes of an element’s
priority in a min-heap of size not exceeding |V |.
Each of these operations, as noted earlier, is an O(log |V|) operation. Hence, the running time of this implementation of Prim's algorithm is in O(|E| log |V|).
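A Python sketch of Prim's algorithm; for simplicity this "lazy" variant keeps candidate edges in the min-heap instead of performing decrease-key on vertex priorities, which still runs within the O(|E| log |V|) bound:

```python
import heapq

def prim(graph, start):
    """Prim's algorithm over an adjacency-list graph
    {u: [(v, weight), ...]}.  Returns the MST as a list of edges."""
    in_tree = {start}
    mst = []
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)    # lightest edge leaving the tree
        if v in in_tree:
            continue                     # stale entry: v already reached
        in_tree.add(v)
        mst.append((u, v, w))
        for x, wx in graph[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return mst
```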
Kruskal’s algorithm
• Kruskal's algorithm looks at a minimum spanning tree of a weighted connected graph G = (V, E) as an acyclic subgraph with |V| − 1 edges for which the sum of the edge weights is the smallest.
• The algorithm begins by sorting the graph's edges in nondecreasing order of their weights. Then, starting with the empty subgraph, it scans this sorted list, adding the next edge on the list to the current subgraph if such an inclusion does not create a cycle and simply skipping the edge otherwise.
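A Python sketch of Kruskal's algorithm; cycle detection uses a simple union-find (disjoint-set) structure, an implementation detail not spelled out in the slides:

```python
def kruskal(num_vertices, edges):
    """Kruskal's algorithm: scan edges (u, v, w) in nondecreasing
    weight order, keeping an edge unless it would close a cycle."""
    parent = list(range(num_vertices))

    def find(x):                            # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            if len(mst) == num_vertices - 1:
                break
    return mst
```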
