Design Technique Part-2

Contents:

1. Dynamic Programming Approach
2. LCS
3. Backtracking Algorithm
4. Greedy Approach
5. Activity Selection Problem
6. Knapsack Problems

DYNAMIC PROGRAMMING APPROACH

THE STRUCTURE OF AN OPTIMAL PARENTHESIZATION

Let us adopt the notation Ai..j for the matrix that results from evaluating the
product Ai Ai+1 … Aj. An optimal parenthesization of the product A1 A2 … An
splits the product between Ak and Ak+1 for some integer k in the range
1 ≤ k < n, i.e., for some value of k, we first compute the matrices A1..k and
Ak+1..n and then multiply them together to produce the final product A1..n. The
cost of this parenthesization is the cost of computing the matrix A1..k, plus the
cost of computing Ak+1..n, plus the cost of multiplying them together.

Let m[i, j] be the minimum number of scalar multiplications needed to compute
the matrix Ai..j; the cost of a cheapest way to compute A1..n would thus be m[1, n].

We can define m[i, j] recursively as follows:

If i = j, the chain consists of just one matrix Ai..i = Ai, so no scalar multiplications
are necessary to compute the product. Thus m[i, i] = 0 for i = 1, 2, 3, …, n.

To compute m[i, j] when i < j, let us assume that the optimal parenthesization
splits the product Ai Ai+1 … Aj between Ak and Ak+1, where i ≤ k < j. Then m[i, j]
is equal to the minimum cost of computing the subproducts Ai..k and Ak+1..j, plus the
cost of multiplying them together. Since multiplying the matrices Ai..k and
Ak+1..j takes pi-1 pk pj scalar multiplications, we obtain

m[i, j] = m[i, k] + m[k + 1, j] + pi-1 pk pj

There are only j – i possible values for k, namely k = i, i + 1, …, j – 1. Since the
optimal parenthesization must use one of these values for k, we need only check
them all to find the best. So the minimum cost of parenthesizing the product
Ai Ai+1 … Aj becomes

m[i, j] = 0 if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k + 1, j] + pi-1 pk pj } if i < j

To construct an optimal solution, let us define s[i, j] to be the value of k at
which we can split the product Ai Ai+1 … Aj to obtain an optimal parenthesization,
i.e., s[i, j] = k such that

m[i, j] = m[i, k] + m[k + 1, j] + pi-1 pk pj

Example: We are given the sequence of dimensions [4, 10, 3, 12, 20, 7]. The
matrices have sizes 4×10, 10×3, 3×12, 12×20, 20×7. We need to compute m[i, j]
for 1 ≤ i ≤ j ≤ 5. We know m[i, i] = 0 for all i.

We proceed, working away from the diagonal. We compute the optimal solution
for products of 2 matrices.

Now products of 3 matrices.

Now products of 4 matrices.

Now the product of all 5 matrices.
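
The bottom-up computation sketched above can be checked in code. The following is a minimal Python sketch of the standard bottom-up algorithm (function names are illustrative); for this example it reports a minimum cost of m[1, 5] = 1344 scalar multiplications with the parenthesization ((A1A2)((A3A4)A5)).

def matrix_chain_order(p):
    # p[i-1] x p[i] is the dimension of matrix Ai, for i = 1..n
    n = len(p) - 1
    # m[i][j]: minimum scalar multiplications for Ai..j; s[i][j]: optimal split k
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length, working away from the diagonal
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float('inf')
            for k in range(i, j):           # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

def parenthesize(s, i, j):
    # Rebuild the optimal parenthesization from the split table s
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + parenthesize(s, i, k) + parenthesize(s, k + 1, j) + ")"

m, s = matrix_chain_order([4, 10, 3, 12, 20, 7])
print(m[1][5])                 # 1344
print(parenthesize(s, 1, 5))   # ((A1A2)((A3A4)A5))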

COMPARISON WITH DYNAMIC PROGRAMMING

A bottom-up dynamic-programming algorithm usually outperforms a top-down memoized
algorithm by a constant factor, because there is no overhead for recursion and less
overhead for maintaining the table. In situations where not every subproblem must be
computed, memoization solves only those that are needed, whereas bottom-up dynamic
programming solves all the subproblems.

In summary, the matrix-chain multiplication problem can be solved in O(n³) time
by either a top-down memoized algorithm or a bottom-up dynamic-programming
algorithm.
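
For comparison, here is a top-down memoized sketch of the same recurrence; memoization is done with Python's functools.lru_cache, so only the subproblems actually reached by the recursion are solved.

from functools import lru_cache

def matrix_chain_memoized(p):
    n = len(p) - 1
    @lru_cache(maxsize=None)           # the cache plays the role of the table
    def m(i, j):
        if i == j:                     # a single matrix needs no multiplications
            return 0
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))
    return m(1, n)

print(matrix_chain_memoized([4, 10, 3, 12, 20, 7]))   # 1344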

LONGEST COMMON SUBSEQUENCE (LCS)

A subsequence of a given sequence is the given sequence with some elements
left out. Given two sequences X and Y, we say that a sequence Z is a common
subsequence of X and Y if Z is a subsequence of both X and Y.

In the longest-common-subsequence problem, we are given two sequences X =
(x1, x2, …, xm) and Y = (y1, y2, …, yn) and wish to find a maximum-length common
subsequence of X and Y. The LCS problem can be solved using dynamic
programming.

CHARACTERIZING A LONGEST COMMON SUBSEQUENCE

A brute-force approach to solving the LCS problem is to enumerate all subsequences
of X and check whether each one is also a subsequence of Y. Each subsequence of X
corresponds to a subset of the indices {1, 2, …, m} of X, so there are 2^m
subsequences of X, and this approach requires exponential time.

The LCS problem has an optimal-substructure property. Given a sequence X =
(x1, x2, …, xm), we define the ith prefix of X, for i = 0, 1, 2, …, m, as Xi = (x1, x2, …, xi).
For example, if X = (A, B, C, B, C, A, B, C) then X4 = (A, B, C, B).

THEOREM (OPTIMAL SUBSTRUCTURE OF AN LCS)

Let X = (x1, x2, …, xm) and Y = (y1, y2, …, yn) be sequences, and let Z = (z1, z2, …,
zk) be any LCS of X and Y.

1. If xm = yn, then zk = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.
3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.

The above theorem implies that there are either one or two subproblems to
examine when finding an LCS of X = (x1, x2, …, xm) and Y = (y1, y2, …, yn). If xm =
yn, we must find an LCS of Xm-1 and Yn-1. If xm ≠ yn, then we must solve two
subproblems: finding an LCS of Xm-1 and Y, and finding an LCS of X and Yn-1.
Whichever of these LCSs is longer is an LCS of X and Y. Each of these subproblems
in turn contains the subproblem of finding an LCS of Xm-1 and Yn-1.

Let us define c[i, j] to be the length of an LCS of the sequences Xi and Yj. If
either i = 0 or j = 0, one of the sequences has length 0, so the LCS has length
0. The optimal substructure of the LCS problem gives the recurrence formula

c[i, j] = 0 if i = 0 or j = 0
c[i, j] = c[i – 1, j – 1] + 1 if i, j > 0 and xi = yj
c[i, j] = max(c[i, j – 1], c[i – 1, j]) if i, j > 0 and xi ≠ yj

Example: Given two sequences X[1..m] and Y[1..n], find the longest
subsequence common to both. Note: not substring, subsequence.

So if X: A B C B D A B

Y: B D C A B A

the longest common subsequence turns out to be B C B A.

Here X = (A, B, C, B, D, A, B) and Y = (B, D, C, A, B, A),

m = length[X] = 7 and n = length[Y] = 6.

Now we fill in the m × n table with the value of c[i, j] and the appropriate arrow
for the value of b[i, j]. Initialize the top row and left column to 0, which takes
θ(m + n) time.

Work across the rows starting at the top. Any time xi = yj, fill in the diagonal
neighbor + 1 and mark the box with the arrow '↖'; otherwise fill in the box
with the max of the box above and the box to the left. That is, the entry c[i, j]
depends only on whether xi = yj and the values in entries c[i – 1, j], c[i, j – 1]
and c[i – 1, j – 1], which are computed before c[i, j]. The max length is in the lower
right-hand corner. For the non-matching entries, if c[i – 1, j] ≥ c[i, j – 1] then the
b[i, j] entry is '↑', otherwise '←'.
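
The table-filling procedure translates directly into code. Below is a minimal Python sketch (function name illustrative) that fills the c table and then traces back from the lower right-hand corner, following the same arrow rule as above; on X = ABCBDAB and Y = BDCABA it prints BCBA.

def lcs(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] holds the length of an LCS of the prefixes X[:i] and Y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]   # top row and left column stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:            # match: diagonal neighbor + 1
                c[i][j] = c[i - 1][j - 1] + 1
            else:                               # max of box above and box to the left
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Trace back to recover one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:        # the '↑' case
            i -= 1
        else:                                   # the '←' case
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))   # BCBA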

BACKTRACKING ALGORITHMS

Backtracking algorithms are based on a depth-first recursive search. A
backtracking algorithm:

➢ Tests to see if a solution has been found, and if so, returns it; otherwise
➢ For each choice that can be made at this point:
1. Make that choice
2. Recur
3. If the recursion returns a solution, return it
➢ If no choices remain, return failure.

Example: to color a map with no more than four colors:

Color(Country n):

If all countries have been colored (n > number of countries), return success; otherwise,

For each color c of the four colors,

If country n is not adjacent to a country that has been colored c,

Color country n with color c

Recursively color country n + 1

If successful, return success

If the loop exits (no color worked), return failure
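
The same routine can be written as a short Python sketch; the adjacency map and the country numbering below are illustrative assumptions, not part of the original pseudocode.

COLORS = ["red", "green", "blue", "yellow"]

def color_map(n, adjacent, coloring):
    if n >= len(adjacent):                 # all countries colored: success
        return True
    for c in COLORS:                       # each choice at this point
        # country n must not border a country already colored c
        if all(coloring.get(neighbor) != c for neighbor in adjacent[n]):
            coloring[n] = c                # make the choice
            if color_map(n + 1, adjacent, coloring):   # recur
                return True                # recursion found a solution
            del coloring[n]                # undo the choice and try the next color
    return False                           # no choices remain: failure

# Illustrative map: country 0 borders 1 and 2, country 1 borders 2, country 2 borders 3
adjacent = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
coloring = {}
print(color_map(0, adjacent, coloring), coloring)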

GREEDY ALGORITHMS

INTRODUCTION

A greedy algorithm solves problems by making the choice that seems best at the
particular moment. Many optimization problems can be solved using a greedy
algorithm. Some problems have no efficient exact solution, but a greedy algorithm
may provide a solution that is close to optimal. A greedy algorithm works if a
problem exhibits the following two properties:

1. Greedy-choice property: A globally optimal solution can be arrived at by making a
locally optimal choice. In other words, an optimal solution can be obtained by making
"greedy" choices.

2. Optimal substructure: Optimal solutions contain optimal subsolutions.

AN ACTIVITY-SELECTION PROBLEM

Our first example is the problem of scheduling a resource among several competing
activities. We shall find that a greedy algorithm provides a well-designed and simple
method for selecting a maximum-size set of mutually compatible activities.

Suppose S = {1, 2, …, n} is the set of n proposed activities. The activities share a
resource which can be used by only one activity at a time, e.g., a tennis court, a
lecture hall, etc. Each activity i has a start time si and a finish time fi, where si ≤ fi.
If selected, activity i takes place during the half-open time interval [si, fi).
Activities i and j are compatible if the intervals [si, fi) and [sj, fj) do not overlap
(i.e., i and j are compatible if si ≥ fj or sj ≥ fi). The activity-selection problem is to
select a maximum-size set of mutually compatible activities.

In this strategy we first select the activity with the earliest finish time and schedule
it. Then we skip all activities that are not compatible with it, select from the
remaining activities the one with the earliest finish time, and schedule it. This
process is repeated until all the activities have been considered. It can be observed
that the selection process becomes faster if we assume that the input activities are
sorted by increasing finish time: f1 ≤ f2 ≤ f3 ≤ … ≤ fn.

The running time of algorithm GREEDY-ACTIVITY-SELECTOR is O(n lg n), as the
sorting can be done in O(n lg n) time and there are O(1) operations per activity;
thus the total time is

O(n lg n) + n·O(1) = O(n lg n).

Example: Given 10 activities along with their start and finish times,

S = (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10)

si = (1, 2, 3, 4, 7, 8, 9, 9, 11, 12)

fi = (3, 5, 4, 7, 10, 9, 11, 13, 12, 14)

calculate a schedule in which the highest number of activities takes place.

Solution: The solution for the above activity scheduling problem using greedy strategy
is illustrated below.

Order the activities in increasing order of finish time: A1, A3, A2, A4, A6, A5, A7, A9, A8, A10.

Now, schedule A1

Next, schedule A3, as A1 and A3 are non-interfering

Next, skip A2, as it is interfering.

Next, schedule A4 as A1, A3 and A4 are non-interfering, then next, schedule A6 as A1,
A3, A4 and A6 are non-interfering.

Skip A5 as it is interfering.

Next, schedule A7 as A1, A3, A4, A6, A7 are non-interfering.

Next, schedule A9 as A1, A3, A4, A6, A7, and A9 are non-interfering.

Skip A8, as it is interfering.

Next, schedule A10 as A1, A3, A4, A6, A7, A9 and A10 are non-interfering. The final schedule is A1, A3, A4, A6, A7, A9, A10.
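
The schedule above can be reproduced with a short Python sketch of GREEDY-ACTIVITY-SELECTOR (function name illustrative): sort the activities by finish time, then repeatedly take the first activity compatible with the last one chosen.

def greedy_activity_selector(s, f):
    # Consider activities in increasing order of finish time
    order = sorted(range(len(s)), key=lambda i: f[i])
    chosen, last_finish = [], 0
    for i in order:
        if s[i] >= last_finish:        # compatible with everything chosen so far
            chosen.append(i + 1)       # record the 1-based activity number
            last_finish = f[i]
    return chosen

s = [1, 2, 3, 4, 7, 8, 9, 9, 11, 12]
f = [3, 5, 4, 7, 10, 9, 11, 13, 12, 14]
print(greedy_activity_selector(s, f))   # [1, 3, 4, 6, 7, 9, 10]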

KNAPSACK PROBLEMS

We want to pack n items in our luggage.

➢ The ith item is worth vi dollars and weighs wi pounds.
➢ Take as valuable a load as possible, but the total weight cannot exceed W pounds.
➢ vi, wi, and W are integers.

0-1 KNAPSACK PROBLEM

➢ Each item is either taken or not taken.
➢ We cannot take a fractional amount of an item or take an item more than once.

FRACTIONAL KNAPSACK PROBLEM

➢ Fractions of items can be taken rather than having to make a binary (0-1) choice for
each item.

Both exhibit the optimal-substructure property.

0-1 knapsack problem: Consider an optimal solution. If item j is removed from the load,
the remaining load must be the most valuable load weighing at most W – wj that can be
taken from the other n – 1 items.

Fractional knapsack: If weight w of item j is removed from the optimal load, the
remaining load must be the most valuable load weighing at most W – w that can be
taken from the other n – 1 items plus wj – w pounds of item j.

DIFFERENCE BETWEEN GREEDY AND DYNAMIC PROGRAMMING

Because the optimal-substructure property is exploited by both greedy and dynamic-
programming strategies, one might be tempted to generate a dynamic-programming
solution to a problem when a greedy solution suffices, or one might mistakenly think
that a greedy solution works when in fact a dynamic-programming solution is required.
The most important difference between greedy algorithms and dynamic programming is
that with greedy algorithms we don't solve every optimal subproblem. In some cases,
greedy algorithms can be used to produce sub-optimal solutions, that is, solutions which
aren't necessarily optimal but are perhaps very close.

In dynamic programming, we make a choice at each step, but the choice may depend on
the solutions to subproblems. In a greedy algorithm, we make whatever choice seems
best at the moment and then solve the subproblem arising after the choice is made. The
choice made by a greedy algorithm may depend on the choices made so far, but it cannot
depend on any future choices or on the solutions to subproblems. Thus, unlike dynamic
programming, which solves the subproblems bottom up, a greedy strategy usually
progresses in a top-down fashion, making one greedy choice after another, iteratively
reducing each given problem instance to a smaller one.

The fractional knapsack problem is solvable by the greedy strategy, whereas the 0-1
problem is not. To solve the fractional problem:

➢ Compute the value per pound vi / wi for each item.
➢ Obeying a greedy strategy, take as much as possible of the item with the greatest
value per pound.
➢ If the supply of that item is exhausted and we can still carry more, take as much
as possible of the item with the next greatest value per pound, and so forth, until we
cannot carry any more.
➢ After sorting the items by value per pound, the greedy algorithm runs in O(n lg n) time.

The 0-1 knapsack problem cannot be solved by the greedy strategy because the greedy
choice may be unable to fill the knapsack to capacity, and the empty space lowers the
effective value per pound of the load; we must compare the solution to the subproblem
in which the item is included with the solution to the subproblem in which the item is
excluded before we can make the choice.
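
That include-or-exclude comparison is exactly what a dynamic-programming solution performs at each step. Here is a minimal Python sketch (function name illustrative), run on the example data that follows; it finds a 0-1 optimum of 260 (items I3 and I5), which is less than the fractional optimum of 270 computed below.

def knapsack_01(w, v, W):
    n = len(w)
    # dp[i][c]: best value using the first i items with capacity c
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            dp[i][c] = dp[i - 1][c]                 # exclude item i
            if w[i - 1] <= c:                       # include item i, if it fits
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - w[i - 1]] + v[i - 1])
    return dp[n][W]

print(knapsack_01([5, 10, 20, 30, 40], [30, 20, 100, 90, 160], 60))   # 260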

Example: Consider 5 items along with their respective weights and values.

I = (I1, I2, I3, I4, I5)

w = (5, 10, 20, 30, 40)

v = (30, 20, 100, 90, 160)

The capacity of knapsack W = 60. Find the solution to the fractional knapsack problem.

Solution: Initially, take the value-per-weight ratio for each item, i.e., pi = vi / wi:

p = (30/5, 20/10, 100/20, 90/30, 160/40) = (6, 2, 5, 3, 4)

Now arrange the items in decreasing order of pi: I1 (6), I3 (5), I5 (4), I4 (3), I2 (2).

Now fill the knapsack according to the decreasing value of pi.

First we choose item I1, whose weight is 5, then we choose item I3, whose weight is 20.
Now the total weight in the knapsack is 5 + 20 = 25.

Now, the next item is I5 and its weight is 40, but only 60 – 25 = 35 pounds of capacity
remain. So we choose a fractional part of it, i.e., 35/40 of item I5, which contributes a
value of 160 × (35/40) = 140.


Thus the maximum value is

= 30 + 100 + 140 = 270.
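
The whole calculation can be checked with a short greedy sketch in Python (function name illustrative):

def fractional_knapsack(w, v, W):
    # Consider items in decreasing order of value per pound
    order = sorted(range(len(w)), key=lambda i: v[i] / w[i], reverse=True)
    total, remaining = 0.0, W
    for i in order:
        if remaining == 0:
            break
        take = min(w[i], remaining)    # whole item, or whatever fraction fits
        total += v[i] * take / w[i]
        remaining -= take
    return total

print(fractional_knapsack([5, 10, 20, 30, 40], [30, 20, 100, 90, 160], 60))   # 270.0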
