Design Techniques Part-2
MATRIX-CHAIN MULTIPLICATION
Let us adopt the notation Ai..j for the matrix that results from evaluating the product Ai Ai+1 … Aj. An optimal parenthesization of the product A1 A2 … An splits the product between Ak and Ak+1 for some integer k in the range 1 ≤ k < n, i.e., for some value of k, we first compute the matrices A1..k and Ak+1..n and then multiply them together to produce the final product A1..n. The cost of this parenthesization is the cost of computing A1..k, plus the cost of computing Ak+1..n, plus the cost of multiplying them together.
To compute m[i, j] when i < j, let us assume that the optimal parenthesization splits the product Ai Ai+1 … Aj between Ak and Ak+1, where i ≤ k < j. Then m[i, j] is equal to the minimum cost of computing the subproducts Ai..k and Ak+1..j, plus the cost of multiplying them together. Since multiplying the matrices Ai..k and Ak+1..j takes pi-1 pk pj scalar multiplications, we obtain
m[i, j] = m[i, k] + m[k + 1, j] + pi-1 pk pj.
There are only j – i possible values for k, namely k = i, i + 1, …, j – 1. Since the optimal parenthesization must use one of these values for k, we need only check them all to find the best. So the minimum cost of parenthesizing the product Ai Ai+1 … Aj becomes
m[i, j] = 0 if i = j,
m[i, j] = min { m[i, k] + m[k + 1, j] + pi-1 pk pj : i ≤ k < j } if i < j.
Example: We are given the dimension sequence [4, 10, 3, 12, 20, 7]. The matrices have sizes 4×10, 10×3, 3×12, 12×20, 20×7. We need to compute m[i, j] for 1 ≤ i ≤ j ≤ 5. We know m[i, i] = 0 for all i.
We proceed, working away from the diagonal. First we compute the optimal solution for products of 2 matrices.
Now products of 3 matrices, then 4, and finally the full product of all 5 matrices.
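The whole table can be produced mechanically. Below is a minimal bottom-up sketch of this dynamic program in Python (function and variable names are our own); for the dimension sequence [4, 10, 3, 12, 20, 7] it reports a minimum of 1344 scalar multiplications, obtained by the split (A1 A2)((A3 A4) A5).

def matrix_chain_order(p):
    # p holds the dimensions: matrix Ai is p[i-1] x p[i], for i = 1..n
    n = len(p) - 1
    # m[i][j] = minimum scalar multiplications needed to compute Ai..j
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j] = best split point k
    for length in range(2, n + 1):              # work away from the diagonal
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # try every split into Ai..k and Ak+1..j
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

m, s = matrix_chain_order([4, 10, 3, 12, 20, 7])
print(m[1][5])   # 1344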
LONGEST COMMON SUBSEQUENCE (LCS)
Let X = (x1, x2, …, xm) and Y = (y1, y2, …, yn) be sequences, and let Z = (z1, z2, …, zk) be any LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.
3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.
The above theorem implies that there are either one or two subproblems to examine when finding an LCS of X = (x1, x2, …, xm) and Y = (y1, y2, …, yn). If xm = yn, we must find an LCS of Xm-1 and Yn-1. If xm ≠ yn, then we must solve two subproblems: finding an LCS of Xm-1 and Y, and finding an LCS of X and Yn-1. Whichever of these two LCSs is longer is an LCS of X and Y. Both of these subproblems share the subproblem of finding an LCS of Xm-1 and Yn-1.
Let us define c[i, j] to be the length of an LCS of the prefixes Xi and Yj. If either i = 0 or j = 0, one of the sequences has length 0, so the LCS has length 0. The optimal substructure of the LCS problem gives the recurrence formula
c[i, j] = 0 if i = 0 or j = 0,
c[i, j] = c[i – 1, j – 1] + 1 if i, j > 0 and xi = yj,
c[i, j] = max(c[i – 1, j], c[i, j – 1]) if i, j > 0 and xi ≠ yj.
Example: Given two sequences X[1..m] and Y[1..n], find the longest subsequence common to both. Note: not substring, subsequence.
So if X: A B C B D A B
Y: B D C A B A
then m = 7 and n = 6.
Now we fill in the m × n table with the value of c[i, j] and the appropriate arrow for the value of b[i, j]. Initialize the top row and the left column to 0, which takes θ(m + n) time.
Work across the rows starting at the top. Any time xi = yj, fill in the diagonal neighbor + 1 and mark the box with the arrow '↖'; otherwise fill in the box with the max of the box above and the box to the left. That is, the entry c[i, j] depends only on whether xi = yj and on the values in entries c[i – 1, j] and c[i, j – 1], which are computed before c[i, j]. The maximum length is in the lower right-hand corner. When xi ≠ yj, if c[i – 1, j] ≥ c[i, j – 1] then the b[i, j] entry is '↑', otherwise it is '←'.
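Below is a minimal Python sketch of this table-filling procedure (names are our own). Rather than storing the b[i, j] arrows explicitly, it re-derives them from the same comparisons when walking back from the lower right-hand corner; for X = ABCBDAB and Y = BDCABA it reports length 4 with the LCS BCBA.

def lcs(x, y):
    m, n = len(x), len(y)
    # c[i][j] = length of an LCS of the prefixes x[:i] and y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1      # the diagonal arrow case
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]              # the up arrow case
            else:
                c[i][j] = c[i][j - 1]              # the left arrow case
    # walk the arrows back from c[m][n] to recover one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))   # (4, 'BCBA')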
BACKTRACKING ALGORITHMS
A backtracking algorithm:
➢ Tests to see if a solution has been found, and if so, returns it; otherwise
➢ For each choice that can be made at this point:
1. Make that choice
2. Recur
3. If the recursion returns a solution, return it
➢ If no choices remain, return failure.
Example: To color a map with no more than four colors:
If all countries have been colored (n > number of countries), return success; otherwise, for each of the four colors c that does not conflict with the already-colored neighbors of country n: color country n with c, recur on country n + 1, and if the recursion returns success, return success. If no color works, return failure.
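Below is a minimal Python sketch of this map-coloring backtracker; the neighbor map and all names in it are hypothetical, chosen only to illustrate the schema above.

def color_map(neighbors, colors=("red", "green", "blue", "yellow")):
    countries = list(neighbors)
    assignment = {}

    def solve(n):
        if n == len(countries):            # all countries colored: success
            return True
        country = countries[n]
        for c in colors:                   # each choice that can be made here
            if all(assignment.get(other) != c for other in neighbors[country]):
                assignment[country] = c    # make that choice
                if solve(n + 1):           # recur
                    return True            # the recursion found a solution
                del assignment[country]    # undo the choice and try the next
        return False                       # no choices remain: failure

    return assignment if solve(0) else None

# Hypothetical map: four countries arranged in a ring.
borders = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"]}
print(color_map(borders))   # {'A': 'red', 'B': 'green', 'C': 'red', 'D': 'green'}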
GREEDY ALGORITHMS
INTRODUCTION
Greedy algorithms solve problems by making the choice that seems best at the particular moment. Many optimization problems can be solved using a greedy algorithm. Some problems have no efficient exact solution, but a greedy algorithm may provide a solution that is close to optimal. A greedy algorithm works if a problem exhibits the following two properties:
1. Greedy choice property: A globally optimal solution can be arrived at by making locally optimal choices. In other words, an optimal solution can be obtained by making "greedy" choices.
2. Optimal substructure: An optimal solution to the problem contains within it optimal solutions to its subproblems.
AN ACTIVITY-SELECTION PROBLEM
Our first example is the problem of scheduling a resource among several competing activities. We shall find that a greedy algorithm provides a well-designed and simple method for selecting a maximum-size set of mutually compatible activities.
In this strategy we first select the activity with minimum duration (fi – si) and schedule it. Then we skip all activities that are not compatible with it, select from the remaining compatible activities the one having minimum duration, and schedule it. This process is repeated until all the activities have been considered. It can be observed that the process of selecting the activity becomes faster if we assume that the input activities are sorted in order of increasing finishing time: f1 ≤ f2 ≤ f3 ≤ … ≤ fn.
Example: Given 10 activities along with their start and finish times as
S = (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10)
Solution: The solution for the above activity scheduling problem using the greedy strategy is illustrated below.
First, schedule A1. Next, schedule A3 as A1 and A3 are non-interfering.
Next, schedule A4 as A1, A3 and A4 are non-interfering; then schedule A6 as A1, A3, A4 and A6 are non-interfering.
Skip A5 as it is interfering.
Next, schedule A7 as A1, A3, A4, A6 and A7 are non-interfering, and skip A8 as it is interfering.
Next, schedule A9 as A1, A3, A4, A6, A7 and A9 are non-interfering.
Next, schedule A10 as A1, A3, A4, A6, A7, A9 and A10 are non-interfering.
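Since the original start/finish table is not reproduced here, the sketch below uses hypothetical activity times. It implements the classic greedy rule of scanning activities in increasing finish-time order and keeping each one that is compatible with everything chosen so far.

def select_activities(activities):
    # each activity is a (name, start, finish) triple
    chosen = []
    last_finish = float("-inf")
    for name, start, finish in sorted(activities, key=lambda a: a[2]):
        if start >= last_finish:       # compatible with all chosen activities
            chosen.append(name)
            last_finish = finish
    return chosen

# Hypothetical (start, finish) times for illustration only.
acts = [("A1", 1, 3), ("A2", 2, 5), ("A3", 4, 7), ("A4", 6, 9),
        ("A5", 8, 10), ("A6", 9, 11), ("A7", 10, 12)]
print(select_activities(acts))   # ['A1', 'A3', 'A5', 'A7']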
KNAPSACK PROBLEMS
➢ In the 0-1 knapsack problem, each item must either be taken whole or left behind.
➢ In the fractional knapsack problem, fractions of items can be taken rather than having to make a binary (0-1) choice for each item.
Both problems exhibit optimal substructure.
0-1 knapsack problem: Consider an optimal solution. If item j is removed from the load, the remaining load must be the most valuable load weighing at most W – wj that can be taken from the other n – 1 items.
Fractional knapsack: If weight w of item j is removed from the optimal load, the remaining load must be the most valuable load weighing at most W – w that can be taken from the other n – 1 items plus wj – w of item j.
In some cases, greedy algorithms can be used to produce sub-optimal solutions, that is, solutions which aren't necessarily optimal but are perhaps very close.
In dynamic programming, we make a choice at each step, but the choice may depend on the solutions to sub-problems. In a greedy algorithm, we make whatever choice seems best at the moment and then solve the sub-problem arising after the choice is made. The choice made by a greedy algorithm may depend on the choices made so far, but it cannot depend on any future choices or on the solutions to sub-problems. Thus, unlike dynamic programming, which solves the sub-problems bottom up, a greedy strategy usually progresses in a top-down fashion, making one greedy choice after another, iteratively reducing each given problem instance to a smaller one.
The fractional knapsack problem is solvable by the greedy strategy, whereas the 0-1 problem is not. To solve the fractional problem, we first compute the value per pound vi/wi for each item, and then take as much as possible of the item with the greatest value per pound, followed by the next greatest, and so on until the knapsack is full.
The 0-1 knapsack problem cannot be solved by the greedy strategy because the greedy choice may be unable to fill the knapsack to capacity, and the empty space lowers the effective value per pound of the load. Here we must compare the solution to the sub-problem in which the item is included with the solution to the sub-problem in which the item is excluded before we can make the choice.
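For that reason the 0-1 variant is usually solved by dynamic programming instead. A minimal bottom-up sketch in Python follows (the names and sample data are our own assumptions):

def knapsack_01(weights, values, capacity):
    # dp[c] = best value achievable with capacity c using the items seen so far
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c],             # item excluded
                        dp[c - w] + v)     # item included
    return dp[capacity]

# Hypothetical weights paired with the values from the example below.
print(knapsack_01([5, 10, 20, 30, 40], [30, 20, 100, 90, 160], 60))   # 260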
Example: The values of five items are v = (30, 20, 100, 90, 160) and the capacity of the knapsack is W = 60. Find the solution to the fractional knapsack problem.
Solution: Initially, we order the items by value per unit weight.
First we choose item I1, whose weight is 5, then we choose item I3, whose weight is 20.
Now the total weight in the knapsack is 5 + 20 = 25.
Now, the next item is I5 and its weight is 40, but we want only 35 more. So we choose a fractional part of it, i.e., 35/40 of item I5. The total value obtained is 30 + 100 + (35/40) × 160 = 270.
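A minimal greedy sketch of the fractional computation in Python: the weights of I1, I3 and I5 (5, 20 and 40) come from the walkthrough above, while the weights assumed for I2 and I4 are hypothetical, since the original weight table is not reproduced here.

def fractional_knapsack(items, capacity):
    # take items in decreasing value-per-weight order, splitting the last one
    total, load = 0.0, []
    for name, weight, value in sorted(items, key=lambda it: it[2] / it[1],
                                      reverse=True):
        if capacity == 0:
            break
        take = min(weight, capacity)       # whole item, or the fraction that fits
        total += value * take / weight
        load.append((name, take))
        capacity -= take
    return total, load

items = [("I1", 5, 30), ("I2", 10, 20), ("I3", 20, 100),
         ("I4", 30, 90), ("I5", 40, 160)]
print(fractional_knapsack(items, 60))   # (270.0, [('I1', 5), ('I3', 20), ('I5', 35)])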