DAA Sem ANS
UNIT-I
1. a) Define an algorithm. Give the characteristics of an algorithm with advantages
and disadvantages. (B)
b) Explain asymptotic notations. (B)
2. Illustrate Merge sort algorithm with example.(M)
3. Explain Strassen's matrix multiplication and its formulae.(C)
4. a) Give the general procedure of divide and conquer
method.(B)
b) Simulate the Quick sort algorithm for the following
example: 10, 80, 90, 60, 30, 20. (B)
5. Write an algorithm for Binary search and illustrate with an example. (M)
• Algorithm Definition 1: An algorithm is a finite set of instructions that, if followed, accomplishes a particular task.
• Algorithm Definition 2: An algorithm is a step-by-step procedure for solving a given problem in a finite amount of time.
• Algorithms that are definite and effective are also called computational procedures.
• A program is the expression of an algorithm in a programming language.
Characteristics of an algorithm:
1. Input - zero or more quantities are externally supplied.
2. Output - at least one quantity is produced.
3. Definiteness - each instruction is clearly and unambiguously stated (defined).
4. Uniqueness - the results of each step are uniquely defined and depend only on the inputs and the results of the preceding steps.
5. Finiteness - the algorithm terminates after a finite number of steps.
6. Effectiveness - every instruction must be basic enough to be carried out exactly and in a finite amount of time.
Advantages of Algorithms:
Disadvantages of Algorithms:
1. Big-OH (O),
2. Big-OMEGA (Ω),
3. Big-THETA (Θ) and
4. Little-OH (o)
Our approach is based on the asymptotic complexity measure. This means that we don't try to
count the exact number of steps of a program, but how that number grows with the size of the
input to the program. That gives us a measure that will work for different operating systems,
compilers and CPUs. The asymptotic complexity is written using big-O notation.
Big 'oh': the function f(n) = O(g(n)) iff there exist positive constants c and n0 such that
f(n) <= c*g(n) for all n, n >= n0.
Omega: the function f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that
f(n) >= c*g(n) for all n, n >= n0.
Theta: the function f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that
c1*g(n) <= f(n) <= c2*g(n) for all n, n >= n0.
Big-O Notation
This notation gives the tight upper bound of the given function. Generally we represent it as
f(n) = O(g(n)). That means, at larger values of n, the upper bound of f(n) is g(n). For
example, if f(n) = n^4 + 100n^2 + 10n + 50 is the given algorithm, then n^4 is g(n). That means
g(n) gives the maximum rate of growth for f(n) at larger values of n.
O-notation is defined as O(g(n)) = {f(n): there exist positive constants c and n0 such that
0 <= f(n) <= c*g(n) for all n >= n0}. g(n) is an asymptotic tight upper bound for f(n). Our
objective is to give some rate of growth g(n) which is greater than the given algorithm's rate of
growth f(n).
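A quick worked check of this definition, using the example above: for every n >= 1 each lower-order term is at most n^4, so
    f(n) = n^4 + 100n^2 + 10n + 50 <= n^4 + 100n^4 + 10n^4 + 50n^4 = 161*n^4 for all n >= 1,
and therefore f(n) = O(n^4) with the constants c = 161 and n0 = 1.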
In general, we do not consider lower values of n. That means the rate of growth at lower
values of n is not important. In the figure below, n0 is the point from which we consider the
rates of growth for a given algorithm. Below n0 the rates of growth may be different.
[Figure: rate of growth of f(n) and c*g(n) against input size n, with the crossover point n0 marked.]
Note: Analyze the algorithms at larger values of n only. What this means is, below n0 we do
not care about rates of growth.
Omega-Ω Notation
Similar to the above discussion, this notation gives the tighter lower bound of the given
algorithm and we represent it as f(n) = Ω(g(n)). That means, at larger values of n, the
tighter lower bound of f(n) is g(n).
For example, if f(n) = 100n^2 + 10n + 50, then g(n) is Ω(n^2).
The Ω notation can be defined as Ω(g(n)) = {f(n): there exist positive constants c and
n0 such that 0 <= c*g(n) <= f(n) for all n >= n0}. g(n) is an asymptotic lower bound for
f(n). Ω(g(n)) is the set of functions with smaller or same order of growth as f(n).
[Figure: rate of growth of f(n) and c*g(n) against input size n for the Ω bound, with n0 marked.]
Theta-Θ Notation
This notation decides whether the upper and lower bounds of a given function are the same or
not. The average running time of an algorithm is always between the lower bound and the upper bound.
If the upper bound (O) and lower bound (Ω) give the same result, then the Θ notation will also
have the same rate of growth. As an example, let us assume that f(n) = 10n + n is the
expression. Then, its tight upper bound g(n) is O(n). The rate of growth in the best case is g(n) =
O(n). In this case, the rates of growth in the best case and worst case are the same. As a result, the average
case will also be the same.
Note: For a given function (algorithm), if the rates of growth (bounds) for O and Ω are not
the same, then the rate of growth for the Θ case may not be the same.
[Figure: rate of growth of f(n) lying between c1*g(n) and c2*g(n) against input size n, with n0 marked.]
Now consider the definition of Θ notation. It is defined as Θ(g(n)) = {f(n): there exist
positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}.
g(n) is an asymptotic tight bound for f(n). Θ(g(n)) is the set of functions with the same
order of growth as g(n).
Important Notes
For analysis (best case, worst case and average case) we try to give the upper bound (O), the lower
bound (Ω) and the average running time (Θ). From the above examples, it should also be clear
that, for a given function (algorithm), getting the upper bound (O), lower bound (Ω) and
average running time (Θ) may not always be possible.
For example, if we are discussing the best case of an algorithm, then we try to give the upper
bound (O), lower bound (Ω) and average running time (Θ) of the best case.
In the remaining chapters we generally concentrate on the upper bound (O), because knowing the
lower bound (Ω) of an algorithm is of no practical importance, and we use the Θ notation if the upper
bound (O) and lower bound (Ω) are the same.
Little-oh Notation
The little oh is denoted as o. It is defined as follows: let f(n) and g(n) be non-negative
functions; then f(n) = o(g(n)) iff
    lim (n → ∞) f(n) / g(n) = 0,
i.e. f of n is little oh of g of n.
Merge Sort:
The merge sort splits the list to be sorted into two equal halves, and places them in separate
arrays. This sorting method is an example of the DIVIDE-AND-CONQUER paradigm, i.e. it
breaks the data into two halves, then sorts the two half data sets recursively, and finally
merges them to obtain the complete sorted list. The merge sort is a comparison sort and has an
algorithmic complexity of O(n log n). Elementary implementations of the merge sort make use of
two arrays - one for each half of the data set. The following image depicts the complete procedure
of merge sort.
[Figure: merge sort trace - the example list (27, 43, 3, 9, 82, ...) is repeatedly split into halves and the sorted halves are then merged back together.]
Advantages of Merge Sort:
1. Marginally faster than the heap sort for larger sets.
2. Merge sort always does fewer comparisons than quick sort: the worst case of
merge sort does about 39% fewer comparisons than quick sort's average case.
3. Merge sort is often the best choice for sorting a linked list, because the slow random-
access performance of a linked list makes some other algorithms (such as quick sort)
perform poorly, and others (such as heap sort) completely impossible.
Algorithm for Merge sort:
Algorithm MergeSort(low, high)
{
    if (low < high) then            // dividing the problem into sub-problems;
    {                               // "mid" is where the set is split
        mid := (low + high) / 2;
        MergeSort(low, mid);
        MergeSort(mid + 1, high);   // solve the sub-problems
        Merge(low, mid, high);      // combine the solutions
    }
}
Algorithm Merge(low, mid, high)
{
    k := low; i := low; j := mid + 1;
    while (i <= mid and j <= high) do
    {
        if (a[i] <= a[j]) then
        {
            temp[k] := a[i]; i := i + 1; k := k + 1;
        }
        else
        {
            temp[k] := a[j]; j := j + 1; k := k + 1;
        }
    }
    while (i <= mid) do
    {
        temp[k] := a[i]; i := i + 1; k := k + 1;
    }
    while (j <= high) do
    {
        temp[k] := a[j]; j := j + 1; k := k + 1;
    }
    for k := low to high do
        a[k] := temp[k];
}
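A runnable Python sketch of the same divide-and-conquer idea is given below for illustration; the function name and the sample list are our own and not part of the algorithm above.

def merge_sort(a):
    # divide: split the list into two halves
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # conquer: sort each half recursively
    right = merge_sort(a[mid:])
    # combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]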
Strassen's Matrix Multiplication:
Let A and B be two n x n matrices. The product matrix C = AB is also an n x n matrix, whose (i, j)th
element is formed by taking the elements in the ith row of A and the jth column of B and multiplying
them to get
    C(i, j) = Σ (1 <= k <= n) A(i, k) * B(k, j)
Here 1 <= i, j <= n means i and j are between 1 and n.
The divide and conquer strategy suggests another way to compute the product of two n x n
matrices.
For simplicity assume n is a power of 2, that is n = 2^k,
where k is any nonnegative integer.
If n is not a power of two, then enough rows and columns of zeros can be added to both A and
B, so that the resulting dimensions are a power of two.
Let A and B be two n x n matrices. Imagine that A and B are each partitioned into four square
sub-matrices, each sub-matrix having dimensions n/2 x n/2.
The product AB can be computed by using the previous formula.
If AB is treated as a product of 2x2 block matrices, then
    (A11 A12) (B11 B12)   (C11 C12)
    (A21 A22) (B21 B22) = (C21 C22)
where
    C11 = A11*B11 + A12*B21
    C12 = A11*B12 + A12*B22
    C21 = A21*B11 + A22*B21
    C22 = A21*B12 + A22*B22
Strassen showed that C can instead be computed with only 7 block multiplications:
    C11 = P + S - T + V
    C12 = R + T
    C21 = Q + S
    C22 = P + R - Q + U
The resulting recurrence for the running time is
    T(n) = b                  if n <= 2
    T(n) = 7*T(n/2) + c*n^2   if n > 2
where b and c are constants; solving it gives T(n) = O(n^(log2 7)) ≈ O(n^2.81).
The seven products P, Q, R, S, T, U and V are computed from the n/2 x n/2 blocks as
    P = (A11 + A22) * (B11 + B22)
    Q = (A21 + A22) * B11
    R = A11 * (B12 - B22)
    S = A22 * (B21 - B11)
    T = (A11 + A12) * B22
    U = (A21 - A11) * (B11 + B12)
    V = (A12 - A22) * (B21 + B22)
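As a sanity check of the formulas above, here is a minimal Python sketch that applies one level of Strassen's method to 2x2 matrices (our own illustration; for real n x n inputs the same seven products are formed from n/2 x n/2 blocks recursively).

def strassen_2x2(A, B):
    # A and B are 2x2 matrices given as [[a11, a12], [a21, a22]]
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    # combine the seven products into the four entries of C
    return [[P + S - T + V, R + T],
            [Q + S, P + R - Q + U]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]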
DIVIDE AND CONQUER
General Method
If the subproblems are still large, then the divide and conquer strategy is reapplied to them.
The generated subproblems are usually of the same type as the original problem.
[Figure: a problem of size n is divided into subproblems of smaller size, each subproblem is solved, and the sub-solutions are combined into the solution of the original problem.]
Pseudo code representation of the divide and conquer rule for a problem "P":
Algorithm DAndC(P)
{
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, P3, ..., Pk;
        apply DAndC to each of these subproblems;     // i.e. DAndC(P1), DAndC(P2), ..., DAndC(Pk)
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}
// P -> the problem to be solved
// Small(P) -> a Boolean-valued function; if it is true, the function S(P) is invoked directly
// a, b -> constants used in the recurrence below
Using the constants a and b, the computing time of DAndC on an input of size n is described by the recurrence
    T(n) = g(n)                if n is small,
    T(n) = a*T(n/b) + f(n)     otherwise,
where f(n) is the time for dividing P and combining the solutions of the subproblems. This is called the general divide-and-conquer recurrence.
Disadvantages of Quick Sort:
► The auxiliary space used in the average case for implementing the recursive function calls is
O(log n), and hence it proves to be a bit space-costly, especially when it comes to large
data sets.
► Its worst case has a time complexity of O(n^2), which can prove very costly for large
data sets compared with competitive sorting algorithms.
Algorithm for Quick sort:
Algorithm quickSort(a, low, high)
{
    if (high > low) then
    {
        m := partition(a, low, high);
        if (low < m) then quickSort(a, low, m);
        if (m + 1 < high) then quickSort(a, m + 1, high);
    }
}
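For question 4(b), a hedged Python sketch is given below; the pseudocode above leaves the partition scheme unspecified, so a simple last-element (Lomuto) partition is assumed here.

def partition(a, low, high):
    # choose a[high] as the pivot and place it in its final position
    pivot = a[high]
    i = low - 1
    for j in range(low, high):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]
    return i + 1

def quick_sort(a, low, high):
    if low < high:
        p = partition(a, low, high)     # divide
        quick_sort(a, low, p - 1)       # conquer the left part
        quick_sort(a, p + 1, high)      # conquer the right part

data = [10, 80, 90, 60, 30, 20]
quick_sort(data, 0, len(data) - 1)
print(data)   # [10, 20, 30, 60, 80, 90]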
Binary search algorithm using the recursive methodology:
Algorithm binary_search(A, key, imin, imax)
{
    if (imax < imin) then
        return "array is empty";
    if (key < A[imin] or key > A[imax]) then
        return "element not in array list";
    else
    {
        imid := (imin + imax) / 2;
        if (A[imid] > key) then
            return binary_search(A, key, imin, imid - 1);
        else if (A[imid] < key) then
            return binary_search(A, key, imid + 1, imax);
        else
            return imid;
    }
}
Time Complexity:
Data structure: array
For a successful search:
    Worst case   -> O(log n) or Θ(log n)
    Average case -> O(log n) or Θ(log n)
    Best case    -> O(1) or Θ(1)
For an unsuccessful search: Θ(log n) for all cases.
Binary search algorithm using the iterative methodology:
Algorithm binary_search(A, key, imin, imax)
{
    while (imax >= imin) do
    {
        imid := midpoint(imin, imax);     // e.g. (imin + imax) / 2
        if (A[imid] = key) then
            return imid;
        else if (A[imid] < key) then
            imin := imid + 1;
        else
            imax := imid - 1;
    }
}
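For question 5, here is a short runnable Python illustration of both versions above; returning -1 for an unsuccessful search is our own convention, not part of the algorithm.

def binary_search_iterative(A, key):
    imin, imax = 0, len(A) - 1
    while imax >= imin:
        imid = (imin + imax) // 2        # midpoint of the current range
        if A[imid] == key:
            return imid
        elif A[imid] < key:
            imin = imid + 1              # search the right half
        else:
            imax = imid - 1              # search the left half
    return -1                            # unsuccessful search

def binary_search_recursive(A, key, imin, imax):
    if imax < imin:
        return -1                        # unsuccessful search
    imid = (imin + imax) // 2
    if A[imid] > key:
        return binary_search_recursive(A, key, imin, imid - 1)
    elif A[imid] < key:
        return binary_search_recursive(A, key, imid + 1, imax)
    return imid

A = [10, 20, 30, 60, 80, 90]
print(binary_search_iterative(A, 60))                 # 3
print(binary_search_recursive(A, 25, 0, len(A) - 1))  # -1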
UNIT-II
1. Define articulation point. Find the articulation points for a given graph. (C)
2. a) Define spanning tree. Narrate a few applications of spanning trees with an
example. (B)
b) Write in detail about Hamiltonian cycles. Give an example of it. (B)
3. Construct the State Space Tree for the Sum of Subsets Problem, with weights
w[1:6] = {5, 10, 12, 13, 15, 18}, such that the sum of the subset is 30. (M)
4. a) Discuss in detail the N-queens problem using backtracking. (M)
b) Solve the 8-queens problem for the feasible sequence (6, 4, 7, 1). (M)
5. Explain the Graph Coloring problem using backtracking with an example graph. (B)
Ans:
[Handwritten worked solution for question 1. The graph's articulation points are found by a depth-first search: each vertex u is numbered dfn[u] in the order it is first visited, and
    low[u] = min{ dfn[u], min{ low[w] : w is a child of u in the DFS tree }, min{ dfn[w] : (u, w) is a back edge } }.
A non-root vertex u is an articulation point iff it has a child w with low[w] >= dfn[u]; the root of the DFS tree is an articulation point iff it has two or more children. The handwritten dfn/low table and the articulation points obtained for the given graph are not legible in this copy.]
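A compact Python sketch of the dfn/low method summarized above (our own illustration; the graph is given as an adjacency list and the vertex labels are arbitrary).

def articulation_points(graph):
    # graph: dict mapping vertex -> list of neighbours (undirected)
    dfn, low, points = {}, {}, set()
    counter = [1]

    def dfs(u, parent):
        dfn[u] = low[u] = counter[0]
        counter[0] += 1
        children = 0
        for w in graph[u]:
            if w not in dfn:                       # tree edge
                children += 1
                dfs(w, u)
                low[u] = min(low[u], low[w])
                # a non-root u is an articulation point if some child w
                # cannot reach an ancestor of u except through u
                if parent is not None and low[w] >= dfn[u]:
                    points.add(u)
            elif w != parent:                      # back edge
                low[u] = min(low[u], dfn[w])
        # the DFS root is an articulation point iff it has >= 2 children
        if parent is None and children >= 2:
            points.add(u)

    for v in graph:
        if v not in dfn:
            dfs(v, None)
    return points

g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(articulation_points(g))   # {3}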
Spanning Tree:-
Let G = (V, E) be an undirected connected graph. A subgraph t = (V, E1) of G is a spanning tree
of G iff t is a tree.
[Figure: a connected undirected graph and one of its spanning trees.]
It can be used to obtain an independent set of circuit equations for an electric network.
Any connected graph with n vertices must have at least n-1 edges, and all connected graphs
with n-1 edges are trees. If the nodes of G represent cities and the edges represent possible
communication links connecting two cities, then the minimum number of links needed to
connect the n cities is n-1.
There are two basic algorithms for finding minimum-cost spanning trees, and both are greedy
algorithms
➔Prim's Algorithm
➔ Kruskal's Algorithm
2. a) Define spanning tree. Narrate a few applications of spanning trees with an example.
Ans: Definition:
❖ A spanning tree can be defined as a subgraph of an undirected connected graph.
❖ It includes all the vertices along with the least possible number of edges.
❖ If any vertex is missed, it is not a spanning tree.
❖ A spanning tree is a subset of the graph that does not have cycles, and it also cannot be disconnected.
❖ A spanning tree consists of (n-1) edges, where 'n' is the number of vertices (or nodes).
❖ Edges of the spanning tree may or may not have weights assigned to them.
❖ All the possible spanning trees created from the given graph G would have the same number of vertices, but the number of
edges in each spanning tree would be equal to the number of vertices in the given graph minus 1.
Applications of the spanning tree
Basically, a spanning tree is used to find a minimum path to connect all nodes of the graph. Some of the common applications of the spanning tree
are listed as follows:
o Cluster analysis
o Civil network planning
o Computer network routing protocol
Let's understand the minimum spanning tree with the help of an example.
[Figure: a weighted graph whose edge weights sum to 16, together with some of the possible spanning trees created from it; one of them has total weight 10.]
o A minimum spanning tree can be used to design water-supply networks, telecommunication networks, and electrical grids.
o It can be used to find paths in the map.
A minimum spanning tree can be found from a weighted graph by using the algorithms given below.
o A spanning tree is minimally connected, so removing one edge from the tree will make the graph disconnected.
o A spanning tree is maximally acyclic, so adding one edge to the tree will create a loop.
o There can be a maximum of n^(n-2) spanning trees created from a complete graph.
o A spanning tree has n-1 edges, where 'n' is the number of nodes.
o If the graph is a complete graph, then a spanning tree can be constructed by removing a maximum of (e-n+1) edges, where 'e' is the number of
edges and 'n' is the number of vertices.
b) Write in detail about Hamiltonian cycles. Give an example of it.
Ans: Definition of backtracking:
❖ Backtracking is an algorithmic technique for solving problems.
❖ It uses recursive calling to find a solution by building the solution step by step, increasing values with time.
❖ It removes the partial solutions that don't lead to a solution of the problem, based on the constraints given to solve the
problem.
❖ A Hamiltonian circuit or tour of a graph is a path that starts at a given vertex, visits each vertex in the graph exactly once,
and ends at the starting vertex.
❖ We use the depth-first search algorithm to traverse the graph until all the vertices have been visited.
❖ We traverse the graph starting from a vertex (an arbitrary vertex chosen as the starting vertex), and if at any point during the
traversal we get stuck (i.e., all the neighbour vertices have been visited), we backtrack to find other paths (i.e., to visit
another unvisited vertex).
❖ If we successfully reach back to the starting vertex after visiting all the nodes, it means the graph has a Hamiltonian cycle,
otherwise not.
❖ We mark each vertex as visited so that we don't traverse it more than once.
Example: Consider a graph G = (V, E) shown in the figure. We have to find a Hamiltonian circuit using the backtracking method.
Solution: Firstly, we start our search with vertex 'a'; this vertex 'a' becomes the root of our implicit tree.
Next we select 'c' adjacent to 'b'. Next, we select 'd' adjacent to 'c', and so on.
❖ Here we have generated one Hamiltonian circuit, but another Hamiltonian circuit can also be obtained by considering
another vertex. A backtracking sketch in code is given below.
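A Python sketch of this backtracking search (our own illustration, with the graph given as an adjacency matrix and vertex 0 used as the fixed starting vertex).

def hamiltonian_cycle(graph, start=0):
    # graph: adjacency matrix of 0/1 entries; returns one Hamiltonian
    # cycle as a list of vertices, or None if none exists
    n = len(graph)
    path = [start]
    visited = [False] * n
    visited[start] = True

    def extend():
        if len(path) == n:
            # every vertex visited once; a cycle needs an edge back to start
            return graph[path[-1]][start] == 1
        for v in range(n):
            if graph[path[-1]][v] == 1 and not visited[v]:
                visited[v] = True
                path.append(v)
                if extend():
                    return True
                path.pop()              # dead end: backtrack
                visited[v] = False
        return False

    return path + [start] if extend() else None

# square graph a-b-c-d-a, labelled 0..3
G = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
print(hamiltonian_cycle(G))   # [0, 1, 2, 3, 0]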
3. Construct the State Space Tree for the Sum of Subsets Problem, given weights w[1:6] = {5, 10, 12, 13, 15, 18},
such that the sum of the subset is 30.
► Ans: The subset sum problem is the problem of finding a subset such that the sum of its elements equals a given
number. The backtracking approach generates all permutations in the worst case but, in general,
performs better than the recursive approach towards the subset sum problem.
► A set A of n positive integers and a value sum (M) are given; find whether or not there exists any
subset of the given set the sum of whose elements is equal to the given value of sum.
ALGORITHM:
1. Start with an empty set.
2. Add the next element from the list to the set.
3. If the subset has sum M, then stop with that subset as the solution.
4. If the subset is not feasible, or if we have reached the end of the set, then backtrack through the
subset until we find the most suitable value.
5. If the subset is feasible (sum of subset < M) then go to step 2.
6. If we have visited all the elements without finding a suitable subset, and if no backtracking is
possible, then stop without a solution. (A backtracking sketch in code follows below.)
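A Python sketch of these steps (our own illustration), run on the instance from the question (weights 5, 10, 12, 13, 15, 18 and sum 30).

def subset_sum(weights, target):
    # return one subset of weights whose sum is target, or None
    n = len(weights)
    chosen = []

    def backtrack(i, remaining):
        if remaining == 0:
            return True                     # feasible subset found
        if i == n or remaining < 0:
            return False                    # infeasible: backtrack
        chosen.append(weights[i])           # include weights[i]
        if backtrack(i + 1, remaining - weights[i]):
            return True
        chosen.pop()                        # exclude weights[i]
        return backtrack(i + 1, remaining)

    return chosen if backtrack(0, target) else None

print(subset_sum([5, 10, 12, 13, 15, 18], 30))   # [5, 10, 15]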
• A classic combinatorial problem is to place n queens on an n x n chessboard so that no two attack, i.e.
• no two queens are on the same row, column or diagonal.
• If we take n = 4 then the problem is called the 4-queens problem.
• If we take n = 8 then the problem is called the 8-queens problem.
4-Queens problem:
• Consider a 4x4 chessboard. Let there be 4 queens. The objective is to place the 4 queens on the 4x4 chessboard in such a
way that no two queens are placed in the same row, same column or diagonal position.
• The explicit constraints are that the 4 queens are to be placed on the 4x4 chessboard in 4^4 ways.
• The implicit constraints are that no two queens are in the same row, column or diagonal.
• Let (x1, x2, x3, x4) be the solution vector, where xi is the column on which queen i is placed.
• The first queen is placed in the first row and first column.
[Figure: 4x4 boards showing the placements tried at each step.]
The second queen should not be in the first row and second column. It should be placed in the second
row and in the second, third or fourth column. If we place it in the second column, both queens will be on the same
diagonal, so place it in the third column.
We are unable to place queen 3 in the third row, so go back to queen 2 and place it somewhere
else.
[Figure: the remaining boards of the backtracking trace, ending with the queens placed in columns 2, 4, 1 and 3 of rows 1 to 4.]
Hence the solution to the 4-queens problem is x1 = 2, x2 = 4, x3 = 1, x4 = 3, i.e. the first queen is
placed in the 2nd column, the second queen is placed in the 4th column, the third queen is placed in the first column
and the fourth queen is placed in the third column.
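A Python backtracking sketch of the n-queens procedure (our own illustration); for n = 4 it reproduces the solution x = (2, 4, 1, 3) found above.

def n_queens(n):
    x = [0] * (n + 1)                  # x[k] = column of the queen in row k

    def place_ok(k, col):
        for i in range(1, k):
            # reject same column or same diagonal as an earlier queen
            if x[i] == col or abs(x[i] - col) == abs(i - k):
                return False
        return True

    def solve(k):
        if k > n:
            return True                # all n queens placed
        for col in range(1, n + 1):
            if place_ok(k, col):
                x[k] = col
                if solve(k + 1):
                    return True
                x[k] = 0               # backtrack
        return False

    return x[1:] if solve(1) else None

print(n_queens(4))   # [2, 4, 1, 3]
print(n_queens(8))   # a feasible 8-queens placement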
5. Explain the Graph Coloring problem using backtracking with an example graph.
Ans: What is the graph coloring problem?
❖ We have been given a graph and we are asked to color all vertices with the 'M' given colors, in such a way that
no two adjacent vertices have the same color.
❖ If it is possible to color all the vertices with the given colors then we have to output the colored result, otherwise output 'no
solution possible'.
Graph Coloring by backtracking:
❖ In this approach, we color a single vertex and then move to its adjacent (connected) vertex to color it with a different color.
❖ After coloring, we again move to another adjacent vertex that is uncolored and repeat the process until all vertices of the
given graph are colored.
❖ In case we find a vertex that has all adjacent vertices colored and no color is left to make its color different, we backtrack
and change the color of the last colored vertices and again proceed further (a code sketch of this procedure is given below).
[Figure: example graph and the coloring obtained by backtracking.]
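A Python sketch of this backtracking procedure (our own illustration; the graph is an adjacency matrix and the colors are numbered 1..M).

def graph_coloring(graph, m):
    # return a list of colors 1..m, one per vertex, or None if impossible
    n = len(graph)
    color = [0] * n

    def ok(v, c):
        # no adjacent vertex may already have color c
        return all(not (graph[v][u] and color[u] == c) for u in range(n))

    def solve(v):
        if v == n:
            return True
        for c in range(1, m + 1):
            if ok(v, c):
                color[v] = c
                if solve(v + 1):
                    return True
                color[v] = 0           # backtrack and try another color
        return False

    return color if solve(0) else None

# a cycle on 4 vertices can be colored with M = 2 colors
G = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
print(graph_coloring(G, 2))   # [1, 2, 1, 2]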
3. Define Greedy knapsack. Find the optimal solution of the Knapsack instance n= 7,
5. Find the optimal solution for the single source shortest path problem for the
below weighted graph. (M)
1. a) Explain the general method of the Greedy method.
Ans: Greedy method: General method
► The greedy method is one of the strategies, like divide and conquer, used to solve problems.
This method is used for solving "optimization problems".
► An optimization problem is a problem which requires either a minimum or a maximum result.
► This technique is basically used to determine a feasible solution that may or may not be optimal.
► We need to find a feasible solution that either maximizes or minimizes a given objective function. A feasible solution that
does this is called an optimal solution.
► A feasible solution is a subset that satisfies the given criteria.
► The optimal solution is the solution which is the best and the most favorable solution in the subset.
► If more than one solution satisfies the given criteria then all those solutions are considered
feasible, whereas the optimal solution is the best solution among all of them.
What is a Greedy Algorithm?
A greedy algorithm is a problem-solving strategy that makes locally optimal decisions at each stage in the hope of achieving a
globally optimal solution.
We can implement a greedy solution only if the problem statement satisfies the two properties mentioned below:
1. Greedy Choice Property: choosing the best option at each phase can lead to a global (overall) optimal solution.
2. Optimal Substructure: if an optimal solution to the complete problem contains the optimal solutions to the sub-problems,
the problem has an optimal substructure.
Greedy Solution:
The steps to generate this solution are given below:
1. Start from the source vertex.
2. Pick one vertex at a time with a minimum edge weight (distance) from the source vertex.
3. Add the selected vertex to a tree structure if the connecting edge does not form a cycle.
4. Keep adding adjacent fringe vertices to the tree until you reach the destination vertex.
5. Paths will be picked up in order to reach the destination city.
Algorithm Greedy(a, n)
// a[1:n] contains the n inputs
{
    solution := ∅;                  // initialize the solution to empty
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
[Handwritten worked example (largely illegible): greedy job sequencing with deadlines. Each job takes one unit of time; the jobs are considered in non-increasing order of profit and a job is added to the schedule only if it can still be completed by its deadline, so as to maximize the total profit. The handwritten profit/deadline table and the resulting schedule cannot be recovered from this copy.]
2. a) Explain the knapsack problem in the Greedy method.
Ans: Knapsack problem
► The greedy algorithm can be understood very well with a well-known problem referred to as the knapsack problem. Let us
discuss the knapsack problem in detail.
► Given a set of items, each with a weight and a profit, determine a subset of items to include in a collection so that the total
weight is less than or equal to a given limit and the total profit is as large as possible.
► Problem Scenario
o A thief is robbing a store and can carry a maximal weight of W in his knapsack. There are n items available in the store,
the weight of the ith item is wi and its profit is pi. What items should the thief take?
o In this context, the items should be selected in such a way that the thief will carry those items for which he will gain
maximum profit. Hence, the objective of the thief is to maximize the profit.
o Based on the nature of the items, knapsack problems are categorized as
✓ Fractional Knapsack
✓ 0/1 Knapsack
Let us apply the greedy method to solve the knapsack problem. We are given 'n' objects and a knapsack. The object i has a
weight wi and the knapsack has a capacity 'm'. If a fraction xi, 0 <= xi <= 1, of object i is placed into the knapsack then a profit of pi*xi is earned.
• The objective is to fill the knapsack in a way that maximizes the total profit earned. Since the knapsack capacity is 'm', we require the
total weight of all chosen objects to be at most 'm'.
    maximize    Σ (1 <= i <= n) pi * xi
    subject to  Σ (1 <= i <= n) wi * xi <= m,   0 <= xi <= 1,   1 <= i <= n
Running time: The objects are to be sorted into non-increasing order of the pi/wi ratio. If we disregard the time to initially
sort the objects, the algorithm requires only O(n) time.
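A Python sketch of the greedy fractional knapsack procedure described above; since the knapsack instance in question 3 is truncated in this copy, a standard textbook instance (n = 3, m = 20, p = (25, 24, 15), w = (18, 15, 10)) is used here purely as an illustration.

def fractional_knapsack(profits, weights, capacity):
    # consider objects in non-increasing order of profit/weight ratio
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)              # fraction of each object taken
    total_profit = 0.0
    for i in order:
        if capacity <= 0:
            break
        take = min(weights[i], capacity)  # whole object, or whatever still fits
        x[i] = take / weights[i]
        total_profit += profits[i] * x[i]
        capacity -= take
    return x, total_profit

print(fractional_knapsack([25, 24, 15], [18, 15, 10], 20))
# ([0.0, 1.0, 0.5], 31.5)  -> greedy solution x = (0, 1, 1/2)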
[Handwritten worked example (largely illegible): the greedy fractional knapsack solution is traced by taking objects in order of their profit/weight ratio, recording each fraction xi and computing the totals Σ wi*xi and Σ pi*xi; the handwritten figures cannot be recovered from this copy.]
[Further handwritten working, illegible in this copy; no additional content is recoverable.]
[Handwritten worked example (largely illegible): a minimum-cost spanning tree is built greedily, starting with all the vertices and no edges (total weight 0) and repeatedly adding the cheapest edge that does not form a cycle, updating the running total weight until all the vertices are connected. The intermediate figures and the final total weight cannot be recovered from this copy.]
[Handwritten worked example for question 5 (largely illegible): single-source shortest paths are computed greedily, repeatedly adding to the set S of finalized vertices the vertex with the smallest current distance and then updating the distances of the remaining vertices; the distance table and the resulting shortest paths cannot be recovered from this copy.]
1. Solve the given travelling salesperson problem using the dynamic programming
approach. (B)
[Figure: cost adjacency matrix of the given instance.]
2. Solve the optimum cost for multiplying the given matrices using matrix chain
multiplication, A1 = 5x4, A2 = 4x6, A3 = 6x2 and A4 = 2x7. (M)
3. Construct an Optimal Binary Search Tree for n = 4 and (a1, a2, a3, a4) = (do, if, int,
while). The values for the p's and q's are given as p(1:4) = (3, 3, 1, 1) and
q(0:4) = (2, 3, 1, 1, 1). (C)
4. Generate the optimal sequence using the dynamic programming approach for the 0/1
knapsack instance n = 4, (w1, w2, w3, w4) = (10, 15, 6, 9) and (p1, p2, p3, p4) = (2, 5, 8, 1)
and M = 21. (M)
5. Provide an optimal solution for the given All pairs shortest path problem by
using dynamic programming. (B)
[Handwritten worked solution for question 1 (travelling salesperson, dynamic programming), largely illegible. The trace applies the recurrence
    cost(i, S) = min over j in S of { d(i, j) + cost(j, S - {j}) },  with cost(i, {}) = d(i, 1),
building up from the smallest subsets S to the full set of remaining cities to obtain the minimum tour cost for the given cost matrix; the individual handwritten values cannot be recovered from this copy.]
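A Python sketch of this dynamic programming recurrence (Held-Karp), applied to a small illustrative cost matrix since the matrix from the question is not legible in this copy.

from functools import lru_cache

def tsp_dp(c, start=0):
    # c: cost matrix; returns the minimum tour cost starting and ending at `start`
    n = len(c)

    @lru_cache(maxsize=None)
    def g(i, S):
        # minimum cost of leaving city i, visiting every city in frozenset S
        # exactly once, and returning to the start city
        if not S:
            return c[i][start]
        return min(c[i][j] + g(j, S - {j}) for j in S)

    return g(start, frozenset(range(n)) - {start})

cost = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_dp(cost))   # 80, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0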
[Handwritten worked solution for question 2 (matrix chain multiplication), largely illegible. With dimensions A1 = 5x4, A2 = 4x6, A3 = 6x2, A4 = 2x7 (p = 5, 4, 6, 2, 7), the trace fills the cost table using
    m[i, j] = min over i <= k < j of { m[i, k] + m[k+1, j] + p(i-1)*p(k)*p(j) },
obtaining m[1,2] = 120, m[2,3] = 48, m[3,4] = 84, m[1,3] = 88, m[2,4] = 104 and finally m[1,4] = 158 with the best split at k = 3. Thus the optimal order is (A1 (A2 A3)) A4 with a cost of 158 scalar multiplications.]
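A Python sketch of the same recurrence, which confirms the value 158 for the instance of question 2 (function and variable names are our own).

def matrix_chain_order(p):
    # p: dimension vector; matrix A_i is p[i-1] x p[i]
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                 # length of the chain A_i..A_j
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))    # best split point k
    return m[1][n]

# A1 = 5x4, A2 = 4x6, A3 = 6x2, A4 = 2x7  ->  p = (5, 4, 6, 2, 7)
print(matrix_chain_order([5, 4, 6, 2, 7]))   # 158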
[Handwritten worked solution for question 3 (optimal binary search tree), largely illegible, for n = 4, (a1, a2, a3, a4) = (do, if, int, while), p(1:4) = (3, 3, 1, 1), q(0:4) = (2, 3, 1, 1, 1). The trace fills the w, c and r tables using
    w(i, j) = w(i, j-1) + p(j) + q(j),   w(i, i) = q(i),
    c(i, j) = min over i < k <= j of { c(i, k-1) + c(k, j) } + w(i, j),   c(i, i) = 0,
working up from j - i = 1 to j - i = 4; the computation ends with w(0, 4) = 16 and c(0, 4) = 32, with r(0, 4) = 2, i.e. 'if' is the root of the optimal binary search tree.]
[Handwritten worked solution for question 4 (0/1 knapsack by dynamic programming, sets-of-pairs method), largely illegible, for n = 4, M = 21, (w1, w2, w3, w4) = (10, 15, 6, 9), (p1, p2, p3, p4) = (2, 5, 8, 1). Starting from S0 = {(0, 0)}, each set Si+1 is obtained by merging Si with S1i = {(p + p(i+1), w + w(i+1)) : (p, w) in Si}, purging dominated pairs and pairs whose weight exceeds M. The trace ends with the pair (13, 21), i.e. a maximum profit of 13 at weight 21, obtained by taking objects 2 and 3: (x1, x2, x3, x4) = (0, 1, 1, 0).]
[Handwritten worked solution for question 5 (all pairs shortest paths, dynamic programming), largely illegible. Starting from the cost adjacency matrix A0 of the given graph, each matrix Ak is computed as
    Ak(i, j) = min { Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j) },
allowing vertex k as an additional intermediate vertex; after all vertices have been considered, the final matrix A gives the shortest path between every pair of vertices. The handwritten matrices cannot be recovered from this copy.]
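A Python sketch of the recurrence above (the Floyd-Warshall method), shown on a small example of our own since the matrices from the question are not legible.

def all_pairs_shortest_paths(cost):
    # cost: adjacency matrix with float('inf') for missing edges
    n = len(cost)
    A = [row[:] for row in cost]                 # A starts as the cost matrix
    for k in range(n):                           # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

INF = float('inf')
cost = [[0, 4, 11],
        [INF, 0, 2],
        [3, INF, 0]]
for row in all_pairs_shortest_paths(cost):
    print(row)
# [0, 4, 6]
# [5, 0, 2]
# [3, 7, 0]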
UNIT-5
[Handwritten worked solutions, largely illegible in this copy: an LC branch-and-bound trace for a 0/1 knapsack instance; a branch-and-bound solution of a travelling salesperson instance in which the cost matrix is reduced row by row and column by column, the total reduction is used as the lower bound, and the cost of each child node is computed as cost(parent) + entry cost + reduced cost of the child matrix until the least-cost tour is found; and a job assignment/sequencing working.]
DAA UNIT 5
Ans:
Problems whose solution times are bounded by polynomials of small degree are called polynomial
time algorithms. Example: linear search, quick sort, all pairs shortest path, etc.
Problems whose solution times are bounded by non-polynomial functions are called non-polynomial time
algorithms. Examples: travelling salesman problem, 0/1 knapsack problem, etc. It is impossible to
develop algorithms whose time complexity is polynomial for non-polynomial time problems,
because the computing times of non-polynomial problems are greater than polynomial. A problem that can be
solved in polynomial time in one (reasonable) model of computation can also be solved in polynomial time in another.
Let P denote the set of all decision problems solvable by deterministic algorithm in polynomial time.
NP denotes set of decision problems solvable by nondeterministic algorithms in polynomial time.
Since, deterministic algorithms are a special case of nondeterministic algorithms, P ⊆ NP. The
nondeterministic polynomial time problems can be classified into two classes.
They are
NP-Hard:
A problem L is NP-Hard iff satisfiability reduces to L, i.e., if the satisfiability problem (and hence every nondeterministic
polynomial time problem) can be reduced to L, then L is said to be NP-Hard. Examples: the halting
problem, the flow shop scheduling problem.
NP-Complete:
A problem that is NP-Complete has the property that it can be solved in polynomial time iff all other
NP-Complete problems can also be solved in polynomial time (NP = P). If an NP-hard problem can be
solved in polynomial time, then all NP-complete problems can be solved in polynomial time. All NP-
Complete problems are NP-hard, but some NP-hard problems are not known to be NP-Complete.
Normally the decision problems are NP-Complete but the optimization problems are NP-Hard.
However, if problem L1 is a decision problem and L2 is an optimization problem, then it is possible
that L1 ∝ L2 (L1 reduces to L2). Example: the knapsack decision problem can be reduced to the knapsack optimization problem.
There are some NP-hard problems that are not NP-Complete.
Let P, NP, NP-Hard and NP-Complete be the sets of all decision problems that are solvable in
polynomial time by deterministic algorithms, solvable in polynomial time by non-deterministic algorithms, NP-Hard and NP-
Complete, respectively. Then the relationship between P, NP, NP-Hard and NP-Complete can be
expressed using a Venn diagram as shown below.
[Figure: Venn diagram with P inside NP, and NP-Complete drawn as the intersection of NP and NP-Hard.]
Ans:
A FIFO branch-and-bound algorithm for the job sequencing problem can begin with upper = ∞ as an
upper bound on the cost of a minimum-cost answer node.
Starting with node 1 as the E-node and using the variable tuple size formulation of Figure 8.4, nodes
2, 3, 4, and 5 are generated. Then u(2) = 19, u(3) = 14, u(4) = 18, and u(5) = 21.
The variable upper is updated to 14 when node 3 is generated. Since c (4) and c(5) are greater than
upper, nodes 4 and 5 get killed. Only nodes 2 and 3 remain alive.
Node 2 becomes the next E-node. Its children, nodes 6, 7 and 8, are generated. Then u(6) = 9 and so
upper is updated to 9. Since c(7) = 10 > upper, node 7 gets killed. Node 8 is infeasible and so it is
killed.
Next, node 3 becomes the E-node. Nodes 9 and 10 are now generated. Then u(9) = 8 and so upper
becomes 8. The cost c(10) = 11 > upper, and this node is killed.
The next E-node is node 6. Both its children are infeasible. Node 9’s only child is also infeasible. The
minimum-cost answer node is node 9. It has a cost of 8.
When implementing a FIFO branch-and-bound algorithm, it is not economical to kill live nodes with
c(x) > upper each time upper is updated. This is so because live nodes are in the queue in the order
in which they were generated. Hence, nodes with c(x) > upper are distributed in some random way
in the queue. Instead, live nodes with c(x) > upper can be killed when they are about to become E-
nodes.
An LC branch-and-bound search of the tree of Figure 8.4 will begin with upper = ∞ and node 1 as the
first E-node.
As in the case of FIFOBB, upper is updated to 14 when node 3 is generated and nodes 4 and 5 are
killed as c(4) > upper and c(5) > upper.
Node 2 is the next E-node as c(2) = 0 and c(3) = 5. Nodes 6, 7 and 8 are generated and upper is
updated to 9 when node 6 is generated. So, node 7 is killed as c(7) = 10 > upper. Node 8 is infeasible
and so killed. The only live nodes now are nodes 3 and 6.
Node 6 is the next E-node as c(6) = 0 < c(3) . Both its children are infeasible.
Node 3 becomes the next E-node. When node 9 is generated, upper is updated to 8 as u(9) = 8. So,
node 10 with c(10) = 11 is killed on generation.
Node 9 becomes the next E-node. Its only child is infeasible. No live nodes remain. The search
terminates with node 9 representing the minimum-cost answer node.
The path is 1 → 3 → 9, and its cost is 5 + 3 = 8.
Ans:
Difference between NP-Hard and NP-Complete:
NP-Hard:
- A problem does not have to be in NP to be NP-hard.
- Not all NP-hard problems are NP-complete.
- It is typically an optimization problem.
NP-Complete:
- A problem must be both in NP and NP-hard to be NP-complete.
- All NP-complete problems are NP-hard.
- It is typically a decision problem.