UNIT V: NP-COMPLETE AND APPROXIMATION ALGORITHM
function, the task is to find the values of the variables that optimize the objective function
subject to the constraints. Algorithms like the simplex method can solve this problem in
polynomial time.
5. Graph coloring: Given an undirected graph G, the task is to assign a color to each node such that no two adjacent nodes have the same color, using as few colors as possible. A greedy algorithm produces a valid (though not necessarily minimum) coloring in O(n^2) time, where n is the number of nodes in the graph.
These problems are considered tractable because algorithms exist that can solve them in polynomial time complexity, which means that the time required to solve them grows no faster than a polynomial function of the input size.
1. Linear search: Given a list of n items, the task is to find a specific item in the list. The time complexity of linear search is O(n), which is a polynomial function of the input size (a code sketch follows this list).
2. Bubble sort: Given a list of n items, the task is to sort them in ascending order. The time complexity of bubble sort is O(n^2), which is also a polynomial function of the input size.
3. Shortest path in a graph: Given a graph G and two nodes s and t, the task is to find the shortest path between s and t. Algorithms like Dijkstra's algorithm and the A* algorithm can solve this problem in O(m + n log n) time complexity, which is a polynomial function of the input size (see the Dijkstra sketch after this list).
4. Maximum flow in a network: Given a network with a source node and a sink node, and capacities on the edges, the task is to find the maximum flow from the source to the sink. The Ford-Fulkerson algorithm can solve this problem in O(mf) time, where m is the number of edges in the network and f is the maximum flow, which is also a polynomial function of the input size.
5. Linear programming: Given a system of linear constraints and a linear objective
function, the task is to find the values of the variables that optimize the objective
function subject to the constraints. Algorithms like the simplex method can solve this
problem in polynomial time.
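As referenced in the list above, here is a minimal C sketch of linear search illustrating the O(n) bound; the function name and sample data are illustrative, not taken from the notes.

#include <stdio.h>

/* Linear search: scan the n items one by one; at most n comparisons, i.e. O(n). */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;      /* index where the item was found */
    return -1;             /* item not present */
}

int main(void)
{
    int a[] = {7, 3, 9, 4, 1};
    printf("%d\n", linear_search(a, 5, 9));   /* prints 2 */
    return 0;
}

And a sketch of Dijkstra's algorithm for the shortest-path item. This simple adjacency-matrix version runs in O(n^2); the O(m + n log n) bound quoted above requires a heap-based priority queue instead of the linear scan used here. The graph in main is an assumed example.

#include <stdio.h>
#include <limits.h>

#define N 5              /* number of vertices (assumption for the sketch) */

/* Dijkstra's algorithm on an adjacency matrix g (0 = no edge).
   Returns the length of the shortest path from s to t, or INT_MAX if none. */
int dijkstra(int g[N][N], int s, int t)
{
    int dist[N], done[N] = {0};
    for (int v = 0; v < N; v++) dist[v] = INT_MAX;
    dist[s] = 0;
    for (int iter = 0; iter < N; iter++) {
        int u = -1;
        for (int v = 0; v < N; v++)          /* pick the closest unfinished vertex */
            if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
        if (u == -1 || dist[u] == INT_MAX) break;
        done[u] = 1;
        for (int v = 0; v < N; v++)          /* relax edges leaving u */
            if (g[u][v] > 0 && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }
    return dist[t];
}

int main(void)
{
    int g[N][N] = {                          /* illustrative weighted graph */
        {0, 4, 1, 0, 0},
        {4, 0, 2, 5, 0},
        {1, 2, 0, 8, 0},
        {0, 5, 8, 0, 3},
        {0, 0, 0, 3, 0}
    };
    printf("%d\n", dijkstra(g, 0, 4));       /* shortest s-to-t distance */
    return 0;
}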
P (Polynomial) problems
P problems refer to problems where an algorithm would take a polynomial amount of time to solve, or where the Big-O is a polynomial (i.e. O(1), O(n), O(n²), etc.). These are problems that would be considered ‘easy’ to solve, and thus do not generally have immense run times.
NP (Non-deterministic Polynomial) Problems
NP problems are those whose candidate solutions can be verified in polynomial time, even though no polynomial-time algorithm for solving them is known; the known exact algorithms have super-polynomial running times, such as O(2^n) or O(n!).
NP-Hard Problems
A problem is classified as NP-Hard when an algorithm for solving it can be translated to solve any NP problem. We can then say that this problem is at least as hard as any NP problem, but it could be much harder or more complex.
NP-Complete Problems
NP-Complete problems are problems that live in both the NP and NP-Hard
classes. This means that NP-Complete problems can be verified in polynomial
time and that any NP problem can be reduced to this problem in polynomial time.
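To make the phrase "verified in polynomial time" concrete, here is a small illustration in C: SUBSET-SUM is NP-complete, yet a proposed certificate (the chosen subset itself) can be checked in O(n) time. The instance and certificate below are made up for the demonstration.

#include <stdio.h>

/* Verifier for a SUBSET-SUM certificate: chosen[i] = 1 means item i is in the
   claimed subset. One pass over the n items suffices, so verification is O(n). */
int verify_subset_sum(const int items[], const int chosen[], int n, int target)
{
    long long sum = 0;
    for (int i = 0; i < n; i++)
        if (chosen[i]) sum += items[i];
    return sum == target;                 /* 1 = certificate accepted */
}

int main(void)
{
    int items[]  = {3, 34, 4, 12, 5, 2};  /* instance (made up for the demo) */
    int chosen[] = {0, 0, 1, 0, 1, 0};    /* claimed certificate: {4, 5}, target 9 */
    printf("certificate valid: %d\n", verify_subset_sum(items, chosen, 6, 9));
    return 0;
}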
1) Next-Fit algorithm
The simplest approximate approach to the bin packing problem is the Next-Fit (NF) algorithm, which is explained below. The first item is assigned to bin 1. Items 2, ..., n are then considered by increasing indices: each item is assigned to the current bin, if it fits; otherwise, it is assigned to a new bin, which becomes the current one.
Visual Representation
Let us consider the same example as used above and bins of size 1
The Next-Fit solution NF(I) for this instance I would be: Considering the 0.5 sized item first, we can place it in the first bin.
Moving on to the 0.7 sized item, we cannot place it in the first bin. Hence we place it in a new bin.
Moving on to the 0.5 sized item, we cannot place it in the current bin. Hence we place it in a new bin.
Moving on to the 0.2 sized item, we can place it in the current (third) bin.
Thus we need 6 bins as opposed to the 4 bins of the optimal solution, so we can see that this algorithm is not very efficient.
Analyzing the approximation ratio of Next-Fit algorithm
The time complexity of the algorithm is clearly O(n). It is easy to prove that, for any instance I of the bin packing problem (BPP), the solution value NF(I) provided by the algorithm satisfies the bound

NF(I) < 2 z(I),

where z(I) denotes the optimal solution value. (Any two consecutive bins of the NF solution contain items whose total size exceeds the bin capacity, so the total size of all items exceeds ⌊NF(I)/2⌋ times the capacity and hence z(I) > ⌊NF(I)/2⌋.) Furthermore, there exist instances for which the ratio NF(I)/z(I) is arbitrarily close to 2, i.e. the worst-case approximation ratio of NF is r(NF) = 2.
Pseudocode

// Next-Fit: size[] is the array containing the sizes of the items,
// n is the number of items and c is the capacity of the bin.
// Returns the count of bins used.
int nextFit(double size[], int n, double c)
{
    int res = 0;          // count of bins used so far
    double bin_rem = 0;   // remaining capacity of the current bin (no bin open yet)

    // Place items one by one
    for (int i = 0; i < n; i++) {
        if (size[i] > bin_rem) {
            // This item can't fit in the current bin: use a new bin
            res++;
            bin_rem = c - size[i];
        } else {
            bin_rem -= size[i];
        }
    }
    return res;
}
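A small driver for the routine above; the item sizes are illustrative, not the exact instance used in the figures.

#include <stdio.h>

int nextFit(double size[], int n, double c);   /* routine given above */

int main(void)
{
    double items[] = {0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5};  /* illustrative */
    int n = sizeof(items) / sizeof(items[0]);
    printf("Next-Fit bins: %d\n", nextFit(items, n, 1.0));
    return 0;
}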
2) First-Fit algorithm
A better algorithm, First-Fit (FF), considers the items according to increasing indices and assigns each item to the lowest indexed initialized bin into which it fits; only when the current item cannot fit into any initialized bin is a new bin introduced.
Visual Representation
Let us consider the same example as used above and bins of size 1
Considering the 0.5 sized item first, we can place it in the first bin.
Moving on to the 0.7 sized item, we cannot place it in the first bin. Hence we place it in a new bin.
Moving on to the 0.5 sized item, we can place it in the first bin.
Moving on to the 0.2 sized item, we cannot place it in the first bin; we check with the second bin and we can place it there.
Moving on to the 0.4 sized item, we cannot place it in any existing bin. Hence we place it in a new bin.
Similarly, placing all the other items following the First-Fit algorithm we get:
Thus we need 5 bins as opposed to the 4 bins of the optimal solution, but this is much more efficient than the Next-Fit algorithm.
Analyzing the approximation ratio of First-Fit algorithm
If FF(I) is the First-Fit solution for instance I and z(I) is the optimal solution, then it can be seen that First-Fit never uses more than 1.7*z(I) bins. So First-Fit is better than Next-Fit in terms of the upper bound on the number of bins.
Pseudocode

// First-Fit: size[] is the array containing the sizes of the items,
// n is the number of items and c is the capacity of the bin.
// Returns the count of bins used.
int firstFit(double size[], int n, double c)
{
    int res = 0;          // count of bins used so far
    double bin_rem[n];    // remaining space in each bin; there can be at most n bins

    // Place items one by one
    for (int i = 0; i < n; i++) {
        // Find the first bin that can accommodate size[i]
        int j;
        for (j = 0; j < res; j++) {
            if (bin_rem[j] >= size[i]) {
                bin_rem[j] = bin_rem[j] - size[i];
                break;
            }
        }
        // If no bin could accommodate size[i], open a new bin
        if (j == res) {
            bin_rem[res] = c - size[i];
            res++;
        }
    }
    return res;
}
3) Best-Fit algorithm
The next algorithm, Best-Fit (BF), is obtained from FF by assigning the current item to the feasible bin (if any) having the smallest residual capacity (breaking ties in favor of the lowest indexed bin).
Simply put, the idea is to place the next item in the tightest spot, that is, put it in the bin so that the smallest empty space is left.
Visual Representation
Let us consider the same example as used above and bins of size 1
Moving on to the 0.7 sized item, we cannot place it in the first bin. Hence we place it in a new bin.
Moving on to the 0.5 sized item, we can place it in the first bin tightly.
Moving on to the 0.2 sized item, we cannot place it in the first bin, but we can place it in the second bin tightly.
Moving on to the 0.4 sized item, we cannot place it in any existing bin. Hence we place it in a new bin.
Similarly, placing all the other items following the Best-Fit algorithm we get:
Thus we need 5 bins as opposed to the 4 bins of the optimal solution, but this is much more efficient than the Next-Fit algorithm.
Analyzing the approximation ratio of Best-Fit algorithm
It can be noted that Best-Fit (BF) is obtained from FF by assigning the current item to the feasible bin (if any) having the smallest residual capacity (breaking ties in favour of the lowest indexed bin). BF satisfies the same worst-case bounds as FF.
Analysis of the upper bound of Best-Fit algorithm
If z(I) is the optimal number of bins, then Best-Fit never uses more than 2*z(I) - 2 bins. So Best-Fit is the same as Next-Fit in terms of the upper bound on the number of bins.
Pseudocode

// Best-Fit: size[] holds the item sizes, n the item count, c the bin capacity.
int bestFit(double size[], int n, double c)
{
    int res = 0;          // count of bins used so far
    double bin_rem[n];    // remaining space in each bin
    for (int i = 0; i < n; i++) {
        // Find the bin leaving the minimum space after placing the item
        int bi = 0;
        double min = c + 1;
        for (int j = 0; j < res; j++) {
            if (bin_rem[j] >= size[i] && bin_rem[j] - size[i] < min) {
                bi = j;
                min = bin_rem[j] - size[i];
            }
        }
        if (min == c + 1) {   // no existing bin can hold the item
            bin_rem[res] = c - size[i];
            res++;
        } else {
            bin_rem[bi] -= size[i];
        }
    }
    return res;
}
4) First-Fit Decreasing algorithm
We first sort the array of items in decreasing order of size and then apply the First-Fit algorithm as discussed above (a code sketch follows the steps below).
Algorithm
Read the inputs of items
Sort the array of items in decreasing order by their sizes
Apply First-Fit algorithm
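A minimal C sketch of this First-Fit Decreasing idea, reusing the firstFit routine given earlier; the item sizes in main are illustrative, not the exact instance from the figures.

#include <stdio.h>
#include <stdlib.h>

int firstFit(double size[], int n, double c);   /* defined earlier in these notes */

/* qsort comparator for sorting item sizes in decreasing order */
static int cmp_desc(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);
}

/* First-Fit Decreasing: sort the items in decreasing order of size,
   then run the ordinary First-Fit algorithm. */
int firstFitDecreasing(double size[], int n, double c)
{
    qsort(size, n, sizeof(double), cmp_desc);
    return firstFit(size, n, c);
}

int main(void)
{
    double items[] = {0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5};  /* illustrative */
    int n = sizeof(items) / sizeof(items[0]);
    printf("FFD bins: %d\n", firstFitDecreasing(items, n, 1.0));
    return 0;
}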
Visual Representation
Let us consider the same example as used above and bins of size 1
We then select the 0.6 sized item. We cannot place it in bin 1, so we place it in bin 2.
We then select the 0.5 sized item. We cannot place it in any existing bin, so we place it in bin 3.
Thus only 4 bins are required, which is the same as the optimal solution.
5) Best-Fit Decreasing algorithm
We first sort the array of items in decreasing order of size and then apply the Best-Fit algorithm as discussed above.
Algorithm
Read the inputs of items
Sort the array of items in decreasing order by their sizes
Apply Best-Fit algorithm
Visual Representation
Let us consider the same example as used above and bins of size 1
We then select the 0.6 sized item. We cannot place it in bin 1. So, we place it in bin 2.
We then select the 0.5 sized item. We cannot place it in any existing bin. So, we place it in bin 3.
Thus only 4 bins are required which is the same as the optimal solution.
Approximation Algorithms for the Traveling Salesman Problem
We solved the traveling salesman problem by exhaustive search in Section 3.4, mentioned its decision version as one of the most well-known NP-complete problems in Section 11.3, and saw how its instances can be solved by a branch-and-bound algorithm in Section 12.2. Here, we consider several approximation algorithms, a small sample of the dozens of such algorithms suggested over the years for this famous problem.
But first let us answer the question of whether we should hope to find a polynomial-time approximation algorithm with a finite performance ratio on all instances of the traveling salesman problem.
Nearest-neighbour algorithm
The following well-known greedy algorithm is based on the nearest-neighbor
heuristic: always go next to the nearest unvisited city.
Step 1 Choose an arbitrary city as the start.
Step 2 Repeat the following operation until all the cities have been visited: go to the unvisited city nearest the one visited last (ties can be broken arbitrarily).
Step 3 Return to the starting city.
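A compact C sketch of these three steps on a complete graph given as a distance matrix. The function name and the 4-city matrix in main are assumptions for illustration, not values taken from Figure 12.10.

#include <stdio.h>

#define N 4   /* number of cities (assumption for the sketch) */

/* Nearest-neighbor heuristic: start at `start`, repeatedly move to the
   nearest unvisited city, then return to the start. Writes the circuit
   into tour[0..N] and returns its total length. */
int nearest_neighbor(int dist[N][N], int start, int tour[N + 1])
{
    int visited[N] = {0};
    int length = 0, cur = start;
    visited[start] = 1;
    tour[0] = start;

    for (int step = 1; step < N; step++) {
        int next = -1;
        for (int v = 0; v < N; v++)            /* nearest unvisited city */
            if (!visited[v] && (next == -1 || dist[cur][v] < dist[cur][next]))
                next = v;
        visited[next] = 1;
        tour[step] = next;
        length += dist[cur][next];
        cur = next;
    }
    length += dist[cur][start];                /* close the circuit */
    tour[N] = start;
    return length;
}

int main(void)
{
    /* Illustrative symmetric distance matrix for cities a, b, c, d (assumed values). */
    int dist[N][N] = {
        {0, 1, 3, 6},
        {1, 0, 2, 3},
        {3, 2, 0, 1},
        {6, 3, 1, 0}
    };
    int tour[N + 1];
    printf("tour length = %d\n", nearest_neighbor(dist, 0, tour));
    return 0;
}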
EXAMPLE 1 For the instance represented by the graph in Figure 12.10, with a as the starting vertex, the nearest-neighbor algorithm yields the tour (Hamiltonian circuit) sa: a − b − c − d − a of length 10.
Unfortunately, except for its simplicity, not many good things can be said about the nearest-neighbor algorithm. In particular, nothing can be said in general about the accuracy of the solutions it produces, because the last leg of the tour may force it to traverse a very long edge. For such an instance the accuracy ratio can be made as large as we wish by choosing an appropriately large value of the edge weight w. Hence, RA = ∞ for this algorithm (as it should be according to Theorem 1).
Twice-around-the-tree algorithm
Step 1 Construct a minimum spanning tree of the graph corresponding to a given instance of the traveling salesman problem.
Step 2 Starting at an arbitrary vertex, perform a walk around the minimum spanning tree recording all the vertices passed by. (This can be done by a DFS traversal.)
Step 3 Scan the vertex list obtained in Step 2 and eliminate from it all repeated occurrences of the same vertex except the starting one at the end of the list. (This step is equivalent to making shortcuts in the walk.) The vertices remaining on the list will form a Hamiltonian circuit, which is the output of the algorithm.
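A C sketch of the three steps above, under stated assumptions: Prim's algorithm on a distance matrix builds the minimum spanning tree for Step 1, and a preorder (DFS) walk of that tree, which naturally skips repeated vertices, performs the walk and shortcutting of Steps 2 and 3. The matrix in main is illustrative.

#include <stdio.h>

#define N 4             /* number of cities (assumption for the sketch) */
#define INF 1000000000

/* Preorder walk of the MST: visiting vertices in DFS preorder and skipping
   repeats implements the "shortcut" step of the algorithm. */
static void preorder(int parent[N], int u, int order[], int *k)
{
    order[(*k)++] = u;
    for (int v = 0; v < N; v++)
        if (v != u && parent[v] == u)
            preorder(parent, v, order, k);
}

/* Twice-around-the-tree: build an MST (Prim's algorithm), walk it in preorder,
   and close the resulting vertex list into a Hamiltonian circuit.
   Returns the length of that circuit; the circuit itself is in tour[0..N]. */
int twice_around_the_tree(int dist[N][N], int start, int tour[N + 1])
{
    int in_mst[N] = {0}, parent[N], key[N];
    for (int v = 0; v < N; v++) { key[v] = INF; parent[v] = -1; }
    key[start] = 0;

    for (int iter = 0; iter < N; iter++) {       /* Prim's MST */
        int u = -1;
        for (int v = 0; v < N; v++)
            if (!in_mst[v] && (u == -1 || key[v] < key[u])) u = v;
        in_mst[u] = 1;
        for (int v = 0; v < N; v++)
            if (!in_mst[v] && dist[u][v] < key[v]) {
                key[v] = dist[u][v];
                parent[v] = u;
            }
    }

    int k = 0;
    preorder(parent, start, tour, &k);           /* DFS walk + shortcuts */
    tour[N] = start;                             /* return to the start */

    int length = 0;
    for (int i = 0; i < N; i++)
        length += dist[tour[i]][tour[i + 1]];
    return length;
}

int main(void)
{
    int dist[N][N] = {                           /* illustrative distances */
        {0, 1, 3, 6},
        {1, 0, 2, 3},
        {3, 2, 0, 1},
        {6, 3, 1, 0}
    };
    int tour[N + 1];
    printf("tour length = %d\n", twice_around_the_tree(dist, 0, tour));
    return 0;
}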
Fermat's little theorem: if n is a prime number, then for every a with 1 ≤ a < n,
a^(n−1) ≡ 1 (mod n), i.e. a^(n−1) % n = 1.
Example: Since 5 is prime, 2^4 ≡ 1 (mod 5) [or 2^4 % 5 = 1], 3^4 ≡ 1 (mod 5) and 4^4 ≡ 1 (mod 5).
Since 7 is prime, 2^6 ≡ 1 (mod 7), 3^6 ≡ 1 (mod 7), 4^6 ≡ 1 (mod 7), 5^6 ≡ 1 (mod 7) and 6^6 ≡ 1 (mod 7).
Algorithm (Fermat primality test)
1) Repeat the following k times:
   Pick a random a in the range [2, n − 2].
   If a^(n−1) % n ≠ 1, return false [composite].
2) Return true [probably prime].
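A minimal C sketch of the Fermat test described above. It assumes n is small enough that the 64-bit products in the modular exponentiation do not overflow; the driver values are illustrative.

#include <stdio.h>
#include <stdlib.h>

/* Modular exponentiation: computes (base^exp) mod m by repeated squaring. */
static unsigned long long mod_pow(unsigned long long base, unsigned long long exp,
                                  unsigned long long m)
{
    unsigned long long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;   /* assumes m*m fits in 64 bits */
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

/* Fermat primality test: returns 1 if n is probably prime, 0 if composite.
   k is the number of random bases tried. */
int is_probably_prime(unsigned long long n, int k)
{
    if (n < 4)
        return n == 2 || n == 3;
    for (int i = 0; i < k; i++) {
        unsigned long long a = 2 + rand() % (n - 3);   /* random a in [2, n-2] */
        if (mod_pow(a, n - 1, n) != 1)
            return 0;                                  /* definitely composite */
    }
    return 1;                                          /* probably prime */
}

int main(void)
{
    printf("%d %d\n", is_probably_prime(101, 5), is_probably_prime(100, 5));
    return 0;
}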
Unlike merge sort, we don't need to merge the two sorted subarrays. Thus Quicksort requires less auxiliary space than Merge Sort, which is why it is often preferred to Merge Sort.
Using a randomly generated pivot we can further improve the expected time complexity of QuickSort.
Algorithm for random pivoting
partition(arr[], lo, hi)
    pivot = arr[hi]
    i = lo                      // place for swapping
    for j := lo to hi - 1 do
        if arr[j] <= pivot then
            swap arr[i] with arr[j]
            i = i + 1
    swap arr[i] with arr[hi]
    return i

partition_r(arr[], lo, hi)
    r = random number from lo to hi
    swap arr[r] and arr[hi]
    return partition(arr, lo, hi)

quicksort(arr[], lo, hi)
    if lo < hi
        p = partition_r(arr, lo, hi)
        quicksort(arr, lo, p - 1)
        quicksort(arr, p + 1, hi)
Finding the Kth smallest element
Problem Description: Given an array A[] of n elements and a positive integer K, find the Kth smallest element in the array. It is given that all array elements are distinct.
For Example:
Input: A[] = {10, 3, 6, 9, 2, 4, 15, 23}, K = 4
Output: 6
Input: A[] = {5, -8, 10, 37, 101, 2, 9}, K = 6
Output: 37
Quick-Select: Approach similar to quicksort
This approach is similar to the quick sort algorithm where we use the partition on the
input array recursively. But unlike quicksort, which processes both sides of the array
recursively, this algorithm works on only one side of the partition. We recur for either
the left or right side according to the position of pivot.
Solution Steps
1. Partition the array A[left..right] into two subarrays A[left..pos] and A[pos+1..right] such that each element of A[left..pos] is less than each element of A[pos+1..right].
2. Compute the number of elements in the subarray A[left..pos], i.e. count = pos - left + 1.
3. If count == K, the pivot A[pos] is the Kth smallest element; if count > K, recur on the left subarray; otherwise recur on the right subarray for the (K - count)th smallest element.
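A runnable C sketch of Quick-Select following these steps, using the randomized Lomuto partition shown earlier; the driver uses the first sample input from the problem description above.

#include <stdio.h>
#include <stdlib.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition with a random pivot: places the pivot at its final
   position pos and returns pos; everything in A[left..pos] is <= pivot. */
static int partition_r(int A[], int left, int right)
{
    int r = left + rand() % (right - left + 1);
    swap(&A[r], &A[right]);
    int pivot = A[right], i = left;
    for (int j = left; j < right; j++)
        if (A[j] <= pivot)
            swap(&A[i++], &A[j]);
    swap(&A[i], &A[right]);
    return i;
}

/* Quick-Select: returns the Kth smallest element of A[left..right] (K >= 1).
   Unlike quicksort, only one side of each partition is processed. */
int kth_smallest(int A[], int left, int right, int K)
{
    int pos = partition_r(A, left, right);
    int count = pos - left + 1;          /* elements in A[left..pos] */
    if (count == K)
        return A[pos];
    if (count > K)
        return kth_smallest(A, left, pos - 1, K);
    return kth_smallest(A, pos + 1, right, K - count);
}

int main(void)
{
    int A[] = {10, 3, 6, 9, 2, 4, 15, 23};
    printf("%d\n", kth_smallest(A, 0, 7, 4));   /* prints 6 */
    return 0;
}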