ADSA Unit-1
__________________________________________________________________________
Introduction to Algorithms:
Algorithms, Pseudocode for expressing algorithms, Performance Analysis-Space
complexity, Time complexity, Asymptotic Notation- Big oh, Omega, Theta notation and
Little oh notation, Polynomial vs Exponential Algorithms, Average, Best and Worst Case
Complexities, Analysing Recursive Programs.
_________________________________________________________________________
An algorithm is any well-defined computational procedure that takes some value or set of
values as input and produces some value or set of values as output. Thus, an algorithm is a
sequence of computational steps that transforms the input into the output.
Formal Definition:
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task.
In addition, every algorithm must satisfy the following criteria:
1. Input: zero or more quantities are externally supplied.
2. Output: at least one quantity is produced.
3. Definiteness: each instruction is clear and unambiguous.
4. Finiteness: the algorithm terminates after a finite number of steps.
5. Effectiveness: every instruction must be basic enough that it can, in principle, be carried
out by a person using only pencil and paper.
Algorithm Specification:
An algorithm can be described in three ways.
1. Natural language like English:
When this method is chosen, care should be taken to ensure that each and every
statement is definite.
2. Graphic representation called a flowchart:
This method works well when the algorithm is small and simple.
3. Pseudo-code method:
In this method, we typically describe algorithms as programs that resemble
languages like Pascal and Algol.
While Loop:
while <condition> do
{
<statement-1>
.
.
.
<statement-n>
}
For Loop:
for variable := value-1 to value-2 step step do
{
<statement-1>
.
.
.
<statement-n>
}
repeat-until:
repeat
<statement-1>
.
.
.
<statement-n>
until <condition>
As an example, the following algorithm finds and returns the maximum of 'n' given
numbers:
1. Algorithm Max(A, n)
2. // A is an array of size n
3. {
4. Result := A[1];
5. for I := 2 to n do
6. if A[I] > Result then
7. Result := A[I];
8. return Result;
9. }
In this algorithm (named Max), A and n are procedure parameters; Result and I are local
variables.
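The same algorithm in runnable form, as a minimal Python sketch (the function name
maximum is illustrative; Python lists are 0-based, whereas the pseudocode uses 1-based
indexing):

def maximum(a):
    # mirrors the pseudocode Max: scan once, keeping the largest value seen so far
    result = a[0]                  # Result := A[1]
    for i in range(1, len(a)):     # for I := 2 to n do
        if a[i] > result:          # if A[I] > Result then
            result = a[i]          # Result := A[I]
    return result

print(maximum([10, 4, 25, 7]))     # prints 25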
Selection Sort:
1. for i := 1 to n do
2. {
3. Examine a[i] to a[n] and suppose the smallest element is at a[j];
4. Interchange a[i] and a[j];
5. }
Selection sort begins by finding the least element in the list. This element is
moved to the front. Then the least element among the remaining elements is found and put
into the second position. This procedure is repeated till the entire list has been sorted.
For example, sorting the list 5, 1, 4, 3, 2:
1 is selected: 1, 5, 4, 3, 2
2 is selected: 1, 2, 4, 3, 5
3 is selected: 1, 2, 3, 4, 5
4 is selected: 1, 2, 3, 4, 5
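A minimal runnable Python sketch of this procedure (0-based indexing; the detailed
pseudocode and its proof of correctness follow below):

def selection_sort(a):
    n = len(a)
    for i in range(n):
        # find the index j of the smallest element in a[i:n]
        j = i
        for k in range(i + 1, n):
            if a[k] < a[j]:
                j = k
        # interchange a[i] and a[j]
        a[i], a[j] = a[j], a[i]
    return a

print(selection_sort([5, 1, 4, 3, 2]))  # [1, 2, 3, 4, 5]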
In detailed pseudocode, with the line numbers referred to in the proof below, selection sort is:
1. Algorithm SelectionSort(a, n)
2. // Sort a[1:n] into non-decreasing order.
3. {
4. for i := 1 to n do
5. {
6. j := i;
7. for k := i + 1 to n do
8. if (a[k] < a[j]) then j := k;
9. t := a[i]; a[i] := a[j]; a[j] := t;
10. }
11. }
Proof:
We first note that, for any i, say i = q, following the execution of lines 6 to 9 it is the
case that a[q] ≤ a[r] for all r with q < r ≤ n.
Also observe that once i becomes greater than q, a[1:q] is unchanged. Hence,
following the last execution of these lines (i.e., i = n), we have a[1] ≤ a[2] ≤ … ≤ a[n].
We also observe that the upper limit of the for loop in line 4 can be
changed to n-1 without damaging the correctness of the algorithm.
Time Complexity:
The time complexity of an algorithm can be estimated by determining its step count. There
are two methods:
1. The first method to determine the step count of an algorithm is to introduce a global
variable, count, with initial value zero, and to increment count each time a statement in the
original algorithm is executed. For example, the algorithm Sum, which adds the elements of
a[1:n], can be instrumented as follows:
Algorithm Sum(a, n)
{
s := 0.0;
count := count + 1; // for the assignment to s
for i := 1 to n do
{
count := count + 1; // for the for-loop test
s := s + a[i];
count := count + 1; // for the assignment to s
}
count := count + 1; // for the last for-loop test
count := count + 1; // for the return
return s;
}
If count is zero to start with, then it will be 2n+3 on termination. So each
invocation of Sum executes a total of 2n+3 steps.
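The same instrumentation in runnable Python (a sketch; here the counter is returned
alongside the sum rather than kept as a global variable):

def sum_with_count(a):
    count = 0
    s = 0.0
    count += 1              # for s := 0.0
    for x in a:
        count += 1          # for each for-loop test
        s = s + x
        count += 1          # for each assignment to s
    count += 1              # for the final for-loop test
    count += 1              # for the return
    return s, count

print(sum_with_count([1, 2, 3]))  # (6.0, 9); and 2n+3 = 9 for n = 3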
2. The second method to determine the step count of an algorithm is to build a table in
which we list the total number of steps contributed by each statement.
First determine the number of steps per execution (s/e) of the statement and the total
number of times (i.e., frequency) each statement is executed.
By combining these two quantities, the total contribution of each statement, and hence the
step count for the entire algorithm, is obtained.
Statement                     s/e    Frequency    Total
1. Algorithm Sum(a, n)         0         -           0
2. {                           0         -           0
3.   s := 0.0;                 1         1           1
4.   for i := 1 to n do        1       n + 1       n + 1
5.     s := s + a[i];          1         n           n
6.   return s;                 1         1           1
7. }                           0         -           0
Total                                              2n + 3
There are different kinds of mathematical notations used to represent time complexity.
These are called Asymptotic notations. They are as follows:
1. Big oh(O) notation
2. Omega(Ω) notation
3. Theta(θ) notation
1. Big oh(O) notation:
Big oh(O) notation is used to represent the upper bound of an algorithm's runtime.
Let f(n) and g(n) be two non-negative functions.
The function f(n) = O(g(n)) if and only if there exist positive constants c and n0
such that f(n) ≤ c*g(n) for all n ≥ n0.
Example:
If f(n)=3n+2 then prove that f(n) = O(n)
Let f(n) =3n+2, c=4, g(n) =n
if n=1 3n+2 ≤ 4n
3(1)+2 ≤ 4(1)
3+2 ≤ 4
5 ≤ 4 (F)
if n=2 3n+2≤4n
3(2)+2 ≤ 4(2)
8 ≤ 8 (T)
3n+2 ≤ 4n for all n ≥ 2
This is in the form of f(n) ≤ c*g(n) for all n ≥ n0, where c=4, n0 =2
Therefore, f(n) = O(n).
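A quick numeric sanity check of this bound (a throwaway Python sketch; it samples values
of n rather than proving the inequality):

f = lambda n: 3 * n + 2
c, n0 = 4, 2
# check f(n) <= c*g(n) with g(n) = n over a sample of n >= n0
print(all(f(n) <= c * n for n in range(n0, 10000)))  # True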
2. Omega(Ω) notation:
Omega(Ω) notation is used to represent the lower bound of an algorithm's runtime.
Let f(n) and g(n) be two non-negative functions.
The function f(n) = Ω(g(n)) if and only if there exist positive constants c and n0
such that f(n) ≥ c*g(n) for all n ≥ n0.
Example:
If f(n)=3n+2 then prove that f(n) = Ω(n)
Let f(n) =3n+2, c=3, g(n) =n
if n=1 3n+2 ≥ 3n
3(1)+2 ≥ 3(1)
5 ≥ 3 (T)
3n+2 ≥ 3n for all n ≥ 1
This is in the form of f(n) ≥ c*g(n) for all n ≥ n0, where c=3, n0 =1
Therefore, f(n) = Ω(n).
3. Theta(θ) notation:
Theta(θ) notation is used to represent a running time that lies between an upper bound
and a lower bound (a tight bound).
Let f(n) and g(n) be two non-negative functions.
The function f(n) = θ(g(n)) if and only if there exist positive constants c1, c2 and
n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.
Example:
If f(n)=3n+2 then prove that f(n) = θ(n)
Lower bound: 3n+2 ≥ 3n for all n ≥ 1, so c1 = 3, g(n) = n
Upper bound: 3n+2 ≤ 4n for all n ≥ 2, so c2 = 4
Hence 3n ≤ 3n+2 ≤ 4n for all n ≥ 2, which is in the form c1*g(n) ≤ f(n) ≤ c2*g(n)
for all n ≥ n0, where c1 = 3, c2 = 4, n0 = 2.
Therefore, f(n) = θ(n).
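The two-sided bound can be sanity-checked the same way (again a sampling sketch, not a
proof):

f = lambda n: 3 * n + 2
c1, c2, n0 = 3, 4, 2
# check c1*g(n) <= f(n) <= c2*g(n) with g(n) = n over a sample of n >= n0
print(all(c1 * n <= f(n) <= c2 * n for n in range(n0, 10000)))  # True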
1.5. POLYNOMIAL VS EXPONENTIAL ALGORITHMS
Algorithms which have exponential time complexity grow much faster than
polynomial algorithms.
The difference lies in where the variable appears in the equation that expresses the run
time. In a polynomial time complexity, the variable appears in the bases of the terms.
Example: n^3 + 2n^2 + 1. Notice n is in the base, NOT the exponent.
In an exponential time complexity, the variable appears in the exponent.
Example: 2^n. As said before, exponential time grows much faster. If n is equal to 1000 (a
reasonable input for an algorithm), then notice that 1000^3 is 1 billion, while 2^1000 is
simply huge! For reference, there are about 2^80 hydrogen atoms in the sun, which is
already much more than 1 billion.
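A small Python sketch makes the gap concrete (sample sizes chosen arbitrarily):

# polynomial vs exponential growth at a few input sizes
for n in [10, 20, 50, 100]:
    poly = n**3 + 2 * n**2 + 1   # polynomial: variable in the base
    expo = 2**n                  # exponential: variable in the exponent
    print(f"n={n:4d}  n^3+2n^2+1={poly}  2^n={expo}")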
1.6. AVERAGE, BEST AND WORST CASE COMPLEXITIES
Best case: This analysis constrains the input, other than its size, so as to result in the
fastest possible run time.
Worst case: This analysis constrains the input, other than its size, so as to result in the
slowest possible run time.
Average case: This type of analysis results in the average running time over every possible
type of input.
Complexity: Complexity refers to the rate at which the required storage or time grows as a
function of the problem size.
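For instance, in a linear search the best case is the target sitting at the first position and
the worst case is the target being absent. A minimal sketch (the function name is
illustrative):

def linear_search(a, target):
    # return the index of target in a, or -1 if it is absent
    for i, x in enumerate(a):
        if x == target:
            return i        # best case: found at i = 0 after one comparison, O(1)
    return -1               # worst case: all n elements compared, O(n)

a = [7, 3, 9, 1]
print(linear_search(a, 7))   # best case: index 0
print(linear_search(a, 42))  # worst case: -1 after n comparisons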
Analysing Recursive Programs:
The running time of a recursive algorithm is described by a recurrence relation, which
expresses T(N), the running time on an input of size N, in terms of the running time on
smaller inputs. Some examples:
Merge Sort
T(N) = 2 T(N/2) + CN
Base Condition: T(1) = 1
Recursive Algorithm: Finding min and max in an array
T(N) = 2 T(N/2) + 2
Base Condition: T(1) = 0 and T(2) = 1
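These recurrences can be checked numerically. A throwaway Python sketch that unrolls
the merge sort recurrence (assuming C = 1) and compares it with the closed form
N*log2(N) + N, which holds when N is a power of two:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # merge sort recurrence: T(N) = 2*T(N/2) + C*N with C = 1, T(1) = 1
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for n in [2, 16, 256, 4096]:
    print(n, T(n), int(n * math.log2(n)) + n)  # the last two columns match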
Quick Sort
T(N) = T(i) + T(N-i-1) + CN
The time taken by quick sort depends upon the distribution of the input array and the
partition strategy. T(i) and T(N-i-1) are the two smaller subproblems after the partition,
where i is the number of elements smaller than the pivot. CN is the time complexity of the
partition process, where C is a constant.
Worst Case: This is the case of an unbalanced partition, where the partition process always
picks the greatest or smallest element as the pivot (Think!). For the recurrence relation of
the worst-case scenario, we can put i = 0 in the above equation.
T(N) = T(0) + T(N-1) + CN
which, since T(0) is a constant, simplifies to
T(N) = T(N-1) + CN
Best Case: This is the case of a balanced partition, where the partition process always picks
the middle element as the pivot. For the recurrence relation of the best-case scenario, put
i = N/2 in the above equation.
T(N) = T(N/2) + T(N/2-1) + CN
which is approximately
T(N) = 2T(N/2) + CN
Average Case: For the average-case analysis, we need to consider all possible permutations
of the input and the time taken by each permutation, assuming each of the N possible pivot
positions is equally likely:
T(N) = CN + (1/N) * ∑(i = 0 to N-1) [ T(i) + T(N-i-1) ]
Note: This looks mathematically complex, but there are several more intuitive ways to
analyse the average case of quick sort.
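Unrolling the worst-case recurrence the same way shows the quadratic growth (again
assuming C = 1):

def T_worst(n):
    # worst-case quick sort: T(N) = T(N-1) + C*N with C = 1, T(0) = 0
    t = 0
    for k in range(1, n + 1):
        t += k
    return t

for n in [10, 100, 1000]:
    print(n, T_worst(n), n * (n + 1) // 2)  # matches N(N+1)/2, i.e. O(N^2)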
Analysing the Efficiency of Recursive Algorithms
Step 1: Identify the number of sub-problems and a parameter (or parameters) indicating the
input size of each sub-problem (a function call with a smaller input size).
Step 2: Add the time complexities of the sub-problems and the total number of basic
operations performed at that stage of recursion.
Step 3: Set up a recurrence relation, with a correct base condition, for the number of times
the basic operation is executed.
Step 4: Solve the recurrence or, at least, ascertain the order of growth of its solution. There
are several ways to analyse a recurrence relation, but here we discuss two popular
approaches to solving recurrences:
Method 1: Recursion Tree Method
Method 2: Master Theorem
The master theorem applies to recurrences of the form T(N) = a*T(N/b) + f(N), where
a ≥ 1 and b > 1 are constants and f(N) = O(N^k) for some k ≥ 0. Comparing k with logb(a),
there are three cases:
Case 1: T(N) = O(N^logb(a)), if k < logb(a).
Case 2: T(N) = O(N^k * logN), if k = logb(a).
Case 3: T(N) = O(N^k), if k > logb(a).
Example 1
T(N) = T(N/2) + C
The above recurrence relation is that of binary search. Comparing it with the master
theorem, we get a = 1, b = 2 and k = 0, because f(N) = C = C(N^0).
logb(a) = log2(1) = 0 = k, so Case 2 applies: T(N) = O(N^0 * logN) = O(logN).
Example 2
T(N) = 2*T(N/2) + CN
The above recurrence relation is that of merge sort. Comparing it with the master theorem,
a = 2, b = 2 and f(N) = CN. Comparing the left and right sides of f(N), we get k = 1.
logb(a) = log2(2) = 1 = k, so Case 2 applies: T(N) = O(N^1 * logN) = O(N logN).
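A small helper that mechanizes this case analysis (a hypothetical utility, assuming
f(N) = O(N^k) with no logarithmic factors; exact rational arithmetic would be more robust
than floats, but floats suffice for these examples):

import math

def master_theorem(a, b, k):
    # classify T(N) = a*T(N/b) + O(N^k) by the simplified master theorem
    e = math.log(a, b)  # logb(a)
    if k < e:
        return f"Case 1: T(N) = O(N^{e:g})"
    if k == e:
        return f"Case 2: T(N) = O(N^{k:g} * logN)"
    return f"Case 3: T(N) = O(N^{k:g})"

print(master_theorem(1, 2, 0))  # binary search -> Case 2: O(N^0 * logN) = O(logN)
print(master_theorem(2, 2, 1))  # merge sort    -> Case 2: O(N^1 * logN) = O(N logN)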