ADSA Unit-1

UNIT –I

__________________________________________________________________________
Introduction to Algorithms:
Algorithms, Pseudocode for expressing algorithms, Performance Analysis-Space
complexity, Time complexity, Asymptotic Notation- Big oh, Omega, Theta notation and
Little oh notation, Polynomial Vs Exponential Algorithms, Average, Best and Worst Case
Complexities, Analysing Recursive Programs.
_________________________________________________________________________

1.1. INTRODUCTION TO ALGORITHMS: WHAT IS AN ALGORITHM?


Informal Definition:

An Algorithm is any well-defined computational procedure that takes some value or set of
values as input and produces some value or set of values as output. Thus, an algorithm is a
sequence of computational steps that transforms the input into the output.

Formal Definition:

An Algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In


addition, all algorithms should satisfy the following criteria.

 INPUT  Zero or more quantities are externally supplied.


 OUTPUT  At least one quantity is produced.
 DEFINITENESS  Each instruction is clear and unambiguous.
 FINITENESS  If we trace out the instructions of an algorithm, then for all cases,
the algorithm terminates after a finite number of steps.
 EFFECTIVENESS  Every instruction must be very basic so that it can be carried out,
in principle, by a person using only pencil and paper.

Issues in the study of algorithms:

1. How to devise or design an algorithm  creating an algorithm.
2. How to express an algorithm  definiteness.
3. How to analyse an algorithm  time and space complexity.
4. How to validate an algorithm  correctness.
5. How to test an algorithm  checking for errors.

Algorithm Specification:
Algorithm can be described in three ways.
1. Natural language like English:
When this way is chosen, care should be taken to ensure that each and every
statement is definite.
2. Graphic representation called a flowchart:
This method works well when the algorithm is small and simple.
3. Pseudo-code method:
In this method, we typically describe algorithms as programs that resemble
languages like Pascal and Algol.

1.2. PSEUDO-CODE FOR EXPRESSING AN ALGORITHM:

1. Comments begin with // and continue until the end of line.


2. Blocks are indicated with matching braces {and}.
3. An identifier begins with a letter. The data types of variables are not explicitly
declared.
4. Compound data types can be formed with records. Here is an example,
node = record
{
    data type – 1 data-1;
    .
    .
    .
    data type – n data-n;
    node *link;
}
Here link is a pointer to the record type node. Individual data items of a record can be
accessed with the arrow (→) and the period (.).

5. Assignment of values to variables is done using the assignment statement.


<Variable>:= <expression>;

6. There are two Boolean values TRUE and FALSE.


 Logical Operators: AND, OR, NOT
 Relational Operators: <, <=, >, >=, =, !=

7. The following looping statements are employed.


For, while and repeat-until

While Loop:
While < condition > do
{
<statement-1>
.
.
.
<statement-n>
}

For Loop:
For variable: = value-1 to value-2 step step do
{
<statement-1>
.
.
.
<statement-n>
}
repeat-until:
repeat
<statement-1>
.
.
.
<statement-n>
until<condition>

8. A conditional statement has the following forms.


 If <condition> then <statement>
 If <condition> then <statement-1>
Else <statement-2>
Case statement:
Case
{
: <condition-1> : <statement-1>
.
.
.
: <condition-n> : <statement-n>
: else : <statement-n+1>
}
9. Input and output are done using the instructions read & write.
10. There is only one type of procedure: Algorithm, the heading takes the form,
Algorithm Name (Parameter lists)

 As an example, the following algorithm finds and returns the maximum of n given
numbers:

1. Algorithm Max(A, n)
2. // A is an array of size n
3. {
4.     Result := A[1];
5.     for i := 2 to n do
6.         if A[i] > Result then
7.             Result := A[i];
8.     return Result;
9. }
In this algorithm (named Max), A and n are procedure parameters; Result and i are local
variables.
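As an illustration, the Max pseudocode can be rendered in Python (a sketch, not part of the original notes; note that Python lists are 0-based, while the pseudocode's arrays are 1-based):

```python
def find_max(a):
    """Return the maximum of the values in list a (len(a) >= 1)."""
    result = a[0]                   # Result := A[1]
    for i in range(1, len(a)):      # for i := 2 to n do
        if a[i] > result:
            result = a[i]
    return result

print(find_max([3, 7, 2, 9, 4]))    # → 9
```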

 Next we present two examples to illustrate the process of translating a problem into an
algorithm.

Selection Sort:

 Suppose we must devise an algorithm that sorts a collection of n >= 1 elements
of arbitrary type.
 A simple solution is given by the following:
 From those elements that are currently unsorted, find the smallest and place it
next in the sorted list.
Algorithm:

1. for i := 1 to n do
2. {
3.     Examine a[i] to a[n] and suppose the smallest element is at a[j];
4.     Interchange a[i] and a[j];
5. }

 This involves two subtasks: finding the smallest element (say a[j]) and interchanging
it with a[i].

 We can solve the latter problem using the code,

t := a[i];
a[i] := a[j];
a[j] := t;
 The first subtask can be solved by assuming the minimum is a[i], checking a[i]
with a[i+1], a[i+2], ..., and whenever a smaller element is found, regarding it
as the new minimum. Finally, a[n] is compared with the current minimum.
 Putting all these observations together, we get the algorithm SelectionSort.
Theorem: Algorithm SelectionSort(a, n) correctly sorts a set of n >= 1 elements. The result
remains in a[1:n] such that a[1] <= a[2] <= ... <= a[n].

Selection Sort:

Selection Sort begins by finding the least element in the list. This element is
moved to the front. Then the least element among the remaining element is found out and put
into second position. This procedure is repeated till the entire list has been studied.

Example: List L = 3,5,4,1,2

1 is selected → 1,5,4,3,2

2 is selected → 1,2,4,3,5

3 is selected → 1,2,3,4,5

4 is selected → 1,2,3,4,5

Proof:
 We first note that for any i, say i = q, following the execution of lines 6 to 9, it is the
case that a[q] ≤ a[r] for q < r <= n.
 Also observe that when i becomes greater than q, a[1:q] is unchanged. Hence,
following the last execution of these lines (i.e., i = n), we have a[1] <= a[2]
<= ... <= a[n].
 We observe at this point that the upper limit of the for loop in line 4 can be
changed to n-1 without damaging the correctness of the algorithm.

Algorithm:

1. Algorithm SelectionSort(a, n)
2. // Sort the array a[1:n] into non-decreasing order.
3. {
4.     for i := 1 to n do
5.     {
6.         j := i;
7.         for k := i+1 to n do
8.             if (a[k] < a[j]) then j := k;
9.         t := a[i];
10.        a[i] := a[j];
11.        a[j] := t;
12.    }
13. }
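For illustration, SelectionSort can be written in Python (a sketch assuming 0-based indexing; not the textbook's own code):

```python
def selection_sort(a):
    """Sort list a into non-decreasing order, mirroring Algorithm SelectionSort."""
    n = len(a)
    for i in range(n):
        j = i                        # assume the minimum of a[i:] is at position i
        for k in range(i + 1, n):
            if a[k] < a[j]:
                j = k                # a smaller element found at position k
        a[i], a[j] = a[j], a[i]      # interchange a[i] and a[j]
    return a

print(selection_sort([3, 5, 4, 1, 2]))   # → [1, 2, 3, 4, 5]
```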

1. The first method to determine the step count is to introduce a global variable count
(initially zero) into the program and increment it after every executable statement:

Algorithm Sum(a, n)
{
    s := 0.0;
    count := count + 1;     // for the assignment to s
    for i := 1 to n do
    {
        count := count + 1; // for the for statement
        s := s + a[i];
        count := count + 1; // for the assignment to s
    }
    count := count + 1;     // for the last execution of the for statement
    count := count + 1;     // for the return statement
    return s;
}

 If count is zero to start with, then it will be 2n+3 on termination. So each
invocation of Sum executes a total of 2n+3 steps.

2. The second method to determine the step count of an algorithm is to build a table in
which we list the total number of steps contributed by each statement.
First determine the number of steps per execution (s/e) of each statement and the total
number of times (i.e., frequency) each statement is executed.
By combining these two quantities, the total contribution of each statement, and hence
the step count for the entire algorithm, is obtained.
Statement                        s/e   Frequency   Total
1. Algorithm Sum(a, n)            0        –         0
2. {                              0        –         0
3.     s := 0.0;                  1        1         1
4.     for i := 1 to n do         1       n+1       n+1
5.         s := s + a[i];         1        n         n
6.     return s;                  1        1         1
7. }                              0        –         0

                                        Total      2n+3
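The 2n+3 total in the table can be spot-checked by instrumenting a Python version of Sum with a counter whose increments mirror the s/e column (an illustrative sketch, not the textbook's code):

```python
def sum_with_count(a):
    """Return (s, count), where count is the step count of Algorithm Sum."""
    count = 0
    s = 0.0
    count += 1          # step for s := 0.0
    for x in a:
        count += 1      # one test of the for statement
        s += x
        count += 1      # step for s := s + a[i]
    count += 1          # final (failing) test of the for statement
    count += 1          # step for return s
    return s, count

s, count = sum_with_count([1, 2, 3, 4])   # n = 4
print(count)                              # → 11, i.e. 2*4 + 3
```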

1.4. ASYMPTOTIC NOTATIONS

There are different kinds of mathematical notations used to represent time complexity.
These are called Asymptotic notations. They are as follows:
1. Big oh (O) notation
2. Omega (Ω) notation
3. Theta (θ) notation

1. Big oh(O) notation:
 Big oh(O) notation is used to represent the upper bound of an algorithm's runtime.

 Let f(n) and g(n) be two non-negative functions.


 The function f(n) = O(g(n)) if and only if there exists positive constants c and n0
such that f(n)≤c*g(n) for all n , n ≥ n0.

Example:
If f(n)=3n+2 then prove that f(n) = O(n)
Let f(n) =3n+2, c=4, g(n) =n
if n=1 3n+2 ≤ 4n
3(1)+2 ≤ 4(1)
3+2 ≤ 4
5 ≤ 4 (F)
if n=2 3n+2≤4n
3(2)+2 ≤ 4(2)
8 ≤ 8 (T)
3n+2 ≤ 4n for all n ≥ 2
This is in the form of f(n) ≤ c*g(n) for all n ≥ n0, where c=4 and n0=2.
Therefore, f(n) = O(n).
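This bound can be spot-checked numerically in Python (a sketch using the constants c = 4 and n0 = 2 from the proof above):

```python
def f(n):
    return 3 * n + 2        # f(n) = 3n + 2

def g(n):
    return n                # g(n) = n

c, n0 = 4, 2
# f(n) <= c*g(n) must hold for every n >= n0; check a sample range.
print(all(f(n) <= c * g(n) for n in range(n0, 1000)))   # → True
print(f(1) <= c * g(1))                                 # → False: n = 1 fails, hence n0 = 2
```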
2. Omega(Ω) notation:
 Omega(Ω) notation is used to represent the lower bound of an algorithm's runtime.
 Let f(n) and g(n) be two non-negative functions.
 The function f(n) = Ω(g(n)) if and only if there exists positive constants c and n0
such that f(n) ≥ c*g(n) for all n , n ≥ n0.

Example
f(n)=3n+2 then prove that f(n) = Ω(g(n))
Let f(n) =3n+2, c=3, g(n) =n
if n=1 3n+2 ≥ 3n
3(1)+2 ≥ 3(1)
5 ≥ 3 (T)
3n+2 ≥ 3n for all n ≥ 1
This is in the form of f(n) ≥ c*g(n) for all n ≥ n0, where c=3, n0 =1
Therefore, f(n) = Ω(n).
3. Theta(θ) notation:
 Theta(θ) notation is used to represent a running time that is bounded both above
and below, i.e., between an upper bound and a lower bound.
 Let f(n) and g(n) be two non-negative functions.
 The function f(n) = θ(g(n)) if and only if there exist positive constants c1, c2 and
n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.

Example:
f(n) = 3n+2; prove that f(n) = θ(g(n)).
Lower bound: 3n+2 ≥ 3n for all n ≥ 1, so c1=3, g(n)=n, n0=1.
Upper bound: 3n+2 ≤ 4n for all n ≥ 2, so c2=4, n0=2.
Hence 3n ≤ 3n+2 ≤ 4n for all n ≥ 2, with c1=3, c2=4, n0=2. Therefore, f(n) = θ(n).
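The sandwich c1*g(n) ≤ f(n) ≤ c2*g(n) can likewise be spot-checked (a sketch with c1 = 3, c2 = 4, n0 = 2 as above):

```python
def f(n):
    return 3 * n + 2        # f(n) = 3n + 2

c1, c2, n0 = 3, 4, 2
# c1*n <= f(n) <= c2*n must hold for every n >= n0; check a sample range.
ok = all(c1 * n <= f(n) <= c2 * n for n in range(n0, 1000))
print(ok)   # → True, so f(n) = θ(n)
```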

1.5. POLYNOMIAL VS EXPONENTIAL ALGORITHMS

Algorithms which have exponential time complexity grow much faster than
polynomial algorithms.
The difference lies in where the variable appears in the equation that expresses the running
time. Polynomial time complexities have the variable in the bases of their terms.
Example: n^3 + 2n^2 + 1. Notice n is in the base, NOT the exponent.
In exponential equations, the variable is in the exponent.
Example: 2^n. As said before, exponential time grows much faster. If n is equal to 1000 (a
reasonable input for an algorithm), then 1000^3 is 1 billion, while 2^1000 is astronomically
large. For reference, there are about 2^80 hydrogen atoms in the sun, which is already far
more than 1 billion.
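A quick Python comparison makes the gap between polynomial and exponential growth concrete (an illustrative sketch):

```python
# Compare the polynomial n^3 against the exponential 2^n as n grows.
for n in (10, 20, 30, 40):
    print(n, n ** 3, 2 ** n)

# At n = 10 the two are comparable (1000 vs 1024), but by n = 40
# the exponential has pulled far ahead: 64000 vs 1099511627776.
```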
1.6.AVERAGE, BEST AND WORST CASE COMPLEXITIES

Best case: This analysis constrains the input beyond its size, resulting in the fastest
possible running time.

Worst case: This analysis constrains the input beyond its size, resulting in the slowest
possible running time.

Average case: This type of analysis averages the running time over every possible input of a
given size.

Complexity: Complexity refers to the rate at which the required storage or running time grows
as a function of the problem size.

1.7. ANALYSING RECURSIVE PROGRAMS.


For every recursive algorithm, we can write recurrence relation to analyse the time
complexity of the algorithm.
Recurrence relation of recursive algorithms
A recurrence relation is an equation that defines a sequence where any term is defined in
terms of its previous terms.
The recurrence relation for the time complexity of some problems are given below:
Fibonacci Number
T(N) = T(N-1) + T(N-2)
Base Conditions: T(0) = 0 and T(1) = 1
Binary Search
T(N) = T(N/2) + C
Base Condition: T(1) = 1

Merge Sort
T(N) = 2 T(N/2) + CN
Base Condition: T(1) = 1
Recursive Algorithm: Finding min and max in an array
T(N) = 2 T(N/2) + 2
Base Condition: T(1) = 0 and T(2) = 1
Quick Sort
T(N) = T(i) + T(N-i-1) + CN

The time taken by quick sort depends upon the distribution of the input array and the partition
strategy. T(i) and T(N-i-1) are the two smaller subproblems after the partition, where i is the
number of elements that are smaller than the pivot. CN is the time complexity of the partition
process, where C is a constant.
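The binary-search recurrence T(N) = T(N/2) + C listed above can be observed directly by counting invocations of a recursive binary search (an illustrative Python sketch; the function name and call counter are not from the notes):

```python
import math

def binary_search(a, target, lo=0, hi=None, calls=0):
    """Return (index or -1, number of recursive invocations) for sorted list a."""
    if hi is None:
        hi = len(a) - 1
    if lo > hi:
        return -1, calls + 1                 # empty range: target absent
    mid = (lo + hi) // 2
    if a[mid] == target:
        return mid, calls + 1
    if a[mid] < target:                      # search the right half
        return binary_search(a, target, mid + 1, hi, calls + 1)
    return binary_search(a, target, lo, mid - 1, calls + 1)   # left half

a = list(range(1024))                        # N = 1024
_, calls = binary_search(a, -1)              # worst case: element absent
print(calls, int(math.log2(len(a))) + 1)     # → 11 11, i.e. calls ≈ log2(N) + 1
```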

Worst Case: This is a case of the unbalanced partition where the partition process always
picks the greatest or smallest element as a pivot(Think!).For the recurrence relation of the
worst case scenario, we can put i = 0 in the above equation.
T(N) = T(0) + T(N-1) + CN
which is equivalent to
T(N) = T(N-1) + CN

Best Case: This is a case of the balanced partition where the partition process always picks
the middle element as the pivot. For the recurrence relation of the best case scenario, put i =
N/2 in the above equation.
T(N) = T(N/2) + T(N/2-1) + CN
which is equivalent to
T(N) = 2T(N/2) + CN

Average Case: For average case analysis, we need to consider all possible permutations of the
input and the time taken by each permutation.
T(N) = CN + (1/N) * Σ (i = 0 to N-1) [ T(i) + T(N-i-1) ]
Note: This looks mathematically complex but we can find several other intuitive ways to
analyse the average case of quick sort.

Analyzing the Efficiency of Recursive Algorithms
Step 1: Identify the number of sub-problems and a parameter (or parameters) indicating an
input’s size of each sub-problem (function call with smaller input size)

Step 2: Add the time complexities of the sub-problems and the total number of basic
operations performed at that stage of recursion.

Step 3: Set up a recurrence relation, with a correct base condition, for the number of times the
basic operation is executed.

Step 4: Solve the recurrence or, at least, ascertain the order of growth of its solution. There
are several ways to analyse a recurrence relation, but we discuss here two popular
approaches to solving recurrences:
 Method 1: Recursion Tree Method
 Method 2: Master Theorem

Method 1: Recursion Tree Method


A recurrence tree is a tree where each node represents the cost of a certain recursive
subproblem. We take the sum of each value of nodes to find the total complexity of the
algorithm.
Steps for solving a recurrence relation
1. Draw a recursion tree based on the given recurrence relation.
2. Determine the number of levels, cost at each level and cost of the last level.
3. Add the cost of all levels and simplify the expression.
Let us solve the given recurrence relation by Recurrence Tree Method
T(N) = 2*T(N/2) + CN
From the above recurrence relation, we can find that
1. The problem of size N is divided into two sub-problems of size N/2.
2. The cost of dividing a sub-problem and then combining its solution of size N is CN.
3. Each time, the problem will be divided into half, until the size of the problem
becomes 1.
The recursion tree for the above relation has log2(N) levels; each level contributes a total
cost of CN, and the N leaves contribute the base-case cost, giving O(N log N) overall.
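The level-by-level costs of this tree can be tallied in Python (a sketch assuming N is a power of two, C = 1, and leaf cost T(1) = 1):

```python
import math

def recursion_tree_total(n, c=1):
    """Sum the recursion tree of T(N) = 2T(N/2) + CN level by level."""
    total = 0
    levels = int(math.log2(n))            # number of internal levels
    for level in range(levels):
        nodes = 2 ** level                # subproblems at this level
        size = n // (2 ** level)          # size of each subproblem
        total += nodes * c * size         # every level contributes C*N in total
    total += n                            # N leaves of size 1, cost T(1) = 1 each
    return total

print(recursion_tree_total(8))     # → 32, i.e. 8*log2(8) + 8 = N*log2(N) + N
```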

Method 2: Master theorem

Master theorem states that for a recurrence relation of form

T(N) = aT(N/b) + f(N) where a >= 1 and b > 1

If f(N) = O(N^k) and k ≥ 0, then

Case 1: T(N) = O(N^logb(a)), if k < logb(a).

Case 2: T(N) = O((N^k)*logN), if k = logb(a).

Case 3: T(N) = O(N^k), if k > logb(a).

Example 1

T(N) = T(N/2) + C

The above recurrence relation is of binary search. Comparing this with the master theorem, we
get a = 1, b = 2 and k = 0, because f(N) = C = C(N^0).

Here logb(a) = 0 = k, so we can apply case 2 of the master theorem.

T(N) = O(N^0 * log N) = O(log N).

Example 2

T(N) = 2*T(N/2) + CN

The above recurrence relation is of merge sort. Comparing this with the master theorem, a = 2,
b = 2 and f(N) = CN. Comparing both sides of f(N), we get k = 1.

logb(a) = log2(2) = 1 = k

So, we can apply the case 2 of the master theorem.

=> T(N) = O(N^1 * log N) = O(N log N).
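The three cases can be packaged into a small helper function (an illustrative sketch of the simplified master theorem above; the function name and output strings are illustrative, not standard):

```python
import math

def master_theorem(a, b, k):
    """Classify T(N) = a*T(N/b) + O(N^k) using the simplified master theorem."""
    crit = math.log(a, b)                 # the critical exponent logb(a)
    if k < crit:
        return f"O(N^{crit:g})"           # Case 1: the recursion dominates
    if k == crit:
        return f"O(N^{k}*logN)"           # Case 2: both contribute equally
    return f"O(N^{k})"                    # Case 3: f(N) dominates

print(master_theorem(1, 2, 0))   # binary search → O(N^0*logN), i.e. O(logN)
print(master_theorem(2, 2, 1))   # merge sort   → O(N^1*logN), i.e. O(NlogN)
```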

