L3_CS1201-AsypComplexityAnalysis

The document discusses asymptotic analysis in algorithm design, focusing on the efficiency, correctness, and complexity of algorithms and data structures. It explains the concepts of best, worst, and average case running time, and introduces asymptotic notations such as Big-O, Big-Omega, and Big-Theta for analyzing algorithm performance. The document emphasizes the importance of understanding algorithmic complexity for improving efficiency and user experience in computational tasks.


Asymptotic Analysis

Data Structure and Implementation


Algorithm Design Goals

• Correctness
• Robustness
• Adaptability
• Efficiency
• Reusability
Data Structures and Algorithms

• Algorithm
  • An outline, the essence of a computational procedure; step-by-step instructions
• Data structure
  • Organization of the data needed to solve the problem
• Program
  • An implementation of an algorithm in some programming language
Algorithmic problem

Specification of input → ? → Specification of output as a function of input

• There are infinitely many input instances satisfying the specification. For example:
  • A sorted, non-decreasing sequence of natural numbers of non-zero, finite length:
    • 1, 20, 908, 909, 100000, 1000000000.
Algorithmic Solution

Input instance, adhering to the specification → Algorithm → Output, related to the input as required

• The algorithm describes actions on the input instance.
• There are infinitely many correct algorithms for the same algorithmic problem.
Example: Sorting

INPUT: a sequence of numbers a1, a2, a3, …, an
OUTPUT: a permutation b1, b2, b3, …, bn of that sequence

Sort: 2 5 4 10 7 → 2 4 5 7 10

Correctness: for any given input the algorithm halts with an output such that:
• b1 ≤ b2 ≤ b3 ≤ … ≤ bn
• b1, b2, b3, …, bn is a permutation of a1, a2, a3, …, an

Running time depends on:
• the number of elements (n)
• how (partially) sorted they are
• the algorithm
Best/Worst/Average Case

• Best case: elements already sorted →
  tj = 1, running time = f(n), i.e., linear time.
• Worst case: elements sorted in inverse order →
  tj = j, running time = f(n²), i.e., quadratic time.
• Average case: tj = j/2, running time = f(n²), i.e., quadratic time.

Lower Bound ≤ Running Time ≤ Upper Bound
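
The tj counts above match the standard insertion-sort analysis, where tj is the number of times the inner-loop test runs when inserting element j. A minimal C sketch, assuming insertion sort is the algorithm being analyzed (the sketch itself is not from the slides):

#include <stdio.h>

/* Insertion sort: in the best case (already sorted) the while-test runs
   once per element (tj = 1, linear time); in the worst case (reverse
   sorted) it runs j times (tj = j, quadratic time). */
void insertion_sort(int a[], int n) {
    for (int j = 1; j < n; j++) {
        int key = a[j];
        int i = j - 1;
        while (i >= 0 && a[i] > key) {  /* executes tj times for element j */
            a[i + 1] = a[i];
            i--;
        }
        a[i + 1] = key;
    }
}

int main(void) {
    int a[] = {2, 5, 4, 10, 7};
    insertion_sort(a, 5);
    for (int k = 0; k < 5; k++) printf("%d ", a[k]);  /* prints: 2 4 5 7 10 */
    printf("\n");
    return 0;
}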


Best/Worst/Average Case (2)

• For a specific input size n, investigate running times for different input instances:

[Figure: running times of individual input instances of size n, ranging between 1n and 6n]
Best/Worst/Average Case (3)

• For inputs of all sizes:

[Figure: running time vs. input instance size (1, 2, 3, …, 12, …), with best-case, average-case, and worst-case curves from lowest to highest]
Input Size

• Input size (number of elements in the input)
  • size of an array
  • polynomial degree
  • # of elements in a matrix
  • # of bits in the binary representation of the input
  • vertices and edges in a graph
Analysis

• Efficiency:
  • Running time
  • Space used
• Efficiency as a function of input size:
  • Number of data elements (numbers, points)
  • Number of bits in an input number
What is Algorithmic Complexity?

• How fast or slow a particular algorithm performs
• Usually measured as a function of input size (n)
How to efficiently measure the time complexity?

• Does just profiling (timing) work? Why not?
  • The measure should be independent of hardware details (processor, cache, etc.)
• What about different patterns of inputs of the same size?
  • Best case, worst case, average case
• The best way to measure is to do it theoretically, using asymptotic analysis (i.e., Big-O)
  • that is, for very large problem sizes
Why do we care about runtimes?

• Can you imagine if Facebook took a minute to log in!
• Finding clever ways to do the exact same thing in a shorter time yields results, and happier clients.
Summation Algorithm 1

int sum1(int N)
{
    int s = 0;
    for (int i = 1; i <= N; i++)
    {
        s = s + i;
    }
    return s;
}

Summation Algorithm 2

int sum2(int N)
{
    int s = N * (N + 1) / 2;
    return s;
}
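
The two functions compute the same value, but sum1 does Θ(N) additions while sum2 does a constant amount of work. A minimal check that they agree (a sketch; the main function and the test range are illustrative, not from the slides):

#include <assert.h>
#include <stdio.h>

static int sum1(int N) {            /* loop version: Theta(N) additions */
    int s = 0;
    for (int i = 1; i <= N; i++) s = s + i;
    return s;
}

static int sum2(int N) {            /* closed form: Theta(1), independent of N */
    return N * (N + 1) / 2;
}

int main(void) {
    for (int N = 0; N <= 1000; N++)
        assert(sum1(N) == sum2(N)); /* same output, very different growth in work */
    printf("sum1(1000) = %d, sum2(1000) = %d\n", sum1(1000), sum2(1000));
    return 0;
}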
The RAM model

RAM (Random Access Machine)

• It is very important to choose the level of detail.
• The RAM model:
  • Instructions (each taking constant time):
    • Arithmetic (add, subtract, multiply, etc.)
    • Data movement (assign)
    • Control (branch, subroutine call, return)
  • Data types: integers and floats
Printing arrays (example)

To print an array, print each element using a loop like the following:

#include <stdio.h>

void PrintArray(int arr[], int size) {
    int i = 0;
    while (i < size) {
        printf("%d ", arr[i]);
        i++;
    }
}
Example

• Associate a "cost" with each statement.
• Find the "total cost" by finding the total number of times each statement is executed.

Algorithm 1                  Cost
arr[0] = 0;                  c1
arr[1] = 0;                  c1
arr[2] = 0;                  c1
...
arr[N-1] = 0;                c1
-----------
c1 + c1 + ... + c1 = c1 x N

Algorithm 2                  Cost
for (i = 0; i < N; i++)      c2
    arr[i] = 0;              c1
-------------
(N+1) x c2 + N x c1 = (c2 + c1) x N + c2
Another Example

Algorithm 3                      Cost
sum = 0;                         c1
for (i = 0; i < N; i++)          c2
    for (j = 0; j < N; j++)      c2
        sum += arr[i][j];        c3
------------
c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N²
Performance Analysis

• Determining an estimate of the time and memory requirements of the algorithm.
• Time estimation is called time complexity analysis.
• Memory-size estimation is called space complexity analysis.
Asymptotic Notation

Asymptotic Complexity

• Running time of an algorithm as a function of input size n, for large n.
• Expressed using only the highest-order term in the expression for the exact running time.
• Instead of the exact running time, we say Θ(n²).
• Describes the behavior of the function in the limit.
• Written using asymptotic notation.
Asymptotic Notation

• O, Ω, Θ
• Defined for functions over the natural numbers.
  • Ex: f(n) = Θ(n²).
  • Describes how f(n) grows in comparison to n².
• Each defines a set of functions; in practice they are used to compare the sizes of two functions.
• The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
O-notation

For a function g(n), we define O(g(n)), big-O of g(n), as the set:

O(g(n)) = { f(n) : ∃ positive constants c and n₀ such that ∀ n ≥ n₀, we have 0 ≤ f(n) ≤ c·g(n) }

Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).

g(n) is an asymptotic upper bound for f(n).

f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)).
Θ(g(n)) ⊆ O(g(n)).
Big-O Notation

• Let n be a non-negative integer representing the size of the input to an algorithm.
• Let f(n) and g(n) be two positive functions, representing the number of basic calculations (operations, instructions) that an algorithm takes (or the number of memory words an algorithm needs).
Big-O Notation (contd.)

• f(n) = O(g(n)) iff there exist a positive constant C and a non-negative integer n₀ such that
  f(n) ≤ C·g(n) for all n ≥ n₀.
• g(n) is said to be an upper bound of f(n).
• Any linear function an + b is in O(n²). How?
• Show that 3n³ = O(n⁴) for appropriate c and n₀.
Big-O Notation (Examples)

• f(n) = 5n + 2 = O(n)    // g(n) = n
  • f(n) ≤ 6n for n ≥ 3 (C = 6, n₀ = 3)
• f(n) = n/2 − 3 = O(n)
  • f(n) ≤ 0.5n for n ≥ 0 (C = 0.5, n₀ = 0)
• n² − n = O(n²)    // g(n) = n²
  • n² − n ≤ n² for n ≥ 0 (C = 1, n₀ = 0)
• n(n+1)/2 = O(n²)
  • n(n+1)/2 ≤ n² for n ≥ 0 (C = 1, n₀ = 0)
Big-O Notation (In Practice)

• When computing the complexity,
  • f(n) is the actual time formula
  • g(n) is the simplified version of f
• Since f(n) often stands for time, we use T(n) instead of f(n)
• In practice, the simplification of T(n) occurs while it is being computed by the designer
Examples

• 2n² = O(n³): 2n² ≤ cn³ ⇐ 2 ≤ cn, so c = 1 and n₀ = 2 work
• n² = O(n²): n² ≤ cn² ⇐ c ≥ 1, so c = 1 and n₀ = 1 work
• 1000n² + 1000n = O(n²):
  1000n² + 1000n ≤ 1000n² + n² = 1001n² for all n ≥ 1000, so c = 1001 and n₀ = 1000 work
• n = O(n²): n ≤ cn² ⇐ cn ≥ 1, so c = 1 and n₀ = 1 work
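
Such bounds can be sanity-checked numerically; a minimal C sketch (the sampled range is illustrative, not from the slides):

#include <stdio.h>

int main(void) {
    /* Check 1000n^2 + 1000n <= 1001*n^2 for sampled n >= n0 = 1000. */
    const double c = 1001.0;
    int ok = 1;
    for (long n = 1000; n <= 1000000; n += 999) {
        double f = 1000.0 * n * n + 1000.0 * n;
        double cg = c * (double)n * (double)n;
        if (f > cg) { ok = 0; break; }
    }
    printf("f(n) <= c*g(n) on the sampled range: %s\n", ok ? "yes" : "no");
    return 0;
}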
Big-O example, graphically

• Note that 30n + 8 isn't less than n anywhere (n > 0).
• It isn't even less than 31n everywhere.
• But it is less than 31n everywhere to the right of n = 8, so 30n + 8 is O(n) with c = 31 and n₀ = 8.

[Figure: value of function vs. increasing n; the line cn = 31n lies above 30n + 8 for all n > n₀ = 8]
Big-O Visualization

O(g(n)) is the set of functions with a smaller or the same order of growth as g(n).
No Uniqueness

• There is no unique set of values for n₀ and c in proving asymptotic bounds.
• Prove that 100n + 5 = O(n²):
  • 100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5
    ⇒ n₀ = 5 and c = 101 is a solution
  • 100n + 5 ≤ 100n + 5n = 105n ≤ 105n² for all n ≥ 1
    ⇒ n₀ = 1 and c = 105 is also a solution
• We must find SOME constants c and n₀ that satisfy the asymptotic-notation relation.
Ω-notation

For a function g(n), we define Ω(g(n)), big-Omega of g(n), as the set:

Ω(g(n)) = { f(n) : ∃ positive constants c and n₀ such that ∀ n ≥ n₀, we have 0 ≤ c·g(n) ≤ f(n) }

Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).

g(n) is an asymptotic lower bound for f(n).

f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)).
Θ(g(n)) ⊆ Ω(g(n)).
Example

Ω(g(n)) = { f(n) : ∃ positive constants c and n₀ such that ∀ n ≥ n₀, we have 0 ≤ c·g(n) ≤ f(n) }

• n³ + 2n = Ω(n). Choose c and n₀. (For example, c = 1 and n₀ = 1 work, since n³ + 2n ≥ n for all n ≥ 1.)
Θ-notation

For a function g(n), we define Θ(g(n)), big-Theta of g(n), as the set:

Θ(g(n)) = { f(n) : ∃ positive constants c₁, c₂, and n₀ such that ∀ n ≥ n₀, we have 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) }

Intuitively: the set of all functions that have the same rate of growth as g(n).

g(n) is an asymptotically tight bound for f(n).
Example

Θ(g(n)) = { f(n) : ∃ positive constants c₁, c₂, and n₀ such that ∀ n ≥ n₀, 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) }

• 10n² − 3n = Θ(n²)
• What constants for n₀, c₁, and c₂ will work? (c₁ = 9, c₂ = 11, and n₀ = 10)
• Make c₁ a little smaller than the leading coefficient, and c₂ a little bigger.
• To compare orders of growth, look at the leading term.
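
A minimal sketch to verify those constants numerically (the sampled range is illustrative, not from the slides):

#include <stdio.h>

int main(void) {
    /* Verify 9*n^2 <= 10n^2 - 3n <= 11*n^2 for sampled n >= n0 = 10. */
    const double c1 = 9.0, c2 = 11.0;
    int ok = 1;
    for (long n = 10; n <= 100000; n++) {
        double g = (double)n * n;
        double f = 10.0 * g - 3.0 * n;
        if (!(c1 * g <= f && f <= c2 * g)) { ok = 0; break; }
    }
    printf("Theta bounds hold on the sampled range: %s\n", ok ? "yes" : "no");
    return 0;
}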
Relations Between Θ, O, and Ω

Asymptotic Notation

• O notation: asymptotic "less than":
  • f(n) = O(g(n)) implies: f(n) "≤" g(n)
• Ω notation: asymptotic "greater than":
  • f(n) = Ω(g(n)) implies: f(n) "≥" g(n)
• Θ notation: asymptotic "equality":
  • f(n) = Θ(g(n)) implies: f(n) "=" g(n)

Quadratic vs. linear time

T(n) = 0.25n² + 0.5n

n       T(n)      0.25n²    0.5n    0.5n as % of T(n)
10      30        25        5       16.7%
50      650       625       25      3.8%
100     2550      2500      50      2.0%
500     62750     62500     250     0.4%
1000    250500    250000    500     0.2%
Relative growth: G(n) = f(n) / f(5)

Complexity    Function f(n)    n = 5    n = 25    n = 125    n = 625
Constant      1                1        1         1          1
Logarithm     log n            1        2         3          4
Linear        n                1        5         25         125
"n log n"     n log n          1        10        75         500
Quadratic     n²               1        25        625        15,625
Cubic         n³               1        125       15,625     1,953,125
Exponential   2ⁿ               1        2²⁰       2¹²⁰       2⁶²⁰
"Big-Oh" O(…): Linear Complexity

Linear complexity: time T(n) ∝ n.
O(n) means the running time does not grow faster than a linear function of the problem size n.

[Figure: T(n) growing linearly with n]
Logarithmic "Big-Oh" Complexity

Logarithmic complexity: time T(n) ∝ log n.
O(log n) means the running time does not grow faster than a log function of the problem size n.

[Figure: T(n) growing logarithmically with n]
Common orders of magnitude

[Figure: comparison of common orders of magnitude]
o-notation

For a given function g(n), the set little-o:

o(g(n)) = { f(n) : ∀ c > 0, ∃ n₀ > 0 such that ∀ n ≥ n₀, we have 0 ≤ f(n) < c·g(n) }

f(n) becomes insignificant relative to g(n) as n approaches infinity:
lim (n→∞) [f(n) / g(n)] = 0

g(n) is an upper bound for f(n) that is not asymptotically tight.

Observe the difference in this definition from the previous ones. Why? (Here the strict inequality must hold for every c > 0, not just for some c.)
ω-notation

For a given function g(n), the set little-omega:

ω(g(n)) = { f(n) : ∀ c > 0, ∃ n₀ > 0 such that ∀ n ≥ n₀, we have 0 ≤ c·g(n) < f(n) }

f(n) becomes arbitrarily large relative to g(n) as n approaches infinity:
lim (n→∞) [f(n) / g(n)] = ∞.

g(n) is a lower bound for f(n) that is not asymptotically tight.
Comparison of Functions

f ≶ g behaves like a ≶ b:

f(n) = O(g(n))  ≈  a ≤ b
f(n) = Ω(g(n))  ≈  a ≥ b
f(n) = Θ(g(n))  ≈  a = b
f(n) = o(g(n))  ≈  a < b
f(n) = ω(g(n))  ≈  a > b
Limits

• lim (n→∞) [f(n) / g(n)] = 0 ⇒ f(n) ∈ o(g(n))
• lim (n→∞) [f(n) / g(n)] < ∞ ⇒ f(n) ∈ O(g(n))
• 0 < lim (n→∞) [f(n) / g(n)] < ∞ ⇒ f(n) ∈ Θ(g(n))
• 0 < lim (n→∞) [f(n) / g(n)] ⇒ f(n) ∈ Ω(g(n))
• lim (n→∞) [f(n) / g(n)] = ∞ ⇒ f(n) ∈ ω(g(n))
• lim (n→∞) [f(n) / g(n)] undefined ⇒ can't say
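
For instance, the limit test confirms the earlier claim that 10n² − 3n = Θ(n²) (a worked example, not from the slides):

\lim_{n \to \infty} \frac{10n^2 - 3n}{n^2}
  = \lim_{n \to \infty} \left( 10 - \frac{3}{n} \right)
  = 10,
\qquad 0 < 10 < \infty \;\Rightarrow\; 10n^2 - 3n \in \Theta(n^2).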
Useful Mathematical Summations

1 + 2 + 3 + … + (n−1) + n = n(n+1)/2

a⁰ + a¹ + a² + a³ + … + aⁿ = (a^(n+1) − 1) / (a − 1)

2⁰ + 2¹ + 2² + 2³ + … + 2ⁿ = (2^(n+1) − 1) / (2 − 1) = 2^(n+1) − 1
Examples: Determining Big-O

• Repetition
• Sequence
• Selection
• Logarithm
Repetition: Simple Loops

for (i = 1; i <= n; i++)   // executed n times
{
    k = k + 5;             // constant time
}

Time Complexity
T(n) = (a constant c) * n = cn = O(n)

Ignore multiplicative constants (e.g., "c").
Repetition: Nested Loops

for (i = 1; i <= n; i++)       // executed n times
{
    for (j = 1; j <= n; j++)   // inner loop executed n times
    {
        k = k + i + j;         // constant time
    }
}

Time Complexity
T(n) = (a constant c) * n * n = cn² = O(n²)

Ignore multiplicative constants (e.g., "c").
Repetition: Nested Loops

for (i = 1; i <= n; i++)       // executed n times
{
    for (j = 1; j <= i; j++)   // inner loop executed i times
    {
        k = k + i + j;         // constant time
    }
}

Time Complexity
T(n) = c + 2c + 3c + 4c + … + nc = cn(n+1)/2 = (c/2)n² + (c/2)n = O(n²)

Ignore non-dominating terms.
Ignore multiplicative constants.
Repetition: Nested Loops

for (i = 1; i <= n; i++)        // executed n times
{
    for (j = 1; j <= 20; j++)   // inner loop executed 20 times
    {
        k = k + i + j;          // constant time
    }
}

Time Complexity
T(n) = 20 * c * n = O(n)

Ignore multiplicative constants (e.g., 20*c).
Sequence

for (j = 1; j <= 10; j++)       // executed 10 times
{
    k = k + 4;
}
for (i = 1; i <= n; i++)        // executed n times
{
    for (j = 1; j <= 20; j++)   // inner loop executed 20 times
    {
        k = k + i + j;
    }
}

Time Complexity
T(n) = c * 10 + 20 * c * n = O(n)
For Loops Analysis, Example

sum1 = 0;
for (k = 1; k < n; k *= 2) {
    for (j = 0; j < n; j++) {
        sum1++;
    }
}

• The outer loop runs log(n) times
• For each of these runs, the inner loop runs n times
• The loop variables k and j do not depend on each other
• Number of operations executed inside the loop = 1, i.e., O(1)
• So total iterations = n * log(n)
• So total complexity = O(n log(n))
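
One way to check such an analysis empirically is to count iterations and compare the count against n·log₂(n); a minimal sketch (the counting harness is illustrative, not from the slides):

#include <math.h>
#include <stdio.h>

int main(void) {
    for (long n = 1024; n <= 1048576; n *= 16) {
        long count = 0;
        for (long k = 1; k < n; k *= 2)        /* ~log2(n) passes */
            for (long j = 0; j < n; j++)       /* n passes each */
                count++;                       /* total runs of the inner body */
        printf("n = %8ld  count = %10ld  n*log2(n) = %.0f\n",
               n, count, (double)n * log2((double)n));
    }
    return 0;
}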
Find the Big-O for the code!

sum1 = 0;
for (i = n; i > 0; i /= 2) {
    for (j = 1; j < n; j *= 2) {
        for (k = 0; k < n; k += 2) {
            sum1++;    // constant number of operations
        }
    }
}
sum1 = 0;
for (i = n; i > 0; i--) {
    for (j = 1; j < n; j *= 2) {
        for (k = 0; k < j; k++) {
            sum1++;
        }
    }
}
Successive inner loops

sum1 = 0;
for (int bound = 1; bound < n; bound *= 2) {
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j += 2) {
            // O(1) operations
        }
        for (k = 1; k < n; k *= 2) {
            // constant number of operations
        }
    }
}
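
For any of these exercises, one way to sanity-check a hand analysis is to count iterations empirically and watch how the count scales as n grows. A minimal sketch for the successive-inner-loops example (the harness and the chosen sizes are illustrative, not from the slides):

#include <stdio.h>

int main(void) {
    for (long n = 512; n <= 4096; n *= 2) {
        long long count = 0;
        for (long bound = 1; bound < n; bound *= 2)       /* ~log2(n) passes */
            for (long i = 0; i < n; i++) {                /* n passes each */
                for (long j = 0; j < n; j += 2) count++;  /* ~n/2 steps */
                for (long k = 1; k < n; k *= 2) count++;  /* ~log2(n) steps */
            }
        /* If count roughly quadruples (plus a bit more, for the extra log
           factor) each time n doubles, the dominant term is n^2 * log n. */
        printf("n = %5ld  count = %lld\n", n, count);
    }
    return 0;
}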
Common Functions

Monotonicity

• f(n) is
  • monotonically increasing if m ≤ n ⇒ f(m) ≤ f(n).
  • monotonically decreasing if m ≤ n ⇒ f(m) ≥ f(n).
  • strictly increasing if m < n ⇒ f(m) < f(n).
  • strictly decreasing if m < n ⇒ f(m) > f(n).
Exponentials

• Useful identities:

  a⁻¹ = 1/a
  (a^m)ⁿ = a^(mn)
  a^m · a^n = a^(m+n)

• Exponentials and polynomials: for constants a > 1 and b,

  lim (n→∞) n^b / a^n = 0 ⇒ n^b = o(a^n)

• Stirling approximation for the factorial: n! ≈ √(2πn) · (n/e)ⁿ
Logarithms

x = log_b a is the exponent for a = b^x.

Natural log: ln a = log_e a
Binary log: lg a = log₂ a
lg² a = (lg a)²
lg lg a = lg (lg a)

Useful identities:

  a = b^(log_b a)
  log_c(ab) = log_c a + log_c b
  log_b(aⁿ) = n · log_b a
  log_b a = log_c a / log_c b
  log_b(1/a) = −log_b a
  log_b a = 1 / log_a b
  a^(log_b c) = c^(log_b a)
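
A quick numeric spot-check of a few of these identities using math.h (a sketch; the sample values a = 8, b = 2, c = 10 are arbitrary):

#include <math.h>
#include <stdio.h>

/* log_b(a) via the base-change identity */
static double log_base(double b, double a) { return log(a) / log(b); }

int main(void) {
    double a = 8.0, b = 2.0, c = 10.0;
    printf("b^(log_b a) = %g (expect %g)\n", pow(b, log_base(b, a)), a);
    printf("log_b(a^3)  = %g (expect %g)\n",
           log_base(b, pow(a, 3.0)), 3.0 * log_base(b, a));
    printf("a^(log_b c) = %g, c^(log_b a) = %g (expect equal)\n",
           pow(a, log_base(b, c)), pow(c, log_base(b, a)));
    return 0;
}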
Sample Problems

Algorithm       Big-O (Upper)   Big-Omega (Lower)   Big-Theta (Tight Bound)
Linear Search   O(n)            Ω(1)                Θ(n)
Binary Search   O(log n)        Ω(1)                Θ(log n)
Bubble Sort     O(n²)           Ω(n)                Θ(n²)

(Here Big-O and Big-Theta describe the worst case, while Big-Omega describes the best case.)
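
For reference, a minimal C sketch of binary search (assuming the standard iterative algorithm; not taken from the slides): the remaining range halves each step, giving the O(log n) worst case above, while an immediate hit at the midpoint gives the Ω(1) best case.

#include <stdio.h>

/* Returns the index of target in sorted array a[0..n-1], or -1 if absent.
   Each iteration halves the search range: at most ~log2(n) iterations. */
static int binary_search(const int a[], int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) / 2 */
        if (a[mid] == target) return mid;   /* best case: found on step 1 */
        if (a[mid] < target) lo = mid + 1;
        else                 hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[] = {1, 20, 908, 909, 100000, 1000000000};
    printf("index of 909: %d\n", binary_search(a, 6, 909));  /* prints 3 */
    printf("index of 7:   %d\n", binary_search(a, 6, 7));    /* prints -1 */
    return 0;
}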
