
Chapter 1

Space and time complexity


Algorithms
• An algorithm is a step-by-step procedure to solve logical and
mathematical problems.
• A recipe is a good example of an algorithm because it says
what must be done, step by step. It takes inputs (ingredients)
and produces an output (the completed dish).
• In mathematics and computer science, an algorithm is a finite
sequence of well-defined, computer-implementable
instructions, typically to solve a class of specific problems or to
perform a computation. Algorithms are always unambiguous
and are used as specifications for performing calculations,
data processing, automated reasoning, and other tasks. In
contrast, a heuristic is a technique used in problem solving that
uses practical methods and/or various estimates in order to
produce solutions that may not be optimal but are sufficient
given the circumstances.
Recursive and Non-recursive algorithms
• In general, we can define the factorial function in
the following way:
Iterative Definition
• This is an iterative definition of the factorial
function.
• It is iterative because the definition refers only
to the function's parameters and not to the
function itself.
• This will be easier to see after defining the
recursive implementation.
Recursive Definition
• We can also define the factorial function in
the following way:
Iterative vs. Recursive

• Iterative (function does NOT call itself):
factorial(n) = 1                                  if n = 0
factorial(n) = n x (n-1) x (n-2) x … x 2 x 1      if n > 0

• Recursive (function calls itself):
factorial(n) = 1                                  if n = 0
factorial(n) = n x factorial(n-1)                 if n > 0
Recursion
• To see how the recursion works, let’s break
down the factorial function to solve
factorial(3)
Breakdown

• Here, we see that we start at the top level,
factorial(3), and simplify the problem into
3 x factorial(2).
• Now, we have a slightly less complicated problem
in factorial(2), and we simplify this problem into 2
x factorial(1).
Breakdown

• We continue this process until we are able to reach a
problem that has a known solution.
• In this case, that known solution is factorial(0) = 1.
• The functions then return in reverse order to complete
the solution.
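
Putting the whole breakdown together for factorial(3):

factorial(3) = 3 x factorial(2)
             = 3 x (2 x factorial(1))
             = 3 x (2 x (1 x factorial(0)))
             = 3 x (2 x (1 x 1))
             = 6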
Breakdown
• This known solution is called the base case.
• Every recursive algorithm must have a base
case to simplify to.
• Otherwise, the algorithm would run forever
(or until the computer ran out of memory).
Breakdown
• The other parts of the algorithm, excluding
the base case, are known as the general case.
• For example:
3 x factorial(2) → general case
2 x factorial(1) → general case
etc …
Iteration vs. Recursion
• Now that we know the difference between an
iterative algorithm and a recursive algorithm,
we will develop both an iterative and a
recursive algorithm to calculate the factorial of
a number.
• We will then compare the 2 algorithms.
Iterative Algorithm

The iterative solution is very straightforward. We simply loop
through all the integers between 1 and n and multiply them together.

factorial(n) {
    i = 1
    factN = 1
    loop (i <= n)
        factN = factN * i
        i = i + 1
    end loop
    return factN
}
Recursive Algorithm

Note how much simpler the code for the recursive version of the
algorithm is as compared with the iterative version: we have
eliminated the loop and implemented the algorithm with one 'if'.

factorial(n) {
    if (n = 0)
        return 1
    else
        return n * factorial(n-1)
    end if
}
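
For concreteness, here is a minimal C sketch of both versions (the
function names and the test value are illustrative; it assumes n is a
small non-negative integer so the result fits in an int):

#include <stdio.h>

/* Iterative version: multiply all the integers from 1 to n. */
int factorial_iter(int n)
{
    int factN = 1;
    for (int i = 1; i <= n; i++)
        factN = factN * i;
    return factN;
}

/* Recursive version: base case n == 0, general case n * factorial(n-1). */
int factorial_rec(int n)
{
    if (n == 0)
        return 1;
    else
        return n * factorial_rec(n - 1);
}

int main(void)
{
    printf("%d %d\n", factorial_iter(3), factorial_rec(3));  /* prints 6 6 */
    return 0;
}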
• Two criteria are used to judge algorithms:
(i) time complexity (ii) space complexity.
• Space Complexity of an algorithm is the amount
of memory it needs to run to completion.
• Time Complexity of an algorithm is the amount of
CPU time it needs to run to completion.
• Space Complexity:
• Memory space S(P) needed by a program P
consists of two components:
– A fixed part: needed for instruction space (byte code),
simple variable space, constants space etc. → c
– A variable part: dependent on a particular instance of
input and output data. → Sp(instance)
• S(P) = c + Sp(instance characteristics)
Space Complexity
• Space complexity is a measure of the amount
of working storage an algorithm needs. That
means how much memory, in the worst case,
is needed at any point in the algorithm.
int sum(int x, int y, int z)
{
    int r = x + y + z;
    return r;
}
This requires 3 units of space for the parameters and
1 for the local variable, and this never changes,
so the space complexity is O(1).
Space complexity example:
1. Algorithm abc(a, b, c)
2. {
3.     return a + b + b*c + (a + b - c)/(a + b) + 4.0;
4. }

For every instance, 3 computer words are required to store the
variables a, b, and c. Therefore Sp() = 3 and S(P) = 3.
1. Algorithm Sum(a[ ], n)
2. {
3.     s := 0.0;
4.     for i := 1 to n do
5.         s := s + a[i];
6.     return s;
7. }
• Every instance needs to store array a[] & n.
– Space needed to store n = 1 word.
– Space needed to store a[ ] = n floating point
words (or at least n words)
– Space needed to store i and s = 2 words
• Sp(n) = (n + 3). Hence S(P) = (n + 3).
Time Complexity
• Time required T(P) to run a program P also consists
of two components:
– A fixed part: compile time which is independent of the
problem instance → c.
– A variable part: run time which depends on the problem
instance → tp(instance)
• T(P) = c + tp(instance characteristic)
• How to measure T(P)?
– Measure experimentally, using a “stop watch”
→ T(P) obtained in secs, msecs.
– Count program steps → T(P) obtained as a step count.
• Fixed part is usually ignored; only the variable part
tp() is measured.
• What is a program step?
– a+b+b*c+(a+b)/(a-b) → one step;
– comments → zero steps;
– while (<expr>) do → step count equal to the
number of times <expr> is executed.
– for i=<expr> to <expr1> do → step count
equal to number of times <expr1> is checked.
#include <stdio.h>
int main()
{
    printf("Hello World");   /* one program step: O(1) */
    return 0;
}

#include <stdio.h>
int main()
{
    int i, n = 8;
    for (i = 1; i <= n; i++)
        printf("Hello world");   /* executes n times: O(n) */
    return 0;
}
Sum(a, b)
{
    return (a + b);
}
Statements                        S/E     Freq.     Total
1  Algorithm Sum(a[ ], n)          0        -         0
2  {                               0        -         0
3      s := 0.0;                   1        1         1
4      for i := 1 to n do          1       n+1       n+1
5          s := s + a[i];          1        n         n
6      return s;                   1        1         1
7  }                               0        -         0
                                                    2n + 3
Statements                            S/E     Freq.       Total
1  Algorithm Sum(a[ ], n, m)           0        -           0
2  {                                   0        -           0
3      for i := 1 to n do              1       n+1         n+1
4          for j := 1 to m do          1      n(m+1)      n(m+1)
5              s := s + a[i][j];       1       nm           nm
6      return s;                       1        1           1
7  }                                   0        -           0
                                                      2nm + 2n + 2
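
The same step counts can also be obtained by instrumenting the code
with a global counter that is incremented once per program step. Below
is a minimal C sketch of this idea for the first Sum algorithm (the
function and variable names are illustrative; the array is 0-indexed
as usual in C):

#include <stdio.h>

long count = 0;   /* global step counter */

double sum(double a[], int n)
{
    double s = 0.0;  count++;        /* step for s := 0.0 */
    for (int i = 1; i <= n; i++)
    {
        count++;                     /* step for each for-test that succeeds */
        s = s + a[i - 1];  count++;  /* step for the assignment */
    }
    count++;                         /* final for-test that fails (i > n) */
    count++;                         /* step for return */
    return s;
}

int main(void)
{
    double a[5] = {1, 2, 3, 4, 5};
    sum(a, 5);
    printf("steps = %ld\n", count);  /* prints 13, i.e. 2n+3 for n = 5 */
    return 0;
}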
#include <stdio.h>
int main()
{
    int a = 4;
    int b = 6;
    int c;
    c = a + b;               /* fixed number of steps: O(1) */
    printf("%d", c);
    return 0;
}

#include <stdio.h>
int main()
{
    int i, j, n = 8;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            printf("DAA");   /* executes n*n times: O(n^2) */
    return 0;
}
• Example: f(n) = 10n^2 + 4n + 2 is O(n^2) because 10n^2 + 4n + 2 <= 11n^2
for all n >= 5.
• Example: f(n) = 6*2^n + n^2 is O(2^n) because 6*2^n + n^2 <= 7*2^n for all n >= 4.
• Algorithms can be: O(1) → constant;
• O(log n) → logarithmic;
• O(n log n); O(n) → linear;
• O(n^2) → quadratic;
• O(n^3) → cubic;
• O(2^n) → exponential.
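
To get a feel for how quickly these classes separate, here is a small,
purely illustrative C program that tabulates the common growth
functions for a few values of n (link with the math library, e.g. -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* print n, log n, n, n log n, n^2 and 2^n side by side */
    printf("%8s %10s %10s %12s %12s %14s\n",
           "n", "log n", "n", "n log n", "n^2", "2^n");
    for (int n = 1; n <= 32; n *= 2)
    {
        printf("%8d %10.2f %10d %12.2f %12d %14.0f\n",
               n, log2(n), n, n * log2(n), n * n, pow(2, n));
    }
    return 0;
}

Already at n = 32 the 2^n column dwarfs every other column.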
Some results
Sum of two functions: If f(n) = f1(n) + f2(n), and
f1(n) is O(g1(n)) and f2(n) is O(g2(n)), then f(n) =
O(max(|g1(n)|, |g2(n)|)).
Product of two functions: If f(n) = f1(n) * f2(n), and
f1(n) is O(g1(n)) and f2(n) is O(g2(n)), then f(n) =
O(g1(n) * g2(n)).
int sum(int a[], int n)
{
    int r = 0;
    for (int i = 0; i < n; ++i)
    {
        r += a[i];
    }
    return r;
}
requires N units of space for a, plus space for n, r and i, so
it's O(N).
Asymptotic Notation
• Asymptotic analysis gives a measure of the
efficiency of algorithms that doesn't depend on
machine-specific constants and doesn't require the
algorithms to be implemented and the time taken by
programs to be compared.
• Asymptotic notations are mathematical tools to
represent time complexity of algorithms for
asymptotic analysis.
• The following 3 asymptotic notations are mostly
used to represent time complexity of algorithms.
Θ Notation:
• The theta notation bounds a function from above
and below, so it defines exact asymptotic behavior.
A simple way to get the Theta notation of an
expression is to drop low-order terms and ignore
leading constants.
• For example, consider the following expression.
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping lower order terms is always fine because
there will always be an n0 after which Θ(n^3) beats
Θ(n^2) irrespective of the constants involved.
For a given function g(n), we denote by Θ(g(n)) the
following set of functions:
• Θ(g(n)) = { f(n) : there exist positive constants c1,
c2 and n0 such that
0 <= c1*g(n) <= f(n) <= c2*g(n)
for all n >= n0 }
• The above definition means, if f(n) is theta of
g(n), then the value f(n) is always between
c1*g(n) and c2*g(n) for large values of n (n >=n0).
The definition of theta also requires that f(n)
must be non-negative for values of n greater than
n0.
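
To connect this definition with the earlier example
f(n) = 3n^3 + 6n^2 + 6000, one valid choice of constants (there are
many) is c1 = 3, c2 = 10 and n0 = 10:

3n^3 <= 3n^3 + 6n^2 + 6000                        for all n >= 1   (c1 = 3)
3n^3 + 6n^2 + 6000 <= 3n^3 + n^3 + 6n^3 = 10n^3   for all n >= 10  (c2 = 10)
    (since 6n^2 <= n^3 once n >= 6, and 6000 <= 6n^3 once n >= 10)

so f(n) = Θ(n^3).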
Big O Notation:
• The Big O notation defines an upper bound of an algorithm; it bounds
a function only from above.
• Big O of a function gives us the ‘rate of growth’ of the step count function
f(n), in terms of a simple function g(n), which is easy to compare.
• For example: Consider the case of Insertion Sort. It takes linear time in the
best case and quadratic time in the worst case. We can safely say that the
time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers
linear time.
• If we use Θ notation to represent the time complexity of Insertion Sort, we
have to use two statements for the best and worst cases:
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).
• The Big O notation is useful when we only have an upper bound on the time
complexity of an algorithm. Many times we easily find an upper bound
by simply looking at the algorithm.
• O(g(n)) = { f(n): there exist positive constants c
and n0 such that 0 <= f(n) <= c*g(n)
for all n >= n0
}
• Example: f(n) = 3n + 2 is O(n) because 3n + 2 <= 4n for all n
>= 2. c = 4, n0 = 2. Here g(n) = n.
• f(n) = 3n + 3 = O(n) as 3n + 3 <= 4n for all n >= 3.
• f(n) = 100n + 6 = O(n) as 100n + 6 <= 101n for all n >= 6.
• f(n) = 10n^2 + 4n + 2 = O(n^2) as 10n^2 + 4n + 2 <= 11n^2 for all n >= 5.
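
As a sanity check (illustrative only, not a proof), a few lines of C can
verify the first inequality numerically over a range of n; the real
argument is the algebraic one above:

#include <stdio.h>

int main(void)
{
    /* f(n) = 3n + 2, g(n) = n, candidate constants c = 4, n0 = 2 */
    long violations = 0;
    for (long n = 2; n <= 1000000; n++)
        if (3 * n + 2 > 4 * n)      /* would violate f(n) <= c*g(n) */
            violations++;
    printf("violations of 3n+2 <= 4n for 2 <= n <= 10^6: %ld\n", violations);
    return 0;
}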
• Suppose f(n) = 5n and g(n) = n. To show that f = O(g), we have to
show the existence of a constant C as given earlier. Clearly 5 is such a
constant, so f(n) = 5 * g(n). We could choose a larger C such as 6,
because the definition states that f(n) must be less than or equal to
C * g(n), but we usually try to find the smallest one. Therefore, a
constant C exists (we only need one) and f = O(g).
• In the previous timing analysis, we ended up with T(n) = 4n + 5, and
we concluded intuitively that T(n) = O(n) because the running time
grows linearly as n grows. Now, however, we can prove it
mathematically. To show that f(n) = 4n + 5 = O(n), we need to produce
a constant C such that f(n) <= C * n for all n. If we try C = 4, this
doesn't work, because 4n + 5 is not less than 4n. We need C to be at
least 9 to cover all n: if n = 1, C has to be 9, but C can be smaller for
greater values of n (if n = 100, C can be 5). Since the chosen C must
work for all n, we must use 9: 4n + 5 <= 4n + 5n = 9n for all n >= 1.
Since we have produced a constant C that works for all n, we can
conclude: T(n) = 4n + 5 = O(n).
• Say f(n) = n^2: We will prove that f(n) ≠ O(n). To do this, we must show
that there cannot exist a constant C that satisfies the big-Oh definition.
We will prove this by contradiction. Suppose there is a constant C that
works; then, by the definition of big-Oh, n^2 <= C * n for all n. Suppose n
is any positive real number greater than C; then n * n > C * n, or
n^2 > C * n. So there exists a real number n such that n^2 > C * n. This
contradicts the supposition, so the supposition is false. There is no C that
can work for all n: f(n) ≠ O(n) when f(n) = n^2.
• Suppose f(n) = n^2 + 3n - 1. We want to show that f(n) = O(n^2).
f(n) = n^2 + 3n - 1
     < n^2 + 3n          (subtracting 1 makes it smaller, so drop it)
     <= n^2 + 3n^2       (since n <= n^2 for all integers n >= 1)
     = 4n^2
Therefore, if C = 4, we have shown that f(n) = O(n^2). Notice that all we
are doing is finding a simple function that is an upper bound on the
original function. Because of this, we could also say that f(n) = O(n^3),
since n^3 is an upper bound on n^2. This would be a much weaker
description, but it is still valid.
• Ω Notation: Just as Big O notation provides an
asymptotic upper bound on a function, Ω
notation provides an asymptotic lower bound.
• Ω notation can be useful when we have a lower
bound on the time complexity of an algorithm.
• As the best case performance of an algorithm is
generally not useful, the Omega notation is the
least used notation among all three.
• For a given function g(n), we denote by Ω(g(n))
the following set of functions:
• Ω(g(n)) = { f(n) : there exist positive constants c
and n0 such that 0 <= c*g(n) <= f(n)
for all n >= n0 }.
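• For example, f(n) = 3n + 2 is Ω(n), since 3n + 2 >= 3n for all
n >= 1; here c = 3 and n0 = 1.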
• Definition (Theta): Consider a function f(n) which is non-
negative for all integers n >= 0. We say that “f(n) is theta g(n),”
which we write f(n) = Θ(g(n)), if and only if f(n) is O(g(n)) and
f(n) is Ω(g(n)).
• Definition (Little Oh): Consider a function f(n) which is
non-negative for all integers n >= 0. We say that “f(n) is little
oh g(n),” which we write f(n) = o(g(n)), if and only if f(n)
is O(g(n)) but f(n) is not Θ(g(n)).
• Little oh notation represents a kind of loose asymptotic
bound in the sense that if we are given that f(n) = o(g(n)),
then we know that g(n) is an asymptotic upper bound,
since f(n) = O(g(n)), but g(n) is not an asymptotic lower
bound, since f(n) = O(g(n)) together with f(n) ≠ Θ(g(n))
implies that f(n) ≠ Ω(g(n)).
• For example, consider the function f(n) = n + 1. Clearly
f(n) = O(n^2), but f(n) is not Ω(n^2): no matter
what c we choose, for large enough n, c*n^2 > n + 1. Thus, we may
write f(n) = n + 1 = o(n^2).
Algorithm         Time Complexity                                 Space Complexity
                  Best           Average        Worst             Worst
Selection Sort    O(n^2)         O(n^2)         O(n^2)            O(1)
Bubble Sort       O(n)           O(n^2)         O(n^2)            O(1)
Insertion Sort    O(n)           O(n^2)         O(n^2)            O(1)
Heap Sort         O(n log(n))    O(n log(n))    O(n log(n))       O(1)
Quick Sort        O(n log(n))    O(n log(n))    O(n^2)            O(n)
Merge Sort        O(n log(n))    O(n log(n))    O(n log(n))       O(n)
Bucket Sort       O(n + k)       O(n + k)       O(n^2)            O(n)
Radix Sort        O(nk)          O(nk)          O(nk)             O(n + k)
Count Sort        O(n + k)       O(n + k)       O(n + k)          O(k)
