
Asymptotic Time Complexity

Definition: The limiting behavior of the execution time of an algorithm as the size of the problem
goes to infinity. This is usually denoted in big-O notation.

The complexity of an algorithm is a function describing the efficiency of the algorithm in terms of the amount of
data the algorithm must process. Usually there are natural units for the domain and range of this function. There are
two main complexity measures of the efficiency of an algorithm:

• Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of
input to the algorithm. "Time" can mean the number of memory accesses performed, the number of
comparisons between integers, the number of times some inner loop is executed, or some other natural unit
related to the amount of real time the algorithm will take. We try to keep this idea of time separate from
"wall clock" time, since many factors unrelated to the algorithm itself can affect the real time (the
language used, the type of computing hardware, the proficiency of the programmer, optimizations in the
compiler, etc.). It turns out that, if we choose the units wisely, those other factors do not matter and we
get a machine-independent measure of the efficiency of the algorithm (see the sketch after this list).
• Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the
amount of input to the algorithm. We often speak of "extra" memory needed, not counting the memory
needed to store the input itself. Again, we use natural (but fixed-length) units to measure this. We can use
bytes, but it's easier to use, say, number of integers used, number of fixed-sized structures, etc. In the end,
the function we come up with will be independent of the actual number of bytes needed to represent the
unit. Space complexity is sometimes ignored because the space used is minimal and/or obvious, but
sometimes it becomes as important an issue as time.
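
As a concrete illustration of counting operations rather than wall-clock time, here is a minimal
Python sketch that counts element comparisons in insertion sort; the comparison count is a
machine-independent unit of "time" in the sense described above. (The function name
insertion_sort_comparisons is just illustrative.)

def insertion_sort_comparisons(a):
    """Sort the list a in place and return the number of element comparisons."""
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements right until key's position is found.
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

# Best case (already sorted): n - 1 comparisons, i.e. linear.
print(insertion_sort_comparisons(list(range(10))))         # 9
# Worst case (reverse sorted): n*(n-1)/2 comparisons, i.e. quadratic.
print(insertion_sort_comparisons(list(range(10, 0, -1))))  # 45

Note that the counts are the same on any machine and in any Python implementation; only the
wall-clock time would differ.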

Analysis of Algorithms
We have discussed Asymptotic Analysis, and the Worst, Average and Best Cases of algorithms.
The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that doesn't depend
on machine-specific constants, and doesn't require algorithms to be implemented and the time taken by
programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of
algorithms for asymptotic analysis. The following three asymptotic notations are most commonly used to
represent the time complexity of algorithms.
1) Θ Notation: The theta notation bounds a function from above and below, so it defines exact
asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop low-order
terms and ignore leading constants. For example, consider the following
expression.
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping lower-order terms is always fine because there will always be an
n0 after which Θ(n^3) has higher values than Θ(n^2), irrespective of the
constants involved.
For a given function g(n), we denote by Θ(g(n)) the following set of functions:

Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0
           such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
The above definition means, if f(n) is theta of g(n), then the value f(n) is always between c1*g(n) and
c2*g(n) for large values of n (n >= n0). The definition of theta also requires that f(n) must be non-negative
for values of n greater than n0.
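
For instance, one valid choice of constants for the example above (any valid choice suffices) is
c1 = 3, c2 = 6009 and n0 = 1, since for all n >= 1:

0 <= 3n^3 <= 3n^3 + 6n^2 + 6000 <= 3n^3 + 6n^3 + 6000n^3 = 6009n^3

using 6n^2 <= 6n^3 and 6000 <= 6000n^3 whenever n >= 1. Hence 3n^3 + 6n^2 + 6000 = Θ(n^3).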

2) Big O Notation: The Big O notation defines an upper bound of an
algorithm; it bounds a function only from above. For example, consider the
case of Insertion Sort. It takes linear time in the best case and quadratic time
in the worst case. We can safely say that the time complexity of Insertion Sort
is O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have
to use two statements for the best and worst cases:
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).
The Big O notation is useful when we only have an upper bound on the time
complexity of an algorithm. Many times we can easily find an upper bound by
simply looking at the algorithm.

O(g(n)) = {f(n): there exist positive constants c and n0
           such that 0 <= f(n) <= c*g(n) for all n >= n0}
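
As a quick sanity check of this definition, here is a small Python sketch verifying that insertion
sort's worst-case comparison count, f(n) = n(n-1)/2, satisfies the O(n^2) definition with c = 1
and n0 = 1:

# Worst-case comparison count of insertion sort: f(n) = n*(n-1)/2.
def f(n):
    return n * (n - 1) // 2

c, n0 = 1, 1
# Check 0 <= f(n) <= c * n^2 for a range of n >= n0.
assert all(0 <= f(n) <= c * n**2 for n in range(n0, 1000))
print("f(n) = n(n-1)/2 is O(n^2) with c = 1, n0 = 1")

The assertion holds for all n, not just the range checked, since n(n-1)/2 <= n^2 for every n >= 0.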

3) Ω Notation: Just as Big O notation provides an asymptotic upper
bound on a function, Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity
of an algorithm. As discussed earlier, the best case
performance of an algorithm is generally not useful, so the Omega
notation is the least used of the three.
For a given function g(n), we denote by Ω(g(n)) the following set of functions:

Ω(g(n)) = {f(n): there exist positive constants c and n0
           such that 0 <= c*g(n) <= f(n) for all n >= n0}

Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be
written as Ω(n), but that is not very useful information about Insertion Sort, as we are generally interested
in the worst case and sometimes in the average case.
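
The lower bound can be checked the same way: insertion sort performs at least n - 1 comparisons on
any input of size n, and f(n) = n - 1 satisfies the Ω(n) definition with, say, c = 1/2 and n0 = 2.
A minimal sketch:

# Minimum comparison count of insertion sort on any input: f(n) = n - 1.
def f(n):
    return n - 1

c, n0 = 0.5, 2
# Check 0 <= c * n <= f(n) for a range of n >= n0.
assert all(0 <= c * n <= f(n) for n in range(n0, 1000))
print("f(n) = n - 1 is Ω(n) with c = 1/2, n0 = 2")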

Exercise:
Which of the following statements is/are valid?
1. Time Complexity of QuickSort is Θ(n^2)
2. Time Complexity of QuickSort is O(n^2)
3. For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) =
Ω(g(n)).
4. Time complexity of all computer algorithms can be written as Ω(1)

Answers:
1) Invalid. QuickSort takes Θ(n log n) time on average and Θ(n^2) time in the worst case, so its
running time is not Θ(n^2) on all inputs.
2) Valid. The time complexity of QuickSort can be written as O(n^2), since O gives only an upper
bound and the worst case is quadratic.
3) Valid. This is exactly the relationship between the Θ, O and Ω notations.
4) Valid. Ω(1) is a correct (though trivially weak) lower bound: every algorithm takes at least
constant time.
