
Analysis of Algorithms

The main idea of asymptotic analysis is to measure the efficiency of algorithms in a way that doesn't depend on machine-specific constants and doesn't require algorithms to be implemented and their running times compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis. The following three asymptotic notations are most commonly used to represent the time complexity of algorithms.

1) Θ Notation: The theta notation bounds a function from above and below, so it defines exact asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop low-order terms and ignore leading constants. For example, consider the following expression.
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping the lower-order terms is always fine because there will always be an n0 after which n^3 has higher values than n^2, irrespective of the constants involved.
For a given function g(n), we denote by Θ(g(n)) the following set of functions:

Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}

The above definition means that if f(n) is theta of g(n), then the value of f(n) is always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta also requires that f(n) be non-negative for all n >= n0.
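As a quick check of this definition against the earlier expression (the constants below are one possible choice, not the only one): take f(n) = 3n^3 + 6n^2 + 6000 and g(n) = n^3. Then c1 = 3, c2 = 10 and n0 = 10 work, because for all n >= 10

0 <= 3*n^3 <= 3n^3 + 6n^2 + 6000 <= 10*n^3

The lower bound holds because the dropped terms are non-negative, and the upper bound holds because 6n^2 <= n^3 and 6000 <= 6n^3 once n >= 10.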
2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers linear time.

If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:

1. The worst case time complexity of Insertion Sort is Θ(n^2).


2. The best case time complexity of Insertion Sort is Θ(n).

The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an upper bound simply by looking at the algorithm.

O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0}
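As a small worked instance of this definition (again, one possible choice of constants, picked here for illustration): f(n) = 3n + 2 is O(n), since 3n + 2 <= 4n for all n >= 2, so c = 4 and n0 = 2 witness the bound.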
3) Ω Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.

Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. As discussed in the previous post, the best-case performance of an algorithm is generally not useful, so the Omega notation is the least used of the three.

For a given function g(n), we denote by Ω(g(n)) the following set of functions:

Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0}
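As a small worked instance of this definition (one possible choice of constants): f(n) = 3n + 2 is Ω(n), since 0 <= 3n <= 3n + 2 for all n >= 1, so c = 3 and n0 = 1 witness the bound.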
Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but that is not very useful information about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case.
Complexity Analysis of Insertion Sort

Best Case: O(n)


Worst Case: O(n^2)

In the worst case (an array sorted in reverse order), the inner loop runs j times for each outer index j from 2 to n, so the total work is

2 + 3 + 4 + … + n = (1 + 2 + 3 + … + n) - 1 = n(n+1)/2 - 1

which is O(n^2).
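To make this counting concrete, here is a minimal sketch of Insertion Sort (a standard textbook version, written for illustration rather than taken from this document); the while loop below is the part whose iterations are summed above.

// Sorts arr[0..n-1] in place. In the worst case (reverse-sorted input),
// the while loop runs j times for outer index j, giving the sum above.
void insertionSort(int arr[], int n) {
    for (int j = 1; j < n; j++) {
        int key = arr[j];      // element to insert into the sorted prefix
        int i = j - 1;
        // Shift elements of the sorted prefix that are greater than key
        while (i >= 0 && arr[i] > key) {
            arr[i + 1] = arr[i];
            i--;
        }
        arr[i + 1] = key;
    }
}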


Analysis of Algorithms (Analysis of Loops)

We have discussed Asymptotic Analysis, Worst, Average and Best Cases, and Asymptotic Notations in previous posts. In this post, the analysis of iterative programs is discussed with simple examples.

1) O(1): The time complexity of a function (or set of statements) is considered O(1) if it doesn't contain a loop, recursion, or a call to any other non-constant-time function.

// set of non-recursive and non-loop statements

For example, a swap() function has O(1) time complexity, as sketched below.
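A minimal sketch of such a constant-time function (a standard swap, written here for illustration):

// Exchanges two values; a fixed number of statements
// regardless of input size, hence O(1)
void swap(int &a, int &b) {
    int temp = a;
    a = b;
    b = temp;
}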


A loop or recursion that runs a constant number of times is also considered O(1). For example, the following loop is O(1).

// Here c is a constant
for (int i = 1; i <= c; i++) {
// some O(1) expressions

}

2) O(n): The time complexity of a loop is considered O(n) if the loop variable is incremented/decremented by a constant amount. For example, the following functions have O(n) time complexity.

// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}

3) O(n^c): The time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops have O(n^2) time complexity.
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for (int i = n; i > 0; i -= c) {
    for (int j = i+1; j <= n; j += c) {
        // some O(1) expressions
    }
}
For example, Selection Sort and Insertion Sort have O(n^2) time complexity.
4) O(Logn): The time complexity of a loop is considered O(Logn) if the loop variable is divided/multiplied by a constant amount.

// Here c is a constant greater than 1
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}

For example, Binary Search (refer to the iterative implementation) has O(Logn) time complexity, as sketched below.
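A minimal sketch of iterative Binary Search (a standard version, written here for illustration): the search range halves on every iteration, so the loop runs O(Logn) times.

// Returns the index of key in the sorted array arr[0..n-1], or -1 if absent.
// The range [lo, hi] halves on each iteration, so the loop runs O(Logn) times.
int binarySearch(int arr[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // written this way to avoid overflow of lo + hi
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            lo = mid + 1;   // key, if present, lies in the right half
        else
            hi = mid - 1;   // key, if present, lies in the left half
    }
    return -1;
}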

5) O(LogLogn): The time complexity of a loop is considered O(LogLogn) if the loop variable is reduced/increased exponentially by a constant amount.

// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}
// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
    // some O(1) expressions
}

How to combine time complexities of consecutive loops?

When there are consecutive loops, we calculate the time complexity as the sum of the time complexities of the individual loops.

for (int i = 1; i <= m; i += c) {
    // some O(1) expressions
}
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
The time complexity of the above code is O(m) + O(n), which is O(m + n).
If m == n, the time complexity becomes O(2n), which is O(n).
