

Asymptotic Notation for time and space complexity

Asymptotic Notations: They are mathematical tools that allow you to analyze an
algorithm’s running time by identifying its behavior as its input size grows.
This is also referred to as an algorithm’s growth rate.
Asymptotic analysis is used to compare the space and time complexity of algorithms: it compares two algorithms based on how their performance changes as the input size is increased or decreased.
There are five commonly used asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
4. Little-o Notation (o-notation)
5. Little-omega Notation (ω-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
Theta (average case): you add the running times for each possible input combination and take the average.
Let f and g be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.

[Figure: Theta notation]

Mathematical Representation of Theta notation:


Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n)
≤ c2 * g(n) for all n ≥ n0}
Note: Θ(g) is a set
The above expression can be read as follows: if f(n) is theta of g(n), then the value of f(n) always lies between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of theta also requires that f(n) be non-negative for values of n greater than n0.
The execution time serves as both a lower and an upper bound on the algorithm's time complexity.
It bounds the running time from both above and below for a given input size.
A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constants. For example, consider the expression 3n^3 + 6n^2 + 6000 = Θ(n^3). Dropping lower-order terms is always fine because there will always be a number n after which n^3 has higher values than n^2, irrespective of the constants involved. For a given function g(n), Θ(g(n)) denotes the following set of functions.
Examples :
{ 100 , log(2000) , 10^4 } belongs to Θ(1)
{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Θ(n^2)
Note: Θ provides exact bounds.
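To make the definition concrete, here is a quick numeric check (a minimal sketch in Python; the constants c1 = 3, c2 = 4 and threshold n0 = 25 are hand-picked for illustration, not derived) that 3n^3 + 6n^2 + 6000 is sandwiched between c1 * n^3 and c2 * n^3:

# Check the Theta definition numerically for f(n) = 3n^3 + 6n^2 + 6000, g(n) = n^3.
def f(n):
    return 3 * n**3 + 6 * n**2 + 6000

def g(n):
    return n**3

c1, c2, n0 = 3, 4, 25  # hand-picked constants for illustration

# c1*g(n) <= f(n) <= c2*g(n) must hold for every n >= n0.
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("f(n) = Theta(n^3): the sandwich holds for all tested n >= 25")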
2. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm.
o It is the most widely used notation for asymptotic analysis.
o It specifies the upper bound of a function.
o It gives the maximum time required by an algorithm, i.e. the worst-case time complexity.
o It returns the highest possible output value (big-O) for a given input.
o Big-O (worst case): the condition that allows an algorithm to complete statement execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.
The execution time serves as an upper bound on the algorithm's time complexity.

Mathematical Representation of Big-O Notation:


O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n
≥ n0 }
For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2).
Note: O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
 The worst-case time complexity of Insertion Sort is Θ(n^2).
 The best-case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time complexity
of an algorithm. Many times we easily find an upper bound by simply looking at the
algorithm.
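To see the contrast concretely, here is a standard insertion sort instrumented with a comparison counter (a sketch; the counter is added only for illustration). A sorted input costs about n comparisons and a reverse-sorted input about n^2/2, matching the Θ(n) best case and Θ(n^2) worst case above:

def insertion_sort(arr):
    # Sort arr in place and return the number of key comparisons made.
    comparisons = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # compare key with arr[j]
            if arr[j] <= key:
                break
            arr[j + 1] = arr[j]       # shift the larger element right
            j -= 1
        arr[j + 1] = key
    return comparisons

n = 1000
print(insertion_sort(list(range(n))))         # sorted input: n-1 = 999 comparisons, Θ(n)
print(insertion_sort(list(range(n, 0, -1))))  # reversed input: n(n-1)/2 = 499500, Θ(n^2)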
Examples :
{ 100 , log(2000) , 10^4 } belongs to O(1)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to O(n)
U { (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to O(n^2)
Note: Here, U represents union; we can write it in this manner because O provides exact or upper bounds.
3. Omega Notation (Ω-Notation) :
Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm’s time complexity.
It is defined as the condition that allows an algorithm to complete statement
execution in the shortest amount of time.
Let f and g be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that c * g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation :


Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n
≥ n0 }
Let us consider the same Insertion sort example here. The time complexity of Insertion
Sort can be written as Ω(n), but it is not very useful information about insertion sort, as
we are generally interested in worst-case and sometimes in the average case.
Examples :
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Ω(n)
U { 100 , log(2000) , 10^4 } belongs to Ω(1)
Note: Here, U represents union; we can write it in this manner because Ω provides exact or lower bounds.

Little o Notations

There are some other notations besides the Big-Oh, Big-Omega, and Big-Theta notations. The little o notation is one of them.

Little o notation is used to describe an upper bound that is not asymptotically tight; in other words, a loose upper bound on f(n).
Let f(n) and g(n) be functions that map positive real numbers. We can say that the function f(n) is o(g(n)) if, for any real positive constant c, there exists an integer constant n0 ≥ 1 such that 0 ≤ f(n) < c * g(n) for all n ≥ n0.

Mathematical Relation of Little o notation

Using mathematical relation, we can say that f(n) = o(g(n)) means
lim (n→∞) f(n) / g(n) = 0

Example on little o asymptotic notation

If f(n) = n^2 and g(n) = n^3, then check whether f(n) = o(g(n)) or not.

lim (n→∞) f(n) / g(n) = lim (n→∞) n^2 / n^3 = lim (n→∞) 1/n = 0

The result is 0, and it satisfies the equation mentioned above. So we can say that f(n) = o(g(n)).
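The same limit can be checked numerically (a quick sketch; plain evaluation already shows the ratio shrinking toward 0):

# f(n) = n^2, g(n) = n^3: the ratio f(n)/g(n) = 1/n tends to 0,
# which is exactly the limit condition for f(n) = o(g(n)).
for n in [10, 100, 1000, 10_000]:
    print(n, (n**2) / (n**3))  # 0.1, 0.01, 0.001, 0.0001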

Little-omega

Small-omega, commonly written as ω, is an asymptotic notation to denote a lower bound (that is not asymptotically tight) on the growth rate of the runtime of an algorithm.

f(n) is ω(g(n)) if, for all real constants c (c > 0), there exists an n0 (n0 > 0) such that f(n) > c * g(n) for every input size n (n > n0).

The definitions of Ω-notation and ω-notation are similar. The main difference is that in f(n) = Ω(g(n)), the bound f(n) ≥ c * g(n) holds for some constant c > 0, but in f(n) = ω(g(n)), the bound f(n) > c * g(n) holds for all constants c > 0.
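Mirroring the little-o check above (a sketch with example functions chosen here for illustration, not taken from the text): if f(n) = n^3 and g(n) = n^2, the ratio f(n)/g(n) = n grows without bound, so f(n) = ω(g(n)):

# f(n) = n^3, g(n) = n^2: the ratio f(n)/g(n) = n grows without bound,
# so f(n) = ω(g(n)) (it eventually exceeds c*g(n) for any fixed c > 0).
for n in [10, 100, 1000]:
    print(n, (n**3) / (n**2))  # 10.0, 100.0, 1000.0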
Recurrence Relation

A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence.

For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence

T(n) = Θ(1) if n = 1

T(n) = 2T(n/2) + Θ(n) if n > 1

There are four methods for solving recurrences:

the Substitution Method, the Iteration Method, the Recursion Tree Method, and the Master Method.

1. Substitution Method:

The Substitution Method consists of two main steps:

1. Guess the solution.

2. Use mathematical induction to verify the boundary conditions and show that the guess is correct.

Example 1: Solve the following recurrence by the substitution method:

T(n) = T(n/2) + 1

We have to show that it is asymptotically bounded by O(log n).

Solution:

For T(n) = O(log n), we have to show that for some constant c,

T(n) ≤ c log n.

Put this in the given recurrence equation:

T(n) ≤ c log(n/2) + 1
= c log n − c log 2 + 1
≤ c log n for c ≥ 1

Thus T(n) = O(log n).
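The guess can be corroborated numerically (a sketch that assumes T(1) = 1 and integer halving n // 2, details the example leaves implicit):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Example 1 recurrence: T(n) = T(n/2) + 1, with T(1) = 1 assumed
    return 1 if n == 1 else T(n // 2) + 1

c = 2  # candidate constant for the bound T(n) <= c * log2(n)
assert all(T(n) <= c * math.log2(n) for n in range(2, 100_000))
print("T(n) <= 2*log2(n) for all tested n >= 2, consistent with O(log n)")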
Example 2: Consider the recurrence

T(n) = 2T(n/2) + n, n > 1

Find an asymptotic bound on T.

Solution:

We guess the solution is O(n log n). Thus, for some constant c,

T(n) ≤ c n log n

Put this in the given recurrence equation. Now,

T(n) ≤ 2c(n/2) log(n/2) + n
= cn log n − cn log 2 + n
= cn log n − n(c log 2 − 1)
≤ cn log n for c ≥ 1

Thus T(n) = O(n log n).
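Again, a quick numeric corroboration of the bound (a sketch assuming T(1) = 1 and integer halving):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Example 2 recurrence: T(n) = 2*T(n/2) + n, with T(1) = 1 assumed
    return 1 if n == 1 else 2 * T(n // 2) + n

c = 2  # candidate constant for the bound T(n) <= c * n * log2(n)
assert all(T(n) <= c * n * math.log2(n) for n in range(2, 50_000))
print("T(n) <= 2*n*log2(n) for all tested n >= 2, consistent with O(n log n)")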

2. Iteration Method

It means to expand the recurrence and express it as a summation of terms of n and the initial condition.

Example 1: Consider the recurrence

T(n) = 1 if n = 1
= 2T(n − 1) if n > 1

Solution:

T(n) = 2T(n − 1)
= 2[2T(n − 2)] = 2^2 T(n − 2)
= 4[2T(n − 3)] = 2^3 T(n − 3)
= 8[2T(n − 4)] = 2^4 T(n − 4) ..... (Eq. 1)

Repeating the procedure i times:

T(n) = 2^i T(n − i)
Put n − i = 1, i.e. i = n − 1, in (Eq. 1):
T(n) = 2^(n−1) T(1)
= 2^(n−1) · 1 {T(1) = 1 ..... given}
= 2^(n−1)
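The closed form can be confirmed directly (a sketch; the loop simply unrolls the recurrence the same way the expansion above does):

def T(n):
    # Unroll T(n) = 2*T(n-1) with T(1) = 1 as a simple loop.
    value = 1
    for _ in range(n - 1):
        value *= 2
    return value

assert all(T(n) == 2 ** (n - 1) for n in range(1, 30))
print("T(n) matches the closed form 2^(n-1)")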

Example 2: Consider the recurrence

T(n) = T(n − 1) + 1, with T(1) = θ(1).

Solution:

T(n) = T(n − 1) + 1
= (T(n − 2) + 1) + 1 = T(n − 2) + 2
= (T(n − 3) + 1) + 2 = T(n − 3) + 3
= T(n − 4) + 4 = T(n − 5) + 5
= ... = T(n − k) + k
where k = n − 1:
T(n − k) = T(1) = θ(1)
T(n) = θ(1) + (n − 1) = 1 + n − 1 = n = θ(n)
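The same kind of unrolling in code (a sketch taking T(1) = 1 as a concrete stand-in for θ(1)):

def T(n):
    # Unroll T(n) = T(n-1) + 1 with T(1) = 1 as a simple loop.
    value = 1
    for _ in range(n - 1):
        value += 1
    return value

assert all(T(n) == n for n in range(1, 1000))
print("T(n) equals n exactly, i.e. T(n) = θ(n)")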
Recursion Tree Method

Recursion is a fundamental concept in computer science and mathematics that allows functions to call themselves, enabling the solution of complex problems through repeated, smaller steps. One visual representation commonly used to understand and analyze the execution of recursive functions is a recursion tree. In this article, we will explore the theory behind recursion trees, their structure, and their significance in understanding recursive algorithms.

What is a Recursion Tree?

A recursion tree is a graphical representation that illustrates the execution flow of a recursive function. It provides a visual breakdown of recursive calls, showcasing the progression of the algorithm as it branches out and eventually reaches a base case. The tree structure helps in analyzing the time complexity and understanding the recursive process involved.

Tree Structure

Each node in a recursion tree represents a particular recursive call. The initial call is
depicted at the top, with subsequent calls branching out beneath it. The tree grows
downward, forming a hierarchical structure. The branching factor of each node depends on
the number of recursive calls made within the function. Additionally, the depth of the tree
corresponds to the number of recursive calls before reaching the base case.

Base Case

The base case serves as the termination condition for a recursive function. It defines the
point at which the recursion stops and the function starts returning values. In a recursion
tree, the nodes representing the base case are usually depicted as leaf nodes, as they do
not result in further recursive calls.

Recursive Calls

The child nodes in a recursion tree represent the recursive calls made within the function.
Each child node corresponds to a separate recursive call, resulting in the creation of new
sub problems. The values or parameters passed to these recursive calls may differ,
leading to variations in the sub problems' characteristics.

Execution Flow:
Traversing a recursion tree provides insights into the execution flow of a recursive function.
Starting from the initial call at the root node, we follow the branches to reach subsequent
calls until we encounter the base case. As the base cases are reached, the recursive calls
start to return, and their respective nodes in the tree are marked with the returned values.
The traversal continues until the entire tree has been traversed.

Time Complexity Analysis

Recursion trees aid in analyzing the time complexity of recursive algorithms. By examining
the structure of the tree, we can determine the number of recursive calls made and the
work done at each level. This analysis helps in understanding the overall efficiency of the
algorithm and identifying any potential inefficiencies or opportunities for optimization.

Introduction

o Think of a program that determines a number's factorial. This function takes a number N as an input and returns the factorial of N as a result. Its pseudo-code resembles the sketch after this list. That function exemplifies recursion: we invoke a function to determine a number's factorial, and the function then calls itself with a smaller value of the same number. This continues until we reach the base case, in which there are no more function calls.
o Recursion is a technique for handling complicated issues when the outcome is


dependent on the outcomes of smaller instances of the same issue.
o If we think about functions, a function is said to be recursive if it keeps calling itself
until it reaches the base case.
o Any recursive function has two primary components: the base case and the recursive step. We stop going to the recursive step once we reach the base case. To prevent endless recursion, base cases must be properly defined; they are crucial. Infinite recursion is a recursion that never reaches the base case; a program that never reaches its base case will eventually cause a stack overflow.
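A minimal sketch of the factorial function described in the first point above (Python stands in for the pseudo-code; the base case 0! = 1 is the standard convention):

def factorial(n):
    # Base case: 0! = 1 ends the recursion.
    if n == 0:
        return 1
    # Recursive step: n! = n * (n-1)!, calling itself on a smaller input.
    return n * factorial(n - 1)

print(factorial(5))  # 120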

Recursion Types

Generally speaking, there are two different forms of recursion:

o Linear Recursion
o Tree Recursion

Linear Recursion

o A function that calls itself just once each time it executes is said to be linearly
recursive. A nice illustration of linear recursion is the factorial function. The name
"linear recursion" refers to the fact that a linearly recursive function takes a linear
amount of time to execute.

Tree Recursion

o When you make more than one recursive call in your recursive case, it is referred to as tree recursion. An effective illustration of tree recursion is the Fibonacci sequence. Tree recursive functions operate in exponential time; their time complexity is not linear.
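A minimal sketch of the Fibonacci example just mentioned; the two recursive calls per invocation are what make the call tree branch:

def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1.
    if n < 2:
        return n
    # Two recursive calls per invocation make the call tree branch,
    # giving exponential running time without memoization.
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55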

What Is Recursion Tree Method?

o Recurrence relations like T(N) = T(N/2) + N or the two we covered earlier in the
kinds of recursion section are solved using the recursion tree approach. These
recurrence relations often use a divide and conquer strategy to address problems.
o It takes time to integrate the answers to the smaller sub problems that are created
when a larger problem is broken down into smaller sub problems.
o The recurrence relation, for instance, is T(N) = 2 * T(N/2) + O(N) for Merge sort. The time needed to combine the answers of the two subproblems, each of size N/2, is O(N), which is true at the implementation level as well.
o For instance, since the recurrence relation for binary search is T(N) = T(N/2) + 1, we
know that each iteration of binary search results in a search space that is cut in half.
Once the outcome is determined, we exit the function. The recurrence relation has
+1 added because this is a constant time operation.
o The recurrence relation T(n) = 2T(n/2) + Kn is one to consider. Kn denotes the
amount of time required to combine the answers to n/2-dimensional sub problems.
o Let's depict the recursion tree for the aforementioned recurrence relation.

[Figure: recursion tree for T(n) = 2T(n/2) + Kn]
We may draw a few conclusions from studying the recursion tree above, including

1. The magnitude of the problem at each level is all that matters for determining the value of a node. The problem size is n at level 0, n/2 at level 1, n/4 at level 2, and so on.

2. In general, we define the height of the tree as log(n), where n is the size of the problem, and the height of this recursion tree is equal to the number of levels in the tree. This is true because, as we just established, recurrence relations of this kind use the divide-and-conquer strategy, and getting from problem size n to problem size 1 simply requires log(n) halving steps.

o Consider the value of N = 16, for instance. If we are permitted to divide N by 2 at each step, how many steps are required to get N = 1? Considering that we are dividing by two at each step, the correct answer is 4, which is the value of log(16) base 2.

log(16) base 2
= log(2^4) base 2
= 4 * log(2) base 2 (since log(a) base a = 1)
= 4

3. At each level, the second term in the recurrence (the combining cost) is regarded as the work done at the root of each subproblem at that level.

Although the word "tree" appears in the name of this strategy, you don't need to be an
expert on trees to comprehend it.

How to Use a Recursion Tree to Solve Recurrence Relations?

The cost of the sub problem in the recursion tree technique is the amount of time needed
to solve the sub problem. Therefore, if you notice the phrase "cost" linked with the
recursion tree, it simply refers to the amount of time needed to solve a certain sub
problem.

Let's understand all of these steps with a few examples.

Example

Consider the recurrence relation,

T(n) = 2T(n/2) + K

Solution

The given recurrence relation shows the following properties,

A problem size n is divided into two sub-problems each of size n/2. The cost of combining
the solutions to these sub-problems is K.
Each problem size of n/2 is divided into two sub-problems each of size n/4 and so on.

At the last level, the sub-problem size will be reduced to 1. In other words, we finally hit the
base case.

Let's follow the steps to solve this recurrence relation,

Step 1: Draw the Recursion Tree

[Figure: recursion tree for T(n) = 2T(n/2) + K]

Step 2: Calculate the Height of the Tree

We know that when we continuously divide a number by 2, there comes a time when the number is reduced to 1. The same holds for the problem size n: suppose that after k divisions by 2, n becomes equal to 1, which implies (n / 2^k) = 1.

Here n / 2^k is the problem size at the last level, and it is always equal to 1.

Now we can easily calculate the value of k from the above expression by taking log() of both sides. Below is a clearer derivation:

n = 2^k

o log(n) = log(2^k)
o log(n) = k * log(2)
o k = log(n) / log(2)
o k = log(n) base 2

So the height of the tree is log (n) base 2.
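The height can be confirmed with a few lines of code (a sketch; the helper height below is illustrative and assumes n is a power of 2 so the halving is exact):

import math

def height(n):
    # Count how many halvings reduce n to 1 (assumes n is a power of 2).
    k = 0
    while n > 1:
        n //= 2
        k += 1
    return k

print(height(16), math.log2(16))  # 4 4.0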

Step 3: Calculate the cost at each level

o Cost at Level-0 = K; two sub-problems are merged once.
o Cost at Level-1 = K + K = 2*K; two sub-problems are merged two times.
o Cost at Level-2 = K + K + K + K = 4*K; two sub-problems are merged four times, and so on....

Step 4: Calculate the number of nodes at each level

Let's first determine the number of nodes in the last level. From the recursion tree, we can
deduce this

o Level-0 has 1 (2^0) node
o Level-1 has 2 (2^1) nodes
o Level-2 has 4 (2^2) nodes
o Level-3 has 8 (2^3) nodes

So level log(n) should have 2^(log(n)) nodes, i.e. n nodes.

Step 5: Sum up the cost of all the levels

o The total cost can be written as,


o Total Cost = Cost of all levels except last level + Cost of last level
o Total Cost = Cost for level-0 + Cost for level-1 + Cost for level-2 +.... + Cost for level-
log(n) + Cost for last level

The cost of the last level is calculated separately because it is the base case and no
merging is done at the last level so, the cost to solve a single problem at this level is some
constant value. Let's take it as O (1).

Let's put the values into the formulae,

o T(n) = K + 2*K + 4*K + .... (log(n) terms) + O(1) * n
o T(n) = K(1 + 2 + 4 + .... log(n) terms) + O(n)
o T(n) = K(2^0 + 2^1 + 2^2 + .... + 2^(log(n) − 1)) + O(n)

If you take a close look at the above expression, it forms a geometric progression (a, ar, ar^2, ar^3, ......). The sum of a GP with m terms is given by S(m) = a * (r^m − 1) / (r − 1), where a is the first term and r is the common ratio. Here a = K, r = 2 and m = log(n), so the sum is K * (2^(log(n)) − 1) / (2 − 1) = K * (n − 1). The total cost is therefore K(n − 1) + O(n) = O(n).
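We can confirm the K(n − 1) + O(n) total by evaluating the recurrence directly (a sketch; total_cost is an illustrative helper, with K = 1 and a leaf cost of 1 as concrete stand-ins for the constants):

def total_cost(n, K=1, leaf_cost=1):
    # Evaluate T(n) = 2*T(n/2) + K directly, with cost leaf_cost at each leaf.
    if n == 1:
        return leaf_cost
    return 2 * total_cost(n // 2, K, leaf_cost) + K

n = 2 ** 16
print(total_cost(n))  # 131071
print((n - 1) + n)    # K*(n-1) + n*leaf_cost with K = leaf_cost = 1: also 131071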

Master Method

The Master Method is used for solving the following type of recurrence:

T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is a function, and it can be interpreted as follows.

Let T(n) be defined on the non-negative integers by the recurrence

T(n) = aT(n/b) + f(n)

In the function to the analysis of a recursive algorithm, the constants and function take on
the following significance:

o n is the size of the problem.


o a is the number of subproblems in the recursion.
o n/b is the size of each subproblem. (Here it is assumed that all subproblems are
essentially the same size.)
o f (n) is the sum of the work done outside the recursive calls, which includes the sum
of dividing the problem and the sum of combining the solutions to the subproblems.
o It is not always possible to bound the function according to the requirement, so we make three cases, which tell us what kind of bound we can apply to the function.

Master Theorem:

It is possible to compute an asymptotically tight bound in these three cases:

Case 1: If f(n) = O(n^(log_b(a) − ε)) for some constant ε > 0, then it follows that:

T(n) = Θ(n^(log_b(a)))

Example:

T(n) = 8T(n/2) + 1000n^2. Apply the master theorem to it.

Solution:
Compare T(n) = 8T(n/2) + 1000n^2 with

T(n) = aT(n/b) + f(n)
a = 8, b = 2, f(n) = 1000n^2, log_b(a) = log_2(8) = 3

Put all the values in f(n) = O(n^(log_b(a) − ε)):

1000n^2 = O(n^(3 − ε))
If we choose ε = 1, we get: 1000n^2 = O(n^(3−1)) = O(n^2)

Since this equation holds, the first case of the master theorem applies to the given recurrence relation, thus resulting in the conclusion:

T(n) = Θ(n^(log_b(a)))
Therefore: T(n) = Θ(n^3)

Case 2: If it is true, for some constant k ≥ 0, that:

f(n) = Θ(n^(log_b(a)) * (log n)^k), then it follows that: T(n) = Θ(n^(log_b(a)) * (log n)^(k+1))

Example:

T(n) = 2T(n/2) + 10n. Solve the recurrence by using the master method.

Comparing the given problem with T(n) = aT(n/b) + f(n): a = 2, b = 2,

k = 0, f(n) = 10n, log_b(a) = log_2(2) = 1

Put all the values in f(n) = Θ(n^(log_b(a)) * (log n)^k); we will get

10n = Θ(n^1) = Θ(n), which is true.

Therefore: T(n) = Θ(n^(log_b(a)) * (log n)^(k+1))
= Θ(n log n)

Case 3: If it is true that f(n) = Ω(n^(log_b(a) + ε)) for some constant ε > 0, and it is also true that:

a * f(n/b) ≤ c * f(n) for some constant c < 1 and large values of n, then:

T(n) = Θ(f(n))

Example: Solve the recurrence relation:

T(n) = 2T(n/2) + n^2

Solution:

Compare the given problem with T(n) = aT(n/b) + f(n):

a = 2, b = 2, f(n) = n^2, log_b(a) = log_2(2) = 1

Put all the values in f(n) = Ω(n^(log_b(a) + ε)) ..... (Eq. 1)

If we insert all the values in (Eq. 1), we will get:
n^2 = Ω(n^(1+ε)); put ε = 1, then the equality will hold:
n^2 = Ω(n^(1+1)) = Ω(n^2)

Now we will also check the second condition:
2 * f(n/2) = 2 * (n/2)^2 = n^2 / 2
If we choose c = 1/2, it is true that n^2 / 2 ≤ (1/2) * n^2 for all n ≥ 1.

So it follows: T(n) = Θ(f(n))
T(n) = Θ(n^2)
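All three cases can be bundled into a small classifier (a sketch, and a deliberate simplification: it assumes f(n) is a plain polynomial n^d and skips case 3's regularity condition, which the example above checks by hand):

import math

def master_theorem(a, b, d):
    # Classify T(n) = a*T(n/b) + Θ(n^d) by comparing d with log_b(a).
    # Simplified driver: f(n) = n^d only, no log factors.
    crit = math.log(a, b)  # the critical exponent log_b(a)
    if math.isclose(d, crit):
        return f"Case 2: T(n) = Θ(n^{d:g} log n)"
    if d < crit:
        return f"Case 1: T(n) = Θ(n^{crit:g})"
    return f"Case 3: T(n) = Θ(n^{d:g})"

print(master_theorem(8, 2, 2))  # Case 1: Θ(n^3)
print(master_theorem(2, 2, 1))  # Case 2: Θ(n^1 log n), i.e. Θ(n log n)
print(master_theorem(2, 2, 2))  # Case 3: Θ(n^2)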
