Asymptotic Notations: These are mathematical tools that let you analyze an
algorithm's running time by describing its behavior as the input size grows.
This behavior is also referred to as the algorithm's growth rate.
Asymptotic analysis is used to compare the time and space complexity of algorithms:
it compares two algorithms based on how their performance changes as the input size is
increased or decreased.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
In addition, two further notations are used:
4. Little-o Notation (o-notation)
5. Little-omega Notation (ω-notation)
1. Theta Notation (Θ-notation):
Theta notation encloses the function from above and below. Since it represents both the
upper and the lower bound of the running time of an algorithm (a tight bound), it is
commonly used when stating the average-case complexity of an algorithm.
Average case: you add the running times over every possible input and take the average.
Let f and g be functions from the set of natural numbers to itself. The function f is said
to be Θ(g) if there exist constants c1, c2 > 0 and a natural number n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
2. Big-O Notation (O-notation):
If f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if there exist a
positive constant c and a natural number n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
Big-O gives the highest possible growth of the running time for a given input; it serves as
an upper bound on the algorithm's time complexity.
Little-o Notation
Besides the Big-O, Big-Omega and Big-Theta notations there are some other notations;
little-o is one of them.
Little-o notation is used to describe an upper bound that is not tight; in other words, it is a
loose (strict) upper bound on f(n).
Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say
that f(n) is o(g(n)) if for every real positive constant c there exists a constant n0 > 0 such
that 0 ≤ f(n) < c·g(n) for all n ≥ n0.
Equivalently, f(n) = o(g(n)) when the limit of f(n)/g(n) as n → ∞ is 0: if that limit works out
to 0, the condition above is satisfied and we can say that f(n) = o(g(n)).
Little-omega Notation
f(n) is ω(g(n)) if for every real constant c > 0 there exists a constant n0 > 0 such that
f(n) > c·g(n) for every input size n > n0.
The definitions of Ω-notation and ω-notation are similar. The main difference is that in
f(n) = Ω(g(n)) the bound f(n) ≥ c·g(n) holds for some constant c > 0, whereas in
f(n) = ω(g(n)) the bound f(n) > c·g(n) holds for all constants c > 0.
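To make the definitions above concrete, here is a tiny illustrative sketch (my own example, assuming f(n) = 2n + 3 and g(n) = n^2): watching the ratio f(n)/g(n) shrink toward 0 is exactly what f(n) = o(g(n)), and equivalently g(n) = ω(f(n)), predicts.

def f(n):           # assumed example: f(n) = 2n + 3
    return 2 * n + 3

def g(n):           # assumed example: g(n) = n^2
    return n * n

for n in [10, 100, 1000, 10000]:
    print(n, f(n) / g(n))   # the ratio tends to 0 as n grows, so f(n) = o(g(n)) and g(n) = omega(f(n))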
Recurrence Relation
A recurrence relation describes the running time of an algorithm on an input of size n in
terms of its running time on smaller inputs. For example, the worst-case running time T(n)
of the MERGE SORT procedure is described by the recurrence
T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1
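To connect this recurrence to actual code, here is a minimal merge sort sketch (my own illustration, not taken from the notes): the two recursive calls correspond to the 2T(n/2) term and the merge loop to the Θ(n) term.

def merge_sort(a):
    # Base case: lists of size 0 or 1 are already sorted -> Theta(1)
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # T(n/2)
    right = merge_sort(a[mid:])   # T(n/2)
    # Merge step: linear in the total number of elements -> Theta(n)
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]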
1. Substitution Method:
In the substitution method we guess the form of the solution and then use mathematical
induction to prove that the guess is correct.
Example 1: Consider T(n) = T(n/2) + 1. We show that it is asymptotically bounded by O(log n).
Solution:
For T(n) = O(log n), we have to show that T(n) ≤ c·log n for some constant c.
Putting this guess into the given recurrence:
T(n) ≤ c·log(n/2) + 1 = c·log n - c·log 2 + 1 ≤ c·log n for c ≥ 1 (logs base 2).
Thus T(n) = O(log n).
Example 2: Consider T(n) = 2T(n/2) + n, n > 1.
Solution:
We guess the solution is O(n log n), i.e. T(n) ≤ c·n·log n for some constant c.
Putting this guess into the recurrence:
T(n) ≤ 2·c·(n/2)·log(n/2) + n = c·n·log n - c·n·log 2 + n ≤ c·n·log n for c ≥ 1 (logs base 2).
Thus T(n) = O(n log n).
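As a small, hedged numerical check of Example 2 (the base case T(1) = 1 is an assumption of mine), evaluating the recurrence directly and dividing by n·log2(n) shows the ratio settling near a constant, which is what the O(n log n) guess predicts.

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2T(n/2) + n, with the assumed base case T(1) = 1 (evaluated on powers of two)
    return 1 if n == 1 else 2 * T(n // 2) + n

for n in [2 ** k for k in range(1, 11)]:
    print(n, T(n), round(T(n) / (n * math.log2(n)), 3))
# the ratio T(n) / (n log n) stays bounded (it tends to 1), consistent with T(n) = O(n log n)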
2. Iteration Method
It means to expand the recurrence and express it as a summation of terms of n and the
initial condition.
Example 1:
T(n) = 1           if n = 1
T(n) = 2T(n-1)     if n > 1
Solution:
T(n) = 2T(n-1)
     = 2[2T(n-2)] = 2^2 T(n-2)
     = 4[2T(n-3)] = 2^3 T(n-3)
     = 8[2T(n-4)] = 2^4 T(n-4)
In general, after i expansions:
T(n) = 2^i T(n-i)          (Eq. 1)
Put n - i = 1, i.e. i = n - 1, in (Eq. 1):
T(n) = 2^(n-1) T(1)
     = 2^(n-1) · 1          {T(1) = 1, given}
     = 2^(n-1)
Hence T(n) = Θ(2^n).
Example 2:
T(n) = T(n-1) + 1, with T(1) = 1
Solution:
T(n) = T(n-1) + 1
     = (T(n-2) + 1) + 1 = T(n-2) + 2
     = (T(n-3) + 1) + 2 = T(n-3) + 3
     = T(n-4) + 4
     ...
     = T(n-k) + k
Where k = n - 1:
T(n-k) = T(1) = Θ(1)
T(n) = Θ(1) + (n-1) = 1 + n - 1 = n = Θ(n)
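The two closed forms above are easy to confirm programmatically; the sketch below (my own illustration) evaluates both recurrences directly and checks them against 2^(n-1) and n.

def T1(n):                 # T(n) = 2*T(n-1), T(1) = 1
    return 1 if n == 1 else 2 * T1(n - 1)

def T2(n):                 # T(n) = T(n-1) + 1, T(1) = 1
    return 1 if n == 1 else T2(n - 1) + 1

for n in range(1, 11):
    assert T1(n) == 2 ** (n - 1)   # matches the closed form 2^(n-1)
    assert T2(n) == n              # matches the closed form n
print("Both closed forms agree with the recurrences for n = 1..10")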
Recursion Tree Method
Tree Structure
Each node in a recursion tree represents a particular recursive call. The initial call is
depicted at the top, with subsequent calls branching out beneath it. The tree grows
downward, forming a hierarchical structure. The branching factor of each node depends on
the number of recursive calls made within the function. Additionally, the depth of the tree
corresponds to the number of recursive calls before reaching the base case.
Base Case
The base case serves as the termination condition for a recursive function. It defines the
point at which the recursion stops and the function starts returning values. In a recursion
tree, the nodes representing the base case are usually depicted as leaf nodes, as they do
not result in further recursive calls.
Recursive Calls
The child nodes in a recursion tree represent the recursive calls made within the function.
Each child node corresponds to a separate recursive call, resulting in the creation of new
sub problems. The values or parameters passed to these recursive calls may differ,
leading to variations in the sub problems' characteristics.
Execution Flow:
Traversing a recursion tree provides insights into the execution flow of a recursive function.
Starting from the initial call at the root node, we follow the branches to reach subsequent
calls until we encounter the base case. As the base cases are reached, the recursive calls
start to return, and their respective nodes in the tree are marked with the returned values.
The traversal continues until the entire tree has been traversed.
Recursion trees aid in analyzing the time complexity of recursive algorithms. By examining
the structure of the tree, we can determine the number of recursive calls made and the
work done at each level. This analysis helps in understanding the overall efficiency of the
algorithm and identifying any potential inefficiencies or opportunities for optimization.
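To see these ideas in code, here is a small sketch (my own illustration) that prints the recursion tree of a divide-and-conquer style function with two recursive calls per node; the indentation shows the levels, and the n <= 1 nodes are the leaf (base-case) nodes described above.

def trace(n, depth=0):
    print("  " * depth + f"T({n})")        # one node of the recursion tree
    if n <= 1:                              # base case -> leaf node, no further calls
        return 1
    # two recursive calls -> branching factor 2, subproblem size n/2
    return trace(n // 2, depth + 1) + trace(n // 2, depth + 1) + n

trace(8)   # the tree for n = 8 has log2(8) = 3 levels below the root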
Recursion Types
o Linear Recursion
o Tree Recursion
Linear Recursion
o A function that calls itself just once each time it executes is said to be linearly
recursive. A nice illustration of linear recursion is the factorial function. The name
"linear recursion" refers to the fact that a linearly recursive function takes a linear
amount of time to execute.
Tree Recursion
o When a recursive case makes more than one recursive call, it is referred to as tree
recursion. A classic illustration of tree recursion is the Fibonacci sequence. Tree-recursive
functions typically run in exponential time; their time complexity is not linear (see the
sketch after this paragraph).
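A minimal sketch of both kinds of recursion (my own illustration): factorial makes a single recursive call per activation, while the Fibonacci function makes two.

def factorial(n):
    # linear recursion: exactly one self-call per activation -> O(n) calls
    if n <= 1:
        return 1
    return n * factorial(n - 1)

def fib(n):
    # tree recursion: two self-calls per activation -> exponentially many calls
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(factorial(5))   # 120
print(fib(10))        # 55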
o Recurrence relations like T(N) = T(N/2) + N or the two we covered earlier in the
kinds of recursion section are solved using the recursion tree approach. These
recurrence relations often use a divide and conquer strategy to address problems.
o It takes time to integrate the answers to the smaller sub problems that are created
when a larger problem is broken down into smaller sub problems.
o For instance, the recurrence relation for Merge sort is T(N) = 2 * T(N/2) + O(N). The O(N)
term is the time needed to combine the answers of the two subproblems, each of size N/2,
which is also what happens at the implementation level.
o For instance, since the recurrence relation for binary search is T(N) = T(N/2) + 1, we
know that each iteration of binary search results in a search space that is cut in half.
Once the outcome is determined, we exit the function. The recurrence relation has
+1 added because this is a constant time operation.
o Consider the recurrence relation T(n) = 2T(n/2) + Kn. Here Kn denotes the amount of
time required to combine the answers of the two subproblems of size n/2.
o Let's depict the recursion tree for the aforementioned recurrence relation.
We may draw a few conclusions from studying the recursion tree above, including
1. The value of a node is determined only by the size of the problem at that level. The
problem size is n at level 0, n/2 at level 1, n/4 at level 2, and so on.
2. The height of this recursion tree is equal to the number of levels in the tree. In general
we take the height to be log(n) base 2, where n is the problem size, because, as we just
established, these recurrence relations use a divide-and-conquer strategy, and getting from
problem size n down to problem size 1 takes log(n) halving steps. For example, for n = 16
the height is log(16) base 2 = log(2^4) base 2 = 4.
3. At each level, the second (non-recursive) term of the recurrence is regarded as the cost
of the root node at that level.
Although the word "tree" appears in the name of this strategy, you don't need to be an
expert on trees to comprehend it.
The cost of the sub problem in the recursion tree technique is the amount of time needed
to solve the sub problem. Therefore, if you notice the phrase "cost" linked with the
recursion tree, it simply refers to the amount of time needed to solve a certain sub
problem.
Example
T(n) = 2T(n/2) + K
Solution
A problem size n is divided into two sub-problems each of size n/2. The cost of combining
the solutions to these sub-problems is K.
Each problem size of n/2 is divided into two sub-problems each of size n/4 and so on.
At the last level, the sub-problem size will be reduced to 1. In other words, we finally hit the
base case.
When we repeatedly divide a number by 2, it is eventually reduced to 1. The same happens
with the problem size n: suppose that after k divisions by 2, n becomes equal to 1, which
implies (n / 2^k) = 1.
Here n / 2^k is the problem size at the last level and it is always equal to 1.
Now we can easily calculate the value of k from the above expression by taking the
logarithm of both sides. Below is the derivation:
n = 2^k
o log(n) = log(2^k)
o log(n) = k * log(2)
o k = log(n) / log(2)
o k = log(n) base 2
Let's first determine the number of nodes in the last level. From the recursion tree we can
deduce that level i contains 2^i nodes, so the last level, at depth k = log(n) base 2,
contains 2^k = n nodes.
The cost of the last level is calculated separately because it is the base case and no
merging is done at the last level, so the cost of solving a single subproblem at this level is
some constant value. Let's take it as O(1), giving a total of n · O(1) for the last level.
The cost of the remaining levels is K at level 0, 2K at level 1, 4K at level 2, and so on, i.e.
K + 2K + 4K + ... + 2^(k-1)·K with k = log(n) base 2. If you look closely at this expression, it
forms a geometric progression (a, ar, ar^2, ar^3, ...). The sum of the first k terms of a GP is
S = a·(r^k - 1) / (r - 1), where a is the first term and r is the common ratio. Here a = K and
r = 2, so these levels cost K·(2^k - 1) = K·(n - 1), and adding the n · O(1) cost of the last
level gives a total of O(n). Hence T(n) = O(n).
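As a hedged check of this derivation (K and the leaf cost below are assumed values of my own choosing), the sketch sums the per-level costs of the recursion tree and compares the total with the value obtained by evaluating T(n) = 2T(n/2) + K directly.

import math

K = 5        # assumed constant cost of combining two subproblem answers
BASE = 1     # assumed constant cost of a base-case (leaf) node

def T(n):
    return BASE if n == 1 else 2 * T(n // 2) + K

def tree_sum(n):
    levels = int(math.log2(n))
    internal = sum((2 ** i) * K for i in range(levels))   # K + 2K + 4K + ...
    leaves = n * BASE                                      # n leaf nodes at the last level
    return internal + leaves

for n in [2, 8, 64, 1024]:
    print(n, T(n), tree_sum(n))   # both values match and grow linearly in n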
Master Method
The Master Method is used for solving recurrences of the following type:
T(n) = a·T(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is a function.
In the analysis of a recursive algorithm, the constants and the function take on the
following significance:
o n is the size of the problem.
o a is the number of subproblems in the recursion.
o n/b is the size of each subproblem (all subproblems are assumed to have essentially the same size).
o f(n) is the cost of the work done outside the recursive calls, i.e. the cost of dividing the
problem and of combining the solutions of the subproblems.
Master Theorem:
With T(n) = a·T(n/b) + f(n) as above and log_b(a) the critical exponent, T(n) can be bounded
asymptotically as follows:
Case 1: If f(n) = O(n^(log_b(a) - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b(a))).
Case 2: If f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) · log n).
Case 3: If f(n) = Ω(n^(log_b(a) + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some
constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
Example 1: T(n) = 8T(n/2) + 1000n^2
Solution:
Compare T(n) = 8T(n/2) + 1000n^2 with
T(n) = a·T(n/b) + f(n)
a = 8, b = 2, f(n) = 1000n^2, log_b(a) = log_2(8) = 3
Since f(n) = 1000n^2 = O(n^(3 - ε)) for ε = 1, the first case of the master theorem applies to
the given recurrence relation, thus resulting in the conclusion:
T(n) = Θ(n^(log_b(a)))
Therefore: T(n) = Θ(n^3)
Example 2: T(n) = 2T(n/2) + n (the Merge sort recurrence)
Solution:
Compare with T(n) = a·T(n/b) + f(n): a = 2, b = 2, f(n) = n, log_b(a) = log_2(2) = 1.
Since f(n) = Θ(n^1), the second case of the master theorem applies.
Therefore: T(n) = Θ(n^(log_b(a)) · log n)
= Θ(n log n)
Example 3: T(n) = 2T(n/2) + n^2
Solution:
Compare with T(n) = a·T(n/b) + f(n): a = 2, b = 2, f(n) = n^2, log_b(a) = log_2(2) = 1.
Case 3 requires that f(n) = Ω(n^(log_b(a) + ε)) for some constant ε > 0, and also that
a·f(n/b) ≤ c·f(n) for some constant c < 1.
Here f(n) = n^2 = Ω(n^(1 + ε)) with ε = 1, and a·f(n/b) = 2·(n/2)^2 = n^2 / 2.
If we choose c = 1/2, then a·f(n/b) ≤ c·f(n) is true ∀ n ≥ 1.
So it follows: T(n) = Θ(f(n))
T(n) = Θ(n^2)
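As a rough, hedged sketch (my own illustration, not part of the notes), the helper below applies the simplified form of the master theorem for recurrences T(n) = a·T(n/b) + Θ(n^d); it assumes f(n) is a plain polynomial, which covers the three examples above but not every f(n) the full theorem allows.

import math

def master(a, b, d):
    # Simplified master theorem for T(n) = a*T(n/b) + Theta(n^d) (assumed polynomial f)
    crit = math.log(a, b)                    # critical exponent log_b(a)
    if math.isclose(d, crit):
        return f"Theta(n^{d:g} * log n)"     # Case 2
    if d < crit:
        return f"Theta(n^{crit:g})"          # Case 1
    return f"Theta(n^{d:g})"                 # Case 3 (regularity holds for polynomial f(n))

print(master(8, 2, 2))   # Example 1: 8T(n/2) + 1000n^2 -> Theta(n^3)
print(master(2, 2, 1))   # Example 2: 2T(n/2) + n       -> Theta(n^1 * log n)
print(master(2, 2, 2))   # Example 3: 2T(n/2) + n^2     -> Theta(n^2)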