Daa Unit1 PPT


UNIT-I

03/13/2023 1
Introduction-Algorithm Design
• Session Learning Outcome (SLO): To understand why algorithm design is important.
• Example 1:
Take two arrays A and B of 10^10 entries each. Write an algorithm to find the common
entries.

03/13/2023 2
Algorithm:
Assume 2 arrays A and B of n entries each
k = 1
for i = 1 to n
    for j = 1 to n
        if A[i] = B[j] then C[k] = A[i]; k = k + 1
Output the array C

• The number of steps taken by the above algorithm is n^2, that is (10^10)^2 = 10^20

• How many seconds do 10^20 steps take?
• At 10^10 steps per second, 10^20 steps take 10^10 seconds ≈ 317 years.
• Even on a modern powerful computer it takes about 3 years.

03/13/2023 3
Motivation:
• Is there any way to write an algorithm for the above problem which takes less time than
the one previously mentioned? (Yes/No)
Yes
• Now take example 1, finding common entries, and assume array B is arranged in increasing order.
Can we get a better algorithm? (Yes/No)
Yes
• If the Example 2 concept (binary search) is used here, the algorithm becomes:
for i = 1 to n
    look for A[i] in B by doing binary search and
    if found, store it in C
Output the array C
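The improved algorithm above can be sketched in C. This is a minimal, illustrative implementation (the function name common_entries and the returned count are assumptions, not from the slide): for each A[i], a binary search is done in the sorted array B, and matches are collected in C.

```c
#include <assert.h>
#include <stddef.h>

/* For each A[i], binary-search the sorted array B; matches are
   collected in C. Returns the number of common entries found. */
size_t common_entries(const int *A, size_t n, const int *B, size_t m, int *C) {
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        size_t lo = 0, hi = m;               /* search A[i] in B[lo..hi) */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (B[mid] < A[i])      lo = mid + 1;
            else if (B[mid] > A[i]) hi = mid;
            else { C[k++] = A[i]; break; }   /* found */
        }
    }
    return k;
}
```

Each of the n outer iterations does at most log2 m comparisons, giving the n log2 n behaviour analysed on the next slide.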

03/13/2023 4
Analysis:
• How many steps does it take? n(log2 n)
• n = 10^10
• n(log2 n) = 10^10 (log2 10^10)
• At 10^10 steps per second, it takes 10^10 (log2 10^10) / 10^10 = log2 10^10 ≈ 34, i.e. at most 40 secs

03/13/2023 5
Example 2
A book contains pages numbered from 1 to 1024, and I have to search for page 73.
The maximum number of comparisons follows the halving sequence
512, 256, 128, 64, 32, 16, 8, 4, 2, 1.
If binary search is applied:
For the range 1-1024: 10 comparisons
For the range 1-2048: 11 comparisons    What is the relation?
For the range 1-4096: 12 comparisons
For the range 1-N: ? comparisons

• It is log2 N comparisons (1024 = 2^10, 2048 = 2^11, 4096 = 2^12, ...)


• log2 N is the number of powers of 2 in N
• log2 N is a slow growing function
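The relation above can be checked mechanically. A small sketch (the helper name halvings is illustrative, not from the slide) counts how many times a range of N pages can be halved before a single page remains:

```c
#include <assert.h>

/* Counts how many times a range of N pages can be halved before only
   one page remains -- i.e., log2(N) when N is a power of two. */
int halvings(unsigned long n) {
    int count = 0;
    while (n > 1) {
        n /= 2;
        count++;
    }
    return count;
}
```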

03/13/2023 6
Summary:
• More than the power of computers, an improvement in the algorithm can make
the difference between 317 years and 40 seconds.
• How much time should sorting take?
• Should we choose an O(n^2) algorithm?
• Quick sort, Merge sort, ...

03/13/2023 7
Fundamentals of Algorithms
Session Learning Outcome (SLO):
• Able to know what an algorithm is and how to write an algorithm /
pseudocode.
What is an algorithm?
It is a finite set of instructions, or a sequence of computational steps, that
accomplishes a task if followed.
Characteristics of an algorithm:
• Input: 0 or more inputs
• Output: 1 or more outputs
• Definiteness: each instruction is clear and unambiguous
• Finiteness: termination after a finite number of steps
• Effectiveness: each step is basic enough to be carried out with pen and paper

03/13/2023 8
Describing the Algorithm
The skills required to effectively design and analyze algorithms are
entangled with the skills required to effectively describe algorithms. A
complete description of any algorithm has four components:
⮚ What: A precise specification of the problem that the algorithm solves.
⮚ How: A precise description of the algorithm itself.
⮚ Why: A proof that the algorithm solves the problem it is supposed to solve.
⮚ How fast: An analysis of the running time of the algorithm.

It is not necessary (or even advisable) to develop these four
components in this particular order. Problem specifications, algorithm
descriptions, correctness proofs, and time analyses usually evolve
simultaneously, with the development of each component informing the
development of the others.
03/13/2023 9
Specification/ Design of Algorithm
• The algorithm can be specified either using:
• Natural language
• Pseudo code
• Flow chart

03/13/2023 10
Natural Language
• Step 1: Start
• Step 2: Declare variables num1, num2 and sum.
• Step 3: Read values num1 and num2.
• Step 4: Add num1 and num2 and assign the result to sum.
sum←num1+num2
• Step 5: Display sum
• Step 6: Stop

03/13/2023 11
Pseudo code
• BEGIN
• NUMBER s1, s2, sum
• OUTPUT("Input number1:")
• INPUT s1
• OUTPUT("Input number2:")
• INPUT s2
• sum = s1 + s2
• OUTPUT sum
• END
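The same steps translate directly into C. This is a minimal sketch; the helper name add_two is illustrative, not from the slide:

```c
#include <assert.h>

/* Direct translation of the addition pseudocode: take two numbers
   (INPUT), compute their sum, and return it (OUTPUT). */
int add_two(int s1, int s2) {
    int sum = s1 + s2;
    return sum;
}
```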

03/13/2023 12
Flowchart

03/13/2023 13
Specifying the Problem

• To describe an algorithm, first describe the problem that the algorithm is supposed to
solve.

• Algorithmic problems are often presented using standard English, in terms of real-world
objects. Algorithm designers restate these problems in terms of formal, abstract,
mathematical objects: numbers, arrays, lists, graphs, trees, and so on.

• If the problem statement carries any hidden assumptions, state those assumptions
explicitly.

• The specification may be refined as we develop the algorithm. For example, an algorithm
may require a particular input representation, or produce a particular output
representation, that was left unspecified in the original informal problem
description.
03/13/2023 14
• The specification should include just enough detail that someone else could use our
algorithm as a black box, without knowing how or why the algorithm actually works.

• In particular, describe the type and meaning of each input parameter, and exactly how
the eventual output depends on the input parameters. On the other hand, the
specification should deliberately hide any details that are not necessary to use the
algorithm as a black box.

• Example: Given two non-negative integers x and y, each represented as an array of
digits, compute the product x · y, also represented as an array of digits. To someone
using such an algorithm as a black box, the choice of algorithm is completely irrelevant.


03/13/2023 15
Describing Algorithms

• The clearest way to present an algorithm is using a combination of pseudocode
and structured English.

• Pseudocode uses the structure of formal programming languages and
mathematics to break algorithms into primitive steps; the primitive steps
themselves can be written using mathematical notation, pure English, or an
appropriate mixture of the two, whichever is clearest.

• Well-written pseudocode reveals the internal structure of the algorithm but hides
irrelevant implementation details, making the algorithm easier to understand,
analyze, debug, and implement.

03/13/2023 16
Pseudocode conventions
1. Comments begin with //

2. Blocks are indicated with matching braces { and }

3. An identifier begins with a letter

4. Statements are delimited by ; and the data items of a record are accessed
with the -> and . (dot) operators

5. Assignment of values to variables is done using the assignment statement
<variable> := <expression>

6. There are two Boolean values true and false, the logical operators and, or, and
not, and the relational operators <, ≤, =, ≠, ≥, and >
03/13/2023 17
7. Elements of multi-dimensional arrays are accessed using [ and ]. For example, if A is
a two-dimensional array, the (i, j)th element of the array is denoted A[i, j]. Array
indices start at zero.

8. Looping statements: for, while and repeat-until


• A while loop is written as follows:

As long as (condition) is true, the statements get executed. When (condition) becomes false, the loop is
exited. The value of (condition) is evaluated at the top of the loop.
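The loop form itself is an image in the original slide; in the usual convention (Horowitz-Sahni style) it reads:

```text
while (condition) do
{
    (statement 1)
    ...
    (statement n)
}
```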

03/13/2023 18
• The general form of a for loop is shown below.

Here value1, value2, and step are arithmetic expressions. The clause "step
step" is optional and is taken as +1 if it does not occur. Step can be either
positive or negative.
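The for-loop form is an image in the original slide; in the same convention it reads:

```text
for variable := value1 to value2 step step do
{
    (statement 1)
    ...
    (statement n)
}
```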

• A repeat-until statement is constructed as follows:
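The repeat-until form is an image in the original slide; in the same convention it reads:

```text
repeat
    (statement 1)
    ...
    (statement n)
until (condition)
```

Unlike while, the condition here is evaluated at the bottom of the loop, so the body executes at least once.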

03/13/2023 19
9. break: can be used to exit a loop
10. return: exits the algorithm, optionally returning a value
11. A conditional statement has the following forms:
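The conditional forms are images in the original slide; in the usual convention they read:

```text
if (condition) then (statement)
if (condition) then (statement 1) else (statement 2)
```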

12. Multiple-decision statement as:
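The multiple-decision form is an image in the original slide; in the usual convention (Horowitz-Sahni style) it is a case statement:

```text
case
{
    :(condition 1): (statement 1)
    ...
    :(condition n): (statement n)
    :else: (statement n+1)
}
```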

13. Input, output : read, write

03/13/2023 20
14. Procedures are named Algorithm. An algorithm consists of a heading and a body. The heading takes the form

    Algorithm Name(<parameter list>)

where Name is the procedure name and <parameter list> is a listing of the procedure parameters. The body
has one or more (simple or compound) statements enclosed within the braces { and }.

Example of pseudocode:
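The example on this slide is an image in the original; a representative example in the same convention (finding the maximum of n elements; the algorithm name is illustrative):

```text
Algorithm Max(A, n)
// A is an array of size n.
{
    Result := A[1];
    for i := 2 to n do
        if A[i] > Result then Result := A[i];
    return Result;
}
```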

03/13/2023 21
Analysing Algorithms:
We have to prove that the algorithm actually does what it’s supposed to do, and that it
does so efficiently.

Correctness

• In some application settings, it is acceptable for programs to behave correctly most
of the time, on all "reasonable" inputs.

• But we require algorithms that are always correct, for all possible inputs.

• We must prove that our algorithms are correct; trusting our instincts, or trying a
few test cases, isn’t good enough.

• In particular, correctness proofs usually involve induction.

03/13/2023 22
Analyzing Algorithms:
Running Time
• The most common way of ranking different algorithms for the same
problem is by how quickly they run. Ideally, we want the fastest possible
algorithm for any particular problem.

03/13/2023 23
Summary:
• Every problem is to be specified, for which an algorithm is
described in the form of pseudocode and is analysed to test
its efficiency.
Home assignment:
1. Write the pseudocode for finding matrix multiplication of
2 two-dimensional arrays.
2. Write the pseudocode for linear search.

03/13/2023 24
Correctness of Algorithm
Session Learning Outcome-SLO:
• Able to prove the correctness of an algorithm by any of the methods.
A proof of correctness requires that the solution be stated in two forms.
• One form is usually as a program which is annotated by a set of assertions about the input and
output variables of the program. These assertions are often expressed in the predicate calculus.
• The second form is called a specification, and this may also be expressed in the predicate
calculus.

• A proof consists of showing that these two forms are equivalent in that for every given
legal input, they describe the same output. A complete proof of program correctness
requires that each statement of the programming language be precisely defined and all
basic operations be proved correct.
• A proof of correctness is much more valuable than a thousand tests (if that proof is
correct), since it guarantees that the program will work correctly for all possible inputs.

03/13/2023 25
Methods of proving correctness
How to prove that an algorithm is correct?

Proof by:
▪ Counterexample (indirect proof )
▪ Induction (direct proof )
▪ Loop Invariant
▪ Other approaches:
▪ proof by cases/enumeration
▪ proof by chain of iffs
▪ proof by contradiction
▪ proof by contrapositive

• For any algorithm, we must prove that it always returns the desired output for all legal
instances of the problem. For sorting, this means even if the input is already sorted or it
contains repeated elements.
03/13/2023 26
Assertions
• To prove correctness we associate a number of assertions
(statements about the state of the execution) with specific
checkpoints in the algorithm.
• E.g., A[1], ..., A[k] form an increasing sequence
• Preconditions – assertions that must be valid before the
execution of an algorithm or a subroutine
• Postconditions – assertions that must be valid after the
execution of an algorithm or a subroutine

03/13/2023 27
Loop Invariants
• Invariants – assertions that are valid any time they are reached
(possibly many times during the execution of an algorithm, e.g., in loops)

• We must show three things about a loop invariant:
• Initialization – it is true prior to the first iteration
• Maintenance – if it is true before an iteration, it remains true before the
next iteration
• Termination – when the loop terminates, the invariant gives a useful
property to show the correctness of the algorithm
03/13/2023 28
Proof by mathematical induction
Mathematical induction (MI) is an essential tool for proving the statements needed to show an
algorithm's correctness. The general idea of MI is to prove that a statement is true for every
natural number n.
It contains 3 steps:
1. Induction Hypothesis: Define the rule we want to prove for every n; call the
rule f(n).
2. Induction Base: Prove the rule is valid for an initial value, or rather a starting point -
this is often proven by checking the Induction Hypothesis f(n) for n = 1 or whatever initial
value is appropriate.
3. Induction Step: Prove that if f(n) is true, then f(n+1) is also true.

03/13/2023 29
Example
If we define S(n) as the sum of the first n natural numbers, for example S(3) = 3 + 2 + 1, prove
that the following formula applies to any n:

S(n) = n(n + 1) / 2

Let's trace our steps:

1. Induction Hypothesis: S(n) is defined by the formula above.

2. Induction Base: In this step we have to prove that S(1) = 1, which holds since S(1) = 1(1 + 1)/2 = 1.

03/13/2023 30
3. Induction Step: In this step we need to prove that if the formula applies to S(n), it also applies
to S(n+1), i.e., S(n+1) = (n + 1)(n + 2) / 2.

This is known as an implication (a => b), which just means that we have to prove b is correct
provided we know a is correct.

Note that S(n+1) = S(n) + (n+1) just means we are recursively calculating the sum.
Example with literals:

S(3) = S(2) + 3 = S(1) + 2 + 3 = 1 + 2 + 3 = 6. Hence proved.
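The algebra of the induction step, written out from the recursive identity and the induction hypothesis:

```latex
\begin{align*}
S(n+1) &= S(n) + (n+1) \\
       &= \frac{n(n+1)}{2} + (n+1) && \text{by the induction hypothesis} \\
       &= \frac{n(n+1) + 2(n+1)}{2} \\
       &= \frac{(n+1)(n+2)}{2}
\end{align*}
```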


03/13/2023 31
Summary:
For every algorithm proof of correctness is very important to show that the program is correct in
all cases.

Home Assignment

03/13/2023 32
Performance Analysis
• Time Complexity
• Space Complexity
• Communication Bandwidth

03/13/2023 55
Space Complexity Analysis
• The space complexity of an algorithm is the amount of memory it
needs to run to completion.

03/13/2023 56
Space Complexity
S(P) = C + SP(I)

• Fixed Space Requirements (C)
Independent of the characteristics of the inputs and outputs
• instruction space
• space for simple variables, fixed-size structured variables, constants
• Variable Space Requirements (SP(I))
Depend on the instance characteristic I
• number, size, and values of inputs and outputs associated with I
• recursive stack space, formal parameters, local variables, return address

03/13/2023 57
Example
#include <stdio.h>
int main()
{
    int a = 5, b = 5, c;
    c = a + b;
    printf("%d", c);
}
Space Complexity: size(a) + size(b) + size(c)
=> let sizeof(int) = 2 bytes => 2 + 2 + 2 = 6 bytes
=> O(1), or constant
03/13/2023 58
Example
#include <stdio.h>
int main()
{
    int n, i, sum = 0;
    scanf("%d", &n);
    int arr[n];
    for (i = 0; i < n; i++)
    {
        scanf("%d", &arr[i]);
        sum = sum + arr[i];
    }
    printf("%d", sum);
}

Space Complexity:
• The array consists of n integer elements, so the space occupied by the array is 4 * n bytes.
• We also have the integer variables n, i and sum. Assuming 4 bytes for each variable, the total
space occupied by the program is 4n + 12 bytes.
• Since the highest-order term in 4n + 12 is n, the space complexity is O(n), or linear.

03/13/2023 59
Time complexity analysis
Session Learning Outcome-SLO:
Able to write algorithms and analyze its complexity
• The time T(P) taken by a program P is
T(P) = compile time + run (or execution) time.
• The compile time does not depend on the instance characteristics.
• Also, the compiled program will be run several times without recompilation. Consequently,
only the run time of the program is considered. This run time is denoted by tP(instance
characteristics).
• Because many factors influence tP, it can only be estimated when the program is run.
• If the compiler characteristics and the times taken for addition, subtraction, division,
multiplication, etc. are known, then we can find tP as

tP(n) = ca ADD(n) + cs SUB(n) + cm MUL(n) + cd DIV(n) + ...
03/13/2023 60
• Here n denotes the instance characteristics, and ca, cs, cm, cd, and so on,
respectively, denote the time needed for an addition, subtraction,
multiplication, division, and so on.

• ADD, SUB, MUL, DIV, and so on, are functions whose values are the numbers of
additions, subtractions, multiplications, divisions, and so on, that are
performed when the code for P is used on an instance with characteristic n.

• It is difficult to get such an exact formula because the time taken for an addition
and other operations depends on the numbers that are being operated on.

• An alternative is to do all activities such as program typing, compiling and
running on a machine, and physically clock the execution time to get tP(n).

• A drawback of this experimental approach is that the execution time depends on the
other programs running on the computer at the time program P is run. It also
depends on the machine architecture.

03/13/2023 61
Solution:
• Obtain a count for the total number of operations in the algorithm. We can go one step further
and count only the number of program steps.

• A program step is loosely defined as a syntactically or semantically meaningful segment of a
program that has an execution time that is independent of the instance characteristics. For
example, an entire assignment statement is a single program step.

• The number of steps any program statement is assigned depends on the kind of statement:
• comments count as zero steps;
• an assignment statement which does not involve any calls to other algorithms is counted as one
step;
• in an iterative statement such as the for, while, and repeat-until statements, we consider the
step counts only for the control part of the statement.

03/13/2023 62
Two methods to find the time complexity for an algorithm
1. Count variable method

2. Table method

03/13/2023 63
Using Count Method
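The example on this slide is an image in the original. In the count method, a global variable count is introduced and incremented after every program step; a representative sketch for summing an array, in Horowitz-Sahni style (illustrative, not the slide's exact example):

```text
Algorithm Sum(a, n)
{
    s := 0.0;
    count := count + 1;     // for the assignment to s
    for i := 1 to n do
    {
        count := count + 1; // for the for-loop control
        s := s + a[i];
        count := count + 1; // for the assignment to s
    }
    count := count + 1;     // for the last test of the for loop
    count := count + 1;     // for the return
    return s;
}
// On an input of size n, count increases by 2n + 3.
```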

03/13/2023 64
Table method
• The table contains s/e and frequency.

• The s/e of a statement is the amount by which the count changes as a
result of the execution of that statement.

• Frequency is defined as the total number of times each statement is
executed.

• Combining these two, the step count for the entire algorithm is
obtained.

03/13/2023 65
Example 1
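The table on this slide is an image in the original; a representative step-count table for a simple array-summing algorithm (Horowitz-Sahni's Sum example; illustrative, not necessarily the slide's exact table):

```text
Statement                    s/e   frequency   total steps
----------------------------------------------------------
Algorithm Sum(a, n)           0        -            0
{                             0        -            0
    s := 0.0;                 1        1            1
    for i := 1 to n do        1      n + 1        n + 1
        s := s + a[i];        1        n            n
    return s;                 1        1            1
}                             0        -            0
----------------------------------------------------------
Total                                            2n + 3
```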

03/13/2023 66
Example 2

03/13/2023 67
Class Activity- Find Time Complexity
1. for (let i = 0; i < n; ++i)
{
    console.log(i);
}
O(n) ----------- Linear Time Complexity

2. for (let i = 0; i < n; ++i)
{
    for (let j = 0; j < n; ++j)
    {
        console.log(i, j);
    }
}
O(n^2) --------- Quadratic Time Complexity

3. for (let i = 1; i < n; i *= 2)
{
    console.log(i);
}
O(log2 n) ---------- Logarithmic Time Complexity

03/13/2023 68
Home assignment
Calculate the time complexity for the below algorithms using table
method

1.

03/13/2023 83
2.

3.

03/13/2023 84
Insertion Sort

03/13/2023 85
INTRODUCTION

• Session Learning Outcome (SLO): To understand the concept of
insertion sort, the time complexity of an algorithm, and algorithm design
paradigms.

03/13/2023 86
ALGORITHM DESIGN AND ANALYSIS

Efficiency of the algorithm’s design is totally dependent on the understanding of the problem.
The important parameters to be considered in understanding the problem are:
• Input
• Output
• Order of Instructions
• Check for repetition of Instructions
• Check for the decisions based on conditions

03/13/2023 87
INSERTION SORT

• The Insertion Sort algorithm sorts an array by shifting elements one by one and inserting the right
element at the right position

• It works similar to the way you sort playing cards in your hands

03/13/2023 88
PROCEDURE

• We start by making the second element of the given array, i.e. the element at index 1, the key.
• We compare the key element with the element(s) before it, i.e., the element at index 0:
– If the key element is less than the first element, we insert the key element before the first
element.
– If the key element is greater than the first element, we insert it after the first element.
• Then, we make the third element of the array the key and compare it with the elements to its
left, inserting it at the right position.
• And we go on repeating this until the array is sorted.

03/13/2023 89
EXAMPLE

• Step 1:
7 2 8 1 5 3

• Step 2: (A[1] is compared with A[0]; since 2 < 7, they are swapped)

2 7 8 1 5 3

• Step 3: (A[2] is compared with A[1] and A[0]; as 8 is the largest of all three, no swapping is
done)

2 7 8 1 5 3

03/13/2023 90
EXAMPLE (Contd..)

• Step 4: (A[3] is compared with A[2], A[1] and A[0]; since 1 is the smallest of all, it is
moved to the front)
1 2 7 8 5 3

• Step 5: (A[4] is compared with A[3], A[2], A[1] and A[0]; since 5 is less than 8 and 7
and greater than 2, it is placed between 2 and 7)

1 2 5 7 8 3

03/13/2023 91
EXAMPLE (Contd..)

• Step 6: (A[5] is compared with A[4], A[3], A[2], A[1] and A[0]; since 3 is less than
8, 7, 5 and greater than 1 and 2, it is placed between 2 and 5)

1 2 3 5 7 8
• Array is sorted.

03/13/2023 92
03/13/2023 93
Insertion Sort
Example: 9 2 7 5 1 4 3 6

We start by dividing the array into a sorted section and an unsorted section. We put the
first element as the only element in the sorted section, and the rest of the array is the
unsorted section.

Sorted: 9 | Unsorted: 2 7 5 1 4 3 6

The first element in the unsorted section is the next item to be put into its correct
position. We copy the item to be placed into another variable so it does not get
overwritten. While the previous position holds a value greater than the item being
placed, that value is copied into the next position. If an item in the sorted section is
less than the item to place, the item to place goes after it in the array; if there are no
more items in the sorted section to compare with, the item to place must go at the front.

Placing each unsorted item in turn:

Place 2:  9 | 2 7 5 1 4 3 6  ->  2 9 | 7 5 1 4 3 6   (2 < 9, so 2 goes at the front)
Place 7:  2 9 | 7 5 1 4 3 6  ->  2 7 9 | 5 1 4 3 6   (2 < 7 < 9)
Place 5:  2 7 9 | 5 1 4 3 6  ->  2 5 7 9 | 1 4 3 6
Place 1:  2 5 7 9 | 1 4 3 6  ->  1 2 5 7 9 | 4 3 6   (1 is smallest, so it goes at the front)
Place 4:  1 2 5 7 9 | 4 3 6  ->  1 2 4 5 7 9 | 3 6
Place 3:  1 2 4 5 7 9 | 3 6  ->  1 2 3 4 5 7 9 | 6
Place 6:  1 2 3 4 5 7 9 | 6  ->  1 2 3 4 5 6 7 9

SORTED!
Figure 1: Insertion sort working model
03/13/2023 142
Algorithm: Insertion Sort
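The pseudocode on this slide is an image in the original; a standard C implementation, written to match the j-indexed outer loop and inner while loop used in the correctness argument that follows (a sketch, not necessarily the slide's exact code):

```c
#include <assert.h>
#include <stddef.h>

/* Standard insertion sort: for each j, insert A[j] into the already
   sorted prefix A[0..j-1] by shifting larger elements right. */
void insertion_sort(int A[], size_t n) {
    for (size_t j = 1; j < n; j++) {
        int key = A[j];          /* item to position */
        size_t i = j;
        while (i > 0 && A[i - 1] > key) {
            A[i] = A[i - 1];     /* shift larger element right */
            i--;
        }
        A[i] = key;              /* insert at its correct position */
    }
}
```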

03/13/2023 143
Loop Invariants and Correctness of
Insertion Sort
• Initialization: Before the first iteration, j = 1. So A[0..0] is an array of a
single element and is trivially sorted.

• Maintenance: The outer for loop has its index moving as j = 1, 2, ..., n-1 (if
A has n elements). At the beginning of the jth iteration, assume that the array is
sorted in A[0..j-1]. The inner while loop of the jth iteration places A[j] at
its correct position. Thus at the end of the jth iteration, the array is sorted
in A[0..j], so the invariant is maintained. Then j becomes j+1.
• Also, by the same inductive reasoning, the elements in A[0..j] are the same as in the
original array.

03/13/2023 144
Loop Invariants and Correctness of
Insertion Sort
• Termination: The for loop terminates when j = n. Thus, by the
previous observations, the array A[0..n-1] is sorted and the
elements are also the same as in the original array.

Thus, the algorithm indeed sorts and is thus correct!

03/13/2023 145
Insertion Sort: Line and Operation Counts

03/13/2023 146
Running time of the Insertion sort

03/13/2023 147
Home assignment
1. Using Figure 1 as a model, illustrate the operation of
INSERTION-SORT on the array A = [31, 41, 59, 26, 41, 58].

2. Rewrite the INSERTION-SORT procedure to sort into non-increasing
instead of non-decreasing order.

03/13/2023 148
ALGORITHM DESIGN PARADIGMS
• SLO: To understand the different algorithm design
paradigms.
• Specifies the pattern to write or design an algorithm

• Various algorithm paradigms are


• Divide and Conquer
• Dynamic programming
• Backtracking
• Greedy Approach
• Branch and Bound

• Selection of the paradigms depends upon the problem to be addressed

03/13/2023 149
DIVIDE AND CONQUER

• The Divide and Conquer paradigm is an algorithm design paradigm which uses this simple
process: it divides the problem into smaller sub-parts until these sub-parts become simple
enough to be solved, solves the sub-parts recursively, and then combines the solutions to
these sub-parts into a solution to the original problem.

• Examples :
• Binary search
• Merge sort
• Quick sort

03/13/2023 150
DYNAMIC PROGRAMMING

• Dynamic Programming is an algorithmic paradigm that solves a given complex problem by
breaking it into sub-problems and stores the results of the sub-problems to avoid computing the
same results again.

• It is an optimization technique

• Examples :
• All pairs of shortest path
• Fibonacci series
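The Fibonacci example can be sketched with memoization, the defining dynamic-programming idea: each subproblem is solved once and its result stored. A minimal sketch (the function name fib is illustrative):

```c
#include <assert.h>

/* Memoized Fibonacci: each subproblem fib(k) is computed once and
   stored, avoiding the repeated work of the naive recursion. */
long long fib(int n) {
    static long long memo[94];          /* fib(92) still fits in 64 bits */
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];   /* already computed */
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}
```

Without the memo table the naive recursion takes exponential time; with it, each of the n subproblems is computed once, giving linear time.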

03/13/2023 151
BACKTRACKING

• Backtracking is an algorithmic paradigm aimed at improving the time complexity of the
exhaustive search technique where possible. Backtracking does not generate all possible
solutions first and check them later. It tries to generate a solution, and as soon as even one
constraint fails, the partial solution is rejected and the next option is tried.

• A backtracking algorithm tries to construct a solution incrementally, one small piece at a
time. It is a systematic way of trying out different sequences of decisions until we find one
that works.

• Examples :
• 8 Queens problem
• Sum of subsets

03/13/2023 152
GREEDY APPROACH
• Greedy algorithms build a solution part by part, choosing the next part in such a way that it
gives an immediate benefit.

• This approach is mainly used to solve optimization problems, e.g.
finding the shortest path between two vertices using Dijkstra's algorithm.

• Examples
• Coin exchange problem
• Prim’s
• Kruskal’s algorithm
• Travelling salesman problem
• Graph - map coloring

03/13/2023 153
BRANCH AND BOUND
• Branch and bound is an algorithm design paradigm which is generally used for solving
combinatorial optimization problems.
• These problems are typically exponential in terms of time complexity and may require
exploring all possible permutations in worst case. 
• Examples :
Travelling Salesman problem

03/13/2023 154
HOME ASSIGNMENT

• Calculate the time complexity of binary search.

03/13/2023 155
Asymptotic Analysis

03/13/2023 156
Session Learning Outcome-SLO
• Estimate algorithmic complexity
• Learn approximation tool
• Specify the behaviour of algorithm

03/13/2023 157
Asymptotic Analysis
• The time required by an algorithm falls into three cases:
• Best Case – minimum time required for program execution.
• Average Case – average time required for program execution.
• Worst Case – maximum time required for program execution.

03/13/2023 158
Asymptotic Analysis
• Asymptotic analysis of an algorithm refers to defining the
mathematical bounds of its run-time performance.
• It derives the best case, average case, and worst case scenarios of an
algorithm.
• Asymptotic analysis is input bound.
• It specifies the behaviour of the algorithm as the input size
increases.
• It is a theory of approximation.
• An asymptote of a curve is a line that closely approximates the curve but
does not touch the curve at any point.
03/13/2023 159
Asymptotic notations
• Asymptotic notations are mathematical tools to represent the
time complexity of algorithms for asymptotic analysis.
• Asymptotic order is concerned with how the running time of an
algorithm increases with the size of the input, as the input grows from
small to large values.

1. Big-Oh notation (O)


2. Big-Omega notation (Ω)
3. Theta notation (θ)
4. Little-oh notation (o)
5. Little-omega notation (ω)
03/13/2023 160
Big-Oh Notation (O)
• Big-oh notation is used to define the worst-case running time of an
algorithm and is concerned with large values of n.

• Definition: A function t(n) is said to be in O(g(n)), denoted t(n) ϵ
O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all
large n, i.e., if there exist some positive constant c and some non-negative
integer n0 such that

t(n) ≤ c g(n) for all n ≥ n0

• O(g(n)): the class of functions t(n) that grow no faster than g(n).
• Big-oh puts an asymptotic upper bound on a function.
03/13/2023 161
Big-Oh Notation (O)

03/13/2023 162
Big-Oh Notation (O)
1 < log n < √n < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n

• Let t(n) = 2n + 3; find an upper bound:

2n + 3 ≤ _____??

2n + 3 ≤ 5n for n ≥ 1
here c = 5 and g(n) = n

t(n) = O(n)

2n + 3 ≤ 5n^2 for n ≥ 1
here c = 5 and g(n) = n^2

t(n) = O(n^2)

03/13/2023 163
Big-Omega notation (Ω)
• This notation is used to describe the best case running time of
algorithms and is concerned with large values of n.

• Definition: A function t(n) is said to be in Ω(g(n)), denoted t(n) ϵ
Ω(g(n)), if t(n) is bounded below by some positive constant multiple
of g(n) for all large n, i.e., if there exist some positive constant c and
some non-negative integer n0 such that

t(n) ≥ c g(n) for all n ≥ n0

• It represents a lower bound on the resources required to solve a
problem.
03/13/2023 164
Big-Omega notation (Ω)

03/13/2023 165
Big-Omega notation (Ω)
1 < log n < √n < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n

• Let t(n) = 2n + 3; find a lower bound:

2n + 3 ≥ _____??

2n + 3 ≥ 1n for n ≥ 1
here c = 1 and g(n) = n

t(n) = Ω(n)

2n + 3 ≥ 1 log n for n ≥ 1
here c = 1 and g(n) = log n

t(n) = Ω(log n)
03/13/2023 166
Theta notation (θ)
• Definition: A function t(n) is said to be in θ(g(n)), denoted t(n) ϵ θ(g(n)),
if t(n) is bounded both above and below by some positive constant
multiples of g(n) for all large n, i.e., if there exist some positive constants
c1 and c2 and some non-negative integer n0 such that

c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0

θ(g(n)) = O(g(n)) ∩ Ω(g(n))

03/13/2023 167
Theta notation (θ)

03/13/2023 168
Theta notation (θ)
1 < log n < √n < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n

• Let t(n) = 2n + 3; find a tight bound:

c2 g(n) ≤ 2n + 3 ≤ c1 g(n) ??

1n ≤ 2n + 3 ≤ 5n for n ≥ 1
here c1 = 5, c2 = 1 and g(n) = n

t(n) = θ(n)

03/13/2023 169
Little-oh notation (o)
• This notation describes an upper bound that is not asymptotically tight.

• Definition: A function t(n) is said to be in o(g(n)), denoted t(n) ϵ o(g(n)), if for
every positive constant c there exists a non-negative integer n0 such that
t(n) < c g(n) for all n ≥ n0

03/13/2023 170
Little-omega notation (ω)
• This notation describes a lower bound that is not asymptotically tight.
• The function t(n) ϵ ω(g(n)) iff for every positive constant c there exists a
non-negative integer n0 such that t(n) > c g(n) for all n ≥ n0.

03/13/2023 171
Asymptotic Analysis of Insertion sort
• Time Complexity:

• Best case: the best case occurs if the array is already sorted, so tj = 1
for j = 2, 3, ..., n.

• Linear running time: O(n)

03/13/2023 172
Asymptotic Analysis of Insertion sort
• Worst case: occurs if the array is in reverse sorted order.

• Quadratic running time: O(n^2)

03/13/2023 173
Properties of O, Ω and θ
General property:
If t(n) is O(g(n)), then a·t(n) is O(g(n)) for any constant a > 0. Similar for Ω and θ.
Transitive Property :
If f (n) ϵ O(g(n)) and g(n) ϵ O(h(n)), then f (n) ϵ O(h(n)); that is O is
transitive. Also Ω, θ, o and ω are transitive.
Reflexive Property
If f(n) is given then f(n) is O(f(n))
Symmetric Property
If f(n) is θ(g(n)) then g(n) is θ(f(n))
Transpose Property
If f(n) = O(g(n)) then g(n) is Ω(f(n))

03/13/2023 174
Asymptotic Notation and its intuition
Notation          What it means                            Representation    Mathematically equivalent to

Big oh (O)        Growth of t(n) is ≤ the growth of g(n)   t(n) = O(g(n))    t(n) ≤ g(n)

Big omega (Ω)     Growth of t(n) is ≥ the growth of g(n)   t(n) = Ω(g(n))    t(n) ≥ g(n)

Theta (θ)         Growth of t(n) is ≈ the growth of g(n)   t(n) = θ(g(n))    t(n) ≈ g(n)

Little oh (o)     Growth of t(n) is < the growth of g(n)   t(n) = o(g(n))    t(n) < g(n)

Little omega (ω)  Growth of t(n) is > the growth of g(n)   t(n) = ω(g(n))    t(n) > g(n)
03/13/2023 178
Activity
• Find the upper bound, lower bound and tight bound range for the
following functions
– 2n + 5
– 3n + 2
– 3n + 3
– n2 log n
– 10 n2 + 4 n + 2
– 20 n2 + 80 n + 10
– n!
– log n!

03/13/2023 179
Orders of Growth
• Measuring the performance of an algorithm in relation with the
input size, n is called order of growth.
• Common order of growth rates are:
• Constant
• Logarithmic
• Linear
• Quadratic and
• Exponential

03/13/2023 180
Order of growth of functions

03/13/2023 181
Order of growth of functions

03/13/2023 182
Summary
• Asymptotic analysis estimates algorithmic complexity
• Based on the theory of approximation.
• Effective in specifying the behaviour of an algorithm as the input size
increases
• Big – Oh notation – upper bound
• Big – Omega notation – lower bound
• Theta notation – tight bound

03/13/2023 183
Mathematical Analysis

03/13/2023 184
Induction
• Induction is a method for proving universally quantified
propositions: statements about all elements of a (usually
infinite) set.
• Induction is also the single most useful tool for reasoning about,
developing, and analyzing algorithms.
• Steps:
• 1. Basis Step
• 2. Inductive Step

03/13/2023 185
Induction- Example
• Use induction to prove each of the following for all natural
numbers n.
4 + 9+14+19+…+ (5n-1)=n/2(3+5n)
a) Basis Step: n = 1
5(1) − 1 = (1/2)(3 + 5(1))
4 = 4 (true)

03/13/2023 186
Induction
• b) Inductive step: Assume true for n = k, show that it is true for
n = k+1
• Assume: 4+9+14+19+…+(5k−1) = (k/2)(3+5k)
• Show: 4+9+14+19+…+(5k−1)+(5(k+1)−1) = ((k+1)/2)(3+5(k+1))

• (k/2)(3+5k) + (5(k+1)−1) = ((k+1)/2)(3+5(k+1))
• (k/2)(3+5k) + (5k+4) = ((k+1)/2)(8+5k)
• (5k² + 3k)/2 + 5k + 4 = (5k² + 13k + 8)/2
• (5k² + 13k + 8)/2 = (5k² + 13k + 8)/2 (true)
03/13/2023 187
Recurrence Relation

03/13/2023 188
Recurrence
• Any problem can be solved either by writing a recursive algorithm or by writing a
non-recursive algorithm.

• A recursive algorithm is one which makes a recursive call to itself with smaller
inputs. We often use a recurrence relation to describe the running time of a
recursive algorithm.

• Recurrence relations often arise in calculating the time and space complexity of
algorithms

189
Recurrences and Running Time
• An equation or inequality that describes a function in terms of its value on
smaller inputs.
T(n) = T(n-1) + n
• Recurrences arise when an algorithm contains recursive calls to itself

• What is the actual running time of the algorithm?


• Need to solve the recurrence
– Find an explicit formula of the expression
– Bound the recurrence by an expression that involves n

190
Recurrence Examples

191
Example Recurrences
• T(n) = T(n-1) + n Θ(n2)
– Recursive algorithm that loops through the input to eliminate one item

• T(n) = T(n/2) + c Θ(lgn)


– Recursive algorithm that halves the input in one step

• T(n) = T(n/2) + n Θ(n)


– Recursive algorithm that halves the input but must examine every item
in the input

• T(n) = 2T(n/2) + 1 Θ(n)


– Recursive algorithm that splits the input into 2 halves and does a
constant amount of other work
192
Recursive Function and Tracing tree
Running time: Test(3)
For Test(3): 3 prints, 4 recursive calls
In general: n prints, n + 1 recursive calls

T(n)
The print executes n times and n + 1 function
calls occur.

 The amount of work done depends

on the number of calls, so the time
complexity is n + 1

 f(n) = n + 1

 Time complexity in notation: O(n)

 Big O(n), Omega(n), Theta(n)

03/13/2023 193
Recurrence Relation – Example 1
T(n) = 1            if n = 0
T(n) = T(n−1) + 1   if n > 0

For n = 0 the time value cannot be
zero, so it is taken as some constant
value (here, 1).

03/13/2023 194
Recurrence Relation – Example 2
• T(n) = T(n−1) + 2n + 2

• Dropping constant factors and lower-order terms, we analyze T(n) = T(n−1) + n

T(n) = 1            if n = 0
T(n) = T(n−1) + n   if n > 0

03/13/2023 196
Recurrence Relation – Example 3
int factorial(unsigned int n)
{
    if (n == 0)                    // 1 step
        return 1;                  // 1 step
    return n * factorial(n - 1);   // 1 step + T(n-1)
}

T(n) = T(n−1) + 1,
where T(n−1) is the number of multiplications required to compute F(n−1),
and 1 is the one multiplication needed to multiply F(n−1) by n.
Solving gives T(n) = n + 1 steps, so T(n) = O(n).

Recurrence Relation is

T(n) = 1            n = 0
T(n) = T(n−1) + 1   n > 0

03/13/2023 198
Home Assignment
• Find the recurrence relation of the algorithm for
1. Fibonacci series.
2. Linear search
3. Binary Search
4. Insertion sort

03/13/2023 199
Solution of Recurrence Relations
There are four methods for solving recurrences:
• Substitution Method
• Iteration Method
• Recursion Tree Method
• Master Method

200
Substitution Method
• The Substitution Method Consists of two main steps:
1. Guess the Solution.
2. Use the mathematical induction to find the boundary condition and
shows that the guess is correct.
The substitution method can be used to establish either upper or lower
bounds on a recurrence.

Example:
Recurrence: T(n) = 2T(⌊n/2⌋) + Θ(n)
Guessed solution: T(n) = Θ(n lg n)

201
Substitution Method
Forward Substitution
• Take the Recurrence equation and initial condition.
• Put the initial condition in equation and look for the pattern
• Guess the pattern
• Prove that the guess pattern is correct using induction.

03/13/2023 202
Substitution Method
Forward Substitution
1. Take the equation and initial condition
T(n) = T(n-1) + n
T(1)=1
2. Look for the pattern
T(1) = 1
T(2) = T(2−1) + 2 = T(1) + 2 = 1 + 2 = 3
T(3) = T(3−1) + 3 = T(2) + 3 = 3 + 3 = 6
T(4) = T(4−1) + 4 = T(3) + 4 = 6 + 4 = 10
T(5) = T(5−1) + 5 = T(4) + 5 = 10 + 5 = 15
…
T(n) = 1 + 2 + 3 + … + n = n(n+1)/2 (summation of the first n numbers)
= n²/2 + n/2 = O(n²)

03/13/2023 203
Substitution Method
Forward Substitution
3. Guess the pattern from the above step:
T(n) = n(n+1)/2
4. Prove T(n) = n(n+1)/2 using induction.

03/13/2023 205
Substitution Method
Forward Substitution
• This method makes use of an initial condition in the initial term, and the value for
the next term is generated.
• This process is continued until some formula is guessed.
1. T(n) = T(n−1) + 1, with T(0) = 1
2. T(1) = T(0) + 1 = 1 + 1 = 2
   T(2) = T(1) + 1 = 2 + 1 = 3
   T(3) = T(2) + 1 = 3 + 1 = 4 ….
3. By observing the above generated equations,
   we can guess the formula T(n) = n + 1
4. Proof by induction:
   Basis: n = 0, T(0) = 1 = 0 + 1 (true)
   Assume true for n−1: T(n−1) = (n−1) + 1 = n
   Then T(n) = T(n−1) + 1 = n + 1 (true)
T(n) = O(n)
03/13/2023                                  206


Substitution Method
Backward Substitution
Steps:
1. Take the recursive equation and initial condition
2. Guess the pattern
3. Prove that guess pattern using induction

03/13/2023 207
Substitution Method
Backward Substitution
1. Take the recursive equation and initial condition
T(n) = T(n−1) + 1, with T(0) = 1
2. Guess the pattern:
T(n) = T(n−1) + 1
Substitute T(n−1) = T(n−2) + 1:
T(n) = [T(n−2) + 1] + 1 = T(n−2) + 2
Substitute T(n−2) = T(n−3) + 1:
T(n) = [T(n−3) + 1] + 2 = T(n−3) + 3
.
. Continue for k times
.
T(n) = T(n−k) + k
3. Solve using the initial condition:
n − k = 0 ⇒ k = n
T(n) = T(n−n) + n = T(0) + n = 1 + n
T(n) = O(n)
03/13/2023                                  210
Substitution method
• Guess a solution
– T(n) = O(g(n))
– Induction goal: apply the definition of the asymptotic notation

• T(n) ≤ d g(n), for some d > 0 and n ≥ n0 (strong induction)


– Induction hypothesis: T(k) ≤ d g(k) for all k < n

• Prove the induction goal


– Use the induction hypothesis to find some values of the constants d and n0 for which the
induction goal holds

211
Example: Binary Search
T(n) = c + T(n/2)
• Guess: T(n) = O(lgn)
– Induction goal: T(n) ≤ d lgn, for some d and n ≥ n0
– Induction hypothesis: T(n/2) ≤ d lg(n/2)
• Proof of induction goal:
T(n) = T(n/2) + c ≤ d lg(n/2) + c
= d lgn – d + c ≤ d lgn
if: – d + c ≤ 0, d ≥ c
• Base case? For n = 1, d lg n = 0 cannot bound T(1) > 0, so start the
induction at n0 = 2, choosing d large enough to cover T(2).
215
Example 2
T(n) = T(n-1) + n
• Guess: T(n) = O(n²)
– Induction goal: T(n) ≤ c·n², for some c and n ≥ n0
– Induction hypothesis: T(k) ≤ c·k² for all k < n

• Proof of induction goal:

T(n) = T(n−1) + n ≤ c(n−1)² + n
= cn² − (2cn − c − n) ≤ cn²
if: 2cn − c − n ≥ 0 ⇔ c ≥ n/(2n−1) ⇔ c ≥ 1/(2 − 1/n)
– For n ≥ 1 ⇒ 2 − 1/n ≥ 1 ⇒ any c ≥ 1 will work

216
Example
Recurrence relation for Merge sort: T(n) = 2T(n/2) + n
• Guess: T(n) = O(n lg n), i.e., T(n) ≤ c·n lg n for some c > 0 and for all
n ≥ n0
• Assume that it is true for all m < n
• To prove:
• T(n) ≤ c·n lg n, assuming T(m) ≤ c·m lg m for every m < n
In particular, for m = n/2 < n,
assume T(n/2) ≤ c·(n/2) lg(n/2)

03/13/2023 219
Example
• T(n) = 2T(n/2) + n
• ≤ 2·c·(n/2) lg(n/2) + n
• = c·n lg(n/2) + n
• = c·n (lg n − lg 2) + n
• = c·n lg n − c·n + n
• ≤ c·n lg n for c ≥ 1, since −(c − 1)n ≤ 0

03/13/2023 220
Incorrect Guess
T(n) = 2T(n/2) + n
Guess: T(n) = O(n), i.e., T(n) ≤ c·n
Assume T(m) ≤ c·m for all m < n
To prove: T(n) ≤ c·n, assuming T(m) ≤ c·m for all m < n
For m = n/2 < n: T(n/2) ≤ c·n/2
T(n) = 2T(n/2) + n
≤ 2·c·n/2 + n
= cn + n = (c + 1)n
T(n) ≤ (c + 1)n does not establish T(n) ≤ c·n for a fixed c (not proved), so the guess is incorrect.

03/13/2023 221
Substitution method
• Easy to prove once a correct guess is made
• Very fast, but prone to mistakes if the guess is wrong

03/13/2023 222
Solving Recurrence Relation Using
Recursion Tree
Step- 1:
• Draw a recursion tree based on the given recurrence relation.

Step- 2: 
Determine-
• Cost of each level
• Total number of levels in the recursion tree
• Number of nodes in the last level
• Cost of the last level

Step- 3:
• Add cost of all the levels of the recursion tree and simplify the expression so obtained in
terms of asymptotic notation.

03/13/2023 223
Solving Recurrence Relation Using Recursion
Tree

03/13/2023 224
Solving Recurrence Relation Using
Recursion Tree
• Total number of levels is k.
• Cost of each level is n
• Total cost is n·k
• k?
• Assume n/2^k = 1, i.e., 2^k = n
• ⇒ k = log₂ n
• Total cost = n·k = n·log n
• O(n log n)
03/13/2023 225
Example 2

03/13/2023 226
GENERAL FORMS
Iterative Method
• To solve recurrence relation:
• convert the recurrence into a summation by iterating the recurrence
until the initial condition is reached.
• break T(n) into T(n/2) and then into T(n/4) and so on.

03/13/2023 228
Iterative Method
• convert the recurrence into a
summation. We do so by
iterating the recurrence until the
initial condition is reached.

03/13/2023 229
Steps:
1. Break down the problem into n → n/2 → n/4 → n/8 → …
2. After reaching the base case, back-substitute the equation (value of k)
to express the equation in terms of n and the initial boundary condition
03/13/2023 230
03/13/2023 231
Iterative Method

03/13/2023 232
Master’s Method
• Master Method is a direct way to get the solution. The master method works
only for following type of recurrences or for recurrences that can be
transformed to following type.
T(n) = aT(n/b) + f(n),
where,
n = size of input
a = number of subproblems in the recursion
n/b = size of each subproblem.
All subproblems are assumed to have the same size.
f(n) = cost of the work done outside the recursive call, which includes the cost of
dividing the problem and the cost of merging the solutions.
Here, a ≥ 1 and b > 1 are constants, and f(n) > 0.

03/13/2023 233
Master’s Theorem
• If a ≥ 1 and b > 1 are constants and f(n) is an asymptotically
positive function, then the time complexity of a recursive relation
is given by
T(n) = aT(n/b) + f(n)
where,
T(n) has the following asymptotic bounds:
1. If f(n) = O(n^(logb a − ϵ)) for some constant ϵ > 0, then T(n) = Θ(n^(logb a)).
2. If f(n) = Θ(n^(logb a)), then T(n) = Θ(n^(logb a) · log n).
3. If f(n) = Ω(n^(logb a + ϵ)) for some constant ϵ > 0, then T(n) = Θ(f(n)).
03/13/2023 234
Master’s Theorem
Each of the above conditions can be interpreted as:
• If the cost of solving the sub-problems at each level increases by a certain
factor, the value of f(n) will become polynomially smaller than n^(logb a). Thus,
the time complexity is dominated by the cost of the last level, i.e., n^(logb a)
• If the cost of solving the sub-problem at each level is nearly equal, then the
value of f(n) will be n^(logb a). Thus, the time complexity will be f(n) times the
total number of levels, i.e., n^(logb a) · log n
• If the cost of solving the subproblems at each level decreases by a certain
factor, the value of f(n) will become polynomially larger than n^(logb a). Thus, the
time complexity is dominated by the cost of f(n).

03/13/2023 235
Master’s Theorem

03/13/2023 236
Simplified three cases of master theorem
Master Theorem Cases-
To solve recurrences of the form T(n) = aT(n/b) + θ(n^k · log^p n) using Master's theorem, we compare a with b^k.

Case-01:
 If a > b^k, or logb a > k, then T(n) = θ(n^(logb a))

Case-02:
 If a = b^k, or logb a = k, then
 If p < −1, then T(n) = θ(n^(logb a))
 If p = −1, then T(n) = θ(n^(logb a) · log log n)
 If p > −1, then T(n) = θ(n^(logb a) · log^(p+1) n)

Case-03:
 If a < b^k, or logb a < k, then
 If p < 0, then T(n) = O(n^k)
 If p ≥ 0, then T(n) = θ(n^k · log^p n)
03/13/2023 237
Example 1- Case 1
Solve the following recurrence relation using Master's theorem-
T(n) = 8T(n/2) + n log n

Solution-

We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k · log^p n).
Then, we have-
a = 8
b = 2
k = 1
p = 1

Now, a = 8 and b^k = 2^1 = 2.
Clearly, a > b^k.
So, we follow case-01.

So, we have-
T(n) = θ(n^(logb a))
T(n) = θ(n^(log2 8))
T(n) = θ(n³)

Thus, T(n) = θ(n³)
03/13/2023 238
Example 1- Case 2
Solve the following recurrence relation using Master's theorem-
T(n) = 2T(n/2) + n log n

Solution-

We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k · log^p n).
Then, we have-
a = 2
b = 2
k = 1
p = 1

Now, a = 2 and b^k = 2^1 = 2.
Clearly, a = b^k.
So, we follow case-02.

Since p = 1 > −1, we have-
T(n) = θ(n^(logb a) · log^(p+1) n)
T(n) = θ(n^(log2 2) · log^(1+1) n)

Thus, T(n) = θ(n · log² n)

03/13/2023 239
Example 2- Case 2
Solve the following recurrence relation using Master's theorem- T(n) = 3T(n/3) + n/2

Solution-
We rewrite the given recurrence relation as T(n) = 3T(n/3) + θ(n).
This is because the θ in the general form hides constant factors such as the 1/2.
Now, we can easily apply Master's theorem.

We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k · log^p n).
Then, we have-
a = 3
b = 3
k = 1
p = 0
Now, a = 3 and b^k = 3^1 = 3.
Clearly, a = b^k. So, we follow case-02.

Since p = 0 > −1, we have-
T(n) = θ(n^(logb a) · log^(p+1) n)
T(n) = θ(n^(log3 3) · log^(0+1) n)
T(n) = θ(n¹ · log¹ n)

Thus, T(n) = θ(n log n)
03/13/2023 240
Example 1- Case 3
Solve the following recurrence relation using Master's theorem-
T(n) = 3T(n/2) + n²
We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k · log^p n).
Then, we have-
a = 3
b = 2
k = 2
p = 0

Now, a = 3 and b^k = 2^2 = 4.
Clearly, a < b^k.
So, we follow case-03.

Since p = 0 ≥ 0, we have-
T(n) = θ(n^k · log^p n)
T(n) = θ(n² · log⁰ n)

Thus, T(n) = θ(n²)

03/13/2023 241
Example 2-Case 3
Solve the following recurrence relation using Master's theorem-
T(n) = 2T(n/4) + n^0.51

Solution-

We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k · log^p n).
Then, we have-
a = 2
b = 4
k = 0.51
p = 0

Now, a = 2 and b^k = 4^0.51 ≈ 2.0279.
Clearly, a < b^k.
So, we follow case-03.

Since p = 0 ≥ 0, we have-
T(n) = θ(n^k · log^p n)
T(n) = θ(n^0.51 · log⁰ n)

Thus, T(n) = θ(n^0.51)

03/13/2023 242
Home Assignments
• T(n)=2T(n/2)+1
• T(n)=4T(n/2)+n
• T(n)=8T(n/2)+
• T(n)=9T(n/3)+
• T(n)=2T(n/2)+n
• T(n)= 4T(n/2)+
• T(n)= 4T(n/2)+ log n
• T(n)= 8T(n/2) +
• T(n)= T(n/2)+

03/13/2023 243
