Module 2 ADA

The document discusses various algorithm design techniques, focusing on brute force approaches, decrease-and-conquer, and divide-and-conquer methods. It includes detailed explanations of the Traveling Salesman Problem and the Knapsack Problem, along with their analyses and time complexities. Additionally, it covers topological sorting methods and the merge sort algorithm, providing insights into their applications and efficiencies.

Analysis & Design of Algorithms BCS401

MODULE 2

BRUTE FORCE APPROACHES


Many important problems require finding an element with a special property in a domain that grows
exponentially (or faster) with an instance size. Typically, such problems arise in situations that
involve—explicitly or implicitly—combinatorial objects such as permutations, combinations, and
subsets of a given set.

Many such problems are optimization problems: they ask to find an element that maximizes
or minimizes some desired characteristic such as a path length or an assignment cost.

Exhaustive search
• It is simply a brute-force approach to combinatorial problems.
• It suggests generating each and every element of the problem domain, selecting those of
them that satisfy all the constraints, and then finding a desired element (e.g., the one that
optimizes some objective function).
We illustrate exhaustive search by applying it to two important problems:
1. The traveling salesman problem,
2. The knapsack problem,

1. Traveling Salesman Problem


• In layman’s terms, the problem asks to find the shortest tour through a given set of n
cities that visits each city exactly once before returning to the city where it started.

• The problem can be modelled by a weighted graph, with the graph’s vertices representing
the cities and the edge weights specifying the distances.

• Then the problem can be stated as the problem of finding the shortest Hamiltonian circuit
of the graph.
(A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph
exactly once. It is named after the Irish mathematician Sir William Rowan Hamilton (1805–
1865), who became interested in such cycles as an application of his algebraic discoveries.)

Dept., of CSE Page 1


• An inspection of Figure 3.7 reveals three pairs of tours that differ only by their direction.
Hence, we could cut the number of vertex permutations by half.
• We could, for example, choose any two intermediate vertices, say, b and c, and then
consider only permutations in which b precedes c.

Analysis:
1. The size of the input is the number of cities, n.
2. The basic operation is computing the cost of a tour.
3. The total number of tours to examine for n cities is (n − 1)! (fixing the starting city and permuting the rest).

• In general, for n cities the number of routes is (n − 1)!

The time complexity is F(n) = (n − 1)!,
i.e., T(n) ∈ Θ((n − 1)!).
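The exhaustive search described above can be sketched in Python. This is a minimal illustration, not part of the original notes: the helper name `tsp_brute_force` and the distance matrix `dist` are hypothetical; the sketch fixes the starting city and tries all (n − 1)! permutations of the remaining cities.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive search: try every tour that starts and ends at city 0.

    dist is an n x n matrix of pairwise distances (assumed symmetric).
    Returns (best_cost, best_tour).
    """
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    # Fixing the start city leaves (n - 1)! permutations of the rest.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Hypothetical 4-city instance.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(tsp_brute_force(dist))
```

Because the running time grows as (n − 1)!, this approach is practical only for very small n.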


2. Knapsack Problem
Given n items of known weights w1, w2, . . . , wn and values v1, v2, . . . , vn and a knapsack of
capacity W, find the most valuable subset of the items that fit into the knapsack. Figure 3.8a
presents a small instance of the knapsack problem.

Analysis
1. The number of items is the measure of input size, n.
2. The basic operation is computing a subset's total weight and value and checking whether the subset is feasible (its total weight does not exceed W).
3. Since a set of n items has 2^n subsets, the time complexity is T(n) ∈ Θ(2^n).
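The exhaustive search over all 2^n subsets can be sketched in Python. The function name and the instance (weights, values, capacity W = 10) are hypothetical placeholders for Figure 3.8a, which is not reproduced here.

```python
from itertools import combinations

def knapsack_brute_force(weights, values, capacity):
    """Exhaustive search: examine all 2^n subsets of the n items."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(weights[i] for i in subset)
            value = sum(values[i] for i in subset)
            # Keep the most valuable feasible subset seen so far.
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# Hypothetical instance: 4 items, knapsack capacity W = 10.
print(knapsack_brute_force([7, 3, 4, 5], [42, 12, 40, 25], 10))  # (65, (2, 3))
```

The two nested loops together enumerate every subset exactly once, which is why the count is 2^n.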


DECREASE-AND-CONQUER
1. Discuss decrease and conquer algorithmic technique. Explain its variations 06M
• The decrease-and-conquer technique is based on exploiting the relationship between a solution
to a given instance of a problem and a solution to its smaller instance.
• Once such a relationship is established, it can be exploited either
1. top down or
2. bottom up.
There are three major variations of decrease-and-conquer:
1. decrease by a constant
2. decrease by a constant factor
3. variable size decrease

1. DECREASE BY A CONSTANT

In the decrease-by-a-constant variation, the size of an instance is reduced by the same constant on
each iteration of the algorithm. Typically, this constant is equal to one (Figure 4.1), although other
constant size reductions do happen occasionally.

Consider, as an example, the exponentiation problem of computing a^n, where a ≠ 0 and n is a nonnegative integer.

The relationship between a solution to an instance of size n and an instance of size n − 1 is obtained by the obvious formula a^n = a^(n−1) · a.
So the function f(n) = a^n can be computed either “top down” by using its recursive definition,


or “bottom up” by multiplying 1 by a n times.
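Both directions can be sketched in Python (the function names are illustrative, not from the notes); each performs n multiplications, so the decrease-by-one approach is Θ(n).

```python
def power_top_down(a, n):
    """Decrease-by-one, top down: a^n = a^(n-1) * a, with a^0 = 1."""
    if n == 0:
        return 1
    return power_top_down(a, n - 1) * a

def power_bottom_up(a, n):
    """Decrease-by-one, bottom up: multiply 1 by a, n times."""
    result = 1
    for _ in range(n):
        result *= a
    return result

print(power_top_down(2, 10), power_bottom_up(2, 10))  # 1024 1024
```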


Disadvantages:
1. The size of the reduction is fixed and cannot be chosen freely.
2. At each step the problem size is reduced by only 1.
3. It is time consuming.
4. If the problem size is large, the number of steps grows proportionally.

REFER CLASS NOTES FOR EXAMPLES

2. THE DECREASE-BY-A-CONSTANT-FACTOR
• This technique suggests reducing a problem instance by the same constant factor on each
iteration of the algorithm. In most applications, this constant factor is equal to two. The
decrease-by-half idea is illustrated in Figure 4.2.

• If the instance of size n is to compute a^n, the instance of half its size is to compute a^(n/2), with the obvious relationship between the two: a^n = (a^(n/2))^2.

• But since we consider here instances with integer exponents only, this formula works only for even n. If n is odd, we have to compute a^(n−1) by using the rule for even-valued exponents and then multiply the result by a. To summarize, we have the following formula:

a^n = (a^(n/2))^2            if n is even and positive,
a^n = (a^((n−1)/2))^2 · a    if n is odd,
a^n = 1                      if n = 0.                      (4.2)


If we compute an recursively according to formula (4.2) and measure the algorithm’s efficiency
by the number of multiplications, we should expect the algorithm to be in θ(log n) because, on each
iteration, the size is reduced by about a half at the expense of one or two multiplications.
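Formula (4.2) translates into a short Python sketch (the function name `fast_power` is my own). Each call either halves n or reduces an odd n to an even one, so the number of multiplications is Θ(log n).

```python
def fast_power(a, n):
    """Decrease-by-half exponentiation, Theta(log n) multiplications.

    Even n: a^n = (a^(n/2))^2.
    Odd n:  compute a^(n-1) by the even rule, then multiply by a.
    """
    if n == 0:
        return 1
    if n % 2 == 0:
        half = fast_power(a, n // 2)
        return half * half
    return fast_power(a, n - 1) * a

print(fast_power(2, 10))  # 1024
```

Note that `half` is computed once and squared; calling `fast_power(a, n // 2)` twice would destroy the logarithmic bound.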

Disadvantages

1. Once the constant factor is fixed, the same factor is used at every step until the end.

3. VARIABLE-SIZE-DECREASE
a. Finally, in the variable-size-decrease variety of decrease-and-conquer, the size-
reduction pattern varies from one iteration of an algorithm to another.
b. Euclid’s algorithm for computing the greatest common divisor provides a good
example of such a situation.
c. Recall that this algorithm is based on the formula,

gcd(m, n) = gcd(n, m mod n). Refer Class Notes
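The formula above can be sketched directly in Python; the iterative form below is an illustrative sketch. The amount removed each step, m mod n, differs from iteration to iteration, which is exactly the variable-size decrease.

```python
def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n); gcd(m, 0) = m.

    The reduction m mod n varies between iterations, so the instance
    shrinks by a different (variable) amount each time.
    """
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))  # 12
```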


Examples: Insertion Sort


(1. Design an insertion sort algorithm and obtain its time complexity. Apply insertion sort on
these elements. 25,75,40,10,20 10M)

• Let us consider an application of the decrease-by-one technique to sorting an array A[0..n − 1].
• Following the technique's idea, we assume that the smaller problem of sorting the array A[0..n − 2] has already been solved to give us a sorted array of size n − 1:
A[0] ≤ . . . ≤ A[n − 2].
All we need is to find an appropriate position for A[n − 1] among the sorted elements and insert it there.

The operation of the algorithm is illustrated in Figure 4.4.
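A Python sketch of the algorithm, applied to the list 25, 75, 40, 10, 20 from the exam question above (the in-place list version is my own phrasing of the standard pseudocode):

```python
def insertion_sort(a):
    """Decrease-by-one sort: insert a[i] into the already sorted a[0..i-1]."""
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        # Shift larger elements one slot right until v's position is found.
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v
    return a

print(insertion_sort([25, 75, 40, 10, 20]))  # [10, 20, 25, 40, 75]
```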


Analysis:
1. The size of the input is n

2. The Basic Operation is Comparison

3. The number of key comparisons in this algorithm obviously depends on the nature of
the input.

4. In the best case,

The comparison A[j] > v is executed only once on every iteration of the outer loop. It
happens if and only if A[i − 1] ≤ A[i] for every i = 1, . . . , n − 1, i.e., if the input array is
already sorted in nondecreasing order. Thus, for sorted arrays, the number of key
comparisons is Cbest(n) = n − 1 ∈ Θ(n).

5. In the worst case,


A[j] > v is executed the largest number of times, i.e., for every j = i − 1, . . . , 0. The worst-
case input is an array of strictly decreasing values. The number of key comparisons for
such an input is Cworst(n) = 1 + 2 + . . . + (n − 1) = n(n − 1)/2 ∈ Θ(n²).

6. In the Average Case,


On randomly ordered arrays, insertion sort makes on average about half as many comparisons as on
decreasing arrays, i.e., Cavg(n) ≈ n²/4 ∈ Θ(n²).


Topological Sorting

(1. Define topological sorting. List the two approaches of topological


sorting and illustrate with examples 10M)

(2. Obtain Topological sorting for the graph using (a) DFS method
(b)source removal method 10M)

3. Apply Topological sorting for the graph and find the topological
sequence. 06M

4. Discuss Topological Sorting. 08M

5. Apply topological sorting on the following graph using source removal


and DFS based methods 10M

• Definition: Topological sorting of a directed acyclic graph (DAG) G = (V, E) is a linear
ordering of all its vertices such that for every edge (u, v) in G, vertex u appears before vertex v.
• A directed graph, or digraph for short, is a graph with directions specified for all its edges
(Figure 4.5a is an example).
• A Directed graph can be represented using
i. Adjacency matrix

ii. Adjacency list

• A Directed Acyclic Graph(DAG) is a directed graph, that contains no cycles.

• For the digraph in Figure 4.5a, the depth-first search forest (Figure 4.5b) exhibits all four types of edges
possible in a DFS forest of a directed graph: tree edges (ab, bc, de), back edges (ba) from
vertices to their ancestors, forward edges (ac) from vertices to their descendants in the tree
other than their children, and cross edges (dc), which are none of the aforementioned types.

Example:
• Consider a set of five required courses {C1, C2, C3, C4, C5} a part-time student has to take
in some degree program. The courses can be taken in any order as long as the following
course prerequisites are met: C1 and C2 have no prerequisites, C3 requires C1 and C2, C4
requires C3, and C5 requires C3 and C4. The student can take only one course per term. In
which order should the student take the courses?
• The situation can be modeled by a digraph in which vertices represent courses and directed
edges indicate prerequisite requirements (Figure 4.6).
• In terms of this digraph, the question is whether we can list its vertices in such an order that
for every edge in the graph, the vertex where the edge starts is listed before the vertex where
the edge ends. (Can you find such an ordering of this digraph’s vertices?) This problem is
called topological sorting.


Thus, for topological sorting to be possible, a digraph in question must be a DAG. There are two
efficient algorithms that both verify whether a digraph is a DAG and, if it is, produce an ordering
of vertices that solves the topological sorting problem.
a) Topological sorting using DFS based method
b) Topological sorting using Source removal method.

a) The first algorithm is a simple application of depth-first search: perform a DFS

traversal and note the order in which vertices become dead ends (i.e., are popped off the traversal
stack). Reversing this order yields a solution to the topological sorting problem, provided, of
course, no back edge has been encountered during the traversal. If a back edge has been
encountered, the digraph is not a DAG, and topological sorting of its vertices is impossible. The
following figure illustrates an application of this algorithm.
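The DFS-based method can be sketched in Python. The adjacency dict below encodes the five-course example from the text (each edge points from a prerequisite to the course that requires it); the function name and vertex-coloring details are my own.

```python
def topological_sort_dfs(graph):
    """DFS-based topological sort of a digraph given as an adjacency dict.

    Vertices are recorded when they become dead ends (popped off the
    recursion stack); reversing that order gives a topological ordering.
    Raises ValueError if a back edge (i.e., a cycle) is found.
    """
    WHITE, GRAY, BLACK = 0, 1, 2           # unvisited, on stack, finished
    color = {v: WHITE for v in graph}
    order = []

    def dfs(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY:           # back edge => not a DAG
                raise ValueError("graph has a cycle")
            if color[w] == WHITE:
                dfs(w)
        color[v] = BLACK
        order.append(v)                    # v is a dead end now

    for v in graph:
        if color[v] == WHITE:
            dfs(v)
    return order[::-1]

# The five-course digraph: edges point from prerequisite to course.
courses = {"C1": ["C3"], "C2": ["C3"], "C3": ["C4", "C5"],
           "C4": ["C5"], "C5": []}
print(topological_sort_dfs(courses))
```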

b) Source Removal Method:


The second algorithm is based on a direct implementation of the decrease- (by one)-and-conquer
technique: repeatedly, identify in a remaining digraph a source, which is a vertex with no incoming
edges, and delete it along with all the edges outgoing from it. The order in which the vertices are
deleted yields a solution to the topological sorting problem. The application of this algorithm to
the same digraph representing the five courses is given in Figure 4.8.
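The source-removal method can also be sketched in Python on the same five-course digraph. Tracking in-degrees avoids rescanning the graph for sources on every step; that bookkeeping detail is my own, not from the notes.

```python
from collections import deque

def topological_sort_source_removal(graph):
    """Decrease-by-one topological sort: repeatedly delete a source.

    graph: adjacency dict of a digraph. Returns vertices in deletion
    order; raises ValueError if vertices remain with no source (cycle).
    """
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    sources = deque(v for v in graph if indegree[v] == 0)
    order = []
    while sources:
        v = sources.popleft()
        order.append(v)
        for w in graph[v]:                 # delete v's outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:           # w has become a source
                sources.append(w)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle")
    return order

courses = {"C1": ["C3"], "C2": ["C3"], "C3": ["C4", "C5"],
           "C4": ["C5"], "C5": []}
print(topological_sort_source_removal(courses))  # ['C1', 'C2', 'C3', 'C4', 'C5']
```

As the note below observes, this ordering need not match the DFS-based one; both are valid topological orders.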


Note: that the solution obtained by the source-removal algorithm is different from the one
obtained by the DFS-based algorithm.


Divide-and-Conquer
(1. Explain divide and conquer technique with general algorithm 06M
2. Discuss the general method of divide and conquer along with control
abstraction 06M
3. What are the disadvantages of Divide and conquer approach 04M)

• Divide-and-conquer is probably the best-known general algorithm design technique. Divide-


and-conquer algorithms work according to the following general plan:
1. A problem is divided into several subproblems of the same type, ideally of about equal
size.
2. The subproblems are solved (typically recursively, though sometimes a different
algorithm is employed, especially when subproblems become small enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to the
original problem.

The divide-and-conquer technique is diagrammed in Figure 5.1, which depicts the case of dividing
a problem into two smaller subproblems.


• In the most typical case of divide-and-conquer a problem’s instance of size n is divided into two
instances of size n/2. More generally, an instance of size n can be divided into b instances of
size n/b, with a of them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.)
• Assuming that size n is a power of b to simplify our analysis, we get the following recurrence
for the running time T (n): T(n) = aT (n/b) + f (n)

where f (n) is a function that accounts for the time spent on dividing an instance into smaller ones and on combining their solutions.
• The above Recurrence is called the general divide-and-conquer recurrence. Obviously, the
order of growth of its solution T (n) depends on the values of the constants a and b and the
order of growth of the function f(n).
• The efficiency analysis of many divide-and-conquer algorithms is greatly simplified by the
following theorem (Master Theorem): if f (n) ∈ Θ(n^d) with d ≥ 0 in the general recurrence above, then

T(n) ∈ Θ(n^d)             if a < b^d,
T(n) ∈ Θ(n^d log n)       if a = b^d,
T(n) ∈ Θ(n^(log_b a))     if a > b^d.

• For example, the recurrence for the number of additions A(n) made by the divide-and-conquer
sum-computation algorithm on inputs of size n = 2^k is A(n) = 2A(n/2) + 1. Here a = 2, b = 2,
and d = 0; since a > b^d, A(n) ∈ Θ(n^(log_2 2)) = Θ(n).


a) Merge-sort:
(1. Explain the concept of divide and conquer. Design an algorithm for merge
sort and derive its time complexity 10M)
2. Write merge sort algorithm for sorting using divide and conquer 06M)
3. Write merge sort algorithm with examples also calculate efficiency 12M)
4. Write an algorithm for merge sort. Also demonstrate the applicability of
Master’s theorem to compute time complexity 06M
5. Design an algorithm for performing merge sort. Analyze its time efficiency.
Apply the same to sort the following set of numbers 4,9,0,-1,6,8,9,2,3,12. 10M
• Merge-sort is a perfect example of a successful application of the divide- and-conquer
technique.
• It sorts a given array A[0..n − 1] by dividing it into two halves A[0..n/2 − 1] and
A[n/2..n − 1], sorting each of them recursively, and then merging the two smaller sorted
arrays into a single sorted one.
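The split/sort/merge plan can be sketched in Python, applied to the list from question 5 above (this version returns new lists rather than sorting in place, a simplification of my own):

```python
def merge_sort(a):
    """Divide-and-conquer sort: split in half, sort each half, merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    return merge(left, right)

def merge(b, c):
    """Merge two sorted lists into one sorted list."""
    merged, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            merged.append(b[i])
            i += 1
        else:
            merged.append(c[j])
            j += 1
    merged.extend(b[i:])       # one of these two tails is empty
    merged.extend(c[j:])
    return merged

print(merge_sort([4, 9, 0, -1, 6, 8, 9, 2, 3, 12]))
```

The extra lists built by `merge` are the linear additional storage mentioned in the drawback at the end of this section.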


Example 1: Let us apply the merge-sort algorithm to sort the array elements:
8, 3, 2, 9, 7, 1, 5, 4


Example 2: Sort the given elements using merge-sort: 6, 5, 12, 10, 9, 1

Analysis of Merge-sort:
• n is the measure of input size.
• The basic operation is comparison.
• The number of key comparisons C(n) satisfies the recurrence

C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0,

where Cmerge(n) is the number of key comparisons performed during the merging stage.


• Worst case: at each step exactly one element is moved per comparison, and neither of the two
arrays becomes empty before the other one contains just one element. Therefore, the number of
comparisons in one merge is n − 1, i.e., in the worst case Cmerge(n) = n − 1, and
Cworst(n) = Cworst(n/2) + Cworst(n/2) + n − 1 for n > 1.

• Best case: happens when all elements of one half precede all elements of the other (e.g., the array is already sorted); then Cmerge(n) = n/2, and

Cbest(n) = Cbest(n/2) + Cbest(n/2) + n/2.

In general, T(n) = T(n/2) + T(n/2) + c·n, where c is a constant, i.e.,

T(n) = 2T(n/2) + c·n ................ (2)

• Using the Master Theorem, T(n) = aT(n/b) + F(n) ................ (1)

Comparing equations (1) and (2) we get a = 2, b = 2, and F(n) = n = n^1 = n^d, so d = 1.
Comparing a and b^d, we get a = b^d. Hence, T(n) ∈ Θ(n^d log n) = Θ(n log n).
Therefore, the time complexity of merge sort is Θ(n log n) in all cases.
Advantage: Merge sort's time complexity is Θ(n log n).
Drawback: Merge sort needs a linear amount of extra storage; hence it is not in-place.


c) Quick sort
1. What is divide and conquer? Develop the quick sort algorithm and write its
bestcase. Make use of this algorithm to sort the list of characters: E, X, A,M,P,L,E
10M)
2. Design an algorithm for Quick sort algorithm. Apply quick sort on these elements
25,75, 40, 10, 20, 05, 15 10M)
3. Apply merge sort and quick sort algorithm to sort the characters VTUBELAGAVI
10M)
4. Sort the following keyword “ALGORITHM” by applying quick sort method 06M)
5. Write Quick sort algorithm with example. Also calculate efficiency. 12M)
6. Sort the below given array of elements using quick sort. Mention time complexity

08M
7. Design an algorithm for performing quick sort, apply the same to sort the following
set of numbers 5, 3, 1, 9,8,2,4,7. 10M
o Quicksort is the other important sorting algorithm that is based on the divide-and-conquer
approach. Unlike merge-sort, which divides its input elements according to their position in the
array, quicksort divides them according to their value.
o A partition is an arrangement of the array’s elements so that all the elements to the left of some
element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater than
or equal to it:

Obviously, after a partition is achieved, A[s] will be in its final position in the sorted array, and we
can continue sorting the two subarrays to the left and to the right of A[s] independently.

ALGORITHM Partition(A[l..r])


//Partitions a subarray, using the first element as a pivot


//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r (l<r)
//Output: Partition of A[l..r], with the split position returned as this function’s value
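Since the body of Partition appears only as a figure in the notes, here is a Python sketch of quicksort with a Hoare-style partition using the first element as the pivot. The exact index handling is my own and may differ in small details from the figure.

```python
def quicksort(a, l=0, r=None):
    """Sorts a[l..r] in place by partitioning around a[l]."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)     # split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)
    return a

def partition(a, l, r):
    """Hoare-style partition with the first element a[l] as the pivot."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:  # scan right for an element >= pivot
            i += 1
        j -= 1
        while a[j] > p:            # scan left for an element <= pivot
            j -= 1
        if i >= j:                 # scanning indices have crossed
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]        # put the pivot in its final place
    return j

print(quicksort([5, 3, 1, 9, 8, 2, 4, 7]))  # [1, 2, 3, 4, 5, 7, 8, 9]
```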

Example: Let us sort the given elements using Quicksort. 5, 3, 1, 9, 8, 2, 4, 7


The recursive calls to Quicksort with input values l and r of subarray bounds and split position s of
a partition obtained can be represented by Recursive-Tree as shown below:

Example 2: Sort E, X, A, M, P, L, E in alphabetical order


Analysis:

• The basic operation is comparison.


• Number of key comparisons made before a partition is n+1 if the scanning indices cross over. It is
n if they coincide.
• Best Case: This occurs if all the splits happen in the middle of the corresponding subarrays. The
number of key comparisons in the best case satisfies the recurrence

Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0, i.e., T(n) = 2T(n/2) + n ................ (2)

Using the Master Theorem, T(n) = aT(n/b) + F(n) ................ (1)

Comparing (1) and (2) we get a = 2, b = 2, F(n) = n = n^1 = n^d, so d = 1.

Comparing a and b^d, we get a = b^d. Therefore, T(n) ∈ Θ(n^d log n) = Θ(n log n).

• Worst case: Occurs when, at each invocation of the procedure, the current array is
partitioned into two subarrays with one of them being empty. This situation occurs if
the elements are already arranged in ascending or descending order.
Ex: 20, 30, 40, 50
[20], [30, 40, 50]
20, [30], [40, 50]
20, 30, [40], [50]
The total number of key comparisons is Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n²).


• Average case: The pivot element may end up at any position s in the partitioned array,
0 ≤ s ≤ n − 1, with probability 1/n. On random inputs, Cavg(n) ≈ 2n ln n ≈ 1.39 n log₂ n,
so the time complexity is T(n) ∈ Θ(n log n).

Advantage:
• Quicksort is in-place
• Time Complexity is T(n) ϵ Ɵ (n log n)
Disadvantage:
1. It is not stable.
2. In the worst case, its time complexity is O(n²).

Note: Two important properties of sorting algorithms are stability and being in-place.

• A sorting algorithm is called stable if it preserves the relative order of any two equal
elements in its input. In other words, if an input list contains two equal elements in
positions i and j where i < j, then in the sorted list they have to be in positions i′ and
j′, respectively, such that i′ < j′.

• An algorithm is said to be in-place if it does not require extra memory, except,


possibly, for a few memory units.


Binary Tree Traversals and Related Properties


(1. Show the number of element comparisons with example and show the proof of
binary search time for best case, average case and worst case analysis 8M)
• A binary tree T is defined as a finite set of nodes that is either empty or consists of a root and
two disjoint binary trees TL and TR called, respectively, the left and right subtree of the root.
• Since the definition itself divides a binary tree into two smaller structures of the same type,
the left subtree and the right subtree, many problems about binary trees can be solved by
applying the divide-and-conquer technique.
• As an example, let us consider a recursive algorithm for computing the height of a binary
tree, i.e., the length of the longest path from the root to a leaf. Hence, the height can be
computed as the maximum of the heights of the root's left and right subtrees plus 1. Also note
that it is convenient to define the height of the empty tree as −1. Thus, we have the following
recursive algorithm:

ALGORITHM Height (T)


//Computes recursively the height of a binary tree
//Input: A binary tree T
//Output: The height of T
if T = ∅ return −1
else return max {Height (Tleft), Height (Tright)} + 1.
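The pseudocode above maps directly onto Python; the `Node` class is a minimal hypothetical representation in which `None` stands for the empty tree ∅.

```python
class Node:
    """Minimal binary tree node; None represents the empty tree."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def height(t):
    """Height of a binary tree; the empty tree has height -1."""
    if t is None:
        return -1
    return max(height(t.left), height(t.right)) + 1

# A three-node chain: root -> left child -> left grandchild.
tree = Node(left=Node(left=Node()))
print(height(tree))  # 2
```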

Analysis:
• Measure of input size is the number of nodes n(T ) in a given binary tree T
• The number of comparisons made to compute the maximum of two numbers and the
number of additions A(n(T )) made by the algorithm are the same.
• We have the following recurrence relation for A(n(T )):
A(n(T )) = A(n(Tleft)) + A(n(Tright)) + 1 for n(T ) > 0, A(0) = 0.
• Basic operations are addition and comparison.
• For the empty tree, the comparison T = ∅ is executed once but there are no additions, and
for a single-node tree, the comparison and addition numbers are 3 and 1, respectively.
• In general, the number of comparisons made to check whether a tree is empty is
C(n) = n + x = 2n + 1, where x = n + 1 is the number of external (empty) subtrees,
and the number of additions is A(n) = n.


The most important divide-and-conquer algorithms for binary trees are the three classic
traversals: preorder, inorder, and postorder.
• All three traversals visit nodes of a binary tree recursively, i.e., by visiting the tree’s root and its
left and right subtrees. They differ only by the timing of the root’s visit:

➢ In the preorder traversal, the root is visited before the left and right subtrees are visited (in
that order).
➢ In the inorder traversal, the root is visited after visiting its left subtree but before visiting the
right subtree.
➢ In the postorder traversal, the root is visited after visiting the left and right subtrees (in that
order).
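The three traversals can be sketched in Python (the `Node` class and the list-returning style are illustrative choices of my own); only the position of the root's visit differs between them.

```python
class Node:
    """Binary tree node holding a value; None represents the empty tree."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(t):
    """Root, then left subtree, then right subtree."""
    return [] if t is None else [t.value] + preorder(t.left) + preorder(t.right)

def inorder(t):
    """Left subtree, then root, then right subtree."""
    return [] if t is None else inorder(t.left) + [t.value] + inorder(t.right)

def postorder(t):
    """Left subtree, then right subtree, then root."""
    return [] if t is None else postorder(t.left) + postorder(t.right) + [t.value]

# Root "a" with children "b" (left) and "c" (right).
root = Node("a", Node("b"), Node("c"))
print(preorder(root), inorder(root), postorder(root))
# ['a', 'b', 'c'] ['b', 'a', 'c'] ['b', 'c', 'a']
```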


Strassen’s Matrix Multiplication

1. Explain Strassen’s matrix multiplication approach with an


example and derive its time complexity 10M
2. Explain Strassen’s matrix multiplication and derive its time
complexity 10M
3. Apply Strassen’s algorithm for matrix multiplication to multiply
the following matrices and justify how Strassen algorithm is better:
[4 3]   [1 2]
[5 2] x [6 5]   10M
4. Discuss Strassen’s matrix multiplication 8M
5. Apply Strassen’s algorithm for matrix multiplication to multiply
the following matrices and show the details of computation:
[4 5]   [0 2]
[1 3] x [1 3]   10M)

The conventional (brute-force) algorithm has time complexity Θ(n³).

The divide-and-conquer strategy suggests another way to compute the product of two n x n
matrices.
If we assume that n is a power of 2, i.e., n = 2^k, where k is a nonnegative integer, A and B can be

partitioned into four square submatrices, each having dimensions n/2 x n/2.
For n=2,
The product A.B can be computed using the formula,

Here we do 8 multiplications and 4 additions.


For n>2,
The elements of C can be computed using matrix multiplication and addition operations
applied to matrices of n/2 x n/2.
Analysis:
To compute A·B, we need to perform 8 multiplications and 4 additions of n/2 x n/2 matrices.
The overall computing time of this divide-and-conquer algorithm therefore satisfies the recurrence
T(n) = 8T(n/2) + cn², whose solution is T(n) ∈ Θ(n³).


Hence no improvement over the conventional method.

Since matrix multiplication (O(n³)) is more expensive than matrix addition (O(n²)), we can attempt
to reformulate the equations for Cij to have fewer multiplications, at the cost of possibly more additions.
• Volker Strassen has discovered a way to compute the Cij’s using only 7 multiplications and 18
additions or subtractions.
• His method involves first computing the seven n/2 x n/2 matrices P, Q, R, S, T, U, and V as follows
(note that the order of the factors matters, since matrix multiplication is not commutative):
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)B11
R = A11(B12 − B22)
S = A22(B21 − B11)
T = (A11 + A12)B22
U = (A21 − A11)(B11 + B12)
V = (A12 − A22)(B21 + B22)

The Cij’s are computed as follows:


C11 = P + S − T + V
C12 = R + T
C21 = Q + S
C22 = P + R − Q + U

The time complexity can be calculated using the recurrence T(n) = 7T(n/2) + an² (seven multiplications of half-size matrices plus Θ(n²) additions and subtractions), whose solution is T(n) ∈ Θ(n^log₂7) ≈ Θ(n^2.807).
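The seven products and the Cij formulas can be checked with a short Python sketch for the 2x2 scalar case (for larger power-of-2 sizes, the same formulas apply recursively to n/2 x n/2 blocks). The test instance is my reading of the matrices in question 3 above.

```python
def strassen_2x2(A, B):
    """Multiplies two 2x2 matrices with Strassen's 7 products P..V."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The seven Strassen products.
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    # Assemble C from P..V with additions and subtractions only.
    return [[P + S - T + V, R + T],
            [Q + S, P + R - Q + U]]

# The instance from question 3: [[4, 3], [5, 2]] x [[1, 2], [6, 5]]
print(strassen_2x2([[4, 3], [5, 2]], [[1, 2], [6, 5]]))  # [[22, 23], [17, 20]]
```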

Example: Apply Strassen’s matrix multiplication to multiply the following


matrices



Feature               | Decrease and Conquer                              | Divide and Conquer
----------------------|---------------------------------------------------|----------------------------------------------------------
Approach              | Reduces the problem size gradually                | Splits the problem into multiple smaller subproblems
Number of Subproblems | Typically one smaller instance                    | Multiple independent subproblems
Solution Combination  | Often direct, using a simple recurrence           | Requires merging of solutions from subproblems
Recursion Depth       | Usually shallower                                 | Can be deep due to multiple splits
Key Examples          | Binary Search, Insertion Sort, Euclid's Algorithm | Merge Sort, Quick Sort, Strassen's Matrix Multiplication
Time Complexity       | Generally lower recursion overhead                | Often higher overhead due to multiple recursive calls

**********************************
