This document discusses several algorithm design techniques, including greedy algorithms, divide and conquer algorithms, and dynamic programming. Greedy algorithms make locally optimal choices at each step to find a global optimum. Divide and conquer algorithms break problems into subproblems, solve the subproblems, and combine the solutions. Dynamic programming solves problems by breaking them into overlapping subproblems and storing results to avoid recomputing them, applying the principle of optimality. Examples of each type are given, such as Kruskal's algorithm for greedy, binary search for divide and conquer, and the knapsack problem for dynamic programming.

An algorithm, named for the ninth-century Persian mathematician al-Khwarizmi, is simply a set of rules used to perform some calculation, either by hand or, more usually, on a machine.

 Basically, an algorithm is a finite set of instructions that can be used to perform a certain task.

 Definition of algorithm

An algorithm is defined as a collection of unambiguous instructions occurring in some specific sequence, and such an algorithm should produce the required output for a given input in a finite amount of time.

Efficiency can be measured by computing the time complexity of each algorithm.

 Asymptotic notation is a shorthand way to represent the time complexity.

Various notations such as big oh (O), omega (Ω) and theta (Θ) are called asymptotic notations.

 Big oh notation

The big oh notation is denoted by ‘O’. It is a method of representing the upper bound of an algorithm’s running time.

 Omega notation

It is denoted by Ω. This notation is used to represent the lower bound of an algorithm’s running time.

 Theta notation

The theta notation is denoted by Θ. In this method the running time lies between the upper bound and the lower bound.
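
As a brief illustration (not from the original notes), consider a simple linear search. In the worst case it examines every element, so its running time is O(n); in the best case it stops at the first element, so it is Ω(1); and its worst-case running time is Θ(n).

# Linear search: scan the list until the key is found.
def linear_search(arr, key):
    for i, value in enumerate(arr):    # up to n iterations, so O(n) in the worst case
        if value == key:
            return i                   # best case: key is the first element, Omega(1)
    return -1                          # key not present

print(linear_search([4, 2, 7, 1], 7))  # prints 2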

Greedy algorithms build a solution part by part, choosing the next part in such a way that it gives an immediate benefit.

 This approach is mainly used to solve optimization problems (that is, minimization or maximization problems).

Applications of the greedy method

 1) Knapsack problem

Given a set of items, each with a weight and a value, determine a subset of items to include in a
collection so that the total weight is less than or equal to a given limit and the total value is as large as
possible.
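
As a minimal sketch (not part of the original notes), the greedy strategy considers items in decreasing order of value-to-weight ratio; this is optimal for the fractional variant of the knapsack problem, where items may be taken partially. The item list below is made up for illustration.

# Greedy fractional knapsack: take items with the best value/weight ratio first.
def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs; capacity: the weight limit
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)              # take as much of this item as fits
        total_value += value * (take / weight)
        capacity -= take
    return total_value

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0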

 2) Job sequencing with deadlines

The goal is to schedule jobs so as to maximize the total profit.  Only one job is executed at a time.
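
A minimal sketch, assuming each job takes one unit of time and its profit is earned only if it finishes by its deadline: sort the jobs by decreasing profit and place each job in the latest free time slot before its deadline. The job data below is made up for illustration.

# Greedy job sequencing: jobs are (profit, deadline) pairs, each taking 1 time unit.
def job_sequencing(jobs):
    jobs = sorted(jobs, key=lambda j: j[0], reverse=True)   # highest profit first
    max_deadline = max(deadline for _, deadline in jobs)
    slot = [None] * (max_deadline + 1)                      # slot[t] = job running at time t
    total_profit = 0
    for profit, deadline in jobs:
        t = deadline
        while t > 0 and slot[t] is not None:                # find the latest free slot
            t -= 1
        if t > 0:
            slot[t] = (profit, deadline)
            total_profit += profit
    return total_profit

print(job_sequencing([(100, 2), (19, 1), (27, 2), (25, 1), (15, 3)]))  # 142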

 3) Minimum spanning trees

A spanning tree of a graph is any tree that includes every vertex of the graph. A complete graph with n vertices has n^(n-2) spanning trees (Cayley's formula).

We shall learn about the two most important spanning tree algorithms here –

 Kruskal's Algorithm

This algorithm was discovered by Joseph Kruskal. Edges are considered in increasing order of cost, and at each step the minimum-cost edge that does not form a cycle with the edges already selected is added.
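
A minimal sketch of Kruskal's algorithm (the edge-list representation and the union-find structure used to detect cycles are assumptions for the illustration, not from the notes):

# Kruskal's MST: sort edges by weight, add an edge only if it joins two different components.
def kruskal(num_vertices, edges):
    # edges: list of (weight, u, v) with vertices numbered 0 .. num_vertices-1
    parent = list(range(num_vertices))

    def find(x):                        # find the component representative (with path halving)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for weight, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                    # adding this edge does not create a cycle
            parent[ru] = rv
            mst.append((u, v, weight))
            total += weight
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)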

 Prim's Algorithm

We consider the vertices first and select an edge with minimum weight. The algorithm then proceeds by selecting adjacent edges with minimum weight, growing the tree one vertex at a time. Care should be taken not to form a cycle.
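
A minimal sketch of Prim's algorithm (the adjacency-list representation and the priority queue over edges adjacent to the growing tree are assumptions for the illustration):

import heapq

# Prim's MST: grow the tree from a start vertex, always taking the cheapest edge
# that connects a tree vertex to a vertex not yet in the tree.
def prim(adj, start=0):
    # adj: {vertex: [(weight, neighbour), ...]}
    visited = {start}
    heap = list(adj[start])
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(adj):
        weight, v = heapq.heappop(heap)
        if v in visited:                # this edge would form a cycle, skip it
            continue
        visited.add(v)
        mst.append((weight, v))
        total += weight
        for edge in adj[v]:
            heapq.heappush(heap, edge)
    return mst, total

adj = {0: [(1, 1), (4, 2)], 1: [(1, 0), (3, 2), (2, 3)],
       2: [(4, 0), (3, 1), (5, 3)], 3: [(2, 1), (5, 2)]}
print(prim(adj))   # ([(1, 1), (2, 3), (3, 2)], 6)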

 4) Single-source shortest path problem / Dijkstra's greedy algorithm

The starting vertex V0 is called the source and the vertex we wish to reach is called the destination.

 Assign a tentative distance of 0 to the source and infinity to every other vertex.

 Repeatedly select the unvisited vertex with the smallest tentative distance and mark it as visited.

 For each edge leaving the selected vertex, update the neighbour's tentative distance if the path through the selected vertex is shorter.
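
A minimal sketch of Dijkstra's algorithm (the greedy choice is the closest unvisited vertex; the adjacency-list representation and the sample graph are assumptions for the illustration):

import heapq

# Dijkstra's single-source shortest paths using a priority queue.
def dijkstra(adj, source):
    # adj: {vertex: [(neighbour, weight), ...]}; all weights must be non-negative
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                      # stale queue entry, skip it
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:              # a shorter path to v goes through u
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

adj = {'V0': [('A', 4), ('B', 1)], 'A': [('C', 1)],
       'B': [('A', 2), ('C', 5)], 'C': []}
print(dijkstra(adj, 'V0'))   # {'V0': 0, 'A': 3, 'B': 1, 'C': 4}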

Divide and conquer

In the divide and conquer approach, the problem at hand is divided into smaller sub-problems, and then each sub-problem is solved independently.  The solutions of all the sub-problems are finally merged in order to obtain the solution to the original problem.

The following computer algorithms are based on the divide-and-conquer approach –

 Binary Search

For a binary search to work, it is mandatory for the target array to be sorted.

First, we shall determine the middle of the array by using this formula − mid = (low + high) / 2

Compare x with the middle element.  If x matches with the middle element, we return the mid index.
 Else If x is greater than the mid element, then x can only lie in the right half subarray after the mid
element. So we recur for the right half.  Else (x is smaller) recur for the left half.
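
A minimal sketch of iterative binary search, assuming the array is sorted in ascending order:

# Binary search on a sorted list; returns the index of key, or -1 if it is absent.
def binary_search(arr, key):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2        # index of the middle element
        if arr[mid] == key:
            return mid
        elif key > arr[mid]:           # key can only lie in the right half
            low = mid + 1
        else:                          # key can only lie in the left half
            high = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # prints 3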

 Merge Sort

 Quick Sort

Quick sort is a highly efficient sorting algorithm based on partitioning an array of data into smaller arrays. A large array is partitioned into two arrays: one holds values smaller than a specified value, called the pivot, on which the partition is made, and the other holds values greater than the pivot value.
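
A minimal sketch of quick sort (choosing the last element as the pivot and using new lists for the partition are assumptions made for clarity, not taken from the notes):

# Quick sort: partition around a pivot, then sort each side recursively.
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]                                # last element chosen as the pivot
    smaller = [x for x in arr[:-1] if x <= pivot]  # values not greater than the pivot
    greater = [x for x in arr[:-1] if x > pivot]   # values greater than the pivot
    return quick_sort(smaller) + [pivot] + quick_sort(greater)

print(quick_sort([9, 3, 7, 1, 8, 2]))   # [1, 2, 3, 7, 8, 9]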

 Strassen's Matrix Multiplication

Both the greedy method and dynamic programming are used to solve optimization problems.

 Optimization problems are those which give a maximum or a minimum result.

 The two use different strategies to solve them.

 In the greedy method we follow a predefined procedure, e.g. Kruskal's algorithm.

 In dynamic programming we consider all possible solutions and pick the best one.

 So it is a more time-consuming process.

 Dynamic programming follows the principle of optimality.

 It says that a problem is solved by taking a sequence of decisions.

 In the greedy method a decision is taken once and then followed, whereas in dynamic programming the decision at one stage depends on the decisions made at other stages.

Dynamic programming is used for problems which can be divided into similar subproblems, so that their results can be re-used.

 This technique was invented by the U.S. mathematician Richard Bellman in the 1950s.

In the term dynamic programming, the word programming stands for planning; it does not mean computer programming.

 Dynamic programming is a technique for solving problems with overlapping subproblems.
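
As a brief illustration (not from the original notes), computing Fibonacci numbers has overlapping subproblems; storing each result the first time it is computed avoids recomputing it:

# Fibonacci with memoization: each subproblem fib(k) is computed only once.
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]                # re-use the stored result instead of recomputing
    if n <= 1:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(30))   # 832040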

Principle of optimality

 The dynamic programming algorithm obtains the solution using the principle of optimality.

 The principle of optimality states that “in an optimal sequence of decisions or choices, each subsequence must also be optimal”.

Applications of dynamic programming

 1) 0/1 knapsack problem (a dynamic-programming sketch follows after this list)
 2) Multistage graph
 3) Optimal binary search trees
 4) All-pairs shortest path problem
 5) Travelling salesman problem
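
A minimal sketch of the 0/1 knapsack problem solved with dynamic programming (the one-dimensional table layout is a standard choice, not taken from the notes; the item data is made up for illustration):

# 0/1 knapsack: dp[w] = best value achievable with total weight at most w.
def knapsack_01(items, capacity):
    # items: list of (value, weight) pairs; each item may be taken at most once
    dp = [0] * (capacity + 1)
    for value, weight in items:
        # iterate weights downwards so that each item is used at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

print(knapsack_01([(60, 10), (100, 20), (120, 30)], 50))   # 220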
