Basic Two

An algorithm is a set of steps to solve a problem. Key aspects include time complexity, which is how long an algorithm takes, and space complexity, which is how much memory it uses. Asymptotic notation like Big O describes an algorithm's running time. Common algorithms include greedy algorithms like Kruskal's algorithm that pursue local optima, divide and conquer algorithms like merge sort that break problems into subproblems, and dynamic programming algorithms like Fibonacci number computation that store the results of subproblems.


Algorithms

An algorithm is an unambiguous specification of how to solve a class of
problems. The performance of an algorithm is measured on the basis of the
following properties:

 Time Complexity - The total amount of time an algorithm requires to
complete its execution.
 Space Complexity - The amount of memory an algorithm uses to execute
completely.

Asymptotic notation is used to describe the running time of an algorithm - how
much time an algorithm takes with a given input size, n.

There are three different notations:

 big O - an upper bound on the running time; commonly used for the worst case.
 big Theta (Θ) - a tight bound; the running time is bounded above and below by the same rate of growth.
 big Omega (Ω) - a lower bound on the running time; commonly used for the best case.

Following is a list of some common asymptotic notations −

constant − Ο(1)

logarithmic − Ο(log n)

linear − Ο(n)

quadratic − Ο(n²)

cubic − Ο(n³)

polynomial − n^Ο(1)

exponential − 2^Ο(n)
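To get a feel for how differently these classes grow, a small sketch (the function table below is illustrative, not part of any library) that evaluates each growth rate at sample input sizes:

```python
import math

# Illustrative growth functions for the notations listed above.
growth = {
    "constant":    lambda n: 1,
    "logarithmic": lambda n: math.log2(n),
    "linear":      lambda n: n,
    "quadratic":   lambda n: n ** 2,
    "cubic":       lambda n: n ** 3,
    "exponential": lambda n: 2 ** n,
}

for n in (8, 16, 32):
    counts = {name: f(n) for name, f in growth.items()}
    print(n, counts)
```

Even at n = 32, the exponential entry already dominates every polynomial one, which is why asymptotic class matters more than constant factors for large inputs.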
How to Write an Algorithm?

There are no well-defined standards for writing algorithms. Rather, writing an
algorithm is problem- and resource-dependent. Algorithms are never written to
support a particular programming language.

All programming languages share basic code constructs like loops (do, for,
while) and flow control (if-else). These common constructs can be used to
write an algorithm.

Problem − Design an algorithm to add two numbers and display the result.

Algorithms tell the programmers how to code the program. Alternatively, the
algorithm can be written as −

In the design and analysis of algorithms, the step-by-step written form is
usually used to describe an algorithm. It makes it easy for the analyst to
analyze the algorithm while ignoring all unwanted definitions; the analyst can
observe what operations are being used and how the process flows. Writing
step numbers is optional.

We design an algorithm to get a solution to a given problem. A problem can be
solved in more than one way.

Algorithm Analysis

The efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation. They are the following −

 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency
is measured by assuming that all other factors, for example processor
speed, are constant and have no effect on the implementation.

 A Posteriori Analysis − This is an empirical analysis of an algorithm. The
selected algorithm is implemented in a programming language and
executed on a target machine. In this analysis, actual statistics like
running time and space required are collected.
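As a small illustration of a posteriori analysis, one might time an implementation directly (a sketch; the measured time depends entirely on the machine it runs on):

```python
import time

def linear_sum(values):
    # O(n): touches each element exactly once.
    total = 0
    for v in values:
        total += v
    return total

data = list(range(100_000))
start = time.perf_counter()
result = linear_sum(data)
elapsed = time.perf_counter() - start
print(f"result={result}, elapsed={elapsed:.6f}s")
```

The a priori view says only "the time grows linearly in n"; the a posteriori view yields an actual number of seconds on this hardware.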
Greedy Algorithms

Greedy algorithms try to find a localized optimum solution, which may
eventually lead to a globally optimized solution. In general, however, greedy
algorithms do not guarantee globally optimized solutions.

To solve a problem based on the greedy approach, there are two stages −

 Feasible solution
 Optimization

Examples of Greedy Algorithms

 Kruskal’s Minimum Spanning Tree (MST)
 Prim’s Minimum Spanning Tree
 Dijkstra’s Shortest Path
 Huffman Coding
 Knapsack Problem
 Travelling Salesman Problem
 Job Scheduling Problem
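As a concrete illustration of the greedy approach, a sketch of the fractional knapsack problem (the item values and weights below are made up; the greedy choice of best value-per-weight ratio happens to be globally optimal for the fractional variant, though not for 0/1 knapsack):

```python
def fractional_knapsack(items, capacity):
    """Greedy: take items in decreasing value-per-weight order.

    items: list of (value, weight) pairs; fractions of an item are allowed.
    Returns the maximum total value that fits in `capacity`.
    """
    # Greedy choice: best value-to-weight ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50))  # → 240.0
```

Each step commits to the locally best remaining item and never reconsiders it, which is exactly the greedy pattern described above.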
Divide and Conquer

In the divide and conquer approach, the problem at hand is divided into smaller
sub-problems, and then each sub-problem is solved independently. If we keep
dividing the sub-problems into even smaller sub-problems, we eventually reach
a stage where no more division is possible. Those "atomic", smallest possible
sub-problems are solved. The solutions of all sub-problems are finally merged
to obtain the solution of the original problem.

Examples of Divide and Conquer Algorithms

 Merge Sort
 Quick Sort
 Binary Search
 Strassen's Matrix Multiplication
 Closest pair (points)
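Merge sort is the canonical example: divide the list in half, conquer each half independently, then combine the sorted halves. A minimal sketch:

```python
def merge_sort(a):
    # Divide: an "atomic" list of 0 or 1 elements is already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # conquer each half independently
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```

Note that the two recursive calls never share work: each half is an independent sub-problem, which is what distinguishes divide and conquer from dynamic programming below.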
Dynamic Programming

The dynamic programming approach is similar to divide and conquer in breaking
down the problem into smaller and smaller sub-problems. But unlike divide and
conquer, these sub-problems are not solved independently. Rather, the results
of these smaller sub-problems are remembered and reused for similar or
overlapping sub-problems.

Dynamic programming is used for problems that can be divided into similar
sub-problems, so that their results can be reused. Mostly, these algorithms
are used for optimization. Before solving the sub-problem at hand, a dynamic
algorithm examines the results of previously solved sub-problems. The
solutions of the sub-problems are combined to achieve the best solution.

So we can say that −

 The problem should be divisible into smaller, overlapping sub-problems.
 An optimum solution can be achieved by using optimum solutions of
smaller sub-problems.
 Dynamic algorithms use memoization.

Examples of Dynamic Algorithms

 Fibonacci number series
 0/1 Knapsack problem
 Tower of Hanoi
 All-pairs shortest paths (Floyd-Warshall)
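The Fibonacci series illustrates memoization directly: naive recursion re-solves the same overlapping sub-problems exponentially often, while remembering each result makes the computation linear. A minimal sketch using Python's standard-library cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # The overlapping sub-problems fib(n-1) and fib(n-2) are computed
    # once and remembered (memoized) rather than re-solved each time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(10)])  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Without the cache, fib(40) makes over a billion recursive calls; with it, each value from 0 to 40 is computed exactly once.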
