Alg Design Techniques

Algorithm design techniques are distinct approaches for creating algorithms that solve problems efficiently. This document outlines nine common techniques: sorting, greedy algorithms, backtracking, divide and conquer, brute force, recursive algorithms, searching, dynamic programming, and randomized algorithms. Each has specific use cases and examples, and understanding these techniques is essential for selecting the appropriate method based on the nature of the problem at hand.

What Is Algorithm Design?

An algorithm design technique is a distinct approach or mathematical method for creating algorithms and solving problems. While multiple algorithms can solve a given problem, not all of them solve it efficiently. We should therefore create algorithms using a design technique suited to the nature of the problem. An algorithm built with the right design technique can solve the problem far more efficiently in terms of the computational power required.
9 Algorithm Design Techniques to Get Started With
The nine most commonly used algorithm design techniques and
the nature of algorithms they help create are:
• Sorting: Arranging input in increasing or decreasing order
• Greedy: Selecting each part of a solution only because it is immediately beneficial
• Backtracking: Building candidate solutions step by step and backtracking when the current partial solution doesn't look promising
• Divide and conquer: Solving the problem by dividing it into sub-problems
• Brute Force: Generating all possible solutions and trying them one by one
• Recursive: Breaking the problem into smaller pieces, solving those, and using their solutions to solve the larger problem
• Searching: Looking for an element in a collection
• Dynamic Programming: Solving problems with overlapping sub-problems
• Randomized Algorithms: Using randomness to help solve a problem more efficiently
1. Sorting
Sorting algorithms accept a collection of elements as input and
sort the collection according to a particular characteristic. For
example, a collection of numbers can be sorted according to
their value or their difference from some other number.
Similarly, a collection of string values can be sorted based on
their lengths or the number of specific letters in them.
The sorting can be in an increasing or decreasing arrangement.
It can also be in a logical or lexicographical order. Ultimately,
the sorting algorithm returns the sorted arrangement of the
input collection.
Here are some of the most widely known sorting algorithms:
• Selection Sort
• Bubble Sort
• Insertion Sort
• Merge Sort
• Quick Sort
• Heap Sort
• Radix Sort
• Bucket Sort
• Comb Sort
Example
Let's look at Merge Sort. The algorithm sorts input in the
following way:
• Step 1 – Divides the input into halves
• Step 2 – Sorts each half
• Step 3 – Combines both halves in a sorted manner
In the second step, the algorithm calls itself to sort each half. It
keeps doing this until it reaches a single element. Then, it starts
returning smaller sorted collections and keeps combining them
to return the sorted input.
Merge Sort Algorithm
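As a minimal sketch of how these steps can be written in Python (one possible formulation, not the only one):

def merge_sort(items):
    # A collection of zero or one element is already sorted
    if len(items) <= 1:
        return items
    # Step 1 - divide the input into halves
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # Step 2 - sort each half recursively
    right = merge_sort(items[mid:])
    # Step 3 - combine both sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]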
2. Greedy Algorithm
Greedy algorithms build a solution piece by piece, always selecting the next piece that is immediately beneficial. At each step, the algorithm evaluates the available options and chooses the one that looks best at the moment. However, this approach isn't suitable in all situations: a greedy solution isn't necessarily the overall optimal solution, since the algorithm only moves from one locally best choice to the next, and there is no backtracking if it chooses the wrong option or step.
Example
Greedy algorithms are the best option for certain problems. A popular example of a greedy algorithm is sending information to the closest node in a network. Some other graph-based greedy algorithm examples are:
• Dijkstra's Algorithm
• Prim's and Kruskal's Algorithms
• Huffman Coding

When attempting to find the largest sum of numbers in a tree, a greedy algorithm (blue) lacks the foresight to pick a suboptimal node (green) in order to eventually find the optimal solution.
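To make the idea concrete, here is a small greedy sketch for making change with common coin denominations (the denominations and function name are illustrative assumptions; the greedy choice happens to be optimal for this particular coin system but not for every one):

def greedy_change(amount, denominations=(25, 10, 5, 1)):
    # At each step, pick the largest coin that still fits: the locally best choice
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]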
3. Backtracking
A backtracking algorithm builds candidate solutions step by step and evaluates whether the current partial solution can still lead to a valid result. If it can't, the algorithm backtracks and starts evaluating other options. Backtracking shares a common approach with the brute force design technique, but because it abandons unpromising candidates early, it is usually much faster than a brute-force algorithm.
There are different kinds of backtracking algorithms based on
the kind of problems they solve:
• Decision Problem – Find a feasible solution
• Optimization Problem – Find the best feasible solution
• Enumeration Problem – Find all feasible solutions
Example
Backtracking algorithms are well suited to problems where we may need to go back a few steps and make different decisions. For example, one of the most famous backtracking algorithm examples is solving crossword puzzles. Similarly, the eight queens puzzle also requires going back if the current partial solution can't be completed.

Backtracking Algorithm
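For instance, a compact backtracking sketch for the eight queens puzzle mentioned above could look like this (one of many possible formulations; the function name is illustrative):

def solve_queens(n, placed=()):
    # placed[i] is the column of the queen already placed in row i
    if len(placed) == n:
        return placed  # every row has a queen: a feasible solution
    row = len(placed)
    for col in range(n):
        # The new queen must not share a column or diagonal with any placed queen
        safe = all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(placed))
        if safe:
            result = solve_queens(n, placed + (col,))
            if result is not None:
                return result
        # Otherwise backtrack: abandon this column and try the next one
    return None

print(solve_queens(8))  # one valid placement, e.g. (0, 4, 7, 5, 2, 6, 1, 3)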
4. Divide and Conquer
A divide and conquer algorithm breaks a complex problem into smaller, easier sub-problems. It involves three major steps:
• Divide – Divide the problem into multiple sub-problems of the same nature
• Solve – Solve each resulting sub-problem
• Combine – Combine the solutions to the sub-problems to get the solution to the original problem
A divide and conquer algorithm handles each sub-problem independently. Such algorithms work very well for problems like efficiently sorting a collection of elements.
Example
Thanks to their simple approach, it isn't hard to understand
divide and conquer algorithms. There are many divide and
conquer algorithm examples in the real world. For example,
take the common problem of looking for a lost item in a huge
space. It is easier to divide the space into smaller sections and
search in each separately.

Divide and Conquer Algorithm
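As a small illustration of the divide, solve, and combine steps, here is a sketch that computes x raised to the power n by halving the exponent (a classic divide and conquer pattern; the function name is illustrative):

def power(x, n):
    # Divide: a problem of size n becomes one sub-problem of size n // 2
    if n == 0:
        return 1
    half = power(x, n // 2)  # Solve the smaller sub-problem
    # Combine: square the sub-result, multiplying by x once more if n is odd
    if n % 2 == 0:
        return half * half
    return half * half * x

print(power(2, 10))  # 1024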


5. Brute Force
A brute force algorithm uses the most straightforward way of reaching a problem's solution: keep trying until you find the right one. One example of a brute force approach is trying every key on a ring until one opens the lock.
Such algorithms generate all possible solutions from the input and try each one to solve the problem. In principle, brute force and backtracking use the same approach; the only difference is that the latter backtracks as soon as it finds a partial solution unsuitable.
Example
Cracking the password of an application is a popular brute force algorithm example. Given unlimited retries, the only way is to try every possible password combination until we find the right one. Another example is finding the shortest route that visits multiple locations by checking every possible ordering. Such examples show that brute force algorithms rely on having plenty of computational power.

Brute Force Algorithm for password cracking
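A toy sketch of the password example might look like this (the alphabet, maximum length, and function name are illustrative assumptions; a real password search space is vastly larger):

from itertools import product

def brute_force_password(target, alphabet="abc", max_length=4):
    # Generate every possible combination of characters, shortest first,
    # and try each one until it matches the target
    for length in range(1, max_length + 1):
        for attempt in product(alphabet, repeat=length):
            guess = "".join(attempt)
            if guess == target:
                return guess
    return None

print(brute_force_password("cab"))  # 'cab'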


6. Recursive Algorithm
Recursive algorithms solve a problem by first breaking it down into smaller instances of the same problem. The algorithm solves a smaller instance and then uses that result to solve the bigger problem it branched off from, repeating this until the original problem is solved.
Recursive algorithms are easy to understand but have pitfalls such as infinite recursion and high computational cost. Some types of recursive algorithms are:
• Direct Recursion
• Indirect Recursion
• Tail Recursion
• Non-tail Recursion
Example
One of the most famous recursive algorithm examples is generating the Fibonacci sequence. The sequence starts from 0 and 1, and each subsequent number is the sum of the previous two. The recursive algorithm for the n-th Fibonacci number calls itself twice to find the (n-1)-th and (n-2)-th Fibonacci numbers and adds them. Here, it solves the smaller problems (finding the (n-1)-th and (n-2)-th numbers) and uses them to solve the main problem, as sketched below. The most common implementation of Merge Sort also uses recursion: it recursively sorts the two halves of the input and combines them.

Recursive Algorithm
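A bare-bones recursive version of the Fibonacci example could be written as follows (no caching of sub-results, so it recomputes the same values many times and is slow for large n):

def fib(n):
    # Base cases: the sequence starts from 0 and 1
    if n < 2:
        return n
    # The algorithm calls itself twice for the (n-1)-th and (n-2)-th numbers
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]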
7. Searching
A searching algorithm retrieves information about an element's
existence in a collection. Here are different types of searching
algorithms based on their approach:
• Linear Search: Checks each element in the collection one by one
• Binary Search: Repeatedly halves a sorted collection, keeping the half that could contain the element
• Hashing: Looks the element up using a hash value obtained through a hashing algorithm
Example
There are different search algorithms, each searching for the
element in a certain data structure. For example, some popular
searching algorithms for graphs are:
• Breadth-first Search
• Depth-first Search
• A* Search
Similarly, hash-based searching uses unique values called hash
values generated by a hashing algorithm. Linear and binary
search are the common options for searching an element in a
collection.

Binary Search Algorithm
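As a sketch, an iterative binary search over a sorted list might look like this:

def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid        # found: return the element's position
        if sorted_items[mid] < target:
            low = mid + 1     # keep searching the latter half
        else:
            high = mid - 1    # keep searching the first half
    return -1                 # the element is not in the collection

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3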


8. Dynamic Programming
Dynamic programming is a class of algorithms that solve problems with overlapping sub-problems, so it is well suited to problems where the same sub-problems would otherwise be solved repeatedly. A dynamic programming algorithm optimizes the solution by storing the answers to sub-problems in a table and retrieving them when needed.
Example
The problem of generating a Fibonacci sequence is one of the popular dynamic programming algorithm examples, because the same sub-problems come up repeatedly. For example, to find the 5th number we must already have found all the ones before it, and those stored answers make finding the 6th number straightforward.
def FibSequence(n):
    fib = {}
    # Calculating the Fibonacci sequence bottom-up, storing each
    # sub-problem's answer in the table so later steps can reuse it
    fib[0] = 0
    fib[1] = 1
    for i in range(2, n):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib

print(FibSequence(10))
{0: 0, 1: 1, 2: 1, 3: 2, 4: 3, 5: 5, 6: 8, 7: 13, 8: 21, 9: 34}
9. Randomized Algorithm
Randomized algorithms use random numbers as part of their logic to decide what to do next. A randomized algorithm can help speed up an otherwise brute-force approach and improve expected efficiency, even though a single run isn't guaranteed to make the overall optimal choices.
Example
One of the most popular randomized algorithm examples is Quicksort. The algorithm partitions the input around a randomly chosen pivot so that all elements to the left of the pivot are smaller and all elements to the right are greater, then sorts each part. Choosing the pivot at random helps keep Quicksort's expected time complexity low regardless of the input order.
Quicksort Algorithm with Pivot points marked in orange
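A short sketch of Quicksort with a randomly chosen pivot (one common way to write it, favoring clarity over in-place partitioning):

import random

def quicksort(items):
    if len(items) <= 1:
        return items
    # Choose the pivot at random so no particular input order is consistently bad
    pivot = random.choice(items)
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    # Elements left of the pivot are smaller, elements right of it are greater
    return quicksort(smaller) + equal + quicksort(greater)

print(quicksort([8, 3, 5, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]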
