
CMP 452 - DESIGN & ANALYSIS OF ALGORITHMS (3 UNITS)

 LECTURE NOTE

 M. O. ODIM (PhD)

 DEPARTMENT OF COMPUTER SCIENCE

 REDEEMER’S UNIVERSITY, EDE
Acknowledgement
 I acknowledge all materials across the globe, too numerous
to list, which have been very useful in preparing this lecture
note.

Course Contents
 Analysis of algorithms using Big-oh concepts
 Computational complexity theory: P, NP, NP-hard, NP-complete problems
 Cryptographic algorithms
 Advanced recursion theory
 Analysis of various forms of algorithms
 Halstead measure, lines-of-code, cyclomatic complexity, etc.

Lecture Hours/Grading Policy
 Lecture hours
 Mondays, 9:00-10:00 AM, Venue: LR3
 Tuesdays, 10:00am-12:00 Noon, Venue: LR5
 Attendance: 5%
 Mid Semester Tests: 15%
 Assignments, mini project, practical, term paper, review articles,
etc.: 20%
 End of Semester Exam: 60%
 NOTE: 80% Attendance is mandatory for sitting for the exam
Recommended Texts
 Textbook
 Introduction to Design & Analysis of Algorithms
Anany Levitin, 3rd Ed., Pearson, 2012

 Others
 Computer Algorithms: Introduction to Design & Analysis, 3rd Ed., Sara Baase,
Allen Van Gelder, Addison-Wesley, 2000.
 Algorithms, Richard Johnsonbaugh, Marcus Schaefer, Prentice Hall, 2004.

Course Objectives
 This course introduces students to the analysis and
design of computer algorithms. On completion of this
course, students should be able to:
 Analyze the asymptotic performance of algorithms.
 Demonstrate a familiarity with major algorithms and data
structures.
 Apply important algorithmic design paradigms and methods of
analysis.

What is an Algorithm?
 Algorithm
 is any well-defined computational procedure that takes some
value, or set of values, as input and produces some value, or set
of values, as output.
 is thus a sequence of computational steps that transform the
input into the output.
 is a tool for solving a well-specified computational problem.
 Any special method of solving a certain kind of problem
(Webster Dictionary)
 a step-by-step procedure, which defines a set of instructions to
be executed in a certain order to get the desired output.

What is an algorithm?
 Recipe, process, method, technique, procedure, routine, …
with the following requirements:
1. Finiteness: terminates after a finite number of steps
2. Definiteness: rigorously and unambiguously specified
3. Clearly specified input: valid inputs are clearly specified
4. Clearly specified/expected output: can be proved to produce the correct output given a valid input
5. Effectiveness: steps are sufficiently simple and basic
What is an algorithm?
(working definition)
An algorithm is a sequence of unambiguous instructions for solving
a problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time.

        problem
           |
           v
        algorithm
           |
           v
input --> "computer" --> output
Algorithm
 An algorithm is a sequence of unambiguous instructions for
solving a problem, i.e., for obtaining a required output for
any legitimate input in a finite amount of time.

• Can be represented in various forms
• Unambiguity/clearness
• Effectiveness
• The range of inputs for which an algorithm works has to be
specified carefully
• Several algorithms for solving the same problem may exist
• Finiteness/termination
• Correctness
Characteristics of an Algorithm
 Unambiguous − Algorithm should be clear and unambiguous. Each of
its steps (or phases), and their inputs/outputs should be clear and must
lead to only one meaning.
 Input − An algorithm should have 0 or more well-defined inputs.
 Output − An algorithm should have 1 or more well-defined outputs,
and should match the desired output.
 Finiteness − Algorithms must terminate after a finite number of steps.
 Feasibility − Should be feasible with the available resources.
 Independent − An algorithm should have step-by-step directions,
which should be independent of any programming code.
What is a program?
 A program is the expression of an algorithm in a
programming language
 a set of instructions which the computer will follow to solve
a problem
 Algorithms are generally created independent of underlying
languages, i.e. an algorithm can be implemented in more than one
programming language.

Why study algorithms?
 Theoretical importance
 the core of computer science
 Practical importance
 A practitioner’s toolkit of known algorithms
 Framework for designing and analyzing algorithms for new
problems

Example: Google’s PageRank Technology

Basic Issues Related to Algorithms
 How to design algorithms
 Various design techniques exist
 How to express algorithms
 Natural language, such as English-like expression
 Flowchart
 Pseudocode
 Proving correctness
 Mathematical induction
 Could be very difficult for approximate algorithms
 Efficiency (or complexity) analysis
 Theoretical (Mathematical) analysis
 Empirical analysis
 Algorithm Visualization
 Optimality
 Determining a feasible algorithmic solution
 Obtaining a global solution

Algorithm design strategies
(techniques)

 Brute force
 Divide and conquer
 Decrease and conquer
 Transform and conquer
 Space and time tradeoffs
 Greedy approach
 Dynamic programming
 Backtracking and branch-and-bound
Forms of expressing algorithms
 Natural (Human written) Language,
 e.g., English Like expression
 Flowchart:
 a collection of connected geometric shapes containing
description of the algorithms steps
 Pseudocode:
 A mixture of a natural language and programming language-like
constructs.



How to Write an Algorithm?
 No well-defined standards for writing algorithms. Rather, it is
problem and resource dependent.
 Never written to support a particular programming code.
 All programming languages share basic code constructs like loops
(do, for, while), flow-control (if-else), etc.
 These common constructs can be used to write an algorithm.
 Algorithms are typically written in a step-by-step manner, but this is
not always the case.
 Algorithm writing is a process and is executed after the problem
domain is well-defined. That is, we should know the problem
domain, for which we are designing a solution.
Example: Problem − Design an algorithm to add two
numbers and display the result.

Natural language:
 step 1 − START
 step 2 − declare three integers a, b & c
 step 3 − define values of a & b
 step 4 − add values of a & b
 step 5 − store output of step 4 to c
 step 6 − print c
 step 7 − STOP

Alternative: Pseudocode
 step 1 − START ADD
 step 2 − get values of a & b
 step 3 − c ← a + b
 step 4 − display c
 step 5 − STOP

Note: Writing step numbers is optional.
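For comparison, the same algorithm written as a program; a minimal Python sketch (the function name add is illustrative, not part of the note):

```python
def add(a, b):
    c = a + b    # step 3: c <- a + b
    return c     # step 4: display c (returned to the caller here)

print(add(2, 3))  # 5
```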
Flowchart
 A flowchart is a diagrammatic representation of an algorithm.
 Flowcharts are very helpful in writing programs and explaining
programs to others (but may be too complex).
 Symbols used in flowcharts are shown in the accompanying
figure (not reproduced here).
Draw a flowchart to add two numbers entered by the user.
Draw a flowchart to find the largest among three different numbers
entered by the user.
Draw a flowchart to find all the roots of a quadratic equation
ax² + bx + c = 0.
Fundamentals of Algorithmic Problem Solving: a procedural
solution to a problem

Figure 2: Algorithm design and analysis process.


Important problem types
 sorting

 searching

 string processing

 graph problems

 combinatorial problems

 geometric problems

 numerical problems
Sorting (I)
 Rearrange the items of a given list in ascending order.
 Input: A sequence of n numbers <a1, a2, …, an>
 Output: A reordering <a´1, a´2, …, a´n> of the input sequence such that
a´1 ≤ a´2 ≤ … ≤ a´n.
 Why sorting?
 Help searching
 Algorithms often use sorting as a key subroutine.
 Sorting key
 A specially chosen piece of information used to guide sorting. E.g., sort student records
by names.

Sorting (II)
 Examples of sorting algorithms
 Selection sort
 Bubble sort
 Insertion sort
 Merge sort
 Heap sort …
 Evaluate sorting algorithm complexity: the number of key comparisons.
 Two properties
 Stability: A sorting algorithm is called stable if it preserves the relative order of any two equal
elements in its input.
 In place: A sorting algorithm is in place if it does not require extra memory, except, possibly,
for a few memory units.

Selection Sort

Algorithm SelectionSort(A[0..n-1])
//The algorithm sorts a given array by selection sort
//Input: An array A[0..n-1] of orderable elements
//Output: Array A[0..n-1] sorted in ascending order
for i ← 0 to n − 2 do
    min ← i
    for j ← i + 1 to n − 1 do
        if A[j] < A[min]
            min ← j
    swap A[i] and A[min]
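A runnable Python rendering of this pseudocode, as a sketch (the name selection_sort is assumed):

```python
def selection_sort(a):
    """Sorts the list a in ascending order, in place, by selection sort."""
    n = len(a)
    for i in range(n - 1):             # i = 0 .. n-2
        min_idx = i
        for j in range(i + 1, n):      # j = i+1 .. n-1
            if a[j] < a[min_idx]:      # basic operation: key comparison
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]   # swap A[i] and A[min]
    return a

print(selection_sort([89, 45, 68, 90, 29, 34, 17]))
```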

Searching
 Find a given value, called a search key, in a given set.
 Examples of searching algorithms
 Sequential search
 Binary search …
Input: sorted array a_i < … < a_j and key x;
m ← ⌊(i+j)/2⌋;
while i < j and x ≠ a_m do
    if x < a_m then j ← m−1
    else i ← m+1;
if x = a_m then output a_m;

Time: O(log n)
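A runnable Python sketch of binary search on a 0-indexed sorted list; it is restructured slightly from the pseudocode above so that it also reports an unsuccessful search (names are illustrative):

```python
def binary_search(a, key):
    """Returns an index of key in the sorted list a, or -1 if key is absent."""
    i, j = 0, len(a) - 1
    while i <= j:
        m = (i + j) // 2          # middle position
        if key == a[m]:
            return m
        elif key < a[m]:
            j = m - 1             # continue in the left half
        else:
            i = m + 1             # continue in the right half
    return -1

print(binary_search([2, 5, 8, 12, 17, 23], 17))   # 4
```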

String Processing
 A string is a sequence of characters from an alphabet.
 Text strings: letters, numbers, and special characters.
 String matching: searching for a given word/pattern in a
text.
Examples:
(i) searching for a word or phrase on WWW or in a Word document
(ii) searching for a short read in the reference genomic sequence

Graph Problems
 Informal definition
 A graph is a collection of points called vertices, some of which are
connected by line segments called edges.
 Modeling real-life problems
 Modeling WWW
 Communication networks
 Project scheduling …
 Examples of graph algorithms
 Graph traversal algorithms
 Shortest-path algorithms
 Topological sorting

Fundamental data structures

 list
  array
  linked list
  string
 stack
 queue
 priority queue/heap
 graph
 tree and binary tree
 set and dictionary
Linear Data Structures
 Arrays
  A sequence of n items of the same data type that are stored
contiguously in computer memory and made accessible by
specifying a value of the array's index.
  fixed length (needs preliminary reservation of memory)
  contiguous memory locations
  direct access
  insert/delete
 Linked Lists
  A sequence of zero or more nodes, each containing two kinds of
information: some data and one or more links called pointers to
other nodes of the linked list.
  Singly linked list (next pointer)
  Doubly linked list (next + previous pointers)
  dynamic length
  arbitrary memory locations
  access by following links
  insert/delete
Stacks and Queues

 Stacks
 A stack of plates
 insertion/deletion can be done only at the top.
 LIFO
 Two operations (push and pop)
 Queues
 A queue of customers waiting for services
 Insertion/enqueue from the rear and deletion/dequeue from the front.
 FIFO
 Two operations (enqueue and dequeue)
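A minimal Python sketch of the two operation pairs, using a built-in list as the stack and collections.deque as the queue:

```python
from collections import deque

stack = []
stack.append(1)            # push
stack.append(2)            # push
top = stack.pop()          # pop -> 2 (LIFO: last in, first out)

queue = deque()
queue.append('a')          # enqueue at the rear
queue.append('b')          # enqueue at the rear
front = queue.popleft()    # dequeue from the front -> 'a' (FIFO)
```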

Priority Queue and Heap

 Priority queues (implemented using heaps)
  A data structure for maintaining a set of elements, each associated
with a key/priority, with the following operations:
   Finding the element with the highest priority
   Deleting the element with the highest priority
   Inserting a new element
  Example application: scheduling jobs on a shared computer

Heap example: the complete binary tree with root 9, children 6 and 8,
and leaves 5, 2, 3 is stored level by level in the array 9 6 8 5 2 3.
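A minimal Python sketch of the three operations, using the standard heapq module; heapq maintains a min-heap, so keys are negated to mimic the max-heap shown above:

```python
import heapq

heap = []
for key in [9, 6, 8, 5, 2, 3]:
    heapq.heappush(heap, -key)      # inserting a new element

highest = -heap[0]                  # finding the highest priority -> 9
removed = -heapq.heappop(heap)      # deleting the highest priority -> 9
```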
Graphs
 Formal definition
 A graph G = <V, E> is defined by a pair of two sets: a finite set
V of items called vertices and a set E of vertex pairs called
edges.
 Undirected and directed graphs (digraphs).
 What’s the maximum number of edges in an undirected
graph with |V| vertices?
 Complete, dense, and sparse graphs
 A graph with every pair of its vertices connected by an edge is
called complete, K|V|
[Figure: an example graph on the vertices 1, 2, 3, 4; not reproduced.]
Graph Representation
 Adjacency matrix
 n x n boolean matrix if |V| is n.
 The element on the ith row and jth column is 1 if there’s an
edge from ith vertex to the jth vertex; otherwise 0.
 The adjacency matrix of an undirected graph is symmetric.
 Adjacency linked lists
 A collection of linked lists, one for each vertex, that contain all
the vertices adjacent to the list’s vertex.
 Which data structure would you use if the graph is a 100-node star shape?
Example (digraph on vertices 1, 2, 3, 4 with edges 1→2, 1→3, 1→4, 2→4, 3→4):

Adjacency matrix:      Adjacency lists:
0 1 1 1                1 → 2, 3, 4
0 0 0 1                2 → 4
0 0 0 1                3 → 4
0 0 0 0                4 →
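A minimal Python sketch of both representations of this digraph (variable names are illustrative; vertices 1..4 are stored as indices 0..3 in the matrix):

```python
# Adjacency matrix: row i, column j is 1 iff there is an edge i -> j.
adj_matrix = [[0, 1, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]]

# Adjacency lists: one list of neighbours per vertex.
adj_lists = {1: [2, 3, 4], 2: [4], 3: [4], 4: []}

print(adj_matrix[0][3] == 1)   # edge 1 -> 4 present? True (O(1) lookup)
print(4 in adj_lists[2])       # True (O(degree) lookup)
```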
Weighted Graphs
 Weighted graphs
 Graphs or digraphs with numbers assigned to the edges.

[Figure: an example weighted graph on the vertices 1, 2, 3, 4 with edge weights 5, 6, 7, 8, 9; not reproduced.]
Graph Properties -- Paths and Connectivity
 Paths
 A path from vertex u to v of a graph G is defined as a sequence of
adjacent (connected by an edge) vertices that starts with u and ends
with v.
 Simple paths: all vertices of the path are distinct.
 Path lengths: the number of edges, or the number of vertices – 1.
 Connected graphs
 A graph is said to be connected if for every pair of its vertices u and v
there is a path from u to v.
 Connected component
A maximal connected subgraph of a given graph.

Graph Properties -- Acyclicity

 Cycle
  A simple path of a positive length that starts and ends at the same
vertex.
 Acyclic graph
  A graph without cycles
 DAG (Directed Acyclic Graph)
Trees
 Trees
  A tree (or free tree) is a connected acyclic graph.
  Forest: a graph that has no cycles but is not necessarily connected.
 Properties of trees
  |E| = |V| − 1
  For every two vertices in a tree there always exists exactly one simple
path from one of these vertices to the other. Why?
  Rooted trees: the above property makes it possible to select an arbitrary
vertex in a free tree and consider it as the root of the so-called rooted tree.
  Levels in a rooted tree.
Rooted Trees (I)
 Ancestors
 For any vertex v in a tree T, all the vertices on the simple path
from the root to that vertex are called ancestors.
 Descendants
 All the vertices for which a vertex v is an ancestor are said to be
descendants of v.
 Parent, child and siblings
 If (u, v) is the last edge of the simple path from the root to vertex v,
u is said to be the parent of v and v is called a child of u.
 Vertices that have the same parent are called siblings.
 Leaves
 A vertex without children is called a leaf.
 Subtree
 A vertex v with all its descendants is called the subtree of T rooted
at v.
Ordered Trees
 Ordered trees
 An ordered tree is a rooted tree in which all the children of each vertex are
ordered.
 Binary trees
  A binary tree is an ordered tree in which every vertex has no more than
two children and each child is designated as either a left child or a right
child of its parent.
 Binary search trees
  Each vertex is assigned a number.
  A number assigned to each parental vertex is larger than all the numbers in
its left subtree and smaller than all the numbers in its right subtree.
  ⌊log2 n⌋ ≤ h ≤ n − 1, where h is the height of a binary tree and n the size.

[Figure: a binary tree and the corresponding binary search tree on the keys 2, 3, 5, 6, 8, 9; not reproduced.]
CMP 452 - DESIGN & ANALYSIS OF ALGORITHMS (3 UNITS)

 LECTURE NOTE II
 Fundamentals of the Analysis of Algorithm Efficiency

 M. O. ODIM

 BASED ON DESIGN AND ANALYSIS OF ALGORITHMS BY ANANY LEVITIN
Fundamentals of the Analysis of Algorithm Efficiency
 The American Heritage Dictionary:
 “analysis” is “the separation of an intellectual or substantial
whole into its constituent parts for individual study.”

 Analysis Issues/Concerns:
 How good is the algorithm?
 Correctness
 time efficiency
 space efficiency
 Does there exist a better algorithm?
 Lower bounds
 Optimality
The Analysis Framework
 Gives a general framework for analyzing the efficiency of algorithms.

 two kinds of efficiency:


 time efficiency and space efficiency. Time efficiency, also called
time complexity, indicates how fast an algorithm in question
runs.
 Space efficiency, also called space complexity, is concerned with the
amount of memory units required by the algorithm in addition to
the space needed for its input and output.
 In the early days of electronic computing, both resources, time
and space, were at a premium.
Time Complexity
 Today, technological innovations have improved the
computer’s speed and memory size by many orders of
magnitude.
 Now the amount of extra space required by an algorithm is
typically not of as much concern, with the caution that there
is still, of course, a difference between the fast main memory,
the slower secondary memory, and the cache.
 the research experience has shown that for most problems,
we can achieve much more spectacular progress in speed
than in space.
 Therefore, we primarily concentrate on time efficiency, but
the analytical framework introduced here is applicable to
analyzing space efficiency as well.
Analysis of algorithms
 Approaches:
 Theoretical analysis
 For Mathematical Analysis (Non recursive & Recursive )
Algorithms
 Empirical analysis
 Algorithm Visualization
What do we analyze about them?
 Correctness
 Does the input/output relation match algorithm requirement?
 Amount of work done (aka complexity)
  Basic operations needed to do the task in a finite amount of time
 Amount of space used
 Memory used



Which algorithm is better?
The algorithms are correct, but
which is the best?
 Measure the running time
(number of operations needed).
 Measure the amount of
memory used.
 Note that the running time of
the algorithms increases as the
size of the input increases.
Measuring an Input’s Size
 Obvious Observation
 Almost all algorithms run longer on larger inputs.
 E.g. , it takes longer to sort larger arrays, multiply larger matrices,
and so on.
 Therefore, an algorithm’s efficiency is investigated as a
function of some parameter n indicating the algorithm’s input
size
 Simply put, the larger the input size, the longer it takes the
algorithm to run, all things being equal.
 However, there are situations where the choice of a parameter
indicating an input size does matter.
 E.g. computing the product of two n × n matrices
 The choice of an appropriate size metric can be influenced by
operations of the algorithm in question.
Units for Measuring Running Time
 use some standard unit of time measurement—a second, or
millisecond, and so on—to measure the running time of a program
implementing the algorithm.
 Drawbacks to such an approach
 dependence on the speed of a particular computer,
 dependence on the quality of a program implementing the algorithm and of
the compiler used in generating the machine code,
 and the difficulty of clocking the actual running time of the program
 Need a metric that does not depend on these extraneous factors.
 A possible approach: count the number of times each of the algorithm’s
operations is executed.
 This approach is both excessively difficult and usually unnecessary.
 What to do: identify the most important operation of the algorithm,
called the basic operation, the operation contributing the most to the total
running time, and compute the number of times the basic operation is
executed.
Identifying the basic operation of an algorithm
 usually the most time-consuming operation in the algorithm’s innermost loop
 E.g. For most sorting algorithms: the basic operation is a key comparison of
element being sorted.
 As another example, algorithms for mathematical problems typically involve
some or all of the four arithmetical operations:
 addition,
 subtraction,
 multiplication, and
 division.
 Of the four, the most time-consuming operation is division, followed by
multiplication and then addition and subtraction, with the last two usually
considered together.
 Thus, the established framework for the analysis of an algorithm’s time efficiency
suggests measuring it by counting the number of times the algorithm’s basic
operation is executed on inputs of size n.
Definition
 Let cop be the execution time of an algorithm's basic operation on a
particular computer, and let C(n) be the number of times this operation
needs to be executed for this algorithm.
 Then we can estimate the running time T(n) of a program implementing this
algorithm on that computer by the formula
T(n) ≈ cop C(n).
 Drawbacks
  the count C(n) does not contain any information about operations that are
not basic
  the count itself is often computed only approximately
  the constant cop is also an approximation whose reliability is not always easy
to assess.
 Nevertheless, the formula can give a reasonable estimate of the
algorithm’s running time.
 It also makes it possible to answer such questions as “How much faster
would this algorithm run on a machine that is x times faster than the one
we have?”
Consider the following example: assuming that C(n) = ½n(n − 1), how much
longer will the algorithm run if we double its input size?

T(2n)/T(n) ≈ [cop · ½(2n)(2n − 1)] / [cop · ½n(n − 1)] ≈ (2n)²/n² = 4.

Note
▪ the question was answered without actually knowing the value of cop: it
was neatly cancelled out in the ratio.
▪ Also note that ½, the multiplicative constant in the formula for the count
C(n), was also cancelled out.
▪ Thus, the efficiency analysis framework ignores multiplicative constants
and concentrates on the count's order of growth to within a constant
multiple for large-size inputs.
Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size.

 Basic operation: the operation that contributes the most towards
the running time of the algorithm.

T(n) ≈ cop C(n)

where T(n) is the running time, cop is the execution time (cost) of the
basic operation, and C(n) is the number of times the basic operation
is executed for input size n.

Note: Different basic operations may cost differently!
Input size and basic operation examples

Problem: Searching for a key in a list of n items
  Input size measure: number of the list's items, i.e. n
  Basic operation: key comparison

Problem: Multiplication of two matrices
  Input size measure: matrix dimensions or total number of elements
  Basic operation: multiplication of two numbers

Problem: Checking primality of a given integer n
  Input size measure: n's size = number of digits (in binary representation)
  Basic operation: division

Problem: Typical graph problem
  Input size measure: #vertices and/or edges
  Basic operation: visiting a vertex or traversing an edge
Best-case, average-case, worst-case

For some algorithms, efficiency depends on form of input:

 Worst case: Cworst(n) – maximum over inputs of size n

 Best case: Cbest(n) – minimum over inputs of size n

 Average case: Cavg(n) – “average” over inputs of size n


 Number of times the basic operation will be executed on typical input
 NOT the average of worst and best case
 Expected number of basic operations considered as a random variable
under some assumption about the probability distribution of all possible
inputs. So, avg = expected under uniform distribution.
Example: Sequential search

 Worst case: n key comparisons

 Best case: 1 comparison

 Average case: (n+1)/2 comparisons, assuming K is in A
Determining the worst case
 The worst-case efficiency of an algorithm is its efficiency for the worst-case
input of size n, which is an input (or inputs) of size n for which the
algorithm runs the longest among all possible inputs of that size.
 To determine the worst case:
 analyze the algorithm to see what kind of inputs yield the largest
value of the basic operation’s count C(n) among all possible inputs of
size n and then compute this worst-case value Cworst(n).
 The worst-case analysis provides very important information about
an algorithm’s efficiency by bounding its running time from above.
i.e.,
 it guarantees that for any instance of size n, the running time will not
exceed Cworst(n), its running time on the worst-case inputs.
Determining the best case
 The best case does not mean the smallest input; it means the input
of size n for which the algorithm runs the fastest.
 To determine this case:
 First, we determine the kind of inputs for which the count C(n)
will be the smallest among all possible inputs of size n.
 Then ascertain the value of C(n) on these most convenient inputs.
 For example, the best-case inputs for sequential search are lists of
size n with their first element equal to a search key; accordingly, Cbest(n) =
1 for this algorithm.
 The analysis of the best-case efficiency is not nearly as important
as that of the worst-case efficiency.
 Note that neither the worst-case analysis nor its best-case
counterpart yields the necessary information about an algorithm’s
behaviour on a “typical” or “random” input. This is the information
that the average-case efficiency seeks to provide.
 To analyze the algorithm's average-case efficiency, we must make
some assumptions about possible inputs of size n.
 Consider again sequential search.
 The standard assumptions:
 (a) the probability of a successful search is equal to p (0 ≤ p ≤ 1)
 (b) the probability of the first match occurring in the ith position of the
list is the same for every i.
 we therefore can find the average number of key comparisons
Cavg(n) as follows.
 In the case of a successful search, the probability of the first match
occurring in the ith position of the list is p/n for every i,
 the number of comparisons made by the algorithm in such a
situation is obviously i.
 In the case of an unsuccessful search, the number of comparisons
will be n with the probability of such a search being (1− p).
 Recall
 E(X) = Σ xi p(xi) = x1p(x1) + x2p(x2) + ... + xnp(xn), for i = 1 to n
 Thus,
Cavg(n) = [1 · p/n + 2 · p/n + ... + i · p/n + ... + n · p/n] + n(1 − p)
        = (p/n)(1 + 2 + ... + n) + n(1 − p)
        = (p/n) · n(n + 1)/2 + n(1 − p)
        = p(n + 1)/2 + n(1 − p).
If p = 1 (the search must be successful), the average number of key
comparisons made by sequential search is (n + 1)/2; that is, the algorithm will
inspect, on average, about half of the list's elements.
If p = 0 (the search must be unsuccessful), the average number of key
comparisons will be n because the algorithm will inspect all n elements on all
such inputs.
Types of formulas for basic operation's count

 Exact formula
   e.g., C(n) = n(n−1)/2

 Formula indicating order of growth with specific
multiplicative constant
   e.g., C(n) ≈ 0.5n²

 Formula indicating order of growth with unknown
multiplicative constant
   e.g., C(n) ≈ cn²
Orders of Growth
 The efficiency analysis framework concentrates on the
count’s order of growth to within a constant multiple for large-
size inputs.
 Why this emphasis on the count’s order of growth for large
input sizes?
 A difference in running times on small inputs is not what
really distinguishes efficient algorithms from inefficient ones.
 For large values of n, it is the function’s order of growth that
counts:
 Table 2.1 contains values of a few functions particularly
important for analysis of algorithms.

Values of some important functions as n → ∞
[Table 2.1: values of log2 n, n, n log2 n, n², n³, 2ⁿ, and n! for selected n; not reproduced.]
Order of growth
 The magnitude of the numbers in Table 2.1 has a profound
significance for the analysis of algorithms.
 The function growing the slowest among these is the
logarithmic function.
 On the other end of the spectrum are the exponential function 2ⁿ
and the factorial function n!
 Both these functions grow so fast that their values become astronomically
large even for rather small values of n.
 There is a tremendous difference between the orders of growth of
the functions 2ⁿ and n!, yet both are often referred to as "exponential-
growth functions" (or simply "exponential") despite the fact that,
strictly speaking, only the former should be referred to as such.
Order of growth
 Most important: Order of growth within a constant multiple as
n→∞

 Example:
 How much faster will algorithm run on computer that is twice as fast?

 How much longer does it take to solve problem of double input size?
Asymptotic Notations and Basic Efficiency Classes
➢ Computer scientists use three notations:
 O (big oh),
 Ω(big omega), and
 Θ (big theta)
 to compare and rank the order of growth of an algorithm’s basic
operation count as the principal indicator of the algorithm’s efficiency.
 A way of comparing functions that ignores constant factors and small
input sizes.
 O(g(n)): class of functions t(n) that grow no faster than g(n)
 Θ(g(n)): class of functions t(n) that grow at same rate as g(n)
 Ω(g(n)): class of functions t(n) that grow at least as fast as g(n)
[Figures illustrating the Big-oh, Big-omega, and Big-theta definitions; not reproduced.]
Establishing order of growth using the definition

O-notation
Formal Definition:
t(n) is in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by
some constant multiple of g(n) for all large n, i.e., if the order of growth
of t(n) ≤ order of growth of g(n) (within a constant multiple).
This means there exist a positive constant c and a non-negative integer n0
such that
t(n) ≤ c g(n) for every n ≥ n0
Examples:
 10n is in O(n²)
 5n + 20 is in O(n)
 100n + 5 ∈ O(n²)
O-notation
 formal proof of one of the assertions: 100n + 5 ∈ O(n²). Indeed,

 100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n².

 Thus, the values of the constants c and n0 required by the definition
are 101 and 5, respectively.
 Note that the definition gives us a lot of freedom in choosing
specific values for the constants c and n0. For example, we could also
reason that

 100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n ≤ 105n², to complete the
proof with c = 105 and n0 = 1.
Ω-notation
 Formal definition
  A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if
t(n) is bounded below by some constant multiple of g(n) for all
large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≥ c g(n) for all n ≥ n0

 Exercises: prove the following using the above definition
  10n² ∈ Ω(n²)
  0.3n² − 2n ∈ Ω(n²)
  0.1n³ ∈ Ω(n²)
Θ-notation

 Formal definition
  A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if
t(n) is bounded both above and below by some positive constant
multiples of g(n) for all large n, i.e., if there exist some positive
constants c1 and c2 and some nonnegative integer n0 such that
c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0
 Exercises: prove the following using the above definition
  10n² ∈ Θ(n²)
  0.3n² − 2n ∈ Θ(n²)
  (1/2)n(n+1) ∈ Θ(n²)
≥ : Ω(g(n)), functions that grow at least as fast as g(n)

= : Θ(g(n)), functions that grow at the same rate as g(n)

≤ : O(g(n)), functions that grow no faster than g(n)
Some properties of asymptotic order of growth

 f(n) ∈ O(f(n))

 f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))

 If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n))

Note the similarity with a ≤ b

 If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then
f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})

Also, Σ 1≤i≤n Θ(f(i)) = Θ( Σ 1≤i≤n f(i) )
Theorem
 If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
 The analogous assertions are true for the Ω-notation and Θ-notation.

 Implication: The algorithm's overall efficiency will be determined by the part with a
larger order of growth, i.e., its least efficient part.
 For example, 5n² + 3n log n ∈ O(n²)
Proof. There exist constants c1, c2, n1, n2 such that
t1(n) ≤ c1·g1(n), for all n ≥ n1
t2(n) ≤ c2·g2(n), for all n ≥ n2
Define c3 = c1 + c2 and n3 = max{n1, n2}. Then
t1(n) + t2(n) ≤ c3·max{g1(n), g2(n)}, for all n ≥ n3
Using Limits for Comparing Orders of Growth
 The asymptotic definitions are rarely applied directly in
practice for comparing the orders of growth of two functions.
 A more convenient approach is computing the
limit of the ratio of the functions.
 The three fundamental cases are shown below.
Establishing order of growth using limits

lim n→∞ T(n)/g(n) =
  0     : order of growth of T(n) < order of growth of g(n)
  c > 0 : order of growth of T(n) = order of growth of g(n)
  ∞     : order of growth of T(n) > order of growth of g(n)

Examples:
• 10n vs. n²

• n(n+1)/2 vs. n²
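Working the two examples with this rule (a sketch of the limit computations):

```latex
\lim_{n\to\infty}\frac{10n}{n^2}
  = \lim_{n\to\infty}\frac{10}{n} = 0
  \;\Rightarrow\; 10n \text{ has a smaller order of growth than } n^2.

\lim_{n\to\infty}\frac{n(n+1)/2}{n^2}
  = \lim_{n\to\infty}\frac{n^2+n}{2n^2} = \frac{1}{2} > 0
  \;\Rightarrow\; n(n+1)/2 \in \Theta(n^2).
```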
L'Hôpital's rule and Stirling's formula

L'Hôpital's rule: If lim n→∞ f(n) = lim n→∞ g(n) = ∞ and
the derivatives f′, g′ exist, then

lim n→∞ f(n)/g(n) = lim n→∞ f′(n)/g′(n)

Example: log n vs. n

Stirling's formula: n! ≈ (2πn)^(1/2) (n/e)ⁿ

Example: 2ⁿ vs. n!
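Working the second example with Stirling's formula (a sketch):

```latex
\frac{2^n}{n!} \approx \frac{2^n}{\sqrt{2\pi n}\,(n/e)^n}
  = \frac{1}{\sqrt{2\pi n}}\left(\frac{2e}{n}\right)^{n}
  \xrightarrow{\;n\to\infty\;} 0
  \;\Rightarrow\; 2^n \text{ has a smaller order of growth than } n!.
```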
Orders of growth of some important functions

 All logarithmic functions loga n belong to the same class
Θ(log n) no matter what the logarithm's base a > 1 is,
because
loga n = logb n / logb a
 All polynomials of the same degree k belong to the same class:
ak nᵏ + ak−1 nᵏ⁻¹ + … + a0 ∈ Θ(nᵏ)

 Exponential functions aⁿ have different orders of growth for different a's

 order log n < order nᵅ (α > 0) < order aⁿ < order n! < order nⁿ
Basic asymptotic efficiency classes
1 constant

log n logarithmic

n linear

n log n n-log-n

n² quadratic

n³ cubic

2ⁿ exponential

n! factorial
Mathematical Analysis of the Time
Efficiency of Nonrecursive Algorithms
 General Plan:
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the
innermost loop.)
3. Check whether the number of times the basic operation is executed depends
only on the size of an input. If it also depends on some additional property, the
worst-case, average-case, and, if necessary, best-case efficiencies have to be
investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is
executed.
5. Using standard formulas and rules of sum manipulation, either find a closed
form formula for the count or, at the very least, establish its order of growth.

Example 1: Maximum element

C(n) = Σ i=1..n−1 1 = n − 1 ∈ Θ(n) comparisons


Setting up a sum expressing the number of times the algorithm’s basic
operation is executed.

 Let C(n) be the number of times this comparison is
executed;
 now find a formula expressing it as a function of size n.
 The algorithm makes one comparison on each execution of
the loop, which is repeated for each value of the loop's
variable i within the bounds 1 and n − 1, inclusive.
 Therefore, the sum for C(n):
C(n) = Σ i=1..n−1 1    (1)

 Manipulating the sum form:
 The sum is nothing other than 1 repeated n − 1 times. Thus

C(n) = Σ i=1..n−1 1 = n − 1    (2)
Example 2: Element uniqueness problem

C(n) = Σ i=0..n−2 ( Σ j=i+1..n−1 1 )
     = Σ i=0..n−2 (n − i − 1) = n(n − 1)/2
     ∈ Θ(n²) comparisons
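A minimal Python sketch of the brute-force algorithm being analyzed (the name unique_elements is assumed):

```python
def unique_elements(a):
    """Returns True iff all elements of the list a are distinct."""
    n = len(a)
    for i in range(n - 1):             # i = 0 .. n-2
        for j in range(i + 1, n):      # j = i+1 .. n-1
            if a[i] == a[j]:           # basic operation: element comparison
                return False
    return True

print(unique_elements([1, 2, 3, 3]))   # False
```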
Analysis
 natural measure of the input’s size is again n, the number of elements
in the array.
 innermost loop contains a single operation (the comparison of two
elements), we should consider it as the algorithm’s basic operation.
 Note, however, that the number of element comparisons depends
not only on n but also
 on whether there are equal elements in the array and, if there are,
which array positions they occupy.
 We limit our investigation to the worst case only. Two kinds of
inputs yield it:
  arrays with no equal elements and
  arrays in which the last two elements are the only pair of equal
elements.
Cworst(n) = Σ i=0..n−2 ( Σ j=i+1..n−1 1 ) = Σ i=0..n−2 [(n − 1) − (i + 1) + 1] = Σ i=0..n−2 (n − 1 − i)

          = Σ i=0..n−2 (n − 1) − Σ i=0..n−2 i = (n − 1)² − (n − 2)(n − 1)/2

          = n(n − 1)/2 ∈ Θ(n²)    (3)

Alternatively,

Σ i=0..n−2 (n − 1 − i) = (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2    (4)

where the last equality is obtained by applying the formula for the sum of the first n − 1 positive integers.
Example 3: Matrix multiplication

M(n) = Σ i=0..n−1 ( Σ j=0..n−1 n ) = Σ i=0..n−1 n² = n³
     ∈ Θ(n³) multiplications
 We measure an input's size by the matrix order n.
 There are two arithmetical operations in the innermost loop here: multiplication and
addition.
 Actually, we do not have to choose between them, because on each repetition
of the innermost loop each of the two is executed exactly once.
 But consider multiplication as the basic operation.
 Set up a sum for the total number of multiplications M(n) executed by the
algorithm.
 Since this count depends only on the size of the input matrices, we do not
have to investigate the worst-case, average-case, and best-case efficiencies
separately.
 There is just one multiplication executed on each repetition of the algorithm's
innermost loop, which is governed by the variable k ranging from the lower
bound 0 to the upper bound n − 1.
 Therefore, the number of multiplications made for every pair of
specific values of variables i and j is Σ k=0..n−1 1 = n.
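A minimal Python sketch of the definition-based algorithm being analyzed, with one multiplication per repetition of the innermost loop:

```python
def matrix_multiply(A, B):
    """Computes C = A * B for n x n matrices: C[i][j] = sum_k A[i][k] * B[k][j]."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):                 # innermost loop: k = 0 .. n-1
                C[i][j] += A[i][k] * B[k][j]   # one multiplication (and one addition)
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```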
Example 4: Gaussian elimination
Algorithm GaussianElimination(A[0..n-1, 0..n])
//Implements Gaussian elimination on an n-by-(n+1) matrix A
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        for k ← i to n do
            A[j,k] ← A[j,k] − A[i,k] × A[j,i] / A[i,i]

Find the efficiency class and a constant-factor improvement.

for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        B ← A[j,i] / A[i,i]
        for k ← i to n do
            A[j,k] ← A[j,k] − A[i,k] × B
Example 5: Counting binary digits

It cannot be investigated the way the previous examples were.

The halving game: find the smallest integer i such that n/2ⁱ ≤ 1.
Answer: i = ⌈log2 n⌉. So T(n) ∈ Θ(log n) divisions.
Another solution: using recurrence relations.
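A minimal Python sketch of the halving algorithm (binary_digits is an assumed name):

```python
def binary_digits(n):
    """Counts the binary digits of a positive integer n by repeated halving."""
    count = 1
    while n > 1:
        count += 1
        n //= 2        # one division per iteration: about log2(n) in total
    return count

print(binary_digits(8))   # 4, since 8 is 1000 in binary
```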
Some Setbacks of the general plan
 Do not get the erroneous impression that the plan outlined above
always succeeds in analyzing a nonrecursive algorithm.
  An irregular change in a loop variable,
  a sum too complicated to analyze, and
  the difficulties intrinsic to average-case analysis
 are just some of the obstacles that can prove to be
insurmountable.
 These notwithstanding, the plan does work for many simple
nonrecursive algorithms.
Exercises
 Review the analysis of matrix multiplication in the main text
 Practice the exercises
Plan for Analysis of Recursive Algorithms

 Decide on a parameter indicating an input’s size.

 Identify the algorithm’s basic operation.

 Check whether the number of times the basic op. is executed may
vary on different inputs of the same size. (If it may, the worst,
average, and best cases must be investigated separately.)

 Set up a recurrence relation with an appropriate initial condition


expressing the number of times the basic op. is executed.

 Solve the recurrence (or, at the very least, establish its solution’s
order of growth) by backward substitutions or another method.
 Example 1: Compute the factorial function F(n) = n! for an
arbitrary nonnegative integer n.

Solution:
By definition,
n! = 1 · … · (n − 1) · n = (n − 1)! · n for n ≥ 1, and
0! = 1.
Therefore, the function computing n! can be expressed as
F(n) = F(n − 1) · n
with the following recursive algorithm:
 ALGORITHM F(n)
 //Computes n! recursively
 //Input: A nonnegative integer n
 //Output: The value of n!
 if n = 0 return 1
 else return F(n − 1) ∗ n
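A minimal Python sketch of this algorithm with a multiplication counter added, so the relation M(n) = n can be checked empirically (the global counter is purely illustrative):

```python
mult_count = 0   # counts executions of the basic operation

def F(n):
    """Computes n! recursively, counting multiplications."""
    global mult_count
    if n == 0:
        return 1
    mult_count += 1              # one multiplication per recursive step
    return F(n - 1) * n

print(F(5), mult_count)          # 120 5, confirming M(5) = 5
```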

 n itself is an indicator of this algorithm’s input size
 The basic operation of the algorithm is multiplication
 Let M(n) denote the number of its executions
 F(n) is computed according to the formula
 F(n) = F(n − 1) . n for n > 0,
 the number of multiplications M(n) needed to compute it must
satisfy the equality
 M(n) = M(n − 1) + 1 for n > 0.
 M(n − 1) {to compute F(n−1) } + 1 ( to multiply F(n−1) by n)
 M(n − 1) multiplications are spent to compute F(n − 1), and one
more multiplication is needed to multiply the result by n.
 M(n) is defined implicitly as a function of its value at another point,
namely n − 1.
 Such equations are called recurrence relations or, recurrences (a very
brief tutorial is provided in Appendix B of the main text.)
 The task ahead:
 to solve the recurrence relation
 M(n) = M(n − 1) + 1
 i.e., to find an explicit formula for M(n) in terms of n only.
 To determine a solution uniquely, an initial condition is required
that tells us the value with which the sequence starts.
 The initial condition can be obtained by inspecting the condition
that makes the algorithm stop its recursive calls:
 if n = 0 return 1.
 the calls stop when n = 0 (no multiplication is performed),
and hence M(0) is 0.
 Therefore, the initial condition we are after is
 M(0) = 0.
 the calls stop when n = 0 and no multiplication performed
when n = 0
 Thus, the recurrence relation and initial condition for the
algorithm’s number of multiplications M(n):
 M(n) = M(n − 1) + 1 for n > 0,
 M(0) = 0.
 We now solve the recurrence relations.
 method of backward substitutions.

 M(n) = M(n − 1) + 1 substitute M(n − 1) = M(n − 2) + 1
 = [M(n − 2) + 1]+ 1= M(n − 2) + 2 substitute M(n − 2) = M(n − 3) + 1
 = [M(n − 3) + 1]+ 2 = M(n − 3) + 3.
 After inspecting the first three lines, we see an emerging pattern, which makes
it possible to predict not only the next line but also a general
formula for the pattern:
 M(n) = M(n − i) + i
 Taking advantage of the initial condition, which is given for n = 0, substitute i = n in the
pattern:
 M(n) = M(n − 1) + 1 = . . . = M(n − i) + i = . . . = M(n − n) + n = n.
Example 2: The Tower of Hanoi Puzzle

Recurrence for the number of moves:

M(n) = 2M(n-1) + 1
Solving recurrence for number of moves

M(n) = 2M(n-1) + 1, M(1) = 1


M(n) = 2M(n-1) + 1
= 2(2M(n-2) + 1) + 1 = 2^2*M(n-2) + 2^1 + 2^0
= 2^2*(2M(n-3) + 1) + 2^1 + 2^0
= 2^3*M(n-3) + 2^2 + 2^1 + 2^0
=…
= 2^(n-1)*M(1) + 2^(n-2) + … + 2^1 + 2^0
= 2^(n-1) + 2^(n-2) + … + 2^1 + 2^0
= 2^n -1
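A minimal Python sketch of the standard recursive solution with a move counter, so the total 2^n − 1 can be checked (names are illustrative):

```python
moves = 0

def hanoi(n, source, target, aux):
    """Moves n disks from peg source to peg target using peg aux."""
    global moves
    if n == 0:
        return
    hanoi(n - 1, source, aux, target)   # M(n-1) moves
    moves += 1                          # move the largest disk: +1
    hanoi(n - 1, aux, target, source)   # M(n-1) moves again

hanoi(3, 'A', 'C', 'B')
print(moves)   # 7 = 2**3 - 1
```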
Tree of calls for the Tower of Hanoi Puzzle

n-1 n-1

n-2 n-2 n-2 n-2


... ... ...
2 2 2 2

1 1 1 1 1 1 1 1
Example 3: Counting #bits

A(n) = A(⌊n/2⌋) + 1, A(1) = 0

For n = 2^k (using the Smoothness Rule):
A(2^k) = A(2^(k-1)) + 1, A(2^0) = 0
       = (A(2^(k-2)) + 1) + 1 = A(2^(k-2)) + 2
       = A(2^(k-i)) + i
       = A(2^(k-k)) + k = k = log2 n
Smoothness Rule
 Let f(n) be a nonnegative function defined on the set of natural
numbers. f(n) is called smooth if it is eventually nondecreasing and
f(2n) ∈ Θ(f(n)).
 Functions that do not grow too fast, including log n, n, n log n, and n^α
where α ≥ 0, are smooth.
 Smoothness rule
Let T(n) be an eventually nondecreasing function and f(n) be a
smooth function. If
T(n) ∈ Θ(f(n)) for values of n that are powers of b,
where b ≥ 2, then
T(n) ∈ Θ(f(n)) for any n.
Fibonacci numbers
The Fibonacci numbers:
0, 1, 1, 2, 3, 5, 8, 13, 21, …

The Fibonacci recurrence:


F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1

General 2nd order linear homogeneous recurrence with


constant coefficients:
aX(n) + bX(n-1) + cX(n-2) = 0
Solving aX(n) + bX(n-1) + cX(n-2) = 0

 Set up the characteristic equation (quadratic)

ar² + br + c = 0

 Solve to obtain roots r1 and r2

 General solution to the recurrence

if r1 and r2 are two distinct real roots: X(n) = α r1ⁿ + β r2ⁿ
if r1 = r2 = r (a double real root): X(n) = α rⁿ + β n rⁿ

 Particular solution can be found by using initial conditions


Application to the Fibonacci numbers

F(n) = F(n-1) + F(n-2) or F(n) - F(n-1) - F(n-2) = 0

Characteristic equation: r² - r - 1 = 0

Roots of the characteristic equation:

r1,2 = (1 ± √5) / 2

General solution to the recurrence:

α r1ⁿ + β r2ⁿ

Particular solution for F(0) = 0, F(1) = 1:

α + β = 0
α r1 + β r2 = 1
Computing Fibonacci numbers
1. Definition-based recursive algorithm

2. Nonrecursive definition-based algorithm

3. Explicit formula algorithm

4. Logarithmic algorithm based on formula:


[ F(n-1)  F(n)   ]   [ 0  1 ]^n
[ F(n)    F(n+1) ] = [ 1  1 ]

for n ≥ 1, assuming an efficient way of computing matrix powers.
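A minimal Python sketch of the logarithmic algorithm, computing the matrix power by repeated squaring (mat_mult and fib are assumed names):

```python
def mat_mult(X, Y):
    """Product of two 2x2 matrices."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def fib(n):
    """F(n) via repeated squaring of [[0,1],[1,1]]: Theta(log n) matrix products."""
    result = [[1, 0], [0, 1]]          # 2x2 identity
    base = [[0, 1], [1, 1]]
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n //= 2
    return result[0][1]                # the F(n) entry of the power

print([fib(i) for i in range(8)])      # [0, 1, 1, 2, 3, 5, 8, 13]
```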
Important Recurrence Types
 Decrease-by-one recurrences
 A decrease-by-one algorithm solves a problem by exploiting a relationship between a
given instance of size n and a smaller size n – 1.
 Example: n!
 The recurrence equation for investigating the time efficiency of such algorithms typically
has the form
T(n) = T(n-1) + f(n)
 Decrease-by-a-constant-factor recurrences
 A decrease-by-a-constant-factor algorithm solves a problem by dividing its given instance
of size n into several smaller instances of size n/b, solving each of them recursively, and
then, if necessary, combining the solutions to the smaller instances into a solution to the
given instance.
 Example: binary search.
 The recurrence equation for investigating the time efficiency of such algorithms typically
has the form
T(n) = aT(n/b) + f (n)
Decrease-by-one Recurrences

 One (constant-time) operation reduces problem size by one:
T(n) = T(n-1) + c,  T(1) = d
Solution: T(n) = (n-1)c + d  (linear)

 A pass through the input reduces problem size by one:
T(n) = T(n-1) + cn,  T(1) = d
Solution: T(n) = [n(n+1)/2 - 1]c + d  (quadratic)
Decrease-by-a-constant-factor recurrences –
The Master Theorem
T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(nᵏ), k ≥ 0

1. a < bᵏ : T(n) ∈ Θ(nᵏ)
2. a = bᵏ : T(n) ∈ Θ(nᵏ log n)
3. a > bᵏ : T(n) ∈ Θ(n^(log_b a))

 Examples:
  T(n) = T(n/2) + 1   →  Θ(log n)
  T(n) = 2T(n/2) + n  →  Θ(n log n)
  T(n) = 3T(n/2) + n  →  Θ(n^(log2 3))
  T(n) = T(n/2) + n   →  Θ(n)
Empirical Analysis of Algorithms
 General Plan for the Empirical Analysis of Algorithm Time Efficiency
1. Understand the experiment’s purpose.
2. Decide on the efficiency metric M to be measured and the
measurement unit (an operation count vs. a time unit).
3. Decide on characteristics of the input sample (its range, size, and so
on).
4. Prepare a program implementing the algorithm (or algorithms) for the
experimentation.
5. Generate a sample of inputs.
6. Run the algorithm (or algorithms) on the sample’s inputs and record
the data observed.
7. Analyze the data obtained.

Purpose
 Different purposes or goals:
 checking the accuracy of a theoretical assertion about the
algorithm’s efficiency,
 comparing the efficiency of several algorithms for solving the
same problem or different implementations of the same
algorithm
 developing a hypothesis about the algorithm’s efficiency class,
and ascertaining the efficiency of the program implementing the
algorithm on a particular machine.
 the goal of the experiment should influence, if not dictate,
how the algorithm’s efficiency is to be measured.

Measuring the algorithm’s efficiency
 insert a counter (or counters) into a program
implementing the algorithm to count the number of
times the algorithm’s basic operation is executed.
 to time the program implementing the algorithm in
question.
 measure the running time of a code fragment by asking for the
system time right before the fragment's start (tstart) and
just after its completion (tfinish), and then computing the
difference between the two (tfinish − tstart).
several facts to keep in mind
 First, a system’s time is typically not very accurate, and you might
get somewhat different results on repeated runs of the same
program on the same inputs.
 An obvious remedy is to make several such measurements and then
take their average (or the median) as the sample’s observation point.
 Second, given the high speed of modern computers, the running
time may fail to register at all and be reported as zero.
 The standard trick to overcome this obstacle is to run the program in
an extra loop many times, measure the total running time, and then
divide it by the number of the loop’s repetitions.
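A minimal Python sketch of this timing recipe, using time.perf_counter as the system clock (measure is an illustrative name):

```python
import time

def measure(algorithm, data, repetitions=1000):
    """Runs algorithm(data) in an extra loop so the total time registers,
    then divides by the number of repetitions (the trick described above)."""
    t_start = time.perf_counter()
    for _ in range(repetitions):
        algorithm(data)
    t_finish = time.perf_counter()
    return (t_finish - t_start) / repetitions

print(measure(sorted, list(range(5000, 0, -1))))
```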

Deciding on a sample of inputs for the experiment
 Typically, you will have to make decisions about the sample
size (it is sensible to start with a relatively small sample and
increase it later if necessary),
 the range of instance sizes (typically neither trivially small
nor excessively large),
 and a procedure for generating instances in the range chosen.
 The instance sizes can either adhere to some pattern (e.g.,
1000, 2000, 3000, . . . , 10,000 or 500, 1000, 2000, 4000, .
. . , 128,000) or be generated randomly within the range
chosen.

 Much more often than not, an empirical analysis requires
generating random numbers. Even if you decide to use a pattern
for input sizes, you will typically want instances themselves
generated randomly.
 Generating random numbers on a digital computer is known to
present a difficult problem because, in principle,
 the problem can be solved only approximately. This is the reason
computer scientists prefer to call such numbers pseudorandom.
 As a practical matter, the easiest and most natural way of getting
such numbers is to take advantage of a random number generator
available in computer language libraries.

 Alternatively, you can implement one of several known
algorithms for generating (pseudo)random numbers.
 The most widely used and thoroughly studied of such
algorithms is the linear congruential method.

 ALGORITHM Random(n, m, seed, a, b)
 //Generates a sequence of n pseudorandom numbers according to the
linear congruential method
 //Input: A positive integer n and positive integer parameters m, seed, a, b
 //Output: A sequence r1, . . . , rn of n pseudorandom integers uniformly
distributed among integer values between 0 and m − 1
 //Note: Pseudorandom numbers between 0 and 1 can be obtained
by treating the integers generated as digits after the decimal point
 r0 ← seed
 for i ← 1 to n do
   ri ← (a ∗ ri−1 + b) mod m
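A minimal Python sketch of the same generator; the parameter values in the demo call are one common textbook choice, shown purely for illustration:

```python
def random_lcg(n, m, seed, a, b):
    """Yields n pseudorandom integers in [0, m-1] by the linear congruential method."""
    r = seed
    for _ in range(n):
        r = (a * r + b) % m    # r_i = (a * r_{i-1} + b) mod m
        yield r

print(list(random_lcg(5, m=2**31, seed=1, a=1103515245, b=12345)))
```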

Recording the Results
 Data can be presented numerically in a table or graphically in
a scatterplot.
 One of the possible applications of the empirical analysis is to
predict the algorithm’s performance on an instance not
included in the experiment sample.
 Mathematicians call such predictions extrapolation, as opposed
to interpolation, which deals with values within the sample
range.

Strengths and Weaknesses of Mathematical and Empirical
Analysis
 The principal strength of the mathematical analysis is its
independence of specific inputs; its principal weakness is its
limited applicability, especially for investigating the average-
case efficiency.
 The principal strength of the empirical analysis lies in its
applicability to any algorithm, but its results can depend on
the particular sample of instances and the computer used in
the experiment.

Mini Project
Conduct an empirical analysis to compare the run-time
efficiency of the following algorithms:
a. Linear and Binary Search
b. Bubble Sort and Quick Sort
c. Selection and Merge Sort
d. Conventional and Strassen's Matrix Multiplication
Instructions and Guide
 Apply the general plan
 Generate pseudorandom numbers and vary the input sizes from 100 to
5000
 For the matrices, use a square matrix and vary the size of the
elements from 2 to 100
 Insert a counter for the number of basic operations in the program
 Implement in the same language, compiler and system for the
comparison
 Report your results in both tabulated and graphical forms
 Max of 4 per group; two groups are to work independently on each of the
problems
Format/Submission
 Format
 Abstract
 Introduction
 Related Work
 Methodology
 Result and Discussion
 Conclusion
 References
 Submission Date: 29th March 2016
 Presentation: Monday 4th April 2016

Algorithm Visualization
 images are used to convey some useful information
about algorithms.
 The information can be
  a visual illustration of an algorithm's operation,
  of its performance on different kinds of inputs, or
  of its execution speed versus that of other algorithms for
the same problem.
 an algorithm visualization uses graphic elements (points,
line segments, two- or three-dimensional bars, and so on)
to represent some "interesting events" in the
algorithm's operation
two principal variations of algorithm visualization

 Static algorithm visualization


 Dynamic algorithm visualization, also called
algorithm animation
 Static algorithm visualization shows an algorithm’s
progress through a series of still images.
 Algorithm animation, on the other hand, shows a
continuous, movie-like presentation of an algorithm’s
operations.
 Animation is a more sophisticated option, which, of
course, is much more difficult to implement.
two principal applications of algorithm
visualization
 research and education.
 Potential benefits for researchers are based on expectations that
algorithm visualization may help uncover some unknown
features of algorithms.
 The application of algorithm visualization to education seeks to
help students learning algorithms.
 although some successes in both research and education have
been reported in the literature, they are not as impressive as
one might expect. A deeper understanding of human
perception of images will be required before the true
potential of algorithm visualization is fulfilled.

139 Analysis of Algorithm Monday, March 9, 2020


Algorithm Design Techniques
 Brute force
 Decrease – and – Conquer
 Divide – and – Conquer
 Transform – and – Conquer
 Dynamic Programming

Brute force
 a straightforward approach to solving a problem, usually
directly based on the problem statement and definitions of
the concepts involved.
 The "force" is that of a computer and not that of one's
intellect. "Just do it!"
 Often, it is indeed the strategy that is easiest to apply.
 Examples: Selection Sort and Bubble Sort
Selection Sort
 scanning the entire given list to find its smallest element and
exchange it with the first element, putting the smallest
element in its final position in the sorted list.
 Then we scan the list, starting with the second element, to
find the smallest among the last n − 1 elements and exchange
it with the second element, putting the second smallest
element in its final position.
 Generally, on the ith pass through the list, which we number
from 0 to n − 2, the algorithm searches for the smallest item
among the last n − i elements and swaps it with A[i]:
 After n − 1 passes, the list is sorted.

 ALGORITHM SelectionSort(A[0..n − 1])


 //Sorts a given array by selection sort
 //Input: An array A[0..n − 1] of orderable elements
 //Output: Array A[0..n − 1] sorted in nondecreasing order
 for i ←0 to n − 2 do
 min←i
 for j ←i + 1 to n − 1 do
 if A[j ]<A[min] min←j
 swap A[i] and A[min]

An Example of the selection sort
 As an example, consider the action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17
(illustration not reproduced here).
 Each line of the illustration corresponds to one iteration of the algorithm, i.e., a pass through the
list's tail to the right of the vertical bar; an element in bold indicates the
smallest element found. Elements to the left of the vertical bar are in their final
positions and are not considered in this and subsequent iterations.
 Practice: Obtain the number of counts of the basic operation of this algorithm
and determine its order of growth.
Bubble Sort
 Another brute-force application to the sorting problem is to compare
adjacent elements of the list and exchange them if they are out of order.
 By doing it repeatedly, we end up “bubbling up” the largest element to
the last position on the list.
 The next pass bubbles up the second largest element, and so on, until
after n − 1 passes the list is sorted.
 Pass i (0 ≤ i ≤ n − 2) of bubble sort compares A[j] with A[j + 1] for
j = 0, . . . , n − 2 − i, swapping them whenever A[j] > A[j + 1].
Bubble Sort
 ALGORITHM BubbleSort(A[0..n − 1])
 //Sorts a given array by bubble sort
 //Input: An array A[0..n − 1] of orderable elements
 //Output: Array A[0..n − 1] sorted in nondecreasing order
 for i ←0 to n − 2 do
 for j ←0 to n − 2 − i do
 if A[j + 1]<A[j ] swap A[j ] and A[j + 1]

 The action of the algorithm on the list 89, 45, 68, 90, 29, 34,
17 is illustrated as an example



Bubble Sort

 First two passes of bubble sort on the list 89, 45, 68, 90, 29, 34, 17. A new line is shown
after a swap of two elements is done. The elements to the right of the vertical bar are in
their final positions and are not considered in subsequent iterations of the algorithm.
 Exercise: complete the n − 1 passes for sorting the list, obtain the number of key swaps,
and determine its order of growth. Also, practice the other numerous application
examples in the text.
Decrease-and-Conquer
 based on exploiting the relationship between a solution to a given
instance of a problem and a solution to its smaller instance.
 Once such a relationship is established, it can be exploited either top
down or bottom up.
 The former leads naturally to a recursive implementation, although an
ultimate implementation may well be nonrecursive. The bottom-up
variation is usually implemented iteratively, starting with a solution to
the smallest instance of the problem; it is sometimes called the
incremental approach.
 There are three major variations of decrease-and-conquer:
 decrease by a constant
 decrease by a constant factor
 variable size decrease



 In the decrease-by-a-constant variation, the size of an instance is
reduced by the same constant on each iteration of the
algorithm.
 Typically, this constant is equal to one, although other constant-size
reductions do happen occasionally.



 Thedecrease-by-a-constant-factor technique suggests reducing a
problem instance by the same constant factor on each
iteration of the algorithm.
 In most applications, this constant factor is equal to two; this is the
decrease-by-half idea (binary search, sketched below, is the classic example).
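
 Binary search, which these notes later name as the standard decrease-by-a-constant-factor example, discards half of the remaining range on each iteration. A minimal Python sketch (ours):

 def binary_search(a, key):
     # Returns an index of key in sorted list a, or -1 if key is absent.
     lo, hi = 0, len(a) - 1
     while lo <= hi:
         mid = (lo + hi) // 2
         if a[mid] == key:
             return mid
         elif a[mid] < key:       # discard the left half
             lo = mid + 1
         else:                    # discard the right half
             hi = mid - 1
     return -1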



Divide-and-Conquer
 Divide-and-conquer algorithms work according to the following general plan:
 1. A problem is divided into several subproblems of the same
type, ideally of about equal size.
 2. The subproblems are solved (typically recursively, though
sometimes a different algorithm is employed, especially when
subproblems become small enough).
 3. If necessary, the solutions to the subproblems are
combined to get a solution to the original problem.
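
 Mergesort is a classic instance of this plan; a short Python sketch (ours, assuming the list fits in memory):

 def merge_sort(a):
     # 1. Divide the list into two halves; 2. sort each half recursively;
     # 3. combine (merge) the two sorted halves.
     if len(a) <= 1:
         return a
     mid = len(a) // 2
     left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
     merged, i, j = [], 0, 0
     while i < len(left) and j < len(right):
         if left[i] <= right[j]:
             merged.append(left[i]); i += 1
         else:
             merged.append(right[j]); j += 1
     merged.extend(left[i:])
     merged.extend(right[j:])
     return merged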



 In the variable-size-decrease variety of decrease-and-conquer,
the size-reduction pattern varies from one iteration of an
algorithm to another.



Transform-and-Conquer
 a group of design methods that are based on the idea of
transformation, generally called transform-and-conquer because
these methods work as two-stage procedures.
I. First, in the transformation stage, the problem’s instance is
modified to be, for one reason or another, more amenable to
solution.
II. Then, in the second or conquering stage, it is solved.
 There are three major variations of this idea that differ by what we transform a
given instance to
I. Transformation to a simpler or more convenient instance of the
same problem— called instance simplification.
II. Transformation to a different representation of the same
instance— called representation change.
III. Transformation to an instance of a different problem for which an
algorithm is already available— called problem reduction.
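
 As a small illustration of instance simplification (our sketch, not from the slides): checking whether all elements of a list are distinct becomes a single linear scan once the list is sorted, because any equal elements must then be adjacent.

 def all_distinct(a):
     b = sorted(a)                    # transformation stage: simplify the instance
     for i in range(len(b) - 1):      # conquering stage: one pass suffices
         if b[i] == b[i + 1]:
             return False
     return True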



Computational Complexity
 Complexity theory seeks to classify problems according to their
computational complexity.
 The principal split is between tractable and intractable problems—
problems that can and cannot be solved in polynomial time,
respectively.
 Complexity theory concentrates on decision problems, which are
problems with yes/no answers. Computational complexity is an
extensive theory that seeks to classify problems according to
their inherent difficulty.
 And according to this theory, a problem’s intractability remains
the same for all principal models of computations and all
reasonable input-encoding schemes for the problem under
consideration.



ASSIGNMENT
 Present a discourse and submit a report on Cryptography
 Introduction
 Why cryptography
 Basic terminology
 Uses
 Algorithms (Discuss at least two algorithms)
 The principles
 Strengths
 Weaknesses
 Sample applications
 Conclusion
 References
 Mode of submission: Hard and soft copies
 Deadline 6th April 2018
Polynomially reducible Problem
 Polynomially reducible problem. A decision problem D1 is said to be polynomially reducible
to a decision problem D2 if there exists a function t that transforms instances of D1 to
instances of D2 such that:
I. t maps all yes instances of D1 to yes instances of D2 and all no instances of
D1 to no instances of D2;
II. t is computable by a polynomial-time algorithm.
 A nondeterministic algorithm is a two-stage procedure that takes as its input an instance I of a
decision problem and does the following:
I. Nondeterministic ("guessing") stage: an arbitrary string S is generated
that can be thought of as a candidate solution to the given instance I (but
may be complete gibberish as well).
II. Deterministic ("verification") stage: a deterministic algorithm takes
both I and S as its input and outputs yes if S represents a solution to
instance I. (If S is not a solution to instance I, the algorithm either returns
no or does not halt at all.)
P, NP and NP-Complete
 Class P is the class of decision problems that can be solved in
polynomial time by (deterministic) algorithms. This class of
problems is called polynomial.
 Class NP is the class of decision problems that can be solved
by nondeterministic polynomial algorithms. This class of
problems is called nondeterministic polynomial.
 A decision problem D is said to be NP-complete if
I. it belongs to class NP;
II. every problem in NP is polynomially reducible to D.



Space and Time Trade-Offs
 Things which matter most must never be at the mercy of things which
matter less.
 —Johann Wolfgang von Goethe (1749–1832)
 One technique that exploits a space-for-time trade-off simply uses
extra space to facilitate faster and/or more flexible access to the
data.
 We call this approach prestructuring. This name highlights two facets of
this variation of the space-for-time trade-off: some processing is
done before a problem in question is actually solved but, unlike
the input-enhancement variety, it deals with access structuring.
 Examples: hashing (Section 7.3) and indexing with B-trees
(Section 7.4)



Space and Time Trade-Offs

 Two more comments about the interplay between time and space in algorithm design need to be made.
 First, the two resources—time and space—do not have to
compete with each other in all design situations. In fact, they
can align to bring an algorithmic solution that minimizes
both the running time and the space consumed. Such a
situation arises, in particular, when an algorithm uses a
space-efficient data structure to represent a problem's input,
which leads, in turn, to a faster algorithm.



Space and Time Trade-Offs
 Another algorithm design technique related to the space-for-
time trade-off idea: dynamic programming.
 This strategy is based on recording solutions to overlapping
subproblems of a given problem in a table from which a
solution to the problem in question is then obtained.
 To be discussed.



Space and Time Trade-Offs
 Second, one cannot discuss space-time trade-offs
without mentioning the hugely important area of
data compression.
 Note, however, that in data compression, size
reduction is the goal rather than a technique for
solving another problem.
 We discuss just one data compression algorithm, in
the next chapter. The reader interested in this topic
will find a wealth of algorithms in such books as
 Sayood, K. Introduction to Data Compression, 3rd ed. Morgan
Kaufmann Publishers, 2005.



Dynamic Programming
 An idea, like a ghost . . . must be spoken to a little before it will explain
itself. —Charles Dickens (1812–1870)
 an algorithm design technique
 invented by a prominent U.S. mathematician, Richard Bellman, in
the 1950s as a general method for optimizing multistage decision
processes.
 Dynamic programming is a technique for solving problems with
overlapping subproblems.
 Typically, these subproblems arise from a recurrence relating a
given problem’s solution to solutions of its smaller subproblems.
 Rather than solving overlapping subproblems again and again,
dynamic programming suggests solving each of the smaller
subproblems only once and recording the results in a table from
 which a solution to the original problem can then be obtained.
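
 A minimal bottom-up sketch in Python (ours, not from the slides): computing the nth Fibonacci number by filling a table of solutions to the overlapping subproblems F(0), F(1), ..., F(n), each solved exactly once.

 def fib_dp(n):
     # table[i] will hold F(i); every subproblem is solved only once.
     if n < 2:
         return n
     table = [0] * (n + 1)
     table[1] = 1
     for i in range(2, n + 1):
         table[i] = table[i - 1] + table[i - 2]
     return table[n]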
Fundamentals of the Analysis of Algorithm Efficiency

The American Heritage Dictionary:


• “analysis” is “the separation of an intellectual or substantial whole
into its constituent parts for individual study.”

Analysis Issues/Concerns:
• How good is the algorithm?
– Correctness
– time efficiency
– space efficiency
• Does there exist a better algorithm?
– Lower bounds
– Optimality

Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different
stages, before implementation and after implementation.
They are the following −
A Priori Analysis −
• A theoretical analysis of an algorithm. Efficiency of an algorithm is
measured by assuming that all other factors, for example, processor
speed, are constant and have no effect on the implementation.
A Posteriori Analysis −
• An empirical analysis of an algorithm. The selected algorithm is
implemented using programming language. This is then executed on
target computer machine. In this analysis, actual statistics like
running time and space required, are collected.

We shall learn about a priori algorithm analysis. Algorithm
analysis deals with the execution or running time of various
operations involved.
The running time of an operation can be defined as the
number of computer instructions executed per operation.

The Analysis Framework
Gives a general framework for analyzing the efficiency of algorithms.

There are two kinds of efficiency: time efficiency and space efficiency.
Time efficiency, also called time complexity, indicates how fast an
algorithm in question runs.
Space efficiency, also called space complexity, is concerned with the
amount of memory units required by the algorithm in addition to the
space needed for its input and output.
In the early days of electronic computing, both resources, time and
space, were at a premium.

Algorithm Complexity
Suppose X is an algorithm and n is the size of input data,
the time and space used by the algorithm X are the two
main factors, which decide the efficiency of X.
Time Factor – Time is measured by counting the number of
key operations such as comparisons in the sorting
algorithm.
Space Factor − Space is measured by counting the
maximum memory space required by the algorithm.
The complexity of an algorithm f(n) gives the running time
and/or the storage space required by the algorithm in terms
of n as the size of input data.

Space Complexity
Space complexity of an algorithm represents the amount of
memory space required by the algorithm in its life cycle.
The space required by an algorithm is equal to the sum of
the following two components −
A fixed part that is a space required to store certain data
and variables, that are independent of the size of the
problem.
• For example, simple variables and constants used, program size,
etc.
A variable part is a space required by variables, whose size
depends on the size of the problem.
• For example, dynamic memory allocation, recursion stack space,
etc.
Space Complexity

The space complexity S(P) of any algorithm P is S(P) = C + SP(I), where
C is the fixed part and SP(I) is the variable part of the algorithm, which
depends on instance characteristic I.
An example that tries to explain the concept:
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - STOP
There are three variables (A, B, and C) and one constant, hence
S(P) = 1 + 3. The space depends on the data types of the given variables
and constant types, and it will be multiplied accordingly.
Time Complexity
Today, technological innovations have improved the
computer’s speed and memory size by many orders of
magnitude.
Now the amount of extra space required by an
algorithm is typically not of as much concern, with the
caution that there is still, of course, a difference
between the fast main memory, the slower secondary
memory, and the cache.
the research experience has shown that for most
problems, we can achieve much more spectacular
progress in speed than in space.
Therefore, we primarily concentrate on time efficiency,
but the analytical framework introduced here is
applicable to analyzing space efficiency as well.
Analysis of algorithms

Approaches:
• Theoretical analysis
– For Mathematical Analysis (Non recursive & Recursive )
Algorithms
• Empirical analysis
• Algorithm Visualization

Time Complexity
Time complexity of an algorithm represents the amount of
time required by the algorithm to run to completion. Time
requirements can be defined as a numerical function T(n),
where T(n) can be measured as the number of steps,
provided each step consumes constant time.
• For example, addition of two n-bit integers takes n steps.
Consequently, the total computational time is T(n) = c*n, where c is
the time taken for the addition of two bits.
T(n) grows linearly as the input size increases.

What do we analyze about them?

Correctness
• Does the input/output relation match algorithm requirement?
Amount of work done (aka complexity)
• the number of basic operations needed to do the task, each taking a finite amount of time
Amount of space used
• Memory used

Which algorithm is better?

The algorithms are correct, but which is the best?
• Measure the running time (number of operations needed).
• Measure the amount of memory used.
Note that the running time of an algorithm increases as the size of the input increases.

Measuring an Input’s Size
Obvious Observation
• Almost all algorithms run longer on larger inputs.
• E.g. , it takes longer to sort larger arrays, multiply larger
matrices, and so on.
Therefore, an algorithm’s efficiency is investigated
as a function of some parameter n indicating the
algorithm’s input size
Simply put, the larger the input size, the longer it
takes the algorithm to run, all things being equal.
However, there are situations where the choice of a
parameter indicating an input size does matter.
• E.g. computing the product of two n × n matrices
The choice of an appropriate size metric can be
influenced by operations of the algorithm in
question.
Units for Measuring Running Time
use some standard unit of time measurement—a second, or
millisecond, and so on—to measure the running time of a
program implementing the algorithm.
Drawbacks to such an approach
• dependence on the speed of a particular computer,
• dependence on the quality of a program implementing the algorithm
and of the compiler used in generating the machine code,
• and the difficulty of clocking the actual running time of the program
Need a metric that does not depend on these extraneous
factors.
A possible approach: count the number of times each of the
algorithm’s operations is executed.
This approach is both excessively difficult and usually unnecessary.
What to do: identify the most important operation of the
algorithm, called the basic operation, the operation contributing
the most to the total running time, and compute the number of
times the basic operation is executed.
Identifying the basic operation of an algorithm

usually the most time-consuming operation in the algorithm’s


innermost loop
E.g., for most sorting algorithms the basic operation is a key
comparison of the elements being sorted.
As another example, algorithms for mathematical problems
typically involve some or all of the four arithmetical operations:
• addition,
• subtraction,
• multiplication, and
• division.
Of the four, the most time-consuming operation is division,
followed by multiplication and then addition and subtraction, with
the last two usually considered together.
Thus, the established framework for the analysis of an algorithm’s
time efficiency suggests measuring it by counting the number of
times the algorithm’s basic operation is executed on inputs of size
n.
Definition
Let cop be the execution time of an algorithm’s
basic operation on a particular computer, and let C(n) be the
number of times this operation needs to be executed for this
algorithm.
Then we can estimate the running time T (n) of a program
implementing this algorithm on that computer by the formula

T(n) ≈ cop·C(n).
Drawbacks:
• the count C(n) does not contain any information about operations that are not basic;
• the count itself is often computed only approximately;
• the constant cop is also an approximation whose reliability is not always easy to assess.
Nevertheless, the formula can give a reasonable estimate of the
algorithm’s running time.
It also makes it possible to answer such questions as “How much
faster would this algorithm run on a machine that is x times faster
than the one we have?”
Consider the following example: assuming that C(n) = (1/2)n(n − 1) ≈ (1/2)n²,
how much longer will the algorithm run if we double its input size?
The answer is about four times longer:
T(2n)/T(n) ≈ cop·C(2n) / (cop·C(n)) ≈ (1/2)(2n)² / ((1/2)n²) = 4.

Note
▪ the question was answered without actually knowing the value of cop: it was
neatly cancelled out in the ratio.
▪ Also note that 1/2, the multiplicative constant in the formula for the count C(n),
was also cancelled out.
▪ Thus, the efficiency analysis framework ignores multiplicative constants and
concentrates on the count's order of growth to within a constant multiple for
large-size inputs.
Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size

Basic operation: the operation that contributes the most


towards the running time of the algorithm
T(n) ≈ cop·C(n)
where T(n) is the running time, cop is the execution time (or cost) of the basic
operation, C(n) is the number of times the basic operation is executed, and n is
the input size.

Note: Different basic operations may cost differently!


Input size and basic operation examples

Problem: searching for a key in a list of n items
  Input size measure: number of the list's items, i.e., n
  Basic operation: key comparison

Problem: multiplication of two matrices
  Input size measure: matrix dimensions or total number of elements
  Basic operation: multiplication of two numbers

Problem: checking primality of a given integer n
  Input size measure: n's size = number of digits (in binary representation)
  Basic operation: division

Problem: typical graph problem
  Input size measure: #vertices and/or edges
  Basic operation: visiting a vertex or traversing an edge
Best-case, average-case, worst-case

For some algorithms, efficiency depends on form of input:

Worst case: Cworst(n) – maximum over inputs of size n

Best case: Cbest(n) – minimum over inputs of size n

Average case: Cavg(n) – “average” over inputs of size n


• Number of times the basic operation will be executed on typical input
• NOT the average of worst and best case
• Expected number of basic operations considered as a random variable
under some assumption about the probability distribution of all
possible inputs. So, avg = expected under uniform distribution.
Example: Sequential search

Worst case: n key comparisons
Best case: 1 comparison
Average case: (n + 1)/2 comparisons, assuming K is in A
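
A minimal Python sketch of sequential search (ours; the pseudocode itself is in the text), which also counts the basic operation:

def sequential_search(a, key):
    # Returns (index, comparisons); index is -1 if key is absent.
    # Comparisons range from 1 (best case) to n (worst case).
    comparisons = 0
    for i in range(len(a)):
        comparisons += 1
        if a[i] == key:
            return i, comparisons
    return -1, comparisons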
Determining the worst case
The worst-case efficiency of an algorithm is its efficiency for
the worst-case input of size n, which is an input (or inputs) of
size n for which the algorithm runs the longest among all
possible inputs of that size.
To determine the worst case:
analyze the algorithm to see what kind of inputs yield the
largest value of the basic operation’s count C(n) among all
possible inputs of size n and then compute this worst-case value
Cworst(n).
The worst-case analysis provides very important information
about an algorithm’s efficiency by bounding its running time
from above. i.e.,
it guarantees that for any instance of size n, the running time
will not exceed Cworst(n), its running time on the worst-case inputs.
Determining the best case
The best case does not mean the smallest input; it means
the input of size n for which the algorithm runs the fastest.
To determine this case:
First, we determine the kind of inputs for which the count
C(n) will be the smallest among all possible inputs of size n.
Then ascertain the value of C(n) on these most convenient
inputs.
For example, the best-case inputs for sequential search are
lists of size n with their first element equal to a search key;
accordingly, Cbest(n) = 1 for this algorithm.
The analysis of the best-case efficiency is not nearly as
important as that of the worst-case efficiency.
Note that neither the worst-case analysis nor its best-case
counterpart yields the necessary information about an
algorithm’s behaviour on a “typical” or “random” input.
This is the information that the average-case efficiency
seeks to provide.

Analysing an algorithm's average-case efficiency

some assumptions about possible inputs of size


n need to be made.
Consider again sequential search.
The standard assumptions:
(a) the probability of a successful search is equal to p (0 ≤ p ≤ 1);
(b) the probability of the first match occurring in the ith position of
the list is the same for every i.
Therefore the average number of key comparisons Cavg(n) is computed as follows.
In the case of a successful search, the probability of the first
match occurring in the ith position of the list is
• p/n for every i,
• the number of comparisons made by the algorithm in such a situation
is obviously i.
In the case of an unsuccessful search, the number of
comparisons will be n with the probability of such a search
being
• (1− p).
Recall
E(x) = Σxi p(xi) = x1 p(x1)+x2 p(x2) + ... + xn p(xn), for i =1 to n
Thus,
Cavg(n) = [1 · p/n + 2 · p/n + ... + n · p/n] + n · (1 − p)
        = (p/n) · n(n + 1)/2 + n(1 − p)
        = p(n + 1)/2 + n(1 − p).
if p = 1 (the search must be successful), the average number of
key comparisons made by sequential search is (n + 1)/2; that is,
the algorithm will inspect, on average, about half of the list’s
elements.
If p = 0 (the search must be unsuccessful), the average number of
key comparisons will be n because the algorithm will inspect all n
elements on all such inputs.
Types of formulas for basic operation’s count

Exact formula
e.g., C(n) = n(n-1)/2

Formula indicating order of growth with specific


multiplicative constant
e.g., C(n) ≈ 0.5n²

Formula indicating order of growth with unknown


multiplicative constant
e.g., C(n) ≈ cn²

Orders of Growth
The efficiency analysis framework concentrates on the
count’s order of growth to within a constant multiple for
large-size inputs.
Why this emphasis on the count’s order of growth for large
input sizes?
A difference in running times on small inputs is not what
really distinguishes efficient algorithms from inefficient
ones.
For large values of n, it is the function’s order of growth that
counts:
Table 2.1, contains values of a few functions particularly
important for analysis of algorithms.
Values of some important functions as n → ∞ are given in Table 2.1 of the text.
Order of growth
The magnitude of the numbers in Table 2.1 has a profound
significance for the analysis of algorithms.
The function growing the slowest among these is the
logarithmic function.
On the other end of the spectrum are the exponential function 2^n
and the factorial function n!
Both these functions grow so fast that their values become
astronomically large even for rather small values of n.
There is a tremendous difference between the orders of growth of
the functions 2^n and n!, yet both are often referred to as
"exponential-growth functions" (or simply "exponential") despite
the fact that, strictly speaking, only the former should be referred
to as such.
Order of growth
Most important: Order of growth within a constant multiple
as n→∞

 Example:
• How much faster will an algorithm run on a computer that is twice as fast?
• How much longer does it take to solve a problem of double the input size?

Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining mathematical
bounds on, or a framing of, its run-time performance.
Using asymptotic analysis, we can very well conclude the
best case, average case, and worst case scenario of an
algorithm.
The main idea of asymptotic analysis is to have a measure
of efficiency of algorithms that doesn’t depend on machine
specific constants, and doesn’t require algorithms to be
implemented and time taken by programs to be compared.
Asymptotic analysis is input bound i.e., if there's no input
to the algorithm, it is concluded to work in a constant time.
Other than the "input," all other factors are considered constant.
Asymptotic analysis
Asymptotic analysis refers to computing the running time
of any operation in mathematical units of computation.
Used to compare and rank the order of growth of an
algorithm’s basic operation count as the principal indicator
of the algorithm’s efficiency.
For example, the running time of one operation may be computed as
f(n) = n and that of another operation as g(n) = n².
This means the first operation's running time will increase linearly
with the increase in n, while the running time of the second
operation will increase quadratically as n increases.
Similarly, the running times of both operations will be nearly the
same if n is significantly small.
Asymptotic Notations
Usually, the time required by an algorithm falls under
three types −
• Best Case − Minimum time required for program execution.
• Average Case − Average time required for program execution.
• Worst Case − Maximum time required for program execution.
Asymptotic notations are mathematical tools to represent
time complexity of algorithms for asymptotic analysis.
The following 3 asymptotic notations are mostly used to
represent time complexity of algorithms.
• Ο Notation
• Ω Notation
• θ Notation

Asymptotic Notations and Basic Efficiency
Classes

➢ Computer scientists use three notations:


• O (big oh),
• Ω(big omega), and
• Θ (big theta)
to compare and rank the order of growth of an algorithm’s basic
operation count as the principal indicator of the algorithm’s
efficiency.
A way of comparing functions that ignores constant factors and
small input sizes.
O(g(n)): class of functions t(n) that grow no faster than g(n)
Θ(g(n)): class of functions t(n) that grow at same rate as g(n)
Ω(g(n)): class of functions t(n) that grow at least as fast as g(n)

Big-oh, Big-omega, Big-theta
(figures illustrating the three bounding notations; the formal definitions follow)
Establishing order of growth using the definition

O-notation
Formal definition:
t(n) is in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by
some constant multiple of g(n) for all large n, i.e., if the order of growth
of t(n) ≤ the order of growth of g(n) (within a constant multiple). This
means there exist a positive constant c and a non-negative integer n0
such that
t(n) ≤ c·g(n) for every n ≥ n0
Examples:
• 10n is in O(n²)
• 5n + 20 is in O(n)
• 100n + 5 ∈ O(n²)
O-notation
A formal proof of one of the assertions: 100n + 5 ∈ O(n²). Indeed,

100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n².

Thus, the values of the constants c and n0 required by the definition
are 101 and 5, respectively.
Note that the definition gives us a lot of freedom in choosing specific
values for the constants c and n0. For example, we could also reason that

100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n ≤ 105n²

to complete the proof with c = 105 and n0 = 1.
Ω-notation

Formal definition:
• A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)),
if t(n) is bounded below by some positive constant multiple of g(n)
for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0

Exercises: prove the following using the above definition
• 10n² ∈ Ω(n²)
• 0.3n² − 2n ∈ Ω(n²)
• 0.1n³ ∈ Ω(n²)
Θ-notation

Formal definition:
• A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)),
if t(n) is bounded both above and below by some positive constant
multiples of g(n) for all large n, i.e., if there exist some positive
constants c1 and c2 and some nonnegative integer n0 such that
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0

Exercises: prove the following using the above definition
• 10n² ∈ Θ(n²)
• 0.3n² − 2n ∈ Θ(n²)
• (1/2)n(n+1) ∈ Θ(n²)
≥ : Ω(g(n)), functions that grow at least as fast as g(n)
= : Θ(g(n)), functions that grow at the same rate as g(n)
≤ : O(g(n)), functions that grow no faster than g(n)
Basic asymptotic efficiency classes

1        constant     Ο(1)
log n    logarithmic  Ο(log n)
n        linear       Ο(n)
n log n  n-log-n      Ο(n log n)
n²       quadratic    Ο(n²)
n³       cubic        Ο(n³)
2^n      exponential  Ο(2^n)
n!       factorial    Ο(n!)
Some properties of asymptotic order of growth

 f(n) ∈ O(f(n))
 f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))
 If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n))
   (note the similarity with a ≤ b)
 If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then
   f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})
 Also, Σ_{1≤i≤n} Θ(f(i)) = Θ(Σ_{1≤i≤n} f(i))
Theorem
If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
• The analogous assertions are true for the Ω-notation and Θ-notation.

 Implication: the algorithm's overall efficiency is determined by the part
with a larger order of growth, i.e., its least efficient part.
• For example, 5n² + 3n log n ∈ O(n²)
Proof. There exist constants c1, c2, n1, n2 such that
t1(n) ≤ c1·g1(n) for all n ≥ n1
t2(n) ≤ c2·g2(n) for all n ≥ n2
Define c3 = c1 + c2 and n3 = max{n1, n2}. Then
t1(n) + t2(n) ≤ c3·max{g1(n), g2(n)} for all n ≥ n3.
Using Limits for Comparing Orders of Growth

 The formal definitions of the asymptotic notations are rarely used in
practice for comparing the orders of growth of two functions.
 A more convenient approach is computing the limit of the ratio of
the two functions.
 The three fundamental cases are given below.
Establishing order of growth using limits

lim_{n→∞} T(n)/g(n) =
• 0: order of growth of T(n) < order of growth of g(n)
• c > 0: order of growth of T(n) = order of growth of g(n)
• ∞: order of growth of T(n) > order of growth of g(n)

Examples:
• 10n vs. n²
• n(n+1)/2 vs. n²
L'Hôpital's rule and Stirling's formula

 L'Hôpital's rule: if lim_{n→∞} f(n) = lim_{n→∞} g(n) = ∞ and the
derivatives f′, g′ exist, then

lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)

Example: log n vs. n

 Stirling's formula: n! ≈ (2πn)^{1/2} (n/e)^n

Example: 2^n vs. n!
Orders of growth of some important functions

 All logarithmic functions log_a n belong to the same class Θ(log n)
no matter what the logarithm's base a > 1 is, because
log_a n = log_b n / log_b a.
 All polynomials of the same degree k belong to the same class:
a_k n^k + a_{k−1} n^{k−1} + ... + a_0 ∈ Θ(n^k).
 Exponential functions a^n have different orders of growth for different values of a.
 order log n < order n^α (α > 0) < order a^n (a > 1) < order n! < order n^n
Mathematical Analyzing of the Time
Efficiency of Nonrecursive Algorithms
General Plan:
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in
the innermost loop.)
3. Check whether the number of times the basic operation is executed
depends only on the size of an input. If it also depends on some
additional property, the worst-case, average-case, and, if
necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic
operation is executed.
5. Using standard formulas and rules of sum manipulation, either
find a closed form formula for the count or, at the very least,
establish its order of growth.
Example 1: Maximum element

C(n) = Σ_{i=1}^{n−1} 1 = n − 1 ∈ Θ(n) comparisons
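
A Python sketch (ours; the MaxElement pseudocode itself is in the text) making exactly n − 1 comparisons:

def max_element(a):
    # Returns the largest element of a nonempty list a.
    maxval = a[0]
    for i in range(1, len(a)):
        if a[i] > maxval:        # basic operation: comparison
            maxval = a[i]
    return maxval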
Setting up a sum expressing the number of times the algorithm’s
basic operation is executed.

 Let C(n) be the number of times this comparison is executed.
Now find a formula expressing it as a function
of size n.
The algorithm makes one comparison on each
execution of the loop, which is repeated for
each value of the
loop’s variable i within the bounds 1 and n − 1,
inclusive.
Therefore, the sum for C(n):

C(n) = Σ_{i=1}^{n−1} 1   (1)

 Manipulating the sum form:
• The sum is nothing other than 1 repeated n − 1 times. Thus,

C(n) = Σ_{i=1}^{n−1} 1 = n − 1   (2)
Example 2: Element uniqueness problem

T(n) = 0in-2 (i+1jn-1 1)


= 0in-2 n-i-1 = (n-1+1)(n-1)/2
= ( n 2 ) comparisons
Monday, May 31, 2021
Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 2 2-59
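
A Python sketch of the brute-force algorithm (ours; the pseudocode is in the text):

def unique_elements(a):
    # Returns True if all elements of a are distinct.
    # Worst case: n(n-1)/2 element comparisons.
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:     # basic operation: comparison
                return False
    return True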
Analysis
natural measure of the input’s size is again n, the number of
elements in the array.
innermost loop contains a single operation (the comparison of
two elements), we should consider it as the algorithm’s basic
operation.
Note, however, that the number of element comparisons
depends not only on n but also
on whether there are equal elements in the array and, if there
are, which array positions they occupy.
We limit our investigation to the worst case only. There are two kinds of
worst-case inputs:
• arrays with no equal elements, and
• arrays in which the last two elements are the only pair of equal elements.
Cworst(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} [(n − 1) − (i + 1) + 1] = Σ_{i=0}^{n−2} (n − 1 − i)
= Σ_{i=0}^{n−2} (n − 1) − Σ_{i=0}^{n−2} i = (n − 1)² − (n − 2)(n − 1)/2
= n(n − 1)/2 ∈ Θ(n²)   (3)

Alternatively,

Σ_{i=0}^{n−2} (n − 1 − i) = (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2   (4)

where the last equality is obtained by applying the standard summation formula.
Example 3: Matrix multiplication

T(n) = 0in-1 0in-1 n


= 0in-1 ( n 2 )
= ( n 3 ) multiplications
Monday, May 31, 2021
Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 2 2-62
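
A definition-based Python sketch (ours): three nested loops, one multiplication in the innermost loop, n³ multiplications in total.

def matrix_multiply(a, b):
    # Multiplies two n-by-n matrices given as lists of rows.
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]   # basic operation: multiplication
    return c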
We measure an input’s size by matrix order n.
two arithmetical operations in the innermost loop here—
multiplication and addition
Actually, we do not have to choose between them, because
on each repetition of the innermost loop each of the two is
executed exactly once.
But consider multiplication as the basic operation
set up a sum for the total number of multiplications M(n)
executed by the algorithm.
Since this count depends only on the size of the input
matrices, we do not have to investigate the worst-case,
average-case, and best-case efficiencies separately.)
there is just one multiplication executed on each repetition
of the algorithm’s innermost loop, which is governed by the
variable k ranging from the lower bound 0 to the upper bound n − 1.
Therefore, the number of multiplications made for every pair of specific
values of the variables i and j is Σ_{k=0}^{n−1} 1 = n, and the total number
of multiplications is M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} n = n³.
Example 4: Gaussian elimination
ALGORITHM GaussianElimination(A[0..n−1, 0..n])
//Implements Gaussian elimination on an n-by-(n+1) matrix A
for i ← 0 to n − 2 do
  for j ← i + 1 to n − 1 do
    for k ← i to n do
      A[j,k] ← A[j,k] − A[i,k] * A[j,i] / A[i,i]

Find the efficiency class and a constant factor improvement:

for i ← 0 to n − 2 do
  for j ← i + 1 to n − 1 do
    B ← A[j,i] / A[i,i]
    for k ← i to n do
      A[j,k] ← A[j,k] − A[i,k] * B
Example 5: Counting binary digits

 It cannot be investigated the way the previous examples were.
 The halving game: find the smallest integer i such that n/2^i ≤ 1, i.e., i ≥ log₂ n.
 So, C(n) ∈ Θ(log n) divisions.
 Another solution: using recurrence relations (see the analysis of recursive algorithms below).
Some Setbacks of the general plan
One should not get the erroneous impression that the plan outlined
above always succeeds in analyzing a nonrecursive algorithm.
• An irregular change in a loop variable,
• a sum too complicated to analyze, and
• the difficulties intrinsic to the average case analysis
are just some of the obstacles that can prove to be
insurmountable.
These notwithstanding, the plan does work for many
simple nonrecursive algorithms.

Exercises 2.1, 2.2 and 2.3
1. For each of the following algorithms, indicate (i) a
natural size metric for its inputs, (ii) its basic operation,
and (iii) whether the basic operation count can be different
for inputs of the same size:
a. computing the sum of n numbers
b. computing n!
c. finding the largest element in a list of n numbers

Exercises
Review the analysis of matrix multiplication in the main
text
Practice the exercises

Plan for Analysis of Recursive Algorithms

Decide on a parameter indicating an input’s size.

Identify the algorithm’s basic operation.

Check whether the number of times the basic op. is executed


may vary on different inputs of the same size. (If it may, the
worst, average, and best cases must be investigated
separately.)

Set up a recurrence relation with an appropriate initial


condition expressing the number of times the basic op. is
executed.

Solve the recurrence (or, at the very least, establish its


solution’s order of growth) by backward substitutions or
another method.
Example 1: Compute the factorial function F(n) = n! for an
arbitrary nonnegative integer n.

Solution:
By definition,
n! = 1 · 2 · ... · (n − 1) · n = (n − 1)! · n for n ≥ 1, and
0! = 1
Therefore, the function computing n! could be expressed as
F(n) = F(n − 1) . n
with the following recursive algorithm

ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) ∗ n

n itself is an indicator of this algorithm’s input size
The basic operation of the algorithm is multiplication.
Let M(n) denote the number of its executions.
F(n) is computed according to the formula
• F(n) = F(n − 1) . n for n > 0,
the number of multiplications M(n) needed to compute it
must satisfy the equality
M(n) = M(n − 1) + 1 for n > 0.
• M(n − 1) {to compute F(n−1) } + 1 ( to multiply F(n−1) by n)
M(n − 1) multiplications are spent to compute F(n − 1), and
one more multiplication is needed to multiply the result by n.
M(n) is defined implicitly as a function of its value at another
point, namely n − 1.
Such equations are called recurrence relations or, recurrences
(a very brief tutorial is provided in Appendix B of the main
text.)
The task ahead:
• to solve the recurrence relation
• M(n) = M(n − 1) + 1
i.e., to find an explicit formula for M(n) in terms of n only.
To determine a solution uniquely, an initial condition is
required that tells us the value with which the sequence
starts.
The initial condition can be obtained by inspecting the
condition that makes the algorithm stop its recursive calls:
• if n = 0 return 1.
the calls stop when n = 0 (no multiplication is performed), and hence M(0) is 0.
Therefore, the initial condition we are after is
• M(0) = 0.
the calls stop when n = 0 and no multiplication performed
when n = 0
Thus, the recurrence relation and initial condition for the
algorithm’s number of multiplications M(n):
• M(n) = M(n − 1) + 1 for n > 0,
• M(0) = 0.
We now solve the recurrence relation by the method of backward substitutions.

M(n) = M(n − 1) + 1 substitute M(n − 1) = M(n − 2) + 1
= [M(n − 2) + 1]+ 1= M(n − 2) + 2 substitute M(n − 2) = M(n − 3) + 1
= [M(n − 3) + 1]+ 2 = M(n − 3) + 3.
After inspecting the first three lines, we see an emerging pattern, which
makes it possible to predict not only the next line but also a general
formula for the pattern:
M(n) = M(n − i) + i…
taking advantage of the initial condition given, for n =0, substitute i=n in
the pattern,
M(n) = M(n − 1) + 1= . . . = M(n − i) + i = . . . = M(n − n) + n = n.

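
A small Python check of this result (our sketch): counting multiplications while computing n! recursively confirms M(n) = n.

def factorial(n, counter):
    # counter is a one-element list used as a mutable multiplication counter.
    if n == 0:
        return 1
    counter[0] += 1
    return factorial(n - 1, counter) * n

# e.g. c = [0]; factorial(5, c) -> 120, with c[0] == 5 multiplications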
Example 2: The Tower of Hanoi Puzzle


Recurrence for number of moves:


M(n) = 2M(n-1) + 1
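
The puzzle: move n disks from the source peg to the target peg, one disk at a time, never placing a larger disk on a smaller one. A recursive Python sketch (ours); the number of recorded moves satisfies M(n) = 2M(n−1) + 1, i.e., 2^n − 1.

def hanoi(n, src, dst, aux, moves):
    # Appends each move as a (from_peg, to_peg) pair to moves.
    if n == 1:
        moves.append((src, dst))
        return
    hanoi(n - 1, src, aux, dst, moves)   # move n-1 disks out of the way
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)   # move n-1 disks on top of it

# e.g. m = []; hanoi(3, 1, 3, 2, m); len(m) == 7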
Solving recurrence for number of moves

M(n) = 2M(n-1) + 1, M(1) = 1

M(n) = 2M(n-1) + 1
= 2(2M(n-2) + 1) + 1 = 2^2*M(n-2) + 2^1 + 2^0
= 2^2*(2M(n-3) + 1) + 2^1 + 2^0
= 2^3*M(n-3) + 2^2 + 2^1 + 2^0
=…
= 2^(n-1)*M(1) + 2^(n-2) + … + 2^1 + 2^0
= 2^(n-1) + 2^(n-2) + … + 2^1 + 2^0
= 2^n -1
Tree of calls for the Tower of Hanoi Puzzle
(figure: a binary tree of recursive calls; the root is a call of size n, its two
children are calls of size n − 1, and so on down to calls of size 1 at the leaves)
Example 3: Counting #bits

A(n) = A(⌊n/2⌋) + 1, A(1) = 0

For n = 2^k (the smoothness rule then extends the result to all n):
A(2^k) = A(2^{k−1}) + 1, A(2^0) = 0
= (A(2^{k−2}) + 1) + 1 = A(2^{k−2}) + 2
= A(2^{k−i}) + i
= A(2^{k−k}) + k = k + 0
= log₂ n
Smoothness Rule

Let f(n) be a nonnegative function defined on the set of natural numbers.
f(n) is called smooth if it is eventually nondecreasing and
f(2n) ∈ Θ(f(n))
• Functions that do not grow too fast, including log n, n, n log n, and
n^α where α ≥ 0, are smooth.
Smoothness rule:
Let T(n) be an eventually nondecreasing function and f(n) be a smooth
function. If
T(n) ∈ Θ(f(n)) for values of n that are powers of b, where b ≥ 2,
then T(n) ∈ Θ(f(n)) for any n.
Fibonacci numbers
The Fibonacci numbers:
0, 1, 1, 2, 3, 5, 8, 13, 21, …

The Fibonacci recurrence:


F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1

General 2nd order linear homogeneous recurrence with


constant coefficients:
aX(n) + bX(n-1) + cX(n-2) = 0
Solving aX(n) + bX(n−1) + cX(n−2) = 0

 Set up the characteristic equation (quadratic): ar² + br + c = 0
 Solve it to obtain the roots r1 and r2.
 General solution to the recurrence:
  if r1 and r2 are two distinct real roots: X(n) = αr1^n + βr2^n
  if r1 = r2 = r are two equal real roots: X(n) = αr^n + βnr^n
 A particular solution can be found by using the initial conditions.
Application to the Fibonacci numbers

 F(n) = F(n−1) + F(n−2), or F(n) − F(n−1) − F(n−2) = 0
 Characteristic equation: r² − r − 1 = 0
 Roots of the characteristic equation: r1,2 = (1 ± √5)/2
 General solution to the recurrence: F(n) = α·r1^n + β·r2^n
 Particular solution for F(0) = 0, F(1) = 1:
  α + β = 0
  α·r1 + β·r2 = 1
 Solving this system gives α = 1/√5 and β = −1/√5, i.e., F(n) = (r1^n − r2^n)/√5.
Computing Fibonacci numbers
1. Definition-based recursive algorithm

2. Nonrecursive definition-based algorithm

3. Explicit formula algorithm

4. Logarithmic algorithm based on formula:


( F(n−1)  F(n)   )   ( 0  1 )^n
( F(n)    F(n+1) ) = ( 1  1 )

for n ≥ 1, assuming an efficient way of computing matrix powers.
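
A Python sketch of the logarithmic algorithm (ours), computing the matrix power by repeated squaring in Θ(log n) matrix multiplications:

def fib_log(n):
    def mat_mult(x, y):
        # Product of two 2x2 matrices.
        return [[x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1]],
                [x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1]]]
    result = [[1, 0], [0, 1]]            # 2x2 identity
    base = [[0, 1], [1, 1]]
    while n > 0:                         # repeated squaring
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n >>= 1
    return result[0][1]                  # the F(n) entry of the power

# e.g. fib_log(10) -> 55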
Important Recurrence Types

Decrease-by-one recurrences
• A decrease-by-one algorithm solves a problem by exploiting a relationship
  between a given instance of size n and a smaller instance of size n − 1.
• Example: n!
• The recurrence equation for investigating the time efficiency of such
  algorithms typically has the form
      T(n) = T(n-1) + f(n)

Decrease-by-a-constant-factor recurrences
• A decrease-by-a-constant-factor algorithm solves a problem by dividing its
  given instance of size n into several smaller instances of size n/b, solving
  each of them recursively, and then, if necessary, combining the solutions to
  the smaller instances into a solution to the given instance.
• Example: binary search.
• The recurrence equation for investigating the time efficiency of such
  algorithms typically has the form
      T(n) = aT(n/b) + f(n)
Decrease-by-one Recurrences

One (constant-time) operation reduces the problem size by one:
    T(n) = T(n-1) + c,    T(1) = d
Solution: T(n) = (n-1)c + d            — linear

A pass through the input reduces the problem size by one:
    T(n) = T(n-1) + cn,   T(1) = d
Solution: T(n) = [n(n+1)/2 – 1]c + d   — quadratic
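Both closed forms are easy to confirm numerically. A small Python check (not from
the slides; the constants c = 3 and d = 5 are arbitrary demonstration choices):

    def T_const(n, c=3, d=5):
        return d if n == 1 else T_const(n - 1, c, d) + c       # T(n) = T(n-1) + c

    def T_linear(n, c=3, d=5):
        return d if n == 1 else T_linear(n - 1, c, d) + c * n  # T(n) = T(n-1) + cn

    for n in range(1, 30):
        assert T_const(n) == (n - 1) * 3 + 5
        assert T_linear(n) == (n * (n + 1) // 2 - 1) * 3 + 5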
Decrease-by-a-constant-factor recurrences –
The Master Theorem

T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^k), k ≥ 0:

1. a < b^k :  T(n) ∈ Θ(n^k)
2. a = b^k :  T(n) ∈ Θ(n^k log n)
3. a > b^k :  T(n) ∈ Θ(n^(log_b a))

Examples:
• T(n) = T(n/2) + 1    →  Θ(log n)
• T(n) = 2T(n/2) + n   →  Θ(n log n)
• T(n) = 3T(n/2) + n   →  Θ(n^(log_2 3))
• T(n) = T(n/2) + n    →  Θ(n)
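To make the three cases concrete, here is a small Python helper (an illustrative
sketch, not from the slides) that classifies T(n) = aT(n/b) + Θ(n^k):

    import math

    def master_theorem(a, b, k):
        """Return the asymptotic class of T(n) = a*T(n/b) + Theta(n^k)."""
        if a < b**k:
            return f"Theta(n^{k})"
        if a == b**k:
            return f"Theta(n^{k} log n)"
        return f"Theta(n^{math.log(a, b):.3f})"   # exponent log_b a

    print(master_theorem(1, 2, 0))  # Theta(n^0 log n), i.e., Theta(log n)
    print(master_theorem(2, 2, 1))  # Theta(n^1 log n), i.e., Theta(n log n)
    print(master_theorem(3, 2, 1))  # Theta(n^1.585), i.e., Theta(n^(log_2 3))
    print(master_theorem(1, 2, 1))  # Theta(n^1), i.e., Theta(n)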
Empirical Analysis of Algorithms

General Plan for the Empirical Analysis of Algorithm Time Efficiency:
1. Understand the experiment’s purpose.
2. Decide on the efficiency metric M to be measured and the
measurement unit (an operation count vs. a time unit).
3. Decide on characteristics of the input sample (its range, size,
and so on).
4. Prepare a program implementing the algorithm (or algorithms)
for the experimentation.
5. Generate a sample of inputs.
6. Run the algorithm (or algorithms) on the sample’s inputs and
record the data observed.
7. Analyze the data obtained.
Purpose

Different purposes or goals:
• checking the accuracy of a theoretical assertion about the algorithm's
  efficiency,
• comparing the efficiency of several algorithms for solving the same problem or
  of different implementations of the same algorithm,
• developing a hypothesis about the algorithm's efficiency class, and
• ascertaining the efficiency of the program implementing the algorithm on a
  particular machine.

The goal of the experiment should influence, if not dictate, how the algorithm's
efficiency is to be measured.
Measuring the algorithm’s efficiency

Two principal options:
• insert a counter (or counters) into a program implementing the algorithm to
  count the number of times the algorithm's basic operation is executed;
• time the program implementing the algorithm in question: measure the running
  time of a code fragment by asking for the system time right before the
  fragment's start (tstart) and just after its completion (tfinish), and then
  computing the difference between the two (tfinish − tstart).
Several facts to keep in mind
First, a system’s time is typically not very accurate, and you
might get somewhat different results on repeated runs of
the same program on the same inputs.
• An obvious remedy is to make several such
measurements and then take their average (or the
median) as the sample’s observation point.
Second, given the high speed of modern computers, the
running time may fail to register at all and be reported as
zero.
• The standard trick to overcome this obstacle is to run the program
in an extra loop many times, measure the total running time, and
then divide it by the number of the loop’s repetitions.
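Both remedies are easy to combine in code. The sketch below (not from the slides;
measure is a hypothetical helper) times a function over an inner loop of
repetitions and averages over several runs using Python's time.perf_counter:

    import time

    def measure(func, arg, repetitions=1000, runs=5):
        """Average seconds per call of func(arg), averaged over several runs."""
        samples = []
        for _ in range(runs):
            t_start = time.perf_counter()
            for _ in range(repetitions):
                func(arg)                      # repeat so the time registers
            t_finish = time.perf_counter()
            samples.append((t_finish - t_start) / repetitions)
        return sum(samples) / len(samples)     # average out timer noise

    print(measure(sorted, list(range(1000, 0, -1))))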
Deciding on a sample of inputs for the experiment

Typically, you will have to make decisions about:
• the sample size (it is sensible to start with a relatively small sample and
  increase it later if necessary),
• the range of instance sizes (typically neither trivially small nor excessively
  large), and
• a procedure for generating instances in the range chosen.

The instance sizes can either adhere to some pattern (e.g., 1000, 2000, 3000,
. . . , 10,000 or 500, 1000, 2000, 4000, . . . , 128,000) or be generated
randomly within the range chosen.
Much more often than not, an empirical analysis requires
generating random numbers. Even if you decide to use a
pattern for input sizes, you will typically want instances
themselves generated randomly.
Generating random numbers on a digital computer is
known to present a difficult problem because, in principle,
the problem can be solved only approximately. This is the
reason computer scientists prefer to call such numbers
pseudorandom.
As a practical matter, the easiest and most natural way of
getting such numbers is to take advantage of a random
number generator available in computer language libraries.
Alternatively, you can implement one of several known
algorithms for generating (pseudo)random numbers.
The most widely used and thoroughly studied of such
algorithms is the linear congruential method.
ALGORITHM Random(n, m, seed, a, b)
//Generates a sequence of n pseudorandom numbers according to the
//linear congruential method
//Input: A positive integer n and positive integer parameters m, seed, a, b
//Output: A sequence r1, . . . , rn of n pseudorandom integers uniformly
//distributed among integer values between 0 and m − 1
//Note: Pseudorandom numbers between 0 and 1 can be obtained by treating
//the integers generated as digits after the decimal point
r0 ← seed
for i ← 1 to n do
    ri ← (a ∗ ri−1 + b) mod m
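A direct Python transcription of this pseudocode (an illustrative sketch; the
constants a = 16807 and m = 2^31 − 1 in the demo call are a common textbook
parameter choice, not the only valid one):

    def random_lcg(n, m, seed, a, b):
        """Generate n pseudorandom integers in [0, m-1] by the linear
        congruential method: r_i = (a * r_{i-1} + b) mod m."""
        r = seed
        sequence = []
        for _ in range(n):
            r = (a * r + b) % m
            sequence.append(r)
        return sequence

    print(random_lcg(10, m=2**31 - 1, seed=42, a=16807, b=0))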
Recording the Results
Data can be presented numerically in a table or graphically in a scatterplot.
One of the possible applications of the empirical analysis is to predict the
algorithm's performance on an instance not included in the experiment sample.
Mathematicians call such predictions extrapolation, as opposed to interpolation,
which deals with values within the sample range.
Strengths and Weaknesses of Mathematical and Empirical Analysis

The principal strength of mathematical analysis is its independence of specific
inputs; its principal weakness is its limited applicability, especially for
investigating average-case efficiency.

The principal strength of empirical analysis lies in its applicability to any
algorithm, but its results can depend on the particular sample of instances and
the computer used in the experiment.
CMP 452 – Design and Analysis of Algorithms

Lecture Note 3 (Main Text)

Algorithm Design Techniques
Computational Complexity Theory

by
M. O. Odim (Ph.D.)
Computer Science Department, Redeemer’s University
Algorithm Design Techniques

• Brute force
• Decrease-and-Conquer
• Divide-and-Conquer
• Transform-and-Conquer
• Dynamic Programming
Brute force

A straightforward approach to solving a problem, usually directly based on the
problem statement and definitions of the concepts involved.
The “force” is that of a computer and not that of one’s intellect. “Just do it!”
It is often the strategy that is easiest to apply.
Examples: selection sort and bubble sort.
Selection Sort

Scan the entire given list to find its smallest element and exchange it with the
first element, putting the smallest element in its final position in the sorted
list. Then scan the list, starting with the second element, to find the smallest
among the last n − 1 elements and exchange it with the second element, putting
the second smallest element in its final position.
Generally, on the ith pass through the list, which we number from 0 to n − 2,
the algorithm searches for the smallest item among the last n − i elements and
swaps it with A[i].
After n − 1 passes, the list is sorted.

ALGORITHM SelectionSort(A[0..n − 1])
//Sorts a given array by selection sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n − 2 do
    min ← i
    for j ← i + 1 to n − 1 do
        if A[j] < A[min] min ← j
    swap A[i] and A[min]
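A runnable Python version of the pseudocode (an illustrative sketch;
selection_sort is a hypothetical name):

    def selection_sort(a):
        """Sort list a in place in nondecreasing order; Theta(n^2) comparisons."""
        n = len(a)
        for i in range(n - 1):
            min_idx = i
            for j in range(i + 1, n):
                if a[j] < a[min_idx]:
                    min_idx = j
            a[i], a[min_idx] = a[min_idx], a[i]
        return a

    print(selection_sort([89, 45, 68, 90, 29, 34, 17]))
    # [17, 29, 34, 45, 68, 89, 90]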
An Example of the selection sort

As an example, the action of the algorithm on the list 89, 45, 68, 90, 29, 34,
17 is illustrated below:

    | 89  45  68  90  29  34  17
      17 | 45  68  90  29  34  89
      17  29 | 68  90  45  34  89
      17  29  34 | 90  45  68  89
      17  29  34  45 | 90  68  89
      17  29  34  45  68 | 90  89
      17  29  34  45  68  89 | 90

Each line corresponds to one iteration of the algorithm, i.e., a pass through
the list's tail to the right of the vertical bar, in which the smallest
remaining element is found and swapped into place. Elements to the left of the
vertical bar are in their final positions and are not considered in this and
subsequent iterations.
Practice: Obtain the count of this algorithm's basic operation and determine its
order of growth.
Bubble Sort

Another brute-force application to the sorting problem is to compare adjacent
elements of the list and exchange them if they are out of order. By doing it
repeatedly, we end up “bubbling up” the largest element to the last position on
the list. The next pass bubbles up the second largest element, and so on, until
after n − 1 passes the list is sorted. On pass i (0 ≤ i ≤ n − 2), bubble sort
compares and, if necessary, swaps the adjacent pairs A[j] and A[j + 1] for
j = 0, . . . , n − 2 − i, as the pseudocode below shows.
Bubble Sort

ALGORITHM BubbleSort(A[0..n − 1])
//Sorts a given array by bubble sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n − 2 do
    for j ← 0 to n − 2 − i do
        if A[j + 1] < A[j] swap A[j] and A[j + 1]
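A runnable Python version of the pseudocode (an illustrative sketch; bubble_sort
is a hypothetical name):

    def bubble_sort(a):
        """Sort list a in place in nondecreasing order; Theta(n^2) comparisons."""
        n = len(a)
        for i in range(n - 1):
            for j in range(n - 1 - i):
                if a[j + 1] < a[j]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    print(bubble_sort([89, 45, 68, 90, 29, 34, 17]))
    # [17, 29, 34, 45, 68, 89, 90]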
The action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17 is
illustrated as an example
Bubble Sort

First two passes of bubble sort on the list 89, 45, 68, 90, 29, 34, 17; a new
line is shown after each swap of two elements, and the elements to the right of
the vertical bar are in their final positions and are not considered in
subsequent iterations of the algorithm:

    Pass 1:  89  45  68  90  29  34  17
             45  89  68  90  29  34  17
             45  68  89  90  29  34  17
             45  68  89  29  90  34  17
             45  68  89  29  34  90  17
             45  68  89  29  34  17 | 90

    Pass 2:  45  68  29  89  34  17 | 90
             45  68  29  34  89  17 | 90
             45  68  29  34  17 | 89  90

Exercise: Complete the n − 1 passes for sorting the list, obtain the number of
key swaps, and determine the order of growth. Also, practice the other numerous
application examples in the text.
Decrease-and-Conquer

Based on exploiting the relationship between a solution to a given instance of a
problem and a solution to its smaller instance. Once such a relationship is
established, it can be exploited either top down or bottom up.
The former leads naturally to a recursive implementation, although an ultimate
implementation may well be nonrecursive. The bottom-up variation is usually
implemented iteratively, starting with a solution to the smallest instance of
the problem; it is sometimes called the incremental approach.
There are three major variations of decrease-and-conquer:
• decrease by a constant
• decrease by a constant factor
• variable size decrease
In the decrease-by-a-constant variation, the size of an instance is reduced by
the same constant on each iteration of the algorithm. Typically, this constant
is equal to one, although other constant-size reductions do happen occasionally.
Decrease-by-a-Constant-Factor Algorithms
E.g., Binary Search

The most important and well-known of them is binary search.
Decrease-by-a-constant-factor algorithms usually run in logarithmic time and,
being very efficient, do not happen often; a reduction by a factor other than
two is especially rare.
Binary Search

A remarkably efficient algorithm for searching in a sorted array. It works by
comparing a search key K with the array's middle element A[m]. If they match,
the algorithm stops; otherwise, the same operation is repeated recursively for
the first half of the array if K < A[m], and for the second half if K > A[m].
Binary Search – Nonrecursive Algorithm

Though binary search is clearly based on a recursive idea, it can be easily
implemented as a nonrecursive algorithm, too; a version is sketched below.
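The slide's pseudocode did not survive extraction, so the following Python
sketch (a stand-in under that assumption, not the textbook's exact pseudocode)
implements the standard nonrecursive version; it returns the index of the key in
the sorted list, or -1 on failure:

    def binary_search(a, key):
        low, high = 0, len(a) - 1
        while low <= high:
            mid = (low + high) // 2
            if key == a[mid]:
                return mid
            elif key < a[mid]:
                high = mid - 1          # continue in the first half
            else:
                low = mid + 1           # continue in the second half
        return -1

    print(binary_search([3, 14, 27, 31, 39, 42, 55, 70, 74, 81, 85, 93, 98], 70))
    # 7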
The decrease-by-a-constant-factor technique suggests reducing a problem instance
by the same constant factor on each iteration of the algorithm. In most
applications, this constant factor is equal to two: the decrease-by-half idea.
In the variable-size-decrease variety of decrease-and-conquer, the
size-reduction pattern varies from one iteration of an algorithm to another;
Euclid's algorithm for computing gcd(m, n) is a classic example.
Divide-and-Conquer

Divide-and-conquer algorithms work according to the following general plan
(a code illustration follows the plan):
1. A problem is divided into several subproblems of the same type, ideally of
about equal size.
2. The subproblems are solved (typically recursively, though sometimes a
different algorithm is employed, especially when subproblems become small
enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to
the original problem.
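A classic instance of this plan is mergesort. The sketch below (not from the
slides) divides the list into two halves, sorts each half recursively, and
merges the results; by the Master Theorem, T(n) = 2T(n/2) + Θ(n) ∈ Θ(n log n):

    def merge_sort(a):
        if len(a) <= 1:                    # an instance of trivial size
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])         # 1. divide and 2. solve subproblems
        right = merge_sort(a[mid:])
        return merge(left, right)          # 3. combine the solutions

    def merge(left, right):
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])            # append whichever half remains
        merged.extend(right[j:])
        return merged

    print(merge_sort([8, 3, 2, 9, 7, 1, 5, 4]))  # [1, 2, 3, 4, 5, 7, 8, 9]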

Analysis of Algorithm 1
Monday, June 20, 2022 7
Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 2 2-17
Analysis of Algorithm 1
Monday, June 20, 2022 8
Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 2 2-18
Analysis of Algorithm 1
Monday, June 20, 2022 9
Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 2 2-19
Computational Complexity Theory

In computer science, many problems are solved where the objective is to maximize
or minimize some value, whereas in other problems we try to find whether there
is a solution or not. Hence, problems can be categorized as follows:

Optimization problems
• Optimization problems are those for which the objective is to maximize or
  minimize some value. For example:
  • finding the minimum number of colors needed to color a given graph;
  • finding the shortest path between two vertices in a graph.

Decision problems
• There are many problems for which the answer is a yes or a no. These types of
  problems are known as decision problems. For example:
  • whether a given graph can be colored with only 4 colors.
  • Finding a Hamiltonian cycle in a graph is not a decision problem, whereas
    checking whether a graph is Hamiltonian or not is a decision problem.
Computational Complexity Theory
Limitations of Algorithm Power
• Algorithms are very powerful instruments, especially when they are executed by
  modern computers.
• But the power of algorithms is not unlimited, and its limits are the subject
  of this section:
  • some problems cannot be solved by any algorithm;
  • other problems can be solved algorithmically but not in polynomial time;
  • and even when a problem can be solved in polynomial time by some algorithms,
    there are usually lower bounds on their efficiency.
COMPUTATIONAL COMPLEXITY THEORY

Deals with the question of intractability: which problems can and cannot be
solved in polynomial time.

DEFINITION 1 We say that an algorithm solves a problem in polynomial time if its
worst-case time efficiency belongs to O(p(n)), where p(n) is a polynomial of the
problem's input size n. (Note that since we are using big-oh notation here,
problems solvable in, say, logarithmic time are solvable in polynomial time as
well.) Problems that can be solved in polynomial time are called tractable, and
problems that cannot be solved in polynomial time are called intractable.
Reasons for drawing the intractability line

First, we cannot solve arbitrary instances of intractable problems in a
reasonable amount of time unless such instances are very small (see the values,
some approximate, of several functions important for the analysis of algorithms
in Table 3.1 of the text).

Second, although there might be a huge difference between the running times in
O(p(n)) for polynomials of drastically different degrees, there are very few
useful polynomial-time algorithms with the degree of a polynomial higher than
three. In addition, polynomials that bound running times of algorithms do not
usually have extremely large coefficients.

Third, polynomial functions possess many convenient properties; in particular,
both the sum and composition of two polynomials are always polynomials too.
Fourth, the choice of this class has led to a development of
an extensive theory called computational complexity, which
seeks to classify problems according to their inherent
difficulty. And according to this theory, a problem’s
intractability remains the same for all principal models of
computations and all reasonable input-encoding schemes
for the problem under consideration.
Basic Notions and Ideas of Complexity Theory
P and NP Problems

Most problems can be solved in polynomial time by some algorithm. They include:
• computing the product and the greatest common divisor of two integers,
• sorting a list, searching for a key in a list or for a pattern in a text
  string,
• checking connectivity and acyclicity of a graph, and
• finding a minimum spanning tree and shortest paths in a weighted graph.

Informally, we can think about problems that can be solved in polynomial time as
the set that computer science theoreticians call P. A more formal definition
includes in P only decision problems, which are problems with yes/no answers.
P and NP Problems

DEFINITION 2 Class P is a class of decision problems that can be solved in
polynomial time by (deterministic) algorithms. This class of problems is called
polynomial.

The restriction of P to decision problems can be justified by the following
reasons.
First, it is sensible to exclude problems not solvable in polynomial time
because of their exponentially large output. Such problems do arise naturally
(e.g., generating all subsets of a given set or all the permutations of n
distinct items), but it is apparent from the outset that they cannot be solved
in polynomial time.
Second, many important problems that are not decision problems in their most
natural formulation can be reduced to a series of decision problems that are
easier to study.
Some important problems for which no polynomial-time algorithm has been found,
nor has the impossibility of such an algorithm been proved:
Hamiltonian circuit problem Determine whether a given graph
has a Hamiltonian circuit—a path that starts and ends at the
same vertex and passes through all the other vertices exactly
once.
Traveling salesman problem Find the shortest tour through n
cities with known positive integer distances between them
(find the shortest Hamiltonian circuit in a complete graph
with positive integer weights).
Knapsack problem: Find the most valuable subset of n items of
given positive integer weights and values that fit into a
knapsack of a given positive integer capacity.
Partition problem: Given n positive integers, determine whether
it is possible to partition them into two disjoint subsets with
the same sum.
Bin-packing problem: Given n items whose sizes are positive
rational numbers not larger than 1, put them into the smallest
number of bins of size 1.
Graph-coloring problem: For a given graph, find its chromatic
number, which is the smallest number of colors that need to
be assigned to the graph’s vertices so that no two adjacent
vertices are assigned the same color.
Integer linear programming problem: Find the maximum (or
minimum) value of a linear function of several integer-valued
variables subject to a finite set of constraints in the form of
linear equalities and inequalities.
A nondeterministic algorithm
DEFINITION 3 A nondeterministic algorithm is a two-stage procedure that takes
as its input an instance I of a decision problem and does the following.
Nondeterministic (“guessing”) stage: An arbitrary string S is generated that can
be thought of as a candidate solution to the given instance I (but may be complete
gibberish as well).
Deterministic (“verification”) stage: A deterministic
algorithm takes both I and S as its input and outputs yes if S
represents a solution to instance I. (If S is not a solution to
instance I , the algorithm either returns no or is allowed not
to halt at all.)
DEFINITION 4 Class NP is the class of decision problems
that can be solved by nondeterministic polynomial
algorithms. This class of problems is called
nondeterministic polynomial.
Most decision problems are in NP. First of all, this class
includes all the problems in P:
P ⊆ NP.
This is true because, if a problem is in P, we can use the deterministic
polynomial-time algorithm that solves it in the verification stage of a
nondeterministic algorithm that simply ignores the string S generated in its
nondeterministic ("guessing") stage.
But NP also contains the Hamiltonian circuit problem, the
partition problem, decision versions of the travelling salesman,
the knapsack, graph coloring, and many hundreds of other
difficult combinatorial optimization problems. The halting
problem, on the other hand, is among the rare examples of
decision problems that are known not to be in NP.
This leads to the most important open question of
theoretical computer science:
Is P a proper subset of NP, or are these two classes, in fact,
the same? We can put this symbolically as
P = NP?
Note that P = NP would imply that each of many hundreds
of difficult combinatorial decision problems can be solved
by a polynomial-time algorithm, although computer
scientists have failed to find such algorithms despite their
persistent efforts over many years.
Moreover, many well-known decision problems are known
to be “NP-complete” which seems to cast more doubts on the
possibility that P = NP.
NP-Complete Problems

An NP-complete problem is a problem in NP that is as difficult as any other
problem in this class because, by definition, any other problem in NP can be
reduced to it in polynomial time.
DEFINITION 5 A decision problem D1 is said to be
polynomially reducible to a decision problem D2, if there
exists a function t that transforms instances of D1 to instances
of D2 such that:
1. t maps all yes instances of D1 to yes instances of D2 and all no
instances of D1 to no instances of D2
2. t is computable by a polynomial time algorithm
NP-Complete Problems

Implication of the definition:
• If a problem D1 is polynomially reducible to some problem D2 that can be
  solved in polynomial time, then problem D1 can also be solved in polynomial
  time.

DEFINITION 6 A decision problem D is said to be NP-complete if:
1. it belongs to class NP
2. every problem in NP is polynomially reducible to D