Complexity of an Algorithm

Dr. Nitin Paharia
Associate Professor, Department of CSE,
PSIT, Kanpur
Outline
• What is the complexity of an algorithm?
• What is time complexity?
• What is space complexity?
• What is asymptotic notation?
Complexity of an Algorithm
• A given problem may have multiple solutions (algorithms) for the computer.
• Some are good and some are bad.
• The selection of an algorithm depends on its goodness (efficiency), which is determined through the complexity of the algorithm.
• The complexity of an algorithm can be defined as a function describing its efficiency in processing a given amount of data.
• There are two main complexity measures of the efficiency of an algorithm: time and space.
• We need an approximate count that is machine independent.
Time Complexity
Time complexity is a function describing the number of times elementary operations need to be performed for a given amount of data in an algorithm. An elementary operation is any one of the arithmetic operations (addition, subtraction, multiplication, division), a comparison between two numbers, or the execution of a branching instruction.
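As a rough sketch of this idea (in Python, which the slides do not use), we can instrument a simple algorithm to count its elementary operations instead of measuring wall-clock time; `sum_array` is a hypothetical helper written for illustration:

```python
def sum_array(a):
    """Sum a list while counting elementary operations (additions)."""
    ops = 0
    total = 0
    for x in a:
        total += x  # one addition per element
        ops += 1    # bookkeeping: record the elementary operation
    return total, ops

# The operation count equals the input size n, so the time
# complexity of summing is a linear function of n.
total, ops = sum_array([3, 1, 4, 1, 5])
```

The count depends only on how many elements the list has, not on which machine runs the code, which is exactly the machine-independent measure the slide calls for.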

Why is the actual running time of an algorithm not considered? Because measured running time depends on the machine, the compiler, and the system load, whereas counting elementary operations is machine independent.


Input Size
• An algorithm may have different running times for different inputs.
• How, then, do we compare algorithms?
• We define a rough size of the input, usually in terms of the important parameters of the input.
• Example: In the problem of search, we say that the number of elements in the array is the input size.
• Please note that the size of the individual elements is not considered.
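To make the search example concrete, here is a minimal Python sketch of linear search; the input size is n = len(a), and the element values themselves do not enter the analysis:

```python
def linear_search(a, target):
    """Return the index of target in list a, or -1 if absent.

    The running time is measured against n = len(a): at most
    n comparisons are made, regardless of how large the stored
    values are.
    """
    for i, x in enumerate(a):
        if x == target:  # one comparison per element examined
            return i
    return -1
```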
Space Complexity
Space complexity is a function describing the amount of memory required in addition to the input data.
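A small illustrative sketch (an assumption of mine, not from the slides): two ways of reversing a list that differ only in the memory used *beyond* the input:

```python
def reverse_copy(a):
    """O(n) auxiliary space: allocates a whole new list."""
    return a[::-1]

def reverse_in_place(a):
    """O(1) auxiliary space: only two index variables beyond the input."""
    i, j = 0, len(a) - 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1
    return a
```

Both produce the same result, but their space complexities differ: O(n) extra memory for the copy versus O(1) for the in-place swap.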
What is Asymptotic Analysis in Data Structures?

• Asymptotic analysis is a technique for analyzing how an algorithm behaves or performs as the input size changes.
• The asymptotic notation of an algorithm is a mathematical representation (function) of its complexity.
• In asymptotic notation, when we want to represent the complexity of an algorithm, we use only the most significant terms in the complexity of that algorithm and ignore the least significant terms.
• For example, consider the following time complexities of two algorithms:
• Algorithm 1: 5n² + 2n + 1
• Algorithm 2: 10n² + 8n + 3
• In both cases only the n² term is significant, so both algorithms are represented as O(n²).
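We can see numerically why the lower-order terms can be ignored. In this hedged sketch, dividing Algorithm 1's complexity 5n² + 2n + 1 by its dominant term n² shows the ratio settling toward the constant 5 as n grows:

```python
def t1(n):
    """Time complexity of Algorithm 1 from the slide: 5n^2 + 2n + 1."""
    return 5 * n * n + 2 * n + 1

# As n grows, the 2n and 1 terms fade relative to 5n^2:
# the ratio t1(n) / n^2 approaches the constant 5.
ratios = [t1(n) / (n * n) for n in (10, 1_000, 100_000)]
```

The same calculation with Algorithm 2's 10n² + 8n + 3 would converge to 10; since both ratios settle to constants, both complexities are of order n².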
Order of growth in time complexity

[Figure: growth curves of common complexity functions; images taken from the Internet]


Asymptotic notations for algorithmic complexity analysis
• There are three basic asymptotic notations for analyzing algorithm complexity. They are as follows.
• Worst case: the worst (longest) running time for any input of a given size (Big-O notation).
• O-notation describes an asymptotic upper bound. We use O-notation to give an upper bound on a function, to within a constant factor.
• Best case: the shortest running time for some input (Omega notation).
• Ω-notation describes an asymptotic lower bound. We use Ω-notation to give a lower bound on a function, to within a constant factor.
• Average case: the average running time over all inputs of the given size (Theta notation).
• We use Θ-notation for asymptotically tight bounds.
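Best and worst cases can be made concrete with linear search. The sketch below (an instrumented version written for illustration, not from the slides) counts comparisons for the luckiest and unluckiest inputs of the same size:

```python
def comparisons_in_search(a, target):
    """Count the comparisons a linear search makes before stopping."""
    count = 0
    for x in a:
        count += 1
        if x == target:
            break
    return count

a = list(range(1, 101))
best = comparisons_in_search(a, 1)   # target is first: 1 comparison (best case)
worst = comparisons_in_search(a, 0)  # target absent: n comparisons (worst case)
```

For the same input size n = 100, the best case takes a constant number of comparisons (Ω(1)) while the worst case examines every element (O(n)).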
Big Oh Definition
Let f(n) and g(n) be functions, where n is a positive integer. We write f(n) = O(g(n)) (read as "f of n is big oh of g of n") if and only if there exist a real constant c > 0 and a positive integer n0 such that

0 <= f(n) <= c·g(n) for all n >= n0
Big Oh Properties
■ The fastest-growing function dominates a sum
■ O(f(n) + g(n)) is O(max{f(n), g(n)})
■ The product of upper bounds is an upper bound for the product
■ If f is O(g) and h is O(r), then fh is O(gr)
■ Big-O is transitive
■ If f is O(g) and g is O(h), then f is O(h)
■ Hierarchy of functions
■ O(1), O(log n), O(n^(1/2)), O(n log n), O(n²), O(2^n), O(n!)
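As an informal numerical check of the hierarchy (a sketch, not a proof), evaluating each function at a single sufficiently large n already shows the ordering:

```python
import math

def hierarchy(n):
    """Evaluate the hierarchy 1, log n, sqrt(n), n log n, n^2, 2^n, n!."""
    return [1, math.log2(n), math.sqrt(n), n * math.log2(n),
            n ** 2, 2 ** n, math.factorial(n)]

# At n = 64 the values are already in strictly increasing order,
# mirroring the hierarchy of growth rates on the slide.
vals = hierarchy(64)
```

The hierarchy is really a statement about growth as n tends to infinity; for small n the order can temporarily differ (e.g. log n exceeds 1 only once n > 2).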
Big-Omega (Ω)

Definition: Let f(n) and g(n) be functions, where n is a positive integer. We write f(n) = Ω(g(n)) (read as "f of n is big omega of g of n") if and only if there exist a real constant c > 0 and a positive integer n0 such that

f(n) >= c·g(n) > 0 for all n >= n0

Big Omega (Lower Bound Function)
Big-Omega Example

■ Example: n^(1/2) = Ω(lg n).
■ Use the definition with c = 1 and n0 = 16. Checks OK: let n >= 16; then n^(1/2) >= (1) lg n if and only if n >= (lg n)², by squaring both sides.
■ The table below confirms that n stays ahead of (lg n)²:

   n    (lg n)²   Diff
   16   16.00     0.00
   17   16.71     0.29
   18   17.39     0.61
   19   18.04     0.96
   20   18.68     1.32
   21   19.29     1.71
   22   19.89     2.11
   23   20.46     2.54
   24   21.02     2.98

■ This is an example of polynomial vs. log growth.
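The slide's witnesses c = 1 and n0 = 16 can be spot-checked mechanically; this Python sketch verifies the defining inequality over a finite range (a numerical check, not a substitute for the proof by squaring):

```python
import math

# Check f(n) = sqrt(n) >= c * lg(n) with the witnesses c = 1, n0 = 16:
# the inequality from the Omega definition must hold for every n >= n0.
holds = all(math.sqrt(n) >= 1 * math.log2(n) for n in range(16, 10_000))
```

At n = 16 the two sides are exactly equal (sqrt(16) = lg 16 = 4), which is why 16 is the smallest usable n0 with c = 1.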
Theta (Θ)

Definition: Let f(n) and g(n) be functions, where n is a positive integer. We write f(n) = Θ(g(n)) (read as "f of n is theta of g of n") if and only if there exist real constants c1 > 0 and c2 > 0 and a positive integer n0 such that

0 < c1·g(n) <= f(n) <= c2·g(n) for all n >= n0

Theta Notation
Theta Example
■ Example: f(n) = n² - 5n + 13.
■ The constant 13 doesn't change as n grows, so it is not crucial. The low-order term, -5n, doesn't have much effect on f compared to the quadratic term, n².
■ Q: What does it mean to say f(n) = Θ(g(n))?
■ A: Intuitively, it means that the function f is the same order of magnitude as g; here, f(n) = Θ(n²).
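The "same order of magnitude" intuition can be seen numerically in this sketch: the ratio f(n)/n² tends to 1, so f is sandwiched between constant multiples of n², as the Θ definition requires:

```python
def f(n):
    """The slide's example function: n^2 - 5n + 13."""
    return n * n - 5 * n + 13

# f(n) / n^2 approaches 1 as n grows, so f(n) = Theta(n^2):
# for large n, f is trapped between c1*n^2 and c2*n^2.
ratios = [f(n) / (n * n) for n in (10, 100, 10_000)]
```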
Space-time tradeoff
• In computer science, a space-time (or time-memory) tradeoff is a way of solving a problem or calculation in less time by using more storage space (memory), or of solving a problem in very little space by spending a long time.
• For example, a dynamic-programming-based algorithm may use extra space to store intermediate results in order to reduce time (improve efficiency).
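A minimal sketch of the dynamic-programming example, using memoized Fibonacci (a standard illustration I am supplying, not one from the slides):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # the cache is the "extra space" being traded
def fib(n):
    """nth Fibonacci number via memoized recursion."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Storing intermediate results costs O(n) extra space but cuts the
# running time from exponential (naive recursion) down to O(n).
```

Without the cache, `fib(n)` recomputes the same subproblems exponentially many times; with it, each subproblem is solved once, which is exactly the tradeoff the slide describes.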
