Comprehensive Guide to Algorithm Analysis
Introduction to Algorithm Analysis
In this presentation, we explore algorithm analysis, a crucial aspect of computer science. Understanding how to evaluate the efficiency and complexity of algorithms helps in optimizing performance and resource usage. Let's embark on this journey to decode complexity!
Types of Complexity
Algorithms are analyzed along two main dimensions: time complexity and space complexity, both of which are covered in the sections that follow.
Asymptotic Notation
Asymptotic notations like Big O (O), Omega (Ω), and Theta (Θ) describe the growth rate of an algorithm's time complexity.
• Big O (O) gives an upper bound on the growth rate.
• Omega (Ω) gives a lower bound on the growth rate.
• Theta (Θ) provides a tight bound, applying when the algorithm's best and worst cases grow at the same rate.
Definition of Big O Notation
Big O Notation is a mathematical expression that describes the upper bound
of an algorithm's time complexity. It helps analyze how the runtime of an
algorithm grows as the input size increases.
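For completeness, the standard formal statement (not on the original slide) can be written as:

f(n) = O(g(n)) means there exist constants c > 0 and n0 ≥ 1 such that f(n) ≤ c · g(n) for all n ≥ n0.

In other words, beyond some input size, a constant multiple of g(n) dominates f(n).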
Complexity Classes

Adjective      O-Notation
Constant       O(1)
Logarithmic    O(log n)
Linear         O(n)
n log n        O(n log n)
Quadratic      O(n^2)
Cubic          O(n^3)
Exponential    O(2^n)
Exponential    O(10^n)
Constant Time Complexity (O(1))
1. Execution time is independent of the input size, taking the same amount of time regardless of how large the input is.
2. Examples include looking up an element in an array by index.
Code

// Returns the first element of the array.
// Runs in O(1): one operation regardless of the array's size.
int getFirstElement(int arr[])
{
    return arr[0];
}
Linear Time Complexity (O(n))
The runtime grows directly proportional to the input size, making it a common and important complexity class. Algorithms with linear complexity complete their tasks by performing a fixed number of operations per input element.
Code
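The original listing is not preserved here, so the following is an illustrative sketch (the sumArray name is an assumption, not from the slides): summing an array performs one addition per element, giving O(n).

// Sums all n elements: one operation per element, so O(n).
int sumArray(int arr[], int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
    {
        sum += arr[i]; // executed exactly n times
    }
    return sum;
}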
Nested Loops
Nested loops are a common cause of quadratic complexity, as the
algorithm performs an operation for each pair of elements.
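As an illustrative sketch (not from the original slides), counting equal pairs with two nested loops performs about n(n - 1)/2 comparisons, which is O(n^2):

// Counts pairs (i, j) with i < j and arr[i] == arr[j].
int countEqualPairs(int arr[], int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)          // runs n times
    {
        for (int j = i + 1; j < n; j++)  // runs n - i - 1 times
        {
            if (arr[i] == arr[j])
                count++;
        }
    }
    return count; // total comparisons: n(n - 1)/2, i.e. O(n^2)
}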
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!)
Conclusion
• Big O Notation is a powerful tool for understanding and comparing the
efficiency of algorithms.
• By quantifying time complexity, it enables developers to make informed
choices when designing and optimizing their code.
Space & Time Complexity: Understanding the Big Picture
Time Complexity
Time complexity is a measure of the amount of time an algorithm takes to complete as a function of the input size. It helps predict
how the runtime of an algorithm grows as the size of the input data increases. Time complexity is typically expressed using Big-O
notation, which captures the algorithm's growth rate. Common time complexities include:
• O(1): Constant time - the algorithm's runtime does not depend on input size.
• O(log n): Logarithmic time - the algorithm's runtime grows logarithmically as the input size increases.
• O(n): Linear time - runtime grows proportionally with input size.
• O(n^2): Quadratic time - runtime grows proportionally with the square of the input size.
• O(2^n): Exponential time - runtime doubles with each additional input element.
Example: In a loop that iterates through each element of an array, the time complexity is O(n) because the runtime increases linearly with the input size.
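To make the logarithmic class listed above concrete, here is a standard iterative binary search sketch (illustrative; it assumes the array is sorted in ascending order):

// Returns the index of target in a sorted array, or -1 if absent.
int binarySearch(int arr[], int n, int target)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2; // midpoint, written to avoid overflow
        if (arr[mid] == target)
            return mid;
        else if (arr[mid] < target)
            low = mid + 1;   // discard the lower half
        else
            high = mid - 1;  // discard the upper half
    }
    return -1; // halving the range each step gives O(log n)
}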
Space Complexity
Space complexity measures the amount of memory an algorithm needs to run as a function of the input size. It
accounts for both the memory needed to store the input data and any additional memory (auxiliary space)
required by the algorithm during its execution. Space complexity is also expressed in Big-O notation.
• Fixed Space (Constant Space): Memory needed for constants, variables, and program code, which doesn't
depend on input size.
• Variable Space: Memory that depends on the input size, like data structures, recursive calls, or temporary
storage.
Example: In Bubble Sort, which sorts an array in-place, the space complexity is O(1) because it requires only a small, constant amount of memory for auxiliary variables, regardless of the input size.
1. Outer Loop (Variable i):
• The outer loop runs with i from 0 while i < n - 1, which means it executes n - 1 times.
• This loop keeps track of the number of passes needed to ensure the largest unsorted element bubbles up to its correct position.
2. Inner Loop (Variable j):
• On each pass of the outer loop, the inner loop executes n - i - 1 times.
• The inner loop is responsible for comparing adjacent elements and swapping them if they're out of order.
Comparisons:
• The total number of comparisons in Bubble Sort can be calculated by summing up the number of times the inner loop executes for each iteration of the outer loop: (n - 1) + (n - 2) + (n - 3) + ... + 1 = n(n - 1)/2.
• Using Big-O notation, this simplifies to O(n^2).
• Worst Case (Array is in reverse order): O(n^2), as all elements must be compared and swapped.
• Average Case: O(n^2), since there will generally be many swaps required.
• Best Case (Array is already sorted): O(n), because the algorithm can terminate early if no swaps occur in a pass, as indicated by a "swapped" flag.
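The exact listing from the original slides is not preserved, so the following is a sketch of the Bubble Sort variant the analysis above assumes, including the early-exit swapped flag:

void bubbleSort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++)          // outer loop: up to n - 1 passes
    {
        int swapped = 0;
        for (int j = 0; j < n - i - 1; j++)  // inner loop: n - i - 1 comparisons
        {
            if (arr[j] > arr[j + 1])
            {
                int temp = arr[j];           // O(1) temporary for the swap
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = 1;
            }
        }
        if (!swapped)
            break; // no swaps in this pass: already sorted, best case O(n)
    }
}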
Input Array:
• The input array arr[] of size n is passed to the bubbleSort function. However, this space does not count toward auxiliary space since it's part of the input.
Loop Variables (i and j):
• The for loops use integer variables i and j for indexing. These require a constant amount of memory, O(1), regardless of the input size.
Temporary Variable (temp):
• A temporary variable temp is used during swaps. This also requires O(1) space.
Auxiliary Space:
• Bubble Sort does not require any additional arrays or data structures. All operations are done directly within the input array.
• Therefore, the auxiliary space complexity of Bubble Sort is O(1), as only a constant amount of extra memory is needed for temporary variables.
Summary of Bubble Sort Complexity
• Worst-case time: O(n^2)
• Average-case time: O(n^2)
• Best-case time: O(n) (early exit via the swapped flag)
• Auxiliary space: O(1) (in-place)
In-place algorithms modify the input data directly without requiring extra memory proportional to the input size. They generally use a constant amount of additional memory, typically O(1), or sometimes O(log n) for recursive algorithms. In-place algorithms are efficient in terms of memory usage, as they work by rearranging elements within the original input structure.
Algorithms that require additional memory use extra space proportional to the input size, usually in the form of temporary data
structures like arrays, lists, or recursion stacks. These algorithms do not modify the input data in place and thus may be preferred
when the original data needs to be preserved or when the algorithm itself inherently requires additional memory.
Common examples include:
1. Merge Sort (see the merge-step sketch below)
2. Breadth-First Search
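As an illustrative sketch (not from the original slides), the merge step of Merge Sort shows exactly where the extra O(n) memory goes: a temporary buffer holds the merged run before it is copied back.

#include <stdlib.h>
#include <string.h>

// Merges the sorted runs arr[lo..mid] and arr[mid+1..hi].
// Allocates an O(n) temporary buffer; error handling omitted for brevity.
static void merge(int arr[], int lo, int mid, int hi)
{
    int n = hi - lo + 1;
    int *tmp = malloc(n * sizeof *tmp);  // the auxiliary O(n) space
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid)
        tmp[k++] = arr[i++];
    while (j <= hi)
        tmp[k++] = arr[j++];
    memcpy(arr + lo, tmp, n * sizeof *tmp);  // copy the merged run back
    free(tmp);
}

void mergeSort(int arr[], int lo, int hi)
{
    if (lo >= hi)
        return;
    int mid = lo + (hi - lo) / 2;
    mergeSort(arr, lo, mid);      // sort left half
    mergeSort(arr, mid + 1, hi);  // sort right half
    merge(arr, lo, mid, hi);      // O(n) extra space used here
}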
Comparison of In-Place vs. Additional Memory Algorithms
Aspect               In-Place Algorithms                      Additional-Memory Algorithms
Example Algorithms   Quick Sort, Insertion Sort, Heap Sort    Merge Sort, BFS, Dynamic Programming
Choosing Between In-Place and Additional Memory Algorithms
The choice between in-place algorithms and those that require extra memory often depends on the problem's requirements and constraints, such as how much memory is available and whether the original data must be preserved.
By carefully analyzing space complexity and choosing the appropriate algorithm type, we can design solutions that not only conserve
memory but also meet specific requirements, ultimately leading to more efficient and adaptable systems.
Balancing Act: The Pros and Cons of Space and Time Complexities in Algorithm Design
Pros of Time Complexity Optimization