Space Complexity


ADDIS ABABA UNIVERSITY

Addis Ababa Institute of Technology

School of Information Technology and Engineering


Fundamentals of Data Structures and Algorithm Analysis Group
Assignment

Section: 2

Group Members:
1. Anansi Sime : UGR/9691/15
2. Meron Sisay : UGR/0752/15
3. Selamawit Shimeles : UGR/8982/15
4. Tsion Shimelis : UGR/0654/15
5. Yabtsega Kinfe : UGR/2887/15
6. Yedi Worku : UGR/1035/15
7. Yordanos Abay : UGR/0919/15

Submitted to: Mr. Yared Y.


Submission date: April 1, 2024
Introduction
Efficiency in computer science refers to the ability of a system to perform tasks quickly and
with minimal resource consumption. It encompasses both time complexity and space
complexity considerations. Time complexity quantifies the amount of time taken by an
algorithm to execute, based on the input size. Understanding time complexity helps us predict
how an algorithm’s performance will scale as the input size grows. But efficiency in
programming is not solely about how fast an algorithm can perform a task; it also
encompasses how effectively it manages memory resources. Understanding the relationship
between efficiency and space complexity is pivotal in designing algorithms that execute tasks
swiftly while using memory judiciously.
Space complexity refers to the amount of memory required by an algorithm to solve a
computational problem as a function of the input size. It quantifies how much memory the
algorithm needs to allocate and manage during its execution. It encompasses both auxiliary
space, which is additional memory required beyond the input, and input space, which is the
memory needed to store the input data. As algorithms process inputs of varying sizes,
analysing space complexity becomes essential to ensure optimal memory utilization.
Minimizing space complexity is crucial for efficient algorithm design, as it allows programs
to run on machines with limited memory resources and ensures scalability for larger inputs.
Efficiency and space complexity are intertwined in the pursuit of optimal algorithm design.
Algorithms with lower space complexity are generally preferred because they minimize
memory usage. This aspect is particularly crucial in resource-constrained environments such
as embedded systems or mobile devices. Conversely, algorithms with high space complexity
can strain available memory resources, potentially leading to inefficiencies or system failures,
especially when handling large datasets.
In this assignment, we delve into the critical concept of space complexity in algorithms. Our
exploration will cover the following key points:
1. Explanation of Space Complexity:
- Introduction to Space Complexity
- Briefly introduce the concept of space complexity.
- Highlight its importance in evaluating algorithms.
- Mention that space complexity complements time complexity in assessing an algorithm’s
efficiency.
2. Introduce Analytical Framework:
- Present an analytical framework for evaluating space complexity, focusing on the use of
Big O notation.
- Explain how Big O notation sets an upper bound on space requirements, facilitating
comparison and optimization.
3. Illustrative Examples:
- Through examples, demonstrate how space complexity manifests in different scenarios,
showcasing O(1) and O(n) space complexities.
- Analyse memory utilization patterns and discuss the implications for scalability.
- Present a case study involving dynamic programming or a similar optimization technique
to illustrate how space complexity can be enhanced or reduced, transitioning from O(n) to
O(1) or vice versa.
- Provide a step-by-step explanation to illuminate the optimization process and its impact
on memory usage.

Fundamentals of Space Complexity

Space complexity is a fundamental concept in computer science that describes the amount of
memory an algorithm needs to run as a function of the length of its input. It is a measure of
an algorithm's efficiency in terms of memory usage, which includes all the memory the
algorithm requires: the input data and any auxiliary space the algorithm needs for temporary
storage during execution.

Memory Usage while Execution

While executing, an algorithm uses memory space for three reasons:

1. Instruction Space: the amount of memory used to store the compiled version of the
instructions.

2. Environmental Stack: sometimes an algorithm (function) is called inside another
algorithm (function). In such a situation, the current variables are pushed onto the system
stack, where they wait for further execution, and then the call to the inner algorithm
(function) is made.

For example, if a function A() calls function B() inside it, all the variables of function A()
are stored on the system stack temporarily while function B() is called and executed inside
function A().

3. Data Space: the amount of space used by the variables and constants. When calculating
the space complexity of an algorithm, we usually consider only the data space and neglect
the instruction space and environmental stack.

Understanding Auxiliary Space and Space Used by Input


a. Auxiliary Space in Algorithms
Auxiliary space, also known as extra space, is the additional memory used by an algorithm
during execution, aside from input storage. It includes variables, data structures, and
call-stack memory.
Characteristics:
-Variables and Data Structures: Includes memory for algorithm variables and internal data
structures like arrays, lists, trees, and stacks.
-Function Call Stack: Recursive algorithms or function calls may require stack memory for
local variables and execution-related data.
-Internal Operations: Operations like variable swapping or computations need temporary
storage space.
Importance:
-Accurate Space Complexity Analysis: Vital for precise analysis beyond input size.
-Resource Management: Essential for efficient memory use, crucial in memory-constrained
environments.
-Algorithm Optimization: Key to improving space efficiency and performance by minimizing
unnecessary space usage.
For instance, consider the following algorithm to compute the Fibonacci sequence using
recursion:
def Fibonacci(n):
    if n <= 1:
        return n
    else:
        return Fibonacci(n - 1) + Fibonacci(n - 2)
In this recursive Fibonacci algorithm, the auxiliary space includes the memory allocated on
the function call stack to store parameters, return addresses, and local variables during
recursive calls. The depth of the call stack determines the auxiliary space usage, which grows
linearly with the recursion depth.
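As a rough illustration (the instrumentation below is hypothetical and not part of the
algorithm itself), we can track the call-stack depth and observe that it grows linearly with n:

depth = 0
max_depth = 0

def fib_instrumented(n):
    # Same recursion as above, but tracking the call-stack depth
    global depth, max_depth
    depth += 1
    max_depth = max(max_depth, depth)
    result = n if n <= 1 else fib_instrumented(n - 1) + fib_instrumented(n - 2)
    depth -= 1
    return result

fib_instrumented(10)
print(max_depth)  # 10: the maximum depth equals n, so auxiliary space is O(n)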

b. Space Used by Input


-The space required to store input data varies with its data type, format, and structure.
Different data types like integers, floats, characters, and strings have varying memory needs.
The format of input, such as arrays, lists, or files, influences how it's stored in memory.
-Memory allocation for input involves static (fixed-size arrays) or dynamic (resizable arrays,
linked lists) allocations based on algorithm and data requirements. Input may also be sourced
externally from files, databases, or networks, requiring additional resources for access and
storage.
-Understanding input space is crucial for algorithm efficiency, ensuring optimal memory
resource utilization. Efficient allocation and management are vital, particularly for large
datasets or memory-constrained environments. Moreover, considering input space
requirements guides decisions on data structures, formats, and processing techniques.
For instance,
def array_sum(arr):
    total = 0
    for number in arr:
        total += number
    return total

input_array = [1, 2, 3, 4, 5]
print(array_sum(input_array))
In this example
- The space used by input includes the memory required to store the input array `input_array`.
- The input space complexity is proportional to the size of the input array and the memory
required to store each element.
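As a rough check (Python-specific; note that sys.getsizeof reports only the list object
itself, not the integer objects it references), the memory of the input container grows
roughly linearly with the number of elements:

import sys

for n in (10, 100, 1000):
    # The list's own footprint grows roughly linearly with n
    print(n, sys.getsizeof(list(range(n))))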

Space Complexity = Auxiliary Space + Input Space

Calculating the Space Complexity

For calculating the space complexity, we need to know the amount of memory used by
variables of different data types. This generally varies across operating systems and
compilers, but the method for calculating the space complexity remains the same. Typical
sizes are:

bool, char, unsigned char, signed char, __int8             1 byte

__int16, short, unsigned short, wchar_t, __wchar_t         2 bytes

float, __int32, int, unsigned int, long, unsigned long     4 bytes

double, __int64, long double, long long                    8 bytes

For example, an algorithm that uses two int variables and one double needs 4 + 4 + 8 = 16
bytes, a constant amount of memory, so its data-space contribution is O(1).

The analytical framework for evaluating space complexity


The analytical framework for evaluating space complexity provides a systematic approach to
understanding how an algorithm's memory usage grows with the size of its input. Big O
notation is a key tool within this framework, offering a concise way to express the upper
bound or worst-case scenario of an algorithm's space usage.
-Big O Notation: Big O notation is a mathematical notation used in computer science to
describe the performance or complexity of algorithms. It specifically characterizes the upper
bound or worst-case scenario of an algorithm's time or space usage as the size of the input
approaches infinity. This notation is crucial for analysing and comparing the efficiency of
different algorithms in solving computational problems.
-Big O notation, written as O(f(n)), gives a simplified way to describe how an algorithm's
performance or resource usage changes as the input size grows. It shows the maximum
growth rate of a function f(n), where n represents the input size.
Asymptotic Analysis: Big O focuses on how an algorithm behaves as the input size becomes
very large. It ignores small details and looks only at the dominant factor that affects
performance. This helps us understand the algorithm's behaviour for very large inputs.
Upper Bound: Big O notation gives an upper limit on how much time or space an algorithm
might need for any input of size n or larger. It guarantees that the algorithm won't need more
resources than this limit as the input size grows.
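Formally, f(n) = O(g(n)) means that there exist constants c > 0 and n0 such that
f(n) <= c * g(n) for all n >= n0. For space complexity, f(n) is the amount of memory the
algorithm uses on inputs of size n.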

Space Complexity Notations

Space complexity in Big O notation measures the amount of memory used by an algorithm
with respect to the size of its input. It represents the worst-case memory consumption as the
input size increases. The common notations are:
1. O(1) — Constant Space: The algorithm uses a fixed amount of memory that does not
depend on the input size.

Example 1:

public static int SumToNOne(int num) {
    int sum = 0;
    for (int i = 0; i <= num; i++) {
        sum += i;
    }
    return sum;
}

• Space complexity -> O(1). The space used by the variables sum and i is constant with
respect to the input (num).

Example 2:
int a = 0, b = 0;

for (int i = 0; i <= N; i++) {
    a = a + 5;
    for (int j = 0; j <= M; j++) {
        b = b + 6;
    }
}

• Space complexity -> O(1). The space used by a, b, i, and j is constant with respect to
the inputs (N, M).

2. O(n) — Linear Space: The algorithm’s memory usage grows linearly with the input size.

Example
// n is the length of array arr[]
int sum(int arr[], int n)
{
    int sum = 0;                 // 4 bytes for sum
    for (int i = 0; i < n; i++)  // 4 bytes for i
    {
        sum = sum + arr[i];
    }
    return sum;
}
• In the above example, we need 4*n bytes of space to store the n elements of the array.
• We need 4 bytes each for sum, n, i, and the return value.
So the total amount of memory is (4n + 16) bytes, which increases linearly with the input
size n. This is called linear space complexity: the array term dominates, so the space
complexity is O(n). A lone loop variable such as i, by contrast, only ever needs a constant
amount of space (one word).

3. O(n²) — Quadratic Space: The algorithm’s memory usage increases proportionally to the
square of the input size.

Example

def find_pair(A, N, Z):
    # Precompute an N x N table of all pairwise sums; storing this
    # table is what makes the space usage quadratic.
    sums = [[A[i] + A[j] for j in range(N)] for i in range(N)]
    for i in range(N):
        for j in range(N):
            if i != j and sums[i][j] == Z:
                return True
    return False

-In this algorithm we first build an N x N table of pairwise sums and then iterate through it
with nested loops, checking each pair of elements. If we find a pair (with distinct indices)
whose sum equals Z, we return True; otherwise, we return False.

-The loop variables i and j only need constant space, but the table holds all N^2 pairwise
sums at once. The memory usage therefore grows with the square of the input size, resulting
in O(N^2) space complexity.

Importance of Big O Notation


Algorithm Selection: Big O notation helps in selecting the most efficient algorithm for a
given problem by comparing their complexities.
Performance Analysis: It provides insights into how an algorithm's performance scales with
increasing input size, aiding in performance optimization.
Resource Management: For space complexity analysis, Big O notation helps in estimating
the memory requirements of algorithms, aiding in resource allocation and management.
In conclusion, Big O notation is a powerful tool in computer science for analysing and
comparing the efficiency of algorithms. It provides a concise and standardized way to
describe the worst-case behaviours of algorithms in terms of time or space usage, aiding in
algorithm selection, performance analysis, and resource management.
Utilization of Big O Notation: When analysing space complexity, Big O notation helps
identify dominant factors affecting memory usage. It focuses on the most significant terms in
the function describing the algorithm's space requirements.

- Example: Creating a two-dimensional array of size n*n to represent a matrix, where each
element occupies space. This results in O(n^2) space complexity.
matrix = [[0] * n for _ in range(n)]

Optimizing space complexity

Space complexity is not the same for all programming languages. Space complexity depends
on various factors, including the programming language, the implementation of data
structures and algorithms, and the underlying runtime environment. Below we will see how
each of these factors affects the space complexity of a program.

Choosing the right algorithm and the right data structure can dramatically affect the
performance of our code (time and memory). In this paragraph we will talk a lot about how it
affects our memory. The space complexity helps to determine the efficiency and scalability
of a solution, and it is an important factor to consider when choosing a data structure or
designing an algorithm.

By understanding these space complexities, programmers can make informed decisions about
which data structures to use in their software. Careful selection can help optimize
memory usage and ensure the smooth operation of critical systems within the available
memory. An algorithmic paradigm is a general approach or strategy for solving a
class of problems, which defines the main idea or concept behind an algorithm without
specifying the exact details or implementation. Algorithm design paradigms can significantly
influence space usage in a program. Here's how some common paradigms impact space
complexity:
-Divide and conquer involves breaking down a large problem into smaller and simpler sub
problems, solving them recursively, and combining the solutions to get the final answer. For
example, to sort a given list of n natural numbers, split it into two lists of about n/2 numbers
each, sorts each of them in turn, and interleave both results appropriately to obtain the sorted
version of the given list. This approach is known as the merge sort algorithm. Divide and
conquer often requires storing solutions to sub-problems before combining them. Merge Sort,
for example, uses O(n) extra space for its temporary arrays, plus O(log n) stack space for
the recursion.

-Dynamic Programming is a method used in mathematics and computer science to solve
complex problems by breaking them down into simpler sub-problems. By solving each sub-
problem only once and storing the results, it avoids redundant computations, leading to more
efficient solutions for a wide range of problems. For instance, it can compute the Fibonacci
sequence with linear, O(n), space complexity, where n is the required series length. However,
storing all of the intermediate solutions for a problem with many distinct sub-problems can
require a lot of space in dynamic programming.
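A minimal sketch of this trade-off (function names are illustrative): the bottom-up version
stores every intermediate result, giving O(n) auxiliary space, while keeping only the last
two values reduces it to O(1):

def fib_dp(n):
    # Bottom-up dynamic programming: the table stores all n + 1
    # intermediate results, so auxiliary space is O(n)
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

def fib_constant(n):
    # Only the last two values are kept, so auxiliary space is O(1)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_dp(10), fib_constant(10))  # 55 55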

The other factors affecting space utilization in a program are programming languages and
runtime environment. Different programming languages provide varying levels of low-level
control, different standard libraries, and different language features, which can affect the
efficiency and complexity of algorithms implemented in those languages.

It's important to consider the specific characteristics and performance considerations of the
programming language being used when analysing time and space complexity. Additionally,
different implementations and optimizations within a programming language can also impact
the efficiency and complexity of algorithms. The runtime environment is the environment in
which a program or application is executed. It's the hardware and software infrastructure that
supports the running of a particular codebase in real time.

The environment where your program runs can significantly impact how efficiently it uses
memory. Factors like virtual memory versus physical memory, automatic garbage collection,
and pre-loaded libraries can all influence space usage. Even the operating system's memory
management can play a role. By understanding these environmental factors and writing
memory-conscious code, you can ensure your program gets the most out of its available
space.
Case study

-Choose a simple algorithm with O(1) space complexity and elucidate its characteristics.
Provide a detailed explanation of why its space complexity remains constant irrespective of
input size.

Selection sort arranges an array's elements in ascending or descending order without extra
memory allocation, maintaining a constant space complexity of O(1). It iteratively selects the
smallest (or largest) element from the unsorted part and swaps it with the first (or last)
element in the sorted portion. This process reduces the unsorted segment while expanding the
sorted one. It operates solely on the input array, using a fixed number of variables for
comparisons and swaps, making it space-efficient for large datasets with limited memory
resources.
Illustrative Example:
Consider an array of integers: [5, 2, 9, 1, 5, 6]. Initiating the algorithm, we observe the
following progression:
1. Initial State: Unsorted: [5, 2, 9, 1, 5, 6], Sorted: []
2. First Pass: Identify smallest (1) and swap with first element (5).
Updated array: [1, 2, 9, 5, 5, 6]
3. Second Pass: Smallest in remaining unsorted is 2. Swap with second element (2). No
change.
4. Third Pass: Identify smallest (5) and swap with third element (9).
Updated array: [1, 2, 5, 9, 5, 6]
5. Fourth Pass: Identify smallest (5) and swap with fourth element (9).
Updated array: [1, 2, 5, 5, 9, 6]
6. Fifth Pass: Identify smallest (6) and swap with fifth element (9).
Updated array: [1, 2, 5, 5, 6, 9]
The sorted array: [1, 2, 5, 5, 6, 9].
def selection_sort(arr):
    n = len(arr)
    for i in range(n):
        # Find the index of the minimum element in the remaining unsorted array
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap the found minimum element with the first unsorted element
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
        # Print the array after each pass (optional, for visualization)
        print(f"After pass {i+1}: {arr}")
    return arr

# Given array
arr = [5, 2, 9, 1, 5, 6]

# Initial state
print("Initial State:", arr)

# Sorting using selection sort
sorted_arr = selection_sort(arr)

# Sorted array
print("The sorted array:", sorted_arr)

The space complexity of selection sort remains constant at O(1) as it operates solely on the
input array, avoiding additional data structures. It conducts in-place sorting by rearranging
elements within the existing array, eliminating the need for extra memory allocation.
Additionally, the algorithm employs a fixed number of variables for comparisons and swaps,
regardless of input size, ensuring consistent space usage. This optimization makes selection
sort suitable for sorting large datasets efficiently with limited memory resources.

-Select a moderately complex algorithm with O(n) space complexity and analyse its
memory utilization pattern. Discuss the factors contributing to its linear space complexity
and its implications for scalability.

Merge Sort:
Merge Sort, a comparison-based algorithm, divides an array into halves, sorts them
recursively, and merges them, with O(n) space complexity. Its divide-and-conquer approach
simplifies complex sorting tasks. Recursion divides the array until single-element subarrays
remain, which costs memory for maintaining the call stack. During merging, temporary arrays
store the sorted elements, contributing the O(n) term. Merge Sort is stable: it preserves the
relative order of equal elements. Its memory usage pattern involves stack space proportional
to the recursion depth, O(log n), and temporary arrays sized to the input, O(n), so the linear
term from merging dominates. While efficient for large datasets, Merge Sort's memory usage
can hinder scalability on memory-limited systems. Optimizations like in-place merging or
iterative (bottom-up) approaches can mitigate memory demands but may complicate the
implementation.

Let's illustrate Merge Sort with a simple example:

Consider an array of integers: [7, 2, 5, 3, 9, 1, 6, 8]

Step 1: Divide
The array is recursively divided into halves until each subarray contains only one element.

[7, 2, 5, 3, 9, 1, 6, 8]

[7, 2, 5, 3] [9, 1, 6, 8]
↓ ↓
[7, 2] [5, 3] [9, 1] [6, 8]
↓ ↓ ↓ ↓
[7] [2] [5] [3] [9] [1] [6] [8]

Step 2: Merge
The sorted subarrays are merged back together while maintaining the sorted order.
[7, 2] [5, 3] [9, 1] [6, 8]
↓ ↓ ↓ ↓
[2, 7] [3, 5] [1, 9] [6, 8]
↓ ↓ ↓ ↓
[2, 3, 5, 7] [1, 6, 8, 9]
↓ ↓
[1, 2, 3, 5, 6, 7, 8, 9]

Memory Utilization:
During divide, memory is allocated for the call stack with a max depth of log₂(n), where n is
the array size. In merging, temporary arrays of size n are created, contributing to O(n) space
complexity.

Overall Memory Utilization:


Divide contributes O(log n) due to the call stack, while merging adds O(n) due to temporary
arrays. Thus, Merge Sort's space complexity is O(n), suitable for moderately complex
algorithms.

def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    # Divide the array into halves
    mid = len(arr) // 2
    left_half = arr[:mid]
    right_half = arr[mid:]

    # Recursively sort the halves
    left_half = merge_sort(left_half)
    right_half = merge_sort(right_half)

    # Merge the sorted halves
    return merge(left_half, right_half)

def merge(left, right):
    merged = []
    i = j = 0

    # Merge the two sorted arrays
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1

    # Append remaining elements
    merged.extend(left[i:])
    merged.extend(right[j:])

    return merged

# Example usage
arr = [7, 2, 5, 3, 9, 1, 6, 8]
sorted_arr = merge_sort(arr)
print("Sorted Array:", sorted_arr)

Space Complexity in Data Structures


- Analysis of space complexity in common data structures (arrays, linked lists, trees, etc.).
- Strategies for minimizing space consumption in data structure implementations.

Space complexity refers to the amount of memory a data structure consumes during its
operations. It's crucial alongside time complexity for efficient program design. We express
space complexity using Big O notation.

Here's a breakdown of common data structures and their space complexities:

• Arrays: Arrays offer constant time access (O(1)) for any element using indexing.
However, their space complexity is directly tied to their size. They require contiguous
memory allocation to store all elements, resulting in a space complexity of O(n),
where n is the number of elements in the array. The size of the array directly affects
its space complexity. More elements require more memory allocation.
Example:
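A minimal sketch of such an example (the list name numbers and the loop body are
illustrative):

n = 10
numbers = [0] * n        # storage for n elements -> O(n) space
for i in range(n):
    numbers[i] = i * 2   # filling the array uses no extra memory beyond the n slots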

As the sketch shows, the space complexity is dominated by the array numbers. Since its size
depends on n, the space complexity is O(n): the memory usage grows linearly with the number
of elements in the array. If we change n from 10 to 24, the memory usage grows accordingly.
• Space complexity of two-dimensional array:
Similar to single-dimensional arrays, multidimensional arrays also have a space
complexity related to the number of elements they store. However, for
multidimensional arrays, space complexity is the product of the space complexities of
each dimension.

Formula: O(n1 * n2 * ... * nk), where n1, n2, ..., nk represent the sizes of each
dimension (k dimensions).

Let’s clarify using an example:
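A minimal sketch matching the description below (the name matrix is illustrative):

rows, cols = 3, 4
# Memory is allocated for all rows * cols elements -> O(rows * cols) space
matrix = [[0] * cols for _ in range(rows)]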

• In the above example the code creates a 2D array named matrix with 3 rows (first
dimension) and 4 columns (second dimension).
Here's the key point: memory is allocated for all elements of the array. Since it's a 2D
array, the total space used depends on both the number of rows and columns.
The space used by this 2D array is proportional to 3 * 4 = 12 elements, because the memory
required is the product of the number of rows (3) and the number of columns (4).
Generally, the concept extends to arrays with more dimensions. A 3D array with
dimensions (m x n x p) would have a space complexity of O(m * n * p).
Each dimension contributes to the overall space used by the multidimensional array.

• Linked List Space complexity: Linked lists don't have fixed sizes like arrays.
Each node stores data and a reference (pointer) to the next node. This dynamic
allocation allows for insertions and deletions at any point. However, space
complexity is O(n) because each node uses memory to store data and the pointer.
Additionally, random access is inefficient (O(n)) as you need to traverse the list to
find a specific element.
Example:
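A minimal Python sketch of a singly linked list (class and method names are illustrative):

class Node:
    def __init__(self, data):
        self.data = data   # the stored value
        self.next = None   # reference to the next node

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, data):
        # Each insertion allocates exactly one new node
        node = Node(data)
        node.next = self.head
        self.head = node

lst = LinkedList()
for value in range(5):   # n = 5 nodes -> O(n) space
    lst.push_front(value)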

• Each node in the linked list uses a constant amount of space to store its data and the
reference (next). As we add n nodes to the list, the total space complexity grows linearly
with n, because the number of nodes directly determines the amount of memory used by the
linked list.

Let’s see the difference between the space complexity of singly and doubly linked lists:

Both singly and doubly linked lists have a space complexity of O(n), but doubly
linked lists use a constant amount of extra space per node due to the additional "prev"
pointer.

In the above example, each node in a singly linked list stores two pieces of data:
o The actual data value (an int in this example).
o A reference (next) to the next node in the list. This reference typically occupies the
same amount of space as a pointer (e.g., 4 bytes on a 32-bit system).

• As you add more nodes (n) to the list, the total space used increases linearly. Each
additional node contributes a constant amount of space for its data and the next
reference.

For doubly linked list:

Each node in a doubly linked list stores three pieces of data:


o The data value (int in this example).
o A reference (next) to the next node.
o A reference (prev) to the previous node.
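A minimal sketch of such a node (names are illustrative):

class DoublyNode:
    def __init__(self, data):
        self.data = data   # stored value
        self.next = None   # reference to the next node
        self.prev = None   # extra reference: a constant overhead per node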

The space complexity is still O(n) because the memory usage is still directly
proportional to the number of nodes. However, there is a slight difference from singly
linked lists: the additional prev reference adds a constant overhead per node. This
overhead doesn't change the overall O(n) space complexity, but it does mean a doubly
linked list uses slightly more memory per node than a singly linked list.

• Space Complexity of Stacks and Queues:


These are often implemented using arrays or linked lists, so their space complexity is
inherited from the underlying implementation. For example, a stack on an array has O(n)
space complexity tied to the array's capacity, while a stack on a linked list has O(n)
space complexity due to the nodes.

Stacks:
Stacks follow a Last-In-First-Out (LIFO) principle. They typically have two main
implementation approaches:

1. Array-Based Stack: Uses a fixed-size array to store elements.


2. Linked List-Based Stack: Uses a linked list to dynamically add and remove elements.

Space Complexity:

1. Array-Based Stack: O(n)


o The entire array needs to be allocated upfront, even if not fully used.
o The space complexity is directly tied to the array size (n), regardless of the
number of elements currently in the stack.
2. Linked List-Based Stack: O(n)
Each node in the linked list uses a constant amount of space to store data and a reference
to the next node. The space complexity is still O(n) because the total memory usage grows
linearly with the number of elements in the stack (n).

Linked list-based stack example:
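A minimal sketch (class and method names are illustrative):

class StackNode:
    def __init__(self, data):
        self.data = data
        self.next = None

class Stack:
    def __init__(self):
        self.top = None

    def push(self, data):
        # One new node per pushed element -> O(n) total space
        node = StackNode(data)
        node.next = self.top
        self.top = node

    def pop(self):
        if self.top is None:
            raise IndexError("pop from empty stack")
        data = self.top.data
        self.top = self.top.next
        return data

s = Stack()
for value in [1, 2, 3]:
    s.push(value)
print(s.pop())  # 3 (LIFO)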

Queues: Queues follow a First-In-First-Out (FIFO) principle. Similar to stacks, they can be
implemented using arrays or linked lists.

Space Complexity:
1. Array-Based Queue: O(n) (similar to array-based stacks)
o The entire array needs to be allocated upfront, even if not fully used.
o The space complexity is O(n), tied to the array size, regardless of the number
of elements in the queue.
2. Linked List-Based Queue: O(n)
Each node in the linked list uses a constant amount of space for data and a reference.
The space complexity is O(n) as memory usage grows linearly with the number of
elements in the queue.
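A minimal sketch using Python's standard collections.deque, a common way to get a FIFO
queue:

from collections import deque

queue = deque()
for value in [1, 2, 3]:   # n enqueued elements -> O(n) space
    queue.append(value)
print(queue.popleft())    # 1 (FIFO)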

Space complexity of Trees: Trees have a hierarchical structure with nodes containing data
and references to child nodes. Storing n nodes always takes O(n) space, whatever the shape
of the tree. What the shape changes is the auxiliary space of recursive operations, which is
proportional to the tree's height: roughly O(log n) for a balanced tree, but as bad as O(n)
for a skewed tree (where most nodes lean to one side).

• Balanced Tree (e.g., AVL Tree)

A balanced tree has roughly the same number of nodes on each level. This ensures
efficient searching, insertion, and deletion operations. Common balanced trees include
AVL trees and red-black trees.
Space for n nodes: O(n); height (and hence recursion stack space): O(log n)
• Skewed Tree:
A skewed tree has most nodes leaning to one side, resulting in uneven distribution and
inefficient operations.
Space for n nodes: O(n); height (and hence recursion stack space): O(n)

So what is the difference between a balanced tree and a skewed tree?

In a balanced tree the height is O(log n), so recursive operations use only O(log n) stack
space. In a skewed tree the height is O(n), so the same operations need O(n) stack space in
all cases.

Imagine two trees, both with 10 nodes.

Balanced Tree: In a well-balanced scenario, the tree might have 4 levels with approximately
2-3 nodes on each level. The stack space needed to walk from the root to a leaf would be
close to O(log 10) (around 4 levels in this simplified example).

Skewed Tree: Here, all 10 nodes might lie on a single path, resulting in a very deep tree
with one node per level. Walking from the root to the leaf then takes 10 steps, which
generalizes to O(n).
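A minimal sketch contrasting the two shapes (a plain binary search tree with no rebalancing;
names are illustrative). Inserting keys in a mixed order yields a short tree, while inserting
them in sorted order yields a chain:

class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # Plain BST insertion with no rebalancing
    if root is None:
        return TreeNode(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def height(root):
    # Height bounds the stack space of recursive tree operations
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

balanced = None
for v in [5, 3, 8, 2, 4, 7, 9]:   # mixed order -> roughly balanced
    balanced = insert(balanced, v)

skewed = None
for v in range(1, 8):             # sorted order -> chain
    skewed = insert(skewed, v)

print(height(balanced))  # 3, close to log2(7)
print(height(skewed))    # 7, equal to n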

Practical application of space complexity

Space complexity plays a crucial role in various real-world scenarios, especially in systems
where memory resources are limited or expensive. Understanding and managing space
complexity is vital to ensure the efficient operation of these systems. Some practical
applications include:
-Embedded systems: devices like microcontrollers in embedded systems often have very
limited memory. Algorithms used in such systems must be optimized for space to ensure they
fit within the constraints of the hardware.

-Mobile applications: mobile devices, though increasingly powerful, still have limitations in
terms of memory, especially when multiple applications run simultaneously. Space-efficient
algorithms are essential to avoid exhausting the device’s memory and to ensure smooth
application performance.

-Big data applications: in big data scenarios, where the volume of data is enormous, even
algorithms with linear space complexity can become impractical. Algorithms in these
contexts need to be especially mindful of space usage to handle large datasets effectively.

-Browser applications: web applications running in browsers have to operate within the
memory constraints of the browser and the underlying devices. Optimizing algorithms for
space can lead to faster and more responsive web applications.

Conclusion
Space complexity is a measure of how much memory or space an algorithm needs to execute.
It is important to consider space complexity when dealing with large datasets or limited
memory resources. The space complexity of an algorithm is typically expressed in terms of
the amount of memory it uses relative to the size of its input.
In resource-constrained environments like mobile devices or embedded systems, space
efficiency is vital. These devices often have limited memory capacities, necessitating space-
efficient programming to ensure optimal performance, minimize power consumption, and
provide a smooth user experience.
Optimizing space usage improves performance. Efficient memory utilization reduces time
spent on memory allocation, deallocation, and garbage collection. This results in faster
execution, reduced latency, and improved overall responsiveness. Space efficiency is
especially crucial in performance-critical applications like real-time systems or high-
throughput data processing.
Considering the importance of space efficiency is vital for building robust and efficient
software solutions in today’s data-intensive and resource-constrained environments. By
emphasizing optimized memory utilization, developers can create lean, scalable, and high-
performing applications.
