data structures1

The document provides an overview of linear data structures, including their definition, importance, and types such as arrays, linked lists, stacks, and queues. It also introduces Abstract Data Types (ADTs), explaining their operations and examples like List ADT, Stack ADT, and Queue ADT. Additionally, it covers time and space complexity analysis, searching techniques (linear and binary search), and sorting algorithms (bubble sort and selection sort) with their respective complexities.

Data structures

1.1) Definition and Importance of Linear Data Structures

 A linear data structure is a way to store data in sequential order, with each element connected to the next and previous elements.
 Linear data structures are easy to understand and implement.

Importance:
 Linear data structures are the building blocks for more complex algorithms and structures.
 They are used in many applications, from simple data storage to complex problem solving.

Types of Linear Data Structures

There are four types of linear data structures:

1. Array.
2. Linked list.
3. Stack.
4. Queue.

1.2) Abstract Data Types



 In this article, we will learn about Abstract Data Types (ADTs).
But before understanding what an ADT is, let us consider
different built-in data types provided by programming
languages. Data types such as int, float, double, and long are
built-in types that allow us to perform basic operations like
addition, subtraction, division, and multiplication. However,
there are scenarios where we need custom operations for
different data types. These operations are defined based on
specific requirements and are tailored as needed. To address
such needs, we can create data structures along with their
operations, which are known as Abstract Data Types (ADTs).

Abstract Data Type (ADT)



An Abstract Data Type (ADT) is a conceptual model that defines a


set of operations and behaviours for a data type, without
specifying how these operations are implemented or how data is
organized in memory.

For example, we use primitive values like int, float, and char with the understanding that they can be operated on without any knowledge of their implementation details. ADTs operate similarly by defining what operations are possible without detailing their implementation.
Defining ADTs: Examples
Now, let's understand three common ADTs: List ADT, Stack ADT, and Queue ADT.
1. List ADT

View of List

The List ADT needs to store the required data in sequence and should have the following operations:
 get(): Return an element from the list at any given position.
 insert(): Insert an element at any position in the list.
 remove(): Remove the first occurrence of any element from a
non-empty list.
 removeAt(): Remove the element at a specified location from
a non-empty list.
 replace(): Replace an element at any position with another
element.
 size(): Return the number of elements in the list.
 isEmpty(): Return true if the list is empty; otherwise, return
false.
 isFull(): Return true if the list is full; otherwise, return false.
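The operations above can be sketched in Python. This is a minimal, illustrative implementation backed by a Python list; the fixed `capacity` parameter used by isFull() is an assumption added for the sketch, not part of the ADT definition.

```python
class ListADT:
    """A simple array-backed List ADT (illustrative sketch)."""

    def __init__(self, capacity=10):
        self._items = []
        self._capacity = capacity  # assumed fixed capacity, only for isFull()

    def get(self, pos):
        return self._items[pos]              # element at a given position

    def insert(self, pos, element):
        if self.isFull():
            raise OverflowError("list is full")
        self._items.insert(pos, element)     # insert at any position

    def remove(self, element):
        self._items.remove(element)          # remove first occurrence

    def removeAt(self, pos):
        return self._items.pop(pos)          # remove at a specified location

    def replace(self, pos, element):
        self._items[pos] = element           # replace element at a position

    def size(self):
        return len(self._items)

    def isEmpty(self):
        return len(self._items) == 0

    def isFull(self):
        return len(self._items) >= self._capacity
```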
2. Stack ADT
View of stack

In the Stack ADT, the order of insertion and deletion should follow the FILO (First In, Last Out), or equivalently LIFO (Last In, First Out), principle. Elements are inserted and removed from the same end, called the top of the stack. It should also support the following operations:
 push(): Insert an element at one end of the stack called the
top.
 pop(): Remove and return the element at the top of the stack,
if it is not empty.
 peek(): Return the element at the top of the stack without
removing it, if the stack is not empty.
 size(): Return the number of elements in the stack.
 isEmpty(): Return true if the stack is empty; otherwise, return
false.
 isFull(): Return true if the stack is full; otherwise, return false.
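A minimal Python sketch of the Stack ADT, assuming (as with the list sketch) a fixed capacity so that isFull() is meaningful:

```python
class StackADT:
    """LIFO stack backed by a Python list (illustrative sketch)."""

    def __init__(self, capacity=10):
        self._items = []
        self._capacity = capacity  # assumed capacity, only for isFull()

    def push(self, element):
        if self.isFull():
            raise OverflowError("stack is full")
        self._items.append(element)   # insert at the top

    def pop(self):
        if self.isEmpty():
            raise IndexError("pop from empty stack")
        return self._items.pop()      # remove and return the top element

    def peek(self):
        if self.isEmpty():
            raise IndexError("peek at empty stack")
        return self._items[-1]        # top element, without removing it

    def size(self):
        return len(self._items)

    def isEmpty(self):
        return len(self._items) == 0

    def isFull(self):
        return len(self._items) >= self._capacity
```

Note that insertion and removal both happen at the same end of the underlying list, which is what gives the LIFO behaviour.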
3. Queue ADT
View of Queue

The Queue ADT follows a design similar to the Stack ADT, but the order of insertion and deletion changes to FIFO (First In, First Out). Elements are inserted at one end (called the rear) and removed from the other end (called the front). It should support the following operations:
 enqueue(): Insert an element at the end of the queue.
 dequeue(): Remove and return the first element of the queue,
if the queue is not empty.
 peek(): Return the front element of the queue without removing it,
if the queue is not empty.
 size(): Return the number of elements in the queue.
 isEmpty(): Return true if the queue is empty; otherwise,
return false.
1.3) Overview of Time and Space Complexity Analysis for Linear Data Structures

 Time and space complexity analysis for linear data structures measures how much time and memory an algorithm takes to run. It is important for designing software, building websites, and analyzing large datasets.
Time complexity
 The amount of time it takes an algorithm to run.
 The number of operations, like comparisons, required to complete the algorithm.
 The worst-case time complexity is the maximum time it takes for any input.
 The average-case time complexity is the average time it takes for an input.
 The best-case time complexity is the minimum time it takes for an input.
Space complexity
 The amount of memory an algorithm uses.
 The fixed amount of space required by the algorithm.
 The variable amount of space required by the algorithm, which depends on the input size.
Calculating time and space complexity
 Identify the basic operation in the algorithm.
 Count how many times the basic operation is performed.
 Express the count as a function of the input size.
 Simplify the expression and identify the dominant term.
 Express the time complexity using Big O notation.

Example: Linear search algorithm

 The time complexity is O(n), where n is the number of elements in the array.
 The space complexity is O(1), which means it uses a constant amount of extra space.

1.4) Searching Techniques: Linear and Binary Search

Linear Search Algorithm



Given an array, arr of n integers, and an integer element x, find


whether element x is present in the array. Return the index of the
first occurrence of x in the array, or -1 if it doesn’t exist.
Input: arr[] = [1, 2, 3, 4], x = 3
Output: 2

Input: arr[] = [10, 8, 30], x = 6


Output: -1
Explanation: The element to be searched is 6 and it is not
present, so we return -1.

In Linear Search, we iterate over all the elements of the array and
check if the current element is equal to the target element. If
we find any element equal to the target element, we return
the index of that element. Otherwise, if no element is equal to
the target element, we return -1, as the element is not found.
Linear search is also known as sequential search.
For example: Consider the array arr[] = {10, 50, 30, 70, 80,
20, 90, 40} and key = 30
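The procedure described above can be sketched in Python:

```python
def linear_search(arr, x):
    """Return the index of the first occurrence of x in arr, or -1."""
    for i, value in enumerate(arr):
        if value == x:   # current element equals the target
            return i
    return -1            # no element matched the target

# Example from the text: searching for key 30 finds it at index 2.
# linear_search([10, 50, 30, 70, 80, 20, 90, 40], 30) returns 2
```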

Binary Search Algorithm – Iterative and Recursive Implementation

Binary Search is a highly efficient algorithm used to find an element in a sorted


array or list. Unlike linear search, which checks each element sequentially, binary
search works by repeatedly dividing the search interval in half, which makes it
much faster than linear search, especially for large datasets.

How Binary Search Works:

1. Initial Setup: Binary search begins by looking at the middle element of the
sorted array.
2. Comparison:
o If the middle element matches the target value, the search is complete.
o If the middle element is greater than the target, the target must be in
the left half of the array (since the array is sorted). So, the search
continues in the left half.
o If the middle element is less than the target, the target must be in the
right half of the array, and the search continues there.
3. Repeat: This process repeats, halving the search range each time, until the
element is found or the search range is empty (indicating the element is not
in the array).

Binary Search Algorithm

1. Start with the whole array.


2. Calculate the middle index: mid = (low + high) // 2
3. Compare the middle element with the target:
o If arr[mid] == target, return the index.
o If arr[mid] > target, repeat the search in the left half (high
= mid - 1).
o If arr[mid] < target, repeat the search in the right half (low
= mid + 1).
4. Repeat the process until the element is found or the search range becomes
invalid (low > high).

Example:

Given a sorted array: [2, 5, 8, 12, 15, 19, 25, 32, 37, 40] and
the target element 15.

1. Initial Range: low = 0, high = 9 (the indices of the array).


o mid = (0 + 9) // 2 = 4, so arr[mid] = 15.
o arr[mid] == 15, so we return mid = 4.

Time and Space Complexity:

1. Time Complexity:
o Best case: O(1) — The target is found in the first
comparison (if it's the middle element).
o Average case: O(log n) — With each iteration, the
search space is halved, so the time complexity grows logarithmically
with the size of the array.
o Worst case: O(log n) — The algorithm may need to
make log n comparisons to exhaust the search space.
2. Space Complexity:
o Space Complexity: O(1) — Binary search operates in
constant space, as it only requires a few variables to track the low,
high, and mid indices.

Requirements for Binary Search:

 Sorted Array/List: The array must be sorted before applying binary search.
If the array is not sorted, you would need to sort it first, which would take
O(n log n) time.
 Efficient Search Space Reduction: Binary search reduces the search space
by half each time, which is why it is much more efficient than linear search
for large datasets.

Advantages:
 Efficient for Large Datasets: With a time complexity of O(log n),
binary search is very efficient compared to linear search,
especially for large datasets.
 Constant Space: It uses O(1) space, making it very memory-efficient.

Disadvantages:

 Sorted Data Requirement: The array or list must be sorted beforehand.


 Not Efficient for Small Datasets: For small datasets, binary search may not
be worth the overhead compared to simpler algorithms like linear search.



1.5) Sorting Techniques: Bubble Sort, Selection Sort, Insertion Sort
Bubble Sort Algorithm



Bubble Sort is a simple comparison-based sorting algorithm in computer science.


It is named "bubble sort" because the smaller elements "bubble" to the top
(beginning of the array) with each pass through the list.

How Bubble Sort Works:

1. Iterate through the list: Starting at the first element, compare the current
element with the next element.
2. Swap if necessary: If the current element is greater than the next one (for
ascending order), swap the two elements.
3. Repeat: Continue this process for the entire list. After each pass, the largest
element "bubbles" to the correct position at the end of the list.
4. Optimization: If during a pass, no swaps are made, the list is already sorted,
and the algorithm can terminate early.

Example:

Let's sort an array: [5, 3, 8, 4, 2] in ascending order.

1. First Pass:
o Compare 5 and 3, swap → [3, 5, 8, 4, 2]
o Compare 5 and 8, no swap → [3, 5, 8, 4, 2]
o Compare 8 and 4, swap → [3, 5, 4, 8, 2]
o Compare 8 and 2, swap → [3, 5, 4, 2, 8]
o After the first pass, the largest element (8) is in its correct position at
the end of the list.
2. Second Pass:
o Compare 3 and 5, no swap → [3, 5, 4, 2, 8]
o Compare 5 and 4, swap → [3, 4, 5, 2, 8]
o Compare 5 and 2, swap → [3, 4, 2, 5, 8]
o After the second pass, the second-largest element (5) is in its correct
position.
3. Third Pass:
o Compare 3 and 4, no swap → [3, 4, 2, 5, 8]
o Compare 4 and 2, swap → [3, 2, 4, 5, 8]
o After the third pass, the third-largest element (4) is in its correct
position.
4. Fourth Pass:
o Compare 3 and 2, swap → [2, 3, 4, 5, 8]
o The list is now sorted.

Time Complexity:

 Best case (already sorted array): O(n) — when the algorithm
terminates early due to no swaps.
 Average case: O(n²) — when the array is unsorted.
 Worst case: O(n²) — when the array is sorted in reverse order.

Space Complexity:
 Space Complexity: O(1) — Bubble Sort is an in-place sorting
algorithm, meaning it doesn't require additional storage proportional to the
input size.

Advantages:

 Simple to understand and implement.


 In-place sorting with O(1) extra space.

Disadvantages:

 Inefficient for large datasets due to its O(n²) time complexity.


 Not suitable for large-scale sorting when compared to algorithms like
Merge Sort or Quick Sort.

Selection Sort Algorithm

Selection Sort is a simple and intuitive sorting algorithm. It repeatedly selects the
smallest (or largest) element from the unsorted portion of the array and swaps it
with the element at the beginning of the unsorted portion. This process is repeated
until the entire array is sorted.

How Selection Sort Works:

1. Start with the first element of the list.


2. Find the smallest element in the unsorted part of the list (from the current
element to the last element).
3. Swap the smallest element with the first unsorted element.
4. Move the boundary of the unsorted portion by one position forward (i.e.,
move the left pointer to the next element).
5. Repeat this process until the entire array is sorted.

Example:

Let's walk through Selection Sort on the following array:

[64, 25, 12, 22, 11]

Step-by-step Process:

First Pass:

 The initial array is [64, 25, 12, 22, 11].


 We need to find the smallest element from the entire array.
o Compare 64 with 25, 25 is smaller.
o Compare 25 with 12, 12 is smaller.
o Compare 12 with 22, 12 is still smaller.
o Compare 12 with 11, 11 is smaller.
o The smallest element in this pass is 11.

 Swap 11 with 64 (the first element of the array).


 The array now becomes:

[11, 25, 12, 22, 64]


Second Pass:

 Now, we focus on the unsorted portion of the array: [25, 12, 22, 64].
 Find the smallest element in this subarray:
o Compare 25 with 12, 12 is smaller.
o Compare 12 with 22, 12 is smaller.
o Compare 12 with 64, 12 is still smaller.
o The smallest element is 12.

 Swap 12 with 25 (the first element in this unsorted subarray).


 The array now becomes:

[11, 12, 25, 22, 64]


Third Pass:

 Now, focus on the subarray [25, 22, 64].


 Find the smallest element in this subarray:
o Compare 25 with 22, 22 is smaller.
o Compare 22 with 64, 22 is smaller.
o The smallest element is 22.

 Swap 22 with 25.


 The array now becomes:

[11, 12, 22, 25, 64]


Fourth Pass:

 Focus on the subarray [25, 64].


 Find the smallest element in this subarray:
o 25 is smaller than 64.

 No need to swap since 25 is already in the correct position.


 The array remains:

[11, 12, 22, 25, 64]


Fifth Pass:

 The remaining subarray is just [64], which is already sorted.

Final Sorted Array:

After completing all passes, the array is sorted:

[11, 12, 22, 25, 64]
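The passes traced above can be sketched in Python:

```python
def selection_sort(arr):
    """In-place selection sort."""
    n = len(arr)
    for i in range(n - 1):
        min_index = i
        # Find the smallest element in the unsorted portion arr[i:].
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Swap it with the first element of the unsorted portion.
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

# Example from the text:
# selection_sort([64, 25, 12, 22, 11]) returns [11, 12, 22, 25, 64]
```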

Time and Space Complexity:

1. Time Complexity:
o Best, Average, and Worst Case: O(n²) — Selection Sort
always compares every element with every other element in the
unsorted portion, resulting in a quadratic number of comparisons.
 Best Case: Even if the array is already sorted, Selection Sort
still performs O(n²) comparisons.
 Worst Case: The worst case happens when the array is sorted
in reverse order, which still requires O(n²) comparisons.
2. Space Complexity:
o Space Complexity: O(1) — Selection Sort is an in-place
sorting algorithm, meaning it requires only a constant amount of
additional space for the temporary variable used in swapping.

Advantages of Selection Sort:

 Simplicity: Very easy to understand and implement.


 In-place sorting: It sorts the array without using additional memory (i.e., no
auxiliary array).
 Not affected by the initial order of elements: Unlike algorithms like Bubble
Sort or Insertion Sort, which can perform better if the array is partially
sorted, Selection Sort always performs the same number of comparisons.

Disadvantages of Selection Sort:

 Inefficient for large datasets: Because of its O(n²) time
complexity, it is very slow for large arrays, especially when compared to
more advanced algorithms like Merge Sort or Quick Sort, which have
O(n log n) time complexity.
 Not adaptive: It doesn't improve even when the array is partially sorted.
This makes it inefficient in many practical scenarios.

Insertion Sort in Data Structures with Example

Insertion Sort is a simple comparison-based sorting algorithm that builds the final
sorted array one element at a time. It is much like sorting playing cards in your
hands, where you take one card at a time and place it in the correct position relative
to the cards already sorted.

How Insertion Sort Works:

1. Start with the second element (since a single element is trivially sorted).
2. Compare the current element with the elements in the sorted portion of
the array (to its left).
3. Shift all elements that are greater than the current element one position to
the right to make space for the current element.
4. Insert the current element into its correct position.
5. Repeat this process for all the elements in the array.

Example:

Consider the array: [5, 2, 9, 1, 5, 6]

Step-by-step Process:

Initial Array:
[5, 2, 9, 1, 5, 6]

We will start with the second element (index 1), because the first element is
trivially considered sorted.

First Pass (i = 1):

 Current element: 2
 Compare 2 with 5 (the element to its left).
o 2 < 5, so shift 5 to the right.
o The array now looks like: [5, 5, 9, 1, 5, 6].
 Insert 2 into its correct position: Place 2 at the start.
 The array becomes: [2, 5, 9, 1, 5, 6].

Second Pass (i = 2):

 Current element: 9
 Compare 9 with 5 (the element to its left).
o 9 > 5, no shift needed.
 Insert 9: It's already in the correct position.
 The array remains: [2, 5, 9, 1, 5, 6].

Third Pass (i = 3):

 Current element: 1
 Compare 1 with 9 (shift 9 right).
o The array becomes: [2, 5, 9, 9, 5, 6].
 Compare 1 with 5 (shift 5 right).
o The array becomes: [2, 5, 5, 9, 5, 6].
 Compare 1 with 2 (shift 2 right).
o The array becomes: [2, 2, 5, 9, 5, 6].
 Insert 1 at the start.
 The array becomes: [1, 2, 5, 9, 5, 6].
Fourth Pass (i = 4):

 Current element: 5
 Compare 5 with 9 (shift 9 right).
o The array becomes: [1, 2, 5, 9, 9, 6].
 Compare 5 with 5 (no shift needed).
 Insert 5 after the first 5.
 The array becomes: [1, 2, 5, 5, 9, 6].

Fifth Pass (i = 5):

 Current element: 6
 Compare 6 with 9 (shift 9 right).
o The array becomes: [1, 2, 5, 5, 9, 9].
 Compare 6 with 5 (no shift needed).
 Insert 6 after the second 5.
 The array becomes: [1, 2, 5, 5, 6, 9].

Final Sorted Array:

After completing all passes, the array is sorted:

[1, 2, 5, 5, 6, 9]

Insertion Sort Algorithm Code (in Python):
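A minimal implementation of the steps described above:

```python
def insertion_sort(arr):
    """In-place insertion sort."""
    for i in range(1, len(arr)):          # arr[:i] is already sorted
        current = arr[i]
        j = i - 1
        # Shift elements greater than current one position to the right.
        while j >= 0 and arr[j] > current:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = current              # insert into its correct position
    return arr

# Example from the text:
# insertion_sort([5, 2, 9, 1, 5, 6]) returns [1, 2, 5, 5, 6, 9]
```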

Time and Space Complexity:

1. Time Complexity:
o Best case: O(n) — This happens when the array is already
sorted. In this case, only one comparison is made for each element,
and no shifting occurs.
o Worst case: O(n²) — This occurs when the array is
sorted in reverse order. Each element needs to be compared and
shifted to the beginning.
o Average case: O(n²) — On average, the algorithm will
need to perform about n²/2 comparisons and shifts.
2. Space Complexity:
o Space Complexity: O(1) — Insertion Sort is an in-place
sorting algorithm, meaning it doesn't require any additional space
besides the input array.

Advantages of Insertion Sort:

 Simple to understand and implement: It is one of the easiest sorting


algorithms to understand and code.
 Efficient for small datasets: For small arrays, the overhead of more
advanced algorithms may not be justified, and Insertion Sort can perform
well.
 Adaptive: It is adaptive, meaning if the array is already partially sorted, the
algorithm will perform better (i.e., fewer shifts and comparisons).
 Stable: Insertion sort is stable, meaning that it maintains the relative order
of elements with equal values.

Disadvantages of Insertion Sort:

 Inefficient for large datasets: Due to its O(n²) time complexity,
it is not suitable for large arrays.
 Not ideal for large-scale data: For larger datasets, more efficient algorithms
like Quick Sort, Merge Sort, or Heap Sort are typically preferred.

When to Use Insertion Sort:

 Small datasets: It is efficient for small arrays where the overhead of more
complex algorithms isn't necessary.
 Partially sorted arrays: If the array is already partially sorted, Insertion Sort
can be much faster than other algorithms.
 Memory-constrained environments: It uses only O(1) extra space,
so it is useful when memory is limited.
