
Parallel and Distributed Algorithms Notes

Introduction

Parallel and distributed systems use specialized algorithms to enhance computational efficiency by enabling
multiple processors to work simultaneously. These algorithms leverage concurrency to divide large problems
into smaller subproblems that can be processed independently.

Parallel Algorithm Models

To optimize parallel computing, models are designed based on:

1. Partitioning and Mapping Techniques: Properly distributing tasks among processors to maximize
efficiency.

2. Interaction Strategies: Reducing communication overhead between processing elements to improve execution time.

Key Types of Parallel and Distributed Algorithms

1. Divide-and-Conquer Algorithms: Break tasks into smaller pieces to be solved concurrently.

o Examples:

 Parallel Merge Sort: Divides an array into smaller subarrays, sorts them in parallel, and
merges the results.
 Parallel Matrix Multiplication: Splits matrices into submatrices and performs
multiplications concurrently.
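A minimal Python sketch of the parallel merge sort described above, assuming a multiprocessing.Pool; the chunk helper and the worker count are illustrative choices, not a standard API.

from multiprocessing import Pool
from heapq import merge

def chunk(data, n):
    """Split data into n roughly equal contiguous slices."""
    k, m = divmod(len(data), n)
    return [data[i*k + min(i, m):(i+1)*k + min(i+1, m)] for i in range(n)]

def parallel_merge_sort(data, workers=4):
    # Divide: one subarray per worker; conquer: sort each in parallel.
    with Pool(workers) as pool:
        runs = pool.map(sorted, chunk(data, workers))
    # Combine: k-way merge of the sorted runs.
    return list(merge(*runs))

if __name__ == "__main__":
    print(parallel_merge_sort([5, 3, 8, 1, 9, 2, 7, 4]))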

2. Iterative Algorithms: Perform repetitive computations to refine solutions.

o Example:

 Jacobi Method: Iteratively solves linear equations in parallel by updating solutions based
on previous iterations.
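A sketch of the Jacobi method using NumPy on a small invented system; each component of the new iterate depends only on the previous iterate, which is exactly what makes the sweep parallelizable.

import numpy as np

def jacobi(A, b, iters=50):
    """Solve Ax = b iteratively; all updates within a sweep are independent."""
    D = np.diag(A)               # diagonal entries
    R = A - np.diagflat(D)       # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        # Each component uses only the previous x, so the whole
        # vector update could be computed in parallel.
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant example
b = np.array([1.0, 2.0])
print(jacobi(A, b))   # approaches the exact solution [1/6, 1/3]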

3. Search Algorithms: Perform searches and graph traversals in parallel. Efficient searching is vital for large-scale systems, especially when handling extensive datasets.

 Parallel Binary Search: Distributes the array across processors, allowing each to perform a binary
search on its assigned portion. Results are then combined.

 Hash-Based Searching: Uses a hash function to distribute data across processors, enabling quick
lookups and retrievals.

 Parallel BFS (Breadth-First Search): Processes all nodes at a particular level in parallel, improving
graph traversal efficiency.

 Parallel DFS (Depth-First Search): Implemented using task-based frameworks, though challenging to parallelize due to its recursive nature.
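The Parallel BFS above can be sketched as a level-synchronous loop; a ThreadPoolExecutor stands in for the per-level parallelism (in CPython the threads mainly illustrate the structure rather than deliver CPU speedup).

from concurrent.futures import ThreadPoolExecutor

def parallel_bfs(graph, source):
    """Visit nodes level by level; each level's expansions are independent."""
    visited = {source}
    frontier = [source]
    while frontier:
        with ThreadPoolExecutor() as pool:
            # All nodes on the current level are expanded concurrently.
            neighbor_lists = list(pool.map(lambda u: graph[u], frontier))
        next_frontier = []
        for neighbors in neighbor_lists:
            for v in neighbors:
                if v not in visited:
                    visited.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return visited

g = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(parallel_bfs(g, 0))   # {0, 1, 2, 3}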

4. Sorting Algorithms: Organize data efficiently across multiple processors.

o Examples:

 Parallel Quicksort: Partitions data and sorts subsets concurrently.

 Odd-Even Transposition Sort: A parallel sorting technique using multiple processors.

 Parallel Merge Sort: Uses a divide-and-conquer approach; each processor sorts a portion of the dataset independently before merging.
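Of the sorting examples above, odd-even transposition sort is the simplest to sketch; within each phase the compare-exchange pairs are disjoint, so on enough processors they could all run at once. This simulation runs them sequentially.

def odd_even_transposition_sort(a):
    """n phases; within a phase all compare-exchanges are independent."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2           # even phase: pairs (0,1),(2,3)...; odd: (1,2),(3,4)...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:     # disjoint pairs -> safe to do in parallel
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]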

5. Graph Algorithms: Solve networking and routing-related problems.

o Examples:

 Parallel Minimum Spanning Tree (MST): Finds the MST efficiently in parallel.

 Parallel Shortest Path Algorithms: Compute shortest paths (e.g., Dijkstra’s or Floyd-Warshall’s algorithm) in parallel.
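A sketch of Floyd-Warshall written so the parallelism is visible: for each fixed k, every (i, j) relaxation is independent of the others, so the whole update can be distributed; NumPy broadcasting stands in for that here.

import numpy as np

def floyd_warshall(dist):
    """All-pairs shortest paths; the inner update is fully data-parallel."""
    d = np.array(dist, dtype=float)
    n = d.shape[0]
    for k in range(n):
        # For a fixed k, all (i, j) pairs can be relaxed simultaneously.
        d = np.minimum(d, d[:, k:k+1] + d[k:k+1, :])
    return d

INF = float("inf")
dist = [[0, 3, INF],
        [3, 0, 1],
        [INF, 1, 0]]
print(floyd_warshall(dist))   # shortest 0 -> 2 becomes 4, via node 1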

6. Load Balancing Algorithms: Distribute computational tasks efficiently across processors to avoid
bottlenecks.

o Example:

 Work Stealing Algorithm: Idle processors take work from busy ones to balance the load.
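A single-process sketch of work stealing, assuming per-worker deques and a deliberately unbalanced initial assignment; a real runtime would use synchronized deques and randomized victim selection.

from collections import deque

def run_with_work_stealing(tasks, workers=3):
    """Each worker owns a deque; an idle worker steals from the busiest one."""
    queues = [deque() for _ in range(workers)]
    queues[0].extend(tasks)                    # deliberately unbalanced start
    done = []
    while any(queues):
        for w, q in enumerate(queues):
            if q:
                done.append((w, q.popleft()))          # run own work (front)
            else:
                victim = max(queues, key=len)          # busiest queue
                if victim:
                    done.append((w, victim.pop()))     # steal from the back and run
    return done

print(run_with_work_stealing([f"task{i}" for i in range(7)]))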

7. Consensus Algorithms: Ensure agreement across distributed systems.


o Examples:

 Paxos: A fault-tolerant protocol for reaching consensus in distributed systems.

 Raft: An alternative to Paxos, used for leader election and log replication.

8. Matrix Multiplication in Parallel Systems: Matrix multiplication is fundamental in parallel computing, particularly in scientific computations and machine learning.
• Row-wise Partitioning: The matrix is divided row-wise, and each processor is responsible for
computing a portion of the final matrix.
• Cannon’s Algorithm: This algorithm arranges processors in a 2D grid and performs block-wise
multiplications with an efficient communication pattern to reduce overhead.
• Fox’s Algorithm: Similar to Cannon’s but works in iterative stages, broadcasting matrix blocks to
compute partial results efficiently.
• Strassen’s Algorithm: A divide-and-conquer method that reduces computational complexity but
involves additional communication overhead.
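A sketch of row-wise partitioning with a multiprocessing.Pool, where each worker computes one output row of C independently; Cannon's, Fox's, and Strassen's schemes are omitted for brevity.

from multiprocessing import Pool

def row_times_matrix(args):
    row, B = args
    # One output row of C = A @ B; rows are independent of each other.
    return [sum(a * b for a, b in zip(row, col)) for col in zip(*B)]

def parallel_matmul(A, B, workers=2):
    with Pool(workers) as pool:
        return pool.map(row_times_matrix, [(row, B) for row in A])

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(parallel_matmul(A, B))   # [[19, 22], [43, 50]]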

Challenges in Parallel and Distributed Algorithms

 Communication Overhead: Managing data exchange between processors efficiently.

 Synchronization Issues: Ensuring tasks execute in the correct order without unnecessary delays.

 Fault Tolerance: Handling failures in distributed environments without losing data or progress.

 Load Imbalance: Uneven distribution of work leaves some processors overloaded while others sit idle, causing delays.

Applications of Parallel and Distributed Algorithms

 Scientific Computing: High-performance simulations and computations.

 Big Data Processing: Large-scale data analytics using distributed computing frameworks.

 Artificial Intelligence: Training deep learning models efficiently with parallel processing.

 Real-Time Systems: Ensuring low-latency processing for critical applications like autonomous vehicles
and financial transactions.

Leader Election and Mutual Exclusion in Distributed Systems


1. Leader Election

Definition

Leader election is the process by which a distributed system selects a single node to act as the leader,
responsible for coordination, decision-making, and resource allocation.

Importance of Leader Election

 Coordination: Synchronizes actions among nodes, preventing conflicts.


 Fault Tolerance: If a leader fails, a new one must be elected to maintain system stability.

 Consistency: Ensures a uniform system state across distributed nodes.

Leader Election Algorithms

1. Bully Algorithm

o Nodes have unique identifiers.

o A node detecting a leader failure initiates an election by messaging nodes with higher IDs.

o If a higher-ID node responds, it takes over the election process.

o If no response is received, the initiating node becomes the leader.

Pros: Simple implementation.


Cons: High communication overhead.
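A single-process sketch of the Bully algorithm, with an alive table standing in for failure detection; a networked implementation would exchange ELECTION, OK, and COORDINATOR messages instead.

def bully_election(node_ids, alive, initiator):
    """Initiator challenges all higher IDs; the highest alive node wins."""
    higher = [n for n in node_ids if n > initiator and alive[n]]
    if not higher:
        return initiator                 # nobody higher answered: I am leader
    # A higher node responds ("OK") and takes over the election.
    return bully_election(node_ids, alive, max(higher))

nodes = [1, 2, 3, 4, 5]
alive = {1: True, 2: True, 3: True, 4: True, 5: False}   # old leader 5 crashed
print(bully_election(nodes, alive, initiator=2))          # -> 4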

2. Ring Algorithm

o Nodes form a logical ring and know only their immediate successor.

o When a node detects leader failure, it circulates an election message containing its ID.

o The message travels around the ring, and the node with the highest ID is elected.

Pros: Lower message complexity.


Cons: Requires a stable ring structure.
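A sketch of the Ring algorithm: the election message carries the highest ID seen so far and travels once around the ring. The successor map is an invented example topology.

def ring_election(ring, start):
    """ring maps each node to its successor; the highest ID on the ring wins."""
    leader = start
    node = ring[start]
    while node != start:                 # circulate the election message
        leader = max(leader, node)       # keep the largest ID seen so far
        node = ring[node]
    return leader

# 1 -> 3 -> 5 -> 2 -> 4 -> back to 1
ring = {1: 3, 3: 5, 5: 2, 2: 4, 4: 1}
print(ring_election(ring, start=1))      # -> 5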

3. Raft Consensus Algorithm

o Nodes can be in follower, candidate, or leader states.

o If no leader is detected, a node becomes a candidate and requests votes.

o The node receiving the majority votes becomes the leader.

o The leader manages log replication for consistency.

Pros: Handles network failures well.


Cons: Requires a majority of nodes to function.
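A toy sketch of a single Raft election round, reduced to the majority-vote rule; real Raft adds terms, randomized election timeouts, and log-freshness checks, all omitted here.

def request_votes(candidate, cluster, reachable):
    """Candidate wins only with votes from a strict majority of the cluster."""
    votes = 1                                  # candidate votes for itself
    for node in cluster:
        if node != candidate and reachable[node]:
            votes += 1                         # simplified: reachable nodes grant
    return votes > len(cluster) // 2           # strict majority required

cluster = ["A", "B", "C", "D", "E"]
reachable = {"A": True, "B": True, "C": False, "D": True, "E": False}
print(request_votes("A", cluster, reachable))  # 3 of 5 votes -> leader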

2. Mutual Exclusion

Definition

Mutual exclusion ensures that only one process accesses a critical section at a time, preventing race conditions
and maintaining data consistency.

Requirements for Mutual Exclusion

 No Deadlock: No two nodes should endlessly wait for each other.


 No Starvation: Every process gets a chance to enter the critical section.

 Fairness: Requests must be processed in the order of arrival.

 Fault Tolerance: The system should handle node failures effectively.

Mutual Exclusion Algorithms

1. Lamport’s Logical Clocks

How It Works:

 Uses timestamp-based ordering to handle requests.

 Each process maintains a logical clock and sends a request with its timestamp to all processes.

 Each recipient replies immediately if:

o It is not in the critical section.

o Its own pending request has a larger timestamp than the incoming one.

 The process enters the critical section when all replies are received.

 After finishing, it sends release messages to all processes.

Applications: File locking, shared resource management.
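A small illustration of the ordering rule at the heart of Lamport's scheme: pending requests are served in (timestamp, process id) order, a total order that every node can compute identically. The request values are invented.

import heapq

# Requests as (logical_timestamp, process_id); ties broken by process id,
# so every node derives the same total order for critical-section entry.
requests = [(3, "P2"), (1, "P1"), (3, "P0"), (2, "P3")]
heapq.heapify(requests)

while requests:
    ts, pid = heapq.heappop(requests)
    print(f"{pid} enters the critical section (request timestamp {ts})")
# Order: P1 (ts 1), P3 (ts 2), P0 (ts 3), P2 (ts 3, higher id)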

2. Ricart-Agrawala Algorithm

How It Works:

 Improves on Lamport's Algorithm by reducing message overhead.

 A process requests permission from all other nodes before entering the critical section.

 Other nodes reply immediately if they are not in the critical section.

 If a node is in the critical section, it delays the reply until it exits.

 After completing execution, the process releases the lock by informing all waiting processes.

Pros: No single point of failure.


Cons: High message complexity.
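A sketch of the Ricart-Agrawala reply rule in isolation; the state names and the (timestamp, node id) request pairs are illustrative conventions, not a standard API.

def should_reply_now(my_state, my_request, incoming):
    """Reply immediately unless we hold the CS or have an earlier claim.

    my_request and incoming are (timestamp, node_id) pairs; None means
    this node is not currently asking for the critical section.
    """
    if my_state == "IN_CS":
        return False                    # defer until we leave the CS
    if my_state == "REQUESTING" and my_request < incoming:
        return False                    # our request has priority; defer
    return True

print(should_reply_now("IDLE", None, (5, "B")))              # True
print(should_reply_now("REQUESTING", (3, "A"), (5, "B")))    # False (defer)
print(should_reply_now("REQUESTING", (7, "A"), (5, "B")))    # True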

3. Token-Based Algorithm

How It Works:

 A unique token circulates among processes.

 A process must hold the token to enter the critical section.

 After using the resource, the process passes the token to the next requester.

Pros: Reduces message overhead.


Cons: Risk of token loss or duplication.
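A sketch of a token ring, where the token visits processes in a fixed order and only the current holder may enter the critical section; recovery from token loss is not modeled.

from itertools import cycle, islice

def token_ring(processes, wants_cs, rounds=1):
    """Pass a single token around; only the holder enters the CS."""
    for holder in islice(cycle(processes), len(processes) * rounds):
        if wants_cs.get(holder):
            print(f"{holder} holds the token and enters the critical section")
        # otherwise the token is simply forwarded to the successor

token_ring(["P0", "P1", "P2"], {"P1": True, "P2": True})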
4. Centralized Algorithm

• A central coordinator (server) controls access to the critical section.


• A process that wants access sends a request to the coordinator.
• If the resource is available, the coordinator grants access.
• Once the process finishes, it releases the resource by notifying the coordinator.

Pros: Simple and efficient.


Cons: Single point of failure.
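A sketch of the centralized scheme, with method calls standing in for request, grant, and release messages; the FIFO queue provides fairness, and the coordinator itself remains the single point of failure.

from collections import deque

class Coordinator:
    """Grants the critical section to one requester at a time, FIFO."""
    def __init__(self):
        self.holder = None
        self.waiting = deque()

    def request(self, pid):
        if self.holder is None:
            self.holder = pid            # resource free: grant immediately
        else:
            self.waiting.append(pid)     # otherwise queue the request

    def release(self, pid):
        assert self.holder == pid
        self.holder = self.waiting.popleft() if self.waiting else None

c = Coordinator()
c.request("P1"); c.request("P2")
print(c.holder)      # P1 holds the resource, P2 waits
c.release("P1")
print(c.holder)      # P2 is granted next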

3. Differences Between Leader Election and Mutual Exclusion

Feature | Leader Election | Mutual Exclusion
Goal | Select a single leader | Ensure exclusive access to a resource
Key Concern | Fault tolerance & coordination | Prevent race conditions
Algorithms | Bully, Ring, Raft | Lamport's, Ricart-Agrawala, Token-based
Failure Handling | Requires re-election | Requires ensuring fairness and recovery

Divide and Conquer, Load Balancing, and Task Scheduling


1. Divide and Conquer
Definition: Divide and Conquer is an algorithmic technique used to solve complex problems by breaking them
into smaller, manageable subproblems, solving them independently, and combining their results.

The divide-and-conquer method solves complex problems by:

1. Dividing the problem into smaller subproblems.

2. Conquering by solving subproblems independently.

3. Combining the results to form the final solution.

Examples of Divide and Conquer Algorithms

1. Merge Sort – Splits an array, sorts halves recursively, and merges them.

2. Quick Sort – Partitions an array, sorts partitions independently.

3. Matrix Multiplication (Strassen’s Algorithm) – Breaks matrices into smaller blocks for faster
multiplication.

Benefits of Divide and Conquer


✔ Reduces problem complexity.
✔ Improves parallel execution and efficiency.
✔ Enhances scalability in distributed systems.

2. Load Balancing
Definition: Load balancing distributes tasks efficiently across multiple computing resources to prevent
bottlenecks and maximize system performance.

Types of Load Balancing

1. Static Load Balancing – Task distribution is predetermined (e.g., Round Robin).

2. Dynamic Load Balancing – Tasks are allocated in real time based on current system load.

Load Balancing Strategies:


Strategy | Description | Pros | Cons
Centralized Load Balancing | A single node assigns tasks. | Simplicity, global view of the system. | Single point of failure, bottleneck risk.
Distributed Load Balancing | Nodes share workload dynamically. | Fault-tolerant, scalable. | Increased communication overhead.
Hierarchical Load Balancing | Uses multiple levels of managers. | Balances control and scalability. | Complex implementation.

Load Balancing Algorithms


1. Round Robin – Assigns tasks to nodes in a circular manner.

2. Least Loaded – Assigns tasks to the node with the lowest workload.

3. Weighted Fair Scheduling – Assigns tasks based on node processing power.

4. Ant Colony Optimization – Uses an adaptive approach inspired by ants finding optimal paths.
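A sketch of the Least Loaded strategy from the list above (Round Robin would simply cycle an index over the nodes); the task costs and node names are invented for the example.

def least_loaded_assign(tasks, nodes):
    """Send each task to the node with the smallest current load."""
    load = {n: 0 for n in nodes}
    placement = {}
    for task, cost in tasks:
        target = min(load, key=load.get)     # pick the least-loaded node
        placement[task] = target
        load[target] += cost
    return placement, load

tasks = [("t1", 4), ("t2", 2), ("t3", 3), ("t4", 1)]
placement, load = least_loaded_assign(tasks, ["n1", "n2"])
print(placement)   # t1->n1, t2->n2, t3->n2, t4->n1
print(load)        # {'n1': 5, 'n2': 5}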

Objectives of Load Balancing

✔ Prevent overloading of a single node.


✔ Ensure fair resource utilization across all nodes.
✔ Minimize execution time and maximize throughput.

3. Task Scheduling
Definition: Task scheduling ensures efficient allocation of tasks to processors in parallel and distributed
systems, optimizing execution time and resource usage.

Types of Scheduling
Type | Description | Example
Static Scheduling | Task allocation is precomputed before execution. | Grid computing task allocation.
Dynamic Scheduling | Task assignment happens during execution. | Load balancing in cloud systems.

Task Scheduling Algorithms

1. First Come, First Serve (FCFS)

o Tasks are executed in the order they arrive.

o Pros: Simple, easy to implement.

o Cons: Can lead to long wait times for larger tasks.

2. Shortest Job First (SJF)

o Tasks with the shortest execution time are scheduled first.

o Pros: Reduces overall execution time.

o Cons: Longer tasks may starve if shorter ones keep arriving.

3. Priority Scheduling

o Tasks are assigned priority levels; higher-priority tasks execute first.

o Pros: Ensures critical tasks are executed quickly.

o Cons: Low-priority tasks may starve.

4. Fair Scheduling

o Ensures tasks are executed in a balanced manner to prevent starvation.

o Used in multi-processor and cloud environments.
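A sketch of non-preemptive Shortest Job First from the list above: pending tasks sit in a min-heap keyed by burst time, so the shortest job always runs next. The burst times are invented.

import heapq

def sjf_order(tasks):
    """Non-preemptive Shortest Job First over tasks available at time 0."""
    heap = [(burst, name) for name, burst in tasks]
    heapq.heapify(heap)
    clock, schedule = 0, []
    while heap:
        burst, name = heapq.heappop(heap)   # shortest remaining job next
        clock += burst
        schedule.append((name, clock))      # (task, completion time)
    return schedule

print(sjf_order([("A", 6), ("B", 2), ("C", 4)]))
# [('B', 2), ('C', 6), ('A', 12)] -- short jobs finish first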

Goals of Task Scheduling

✔ Minimize execution time – Reduce overall computation delay.


✔ Maximize resource utilization – Prevent idle processors.
✔ Balance workload – Distribute tasks efficiently across nodes.
✔ Meet deadlines – Ensure time-sensitive tasks complete on schedule.

Comparison of Load Balancing and Task Scheduling

Feature | Load Balancing | Task Scheduling
Purpose | Distributes workload across multiple nodes. | Assigns tasks to processors efficiently.
Focus | Prevents overloading and ensures fairness. | Minimizes execution time and maximizes throughput.
Approach | Can be static or dynamic. | Uses various scheduling algorithms.
Key Algorithms | Round Robin, Least Loaded, Weighted Fair Scheduling. | FCFS, SJF, Priority Scheduling, Fair Scheduling.
