2) Types of Parallelism
i) Bit-Level Parallelism:
• Parallelism obtained by increasing the processor word size, so operations on large operands finish in fewer instructions (e.g., adding two 64-bit numbers takes one operation on a 64-bit processor instead of several on an 8-bit processor).
3) Distributed Computing:
Applications: Internet & web services, Cloud computing, Social media platforms, Financial systems
Message Passing Model:
• In this model, processes communicate by exchanging messages using send and receive operations over a communication channel.
• Example: A client sends data to a server for modification, and the server sends the
response back.
Advantages: Simple to implement, No explicit synchronization, Works well in distributed systems
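The client–server exchange described above can be sketched with Python's multiprocessing Pipe, which provides send()/recv() between two processes. This is a minimal illustration only; the process and variable names are illustrative, not part of any particular system.

# Minimal sketch of the message-passing model: the "server" receives a
# message, modifies it, and sends the response back over the same channel.
from multiprocessing import Process, Pipe

def server(conn):
    data = conn.recv()            # blocking receive from the client
    conn.send(data.upper())       # modify the data and send the response back
    conn.close()

if __name__ == "__main__":
    client_end, server_end = Pipe()        # two-way communication channel
    p = Process(target=server, args=(server_end,))
    p.start()
    client_end.send("hello from client")   # client sends data to the server
    print(client_end.recv())               # prints: HELLO FROM CLIENT
    p.join()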
Shared Memory Model:
• In this system, processes communicate by reading from and writing to a shared memory region created by one process.
• Requires synchronization (e.g., using semaphores or mutexes) to avoid data conflicts.
• Example: Web browsers and web servers using shared caching or memory.
Shared Memory vs Distributed Memory:

Feature            | Shared Memory                    | Distributed Memory
-------------------|----------------------------------|-------------------------------------------
Cost & Complexity  | Cheaper and simpler              | More expensive and complex
Fault Tolerance    | Failure affects all processors   | Failure in one node doesn't impact others
Examples           | SMP (Symmetric Multiprocessing)  | Cloud computing, grid computing
Advantages: Fast communication, Efficient for large data transfers, Less kernel involvement
Disadvantages: Complex synchronization, Security risks, Limited to a single machine (not suitable for distributed systems)
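The need for synchronization in the shared memory model can be sketched with Python's multiprocessing Value and Lock: several processes update one shared counter, and the lock (a mutex) prevents lost updates. The counter example is illustrative only.

# Minimal sketch of the shared-memory model with explicit synchronization.
from multiprocessing import Process, Value, Lock

def worker(counter, lock, times):
    for _ in range(times):
        with lock:                    # enter critical section
            counter.value += 1        # read-modify-write on shared memory

if __name__ == "__main__":
    counter = Value("i", 0)           # integer living in shared memory
    lock = Lock()
    procs = [Process(target=worker, args=(counter, lock, 10_000)) for _ in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(counter.value)              # 40000; without the lock the result may be lower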
Static Interconnection Networks:
These networks have a fixed layout; connections between nodes do not change during execution.
• Unidirectional: Data flows in one direction only.
• Bidirectional: Data flows in both directions, allowing two-way communication.
Characteristics:
➤ Ring Network
• Extension of the linear array where the last node connects back to the first
• Forms a closed loop
• Each node has two neighbors, allowing communication in both directions
• Better fault tolerance than linear arrays (especially if bidirectional)
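The effect of the wrap-around link can be seen in a short, illustrative Python sketch: every node i connects to (i-1) mod p and (i+1) mod p, and the worst-case distance drops from p-1 (linear array) to ⌊p/2⌋ (ring).

# Illustrative sketch: neighbors and worst-case distance in a bidirectional
# ring of p nodes versus a linear array of p nodes.
def ring_neighbors(i, p):
    return [(i - 1) % p, (i + 1) % p]   # wrap-around gives every node two neighbors

p = 8
print(ring_neighbors(0, p))              # [7, 1] -- first and last nodes are connected
print("linear array diameter:", p - 1)   # 7
print("ring diameter:", p // 2)          # 4 (messages can travel either way around)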
Common static network topologies used in parallel computing include Mesh and Hypercube.
a) Mesh Topology
Examples:
Hypercube Network
A Hypercube (also called an n-cube) is a highly structured and scalable interconnection network
used in parallel computing.
Construction of a Hypercube:
• A d-dimensional hypercube has 2^d nodes, each labeled with a d-bit binary address.
• Two nodes are connected if and only if their labels differ in exactly one bit, so every node has d neighbors.
• A d-dimensional hypercube is built by taking two (d−1)-dimensional hypercubes and connecting their corresponding nodes.
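The bit-label rule can be turned into a short sketch that builds the adjacency list of a d-dimensional hypercube; the function name is illustrative.

# Illustrative sketch: build a d-dimensional hypercube as an adjacency list.
# Nodes are the integers 0 .. 2^d - 1; two nodes are neighbors iff their
# binary labels differ in exactly one bit (i.e., they differ by a power of 2).
def hypercube(d):
    n = 2 ** d
    adj = {v: [] for v in range(n)}
    for v in range(n):
        for bit in range(d):
            adj[v].append(v ^ (1 << bit))   # flip one bit to get a neighbor
    return adj

cube3 = hypercube(3)          # 8 nodes, each with 3 neighbors
print(cube3[0])               # [1, 2, 4] -- labels 001, 010, 100 differ from 000 in one bit
print(len(cube3[5]))          # 3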
9) Omega Network
A multistage dynamic interconnection network used in parallel computing for efficient data routing. An omega network connecting N inputs to N outputs has log₂(N) stages, each containing N/2 two-by-two switches, with consecutive stages wired by the perfect-shuffle pattern.
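The standard destination-tag (self-routing) scheme on an omega network can be sketched as follows: at stage k the switch looks at the k-th most significant bit of the destination address and routes to its upper output for 0 or lower output for 1. The simulation below only tracks the position of one packet between stages; it is an illustrative sketch, not a full switch model.

# Illustrative sketch of destination-tag routing through an omega network
# with N = 2^n inputs. Between stages the wiring is a perfect shuffle
# (left-rotate the address); inside a switch, the output port is chosen by
# one bit of the destination address (0 = upper output, 1 = lower output).
def omega_route(src, dst, n):
    pos = src
    path = [pos]
    for stage in range(n):
        # perfect shuffle: left-rotate the n-bit address by one position
        pos = ((pos << 1) | (pos >> (n - 1))) & ((1 << n) - 1)
        # the 2x2 switch sets the last bit according to the destination bit
        dst_bit = (dst >> (n - 1 - stage)) & 1
        pos = (pos & ~1) | dst_bit
        path.append(pos)
    return path                             # ends at dst after n stages

print(omega_route(src=2, dst=5, n=3))       # [2, 5, 2, 5] -- final position is 5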
Routing Algorithms:
Communication Patterns:
2. Diameter:
• Maximum shortest-path distance between any two nodes in the network
• For 4×4 mesh: Diameter = 6 (corner to opposite corner)
3. Bisection Width:
• Minimum number of links that need to be removed to divide the network into two equal
halves
• For 4×4 mesh: Bisection width = 4
4. Arc Connectivity:
• Minimum number of arcs (edges) that need to be removed to disconnect the network
• For 2D mesh, Arc connectivity = 2 (a corner node has degree 2, so removing its two links disconnects it)
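The 4×4 mesh figures above can be checked with a short, illustrative Python sketch: build the mesh as a graph, compute the diameter with breadth-first search, and count the links crossing the middle vertical cut (which realizes the bisection for a mesh).

# Illustrative check of the 4x4 mesh metrics: diameter via BFS and the
# number of links crossing the middle vertical cut (the mesh's bisection).
from collections import deque

def mesh(p):
    """Adjacency list of a p x p mesh; nodes are (row, col) pairs."""
    adj = {(r, c): [] for r in range(p) for c in range(p)}
    for r in range(p):
        for c in range(p):
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                if 0 <= r + dr < p and 0 <= c + dc < p:
                    adj[(r, c)].append((r + dr, c + dc))
    return adj

def eccentricity(adj, src):
    """Largest shortest-path distance from src to any other node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

p = 4
adj = mesh(p)
print("diameter:", max(eccentricity(adj, v) for v in adj))   # 6
# links between column p/2 - 1 and column p/2 form the bisection cut
cut = sum(1 for (r, c), nbrs in adj.items() if c == p // 2 - 1
          for (nr, nc) in nbrs if nc == p // 2)
print("bisection width:", cut)                               # 4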
Flynn’s taxonomy classifies computer architectures based on the number of instruction and data streams:
• SISD – Single Instruction stream, Single Data stream (a conventional sequential computer)
• SIMD – Single Instruction stream, Multiple Data streams (e.g., vector/array processors)
• MISD – Multiple Instruction streams, Single Data stream (rare in practice)
• MIMD – Multiple Instruction streams, Multiple Data streams (most modern multiprocessors)
Example – Linear array with 6 nodes:
• Links: 5
• Diameter: 5
• Bisection Width: 1
• Arc Connectivity: 1
Compare-Exchange:
• Two processes each hold one element; they exchange their elements, compare them, and one process keeps the smaller value while the other keeps the larger.
Compare-Split:
• Each process holds a sorted block of n/p elements; the two processes exchange their blocks, merge them, and one process keeps the smaller half while the other keeps the larger half.
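Both operations can be sketched as plain Python functions that show only what each step computes; the distribution across processes is omitted and the names are illustrative.

# Illustrative sketch of the two basic parallel-sorting steps.
def compare_exchange(a, b):
    """Each partner holds one element; return (kept by low rank, kept by high rank)."""
    return min(a, b), max(a, b)

def compare_split(block_a, block_b):
    """Each partner holds a sorted block; after exchanging and merging,
    the low-rank process keeps the smaller half, the high-rank the larger."""
    merged = sorted(block_a + block_b)      # both partners can form this merge
    half = len(block_a)
    return merged[:half], merged[half:]

print(compare_exchange(7, 3))               # (3, 7)
print(compare_split([1, 4, 9, 12], [2, 3, 10, 15]))
# ([1, 2, 3, 4], [9, 10, 12, 15])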
Assumptions:
• n is a power of 2
• Using a tree-based reduction pattern
• Each process adds its local share and participates in log₂(m) reduction steps, where m is the number of processes
• Separate send() and recv() calls are used
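Under these assumptions, a tree-based parallel sum can be sketched with Python processes, where each rank has its own inbox queue: send() is a put into the partner's inbox and recv() is a get from one's own. The process count and data sizes below are illustrative.

# Illustrative sketch: tree-based parallel sum of n numbers over m processes.
# After log2(m) reduction steps, rank 0 holds the total.
from multiprocessing import Process, Queue
from math import log2

def worker(rank, m, chunk, inboxes, result):
    local = sum(chunk)                           # each process adds its share
    for step in range(int(log2(m))):
        half = 1 << step
        if rank % (2 * half) == 0:               # this rank receives in this step
            local += inboxes[rank].get()         # recv() from partner rank + half
        elif rank % (2 * half) == half:          # this rank sends and is done
            inboxes[rank - half].put(local)      # send() to partner rank - half
            return
    if rank == 0:
        result.put(local)                        # final total ends up at rank 0

if __name__ == "__main__":
    m = 4                                        # number of processes (power of 2)
    data = list(range(1, 17))                    # n = 16 numbers, n is a power of 2
    chunks = [data[i::m] for i in range(m)]      # each rank's share of the data
    inboxes = [Queue() for _ in range(m)]
    result = Queue()
    procs = [Process(target=worker, args=(r, m, chunks[r], inboxes, result))
             for r in range(m)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(result.get())                          # 136 = 1 + 2 + ... + 16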