
Computer Architecture

Lecture 19a: Multiprocessors

Prof. Onur Mutlu


ETH Zürich
Fall 2021
2 December 2021
Readings: Multiprocessing
n Required
q Amdahl, “Validity of the single processor approach to achieving large
scale computing capabilities,” AFIPS 1967.

n Recommended
q Mike Flynn, “Very High-Speed Computing Systems,” Proc. of IEEE,
1966
q Hill, Jouppi, Sohi, “Multiprocessors and Multicomputers,” pp. 551-
560 in Readings in Computer Architecture.
q Hill, Jouppi, Sohi, “Dataflow and Multithreading,” pp. 309-314 in
Readings in Computer Architecture.

2
Memory Consistency
n Required
q Lamport, “How to Make a Multiprocessor Computer That Correctly
Executes Multiprocess Programs,” IEEE Transactions on Computers,
1979

3
Readings: Cache Coherence
n Required
q Papamarcos and Patel, “A low-overhead coherence solution
for multiprocessors with private cache memories,” ISCA 1984.

n Recommended:
q Culler and Singh, Parallel Computer Architecture
n Chapter 5.1 (pp 269 – 283), Chapter 5.3 (pp 291 – 305)
q P&H, Computer Organization and Design
n Chapter 5.8 (pp 534 – 538 in 4th and 4th revised eds.)

4
Multiprocessors and
Issues in Multiprocessing
Remember: Flynn’s Taxonomy of Computers
n Mike Flynn, “Very High-Speed Computing Systems,” Proc.
of IEEE, 1966

n SISD: Single instruction operates on single data element


n SIMD: Single instruction operates on multiple data elements
q Array processor
q Vector processor
n MISD: Multiple instructions operate on single data element
q Closest form: systolic array processor, streaming processor
n MIMD: Multiple instructions operate on multiple data
elements (multiple instruction streams)
q Multiprocessor
q Multithreaded processor
6
Why Parallel Computers?
n Parallelism: Doing multiple things at a time
n Things: instructions, operations, tasks

n Main (or Original) Goal


q Improve performance (Execution time or task throughput)
n Execution time of a program governed by Amdahl’s Law

n Other Goals
q Reduce power consumption
n (4N units at freq F/4) consume less power than (N units at freq F)
n Why?
q Improve cost efficiency and scalability, reduce complexity
n Harder to design a single unit that performs as well as N simpler units
q Improve dependability: Redundant execution in space
7
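One common back-of-the-envelope answer to the "Why?" above, assuming dynamic power dominates and that supply voltage can be scaled down roughly in proportion to frequency (an idealization; the symbols α = activity factor, C = switched capacitance, V = supply voltage, f = frequency are not from the slides):

P_dyn ≈ α · C · V² · f

N units at frequency f, voltage V:          P_total = N · α C V² f
4N units at frequency f/4, voltage ≈ V/4:   P_total = 4N · α C (V/4)² (f/4) = (1/16) · N · α C V² f

So, under these idealized assumptions, 4N slower units deliver the same aggregate throughput at a fraction of the dynamic power; in practice voltage cannot be lowered arbitrarily, so the savings are smaller but still substantial.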
Types of Parallelism and How to Exploit Them
n Instruction Level Parallelism
q Different instructions within a stream can be executed in parallel
q Pipelining, out-of-order execution, speculative execution, VLIW
q Dataflow

n Data Parallelism
q Different pieces of data can be operated on in parallel
q SIMD: Vector processing, array processing
q Systolic arrays, streaming processors

n Task Level Parallelism


q Different “tasks/threads” can be executed in parallel
q Multithreading
q Multiprocessing (multi-core)
8
Task-Level Parallelism: Creating Tasks
n Partition a single problem into multiple related tasks
(threads)
q Explicitly: Parallel programming
n Easy when tasks are natural in the problem
q Web/database queries
n Difficult when natural task boundaries are unclear

q Transparently/implicitly: Thread level speculation


n Partition a single thread speculatively

n Run many independent tasks (processes) together


q Easy when there are many processes
n Batch simulations, different users, cloud computing workloads
q Does not improve the performance of a single task
9
Multiprocessing Fundamentals

10
Multiprocessor Types
n Loosely coupled multiprocessors
q No shared global memory address space
q Multicomputer network
n Network-based multiprocessors
q Usually programmed via message passing
n Explicit calls (send, receive) for communication

n Tightly coupled multiprocessors


q Shared global memory address space
q Traditional multiprocessing: symmetric multiprocessing (SMP)
n Existing multi-core processors, multithreaded processors
q Programming model similar to uniprocessors (i.e., multitasking
uniprocessor) except
n Operations on shared data require synchronization
11
Main Design Issues in Tightly-Coupled MP
n Shared memory synchronization
q How to handle synchronization: locks, atomic operations, barriers

n Cache coherence
q How to ensure correct operation in the presence of private
caches keeping the same memory address cached

n Memory consistency: Ordering of all memory operations


q What should the programmer expect the hardware to provide?

n Shared resource management

n Communication: Interconnects
12
Main Programming Issues in Tightly-Coupled MP
n Load imbalance
q How to partition a single task into multiple tasks

n Synchronization
q How to synchronize (efficiently) between tasks
q How to communicate between tasks
q Locks, barriers, pipeline stages, condition variables,
semaphores, atomic operations, …

n Contention (avoidance & management)


n Maximizing parallelism
n Ensuring correct operation while optimizing for performance

13
Aside: Hardware-based Multithreading
n Coarse grained
q Quantum based
q Event based (switch-on-event multithreading), e.g., switch on L3 miss

n Fine grained
q Cycle by cycle
q Thornton, “CDC 6600: Design of a Computer,” 1970.
q Burton Smith, “A pipelined, shared resource MIMD computer,” ICPP
1978.

n Simultaneous
q Can dispatch instructions from multiple threads at the same time
q Good for improving execution unit utilization

14
Lecture on Fine-Grained Multithreading

https://www.youtube.com/watch?v=6e5KZcCGBYw&list=PL5Q2soXY2Zi_uej3aY39YB5pfW4SJ7LlN&index=16 15
More on Multithreading (I)

https://www.youtube.com/onurmutlulectures 16
More on Multithreading (II)

https://www.youtube.com/onurmutlulectures 17
More on Multithreading (III)

https://www.youtube.com/onurmutlulectures 18
More on Multithreading (IV)

https://www.youtube.com/onurmutlulectures 19
Lectures on Multithreading
n Parallel Computer Architecture, Fall 2012, Lecture 9
q Multithreading I (CMU, Fall 2012)
q https://www.youtube.com/watch?v=iqi9wFqFiNU&list=PL5PHm2jkkXmgDN1PLwOY_tGtUlynnyV6D&index=51
n Parallel Computer Architecture, Fall 2012, Lecture 10
q Multithreading II (CMU, Fall 2012)
q https://www.youtube.com/watch?v=e8lfl6MbILg&list=PL5PHm2jkkXmgDN1PLwOY_tGtUlynnyV6D&index=52
n Parallel Computer Architecture, Fall 2012, Lecture 13
q Multithreading III (CMU, Fall 2012)
q https://www.youtube.com/watch?v=7vkDpZ1-hHM&list=PL5PHm2jkkXmgDN1PLwOY_tGtUlynnyV6D&index=53
n Parallel Computer Architecture, Fall 2012, Lecture 15
q Speculation I (CMU, Fall 2012)
q https://www.youtube.com/watch?v=-hbmzIDe0sA&list=PL5PHm2jkkXmgDN1PLwOY_tGtUlynnyV6D&index=54

https://www.youtube.com/onurmutlulectures 20
Limits of Parallel Speedup

21
Parallel Speedup Example
n a₄x⁴ + a₃x³ + a₂x² + a₁x + a₀

n Assume given inputs: x and each ai

n Assume each operation 1 cycle, no communication cost,


each op can be executed in a different processor

n How fast is this with a single processor?


q Assume no pipelining or concurrent execution of instructions

n How fast is this with 3 processors?

22
23
24
Speedup with 3 Processors

25
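One possible way to work out the answer, assuming each add/multiply takes 1 cycle, zero communication cost, and no overlap of operations on the single processor (a reconstruction for illustration, not necessarily the exact schedule from the slides):

Single processor (term-by-term evaluation):
  x², x³, x⁴                        → 3 multiplies
  a₄·x⁴, a₃·x³, a₂·x², a₁·x         → 4 multiplies
  summing the five terms            → 4 adds
  Total: 11 operations ⇒ 11 cycles

3 processors (one possible schedule):
  cycle 1:  x² = x·x       |  a₁·x
  cycle 2:  x⁴ = x²·x²     |  a₂·x²           |  a₃·x
  cycle 3:  a₄·x⁴          |  (a₃·x)·x²       |  (a₁·x) + a₀
  cycle 4:  a₄·x⁴ + a₃·x³  |  a₂·x² + (a₁·x + a₀)
  cycle 5:  final add
  Total: 5 cycles ⇒ speedup ≈ 11 / 5 = 2.2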
Revisiting the Single-Processor Algorithm

Horner, “A new method of solving numerical equations of all orders, by continuous


approximation,” Philosophical Transactions of the Royal Society, 1819.

26
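For reference, a minimal C sketch of Horner's rule applied to this polynomial (not from the slides). It needs only 4 multiplies and 4 adds (8 cycles under the same assumptions), but each step depends on the previous one, so it leaves almost no room for parallel execution:

/* Evaluate a4*x^4 + a3*x^3 + a2*x^2 + a1*x + a0 as
   (((a4*x + a3)*x + a2)*x + a1)*x + a0                 */
double horner(const double a[5], double x)
{
    double r = a[4];              /* start from the highest-order coefficient */
    for (int i = 3; i >= 0; i--)
        r = r * x + a[i];         /* one multiply + one add per step */
    return r;                     /* 4 multiplies + 4 adds in total */
}

Against this better serial baseline, the 3-processor schedule sketched earlier (5 cycles) yields a speedup of only 8 / 5 = 1.6 instead of 2.2, illustrating why the choice of serial baseline matters (see the "unfair comparisons" point on the next slides).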
27
Superlinear Speedup
n Can speedup be greater than P with P processing
elements?

n Unfair comparisons
q Compare best parallel algorithm to wimpy serial algorithm → unfair

n Cache/memory effects
q More processors → more cache or memory → fewer misses in cache/mem

28
Utilization, Redundancy, Efficiency
n Traditional metrics
q Assume all P processors are tied up for parallel computation

n Utilization: How much processing capability is used


q U = (# Operations in parallel version) / (processors x Time)

n Redundancy: how much extra work is done with parallel


processing
q R = (# of operations in parallel version) / (# operations in best
single processor algorithm version)

n Efficiency
q E = (Time with 1 processor) / (processors x Time with P processors)
q E = U/R
29
Utilization of a Multiprocessor

30
31
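Using the polynomial example and the 3-processor schedule sketched earlier (11 operations in 5 cycles) with Horner's rule as the best serial algorithm (8 operations, 8 cycles), the metrics above work out as follows (an illustrative reconstruction):

U = 11 operations / (3 processors × 5 cycles) ≈ 0.73
R = 11 operations / 8 operations ≈ 1.38
E = 8 cycles / (3 processors × 5 cycles) ≈ 0.53 = U / R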
Amdahl’s Law and
Caveats of Parallelism

32
Caveats of Parallelism (I)

33
Amdahl’s Law

Amdahl, “Validity of the single processor approach to


achieving large scale computing capabilities,” AFIPS 1967.
34
Amdahl’s Law Implication 1

35
Amdahl’s Law Implication 2

36
Caveats of Parallelism (II)
n Amdahl’s Law
q f: Parallelizable fraction of a program
q N: Number of processors

Speedup = 1 / ( (1 - f) + f/N )

q Amdahl, “Validity of the single processor approach to achieving large scale


computing capabilities,” AFIPS 1967.
n Maximum speedup limited by serial portion: Serial bottleneck
n Parallel portion is usually not perfectly parallel
q Synchronization overhead (e.g., updates to shared data)
q Load imbalance overhead (imperfect parallelization)
q Resource sharing overhead (contention among N processors)
37
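To make the serial bottleneck concrete, a quick evaluation of the formula with an illustrative parallel fraction of f = 0.95:

N = 10:    Speedup = 1 / (0.05 + 0.95/10)  = 1 / 0.145  ≈ 6.9
N = 100:   Speedup = 1 / (0.05 + 0.95/100) = 1 / 0.0595 ≈ 16.8
N → ∞:     Speedup → 1 / (1 - f) = 1 / 0.05 = 20

Even with unlimited processors, a 5% serial portion caps the speedup at 20×.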
Sequential Bottleneck

[Figure: Speedup (y-axis, 0 to 200) vs. parallel fraction f (x-axis, 0 to 1), with one curve each for N = 10, N = 100, and N = 1000; speedup stays modest until f approaches 1, illustrating the sequential bottleneck.]
38
Why the Sequential Bottleneck?
n Parallel machines have the
sequential bottleneck

n Main cause: Non-parallelizable
operations on data (e.g., non-
parallelizable loops)

for (i = 1; i < N; i++)
    A[i] = (A[i] + A[i-1]) / 2

n There are other causes as well:


q Single thread prepares data and
spawns parallel tasks (usually
sequential)

39
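A minimal C sketch of the difference (arrays and N assumed to be defined elsewhere): the first loop is the one from the slide, where iteration i reads the A[i-1] just written by iteration i-1, so iterations must run in order; the second loop has fully independent iterations and could be split across processors:

/* Loop-carried dependence: inherently sequential */
for (int i = 1; i < N; i++)
    A[i] = (A[i] + A[i-1]) / 2;

/* No cross-iteration dependence: parallelizable */
for (int i = 0; i < N; i++)
    D[i] = (B[i] + C[i]) / 2;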
Another Example of Sequential Bottleneck (I)

Suleman+, “Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures,” ASPLOS 2009. 40
Another Example of Sequential Bottleneck (II)

Suleman+, “Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures,” ASPLOS 2009. 41
Bottlenecks in Parallel Portion
n Synchronization: Operations manipulating shared data
cannot be parallelized
q Locks, mutual exclusion, barrier synchronization
q Communication: Tasks may need values from each other

- Causes thread serialization when shared data is contended

n Load Imbalance: Parallel tasks may have different lengths


q Due to imperfect parallelization or microarchitectural effects
- Reduces speedup in parallel portion

n Resource Contention: Parallel tasks can share hardware


resources, delaying each other
q Replicating all resources (e.g., memory) expensive
- Additional latency not present when each task runs alone
42
Bottlenecks in Parallel Portion: Another View
n Threads in a multi-threaded application can be inter-
dependent
q As opposed to threads from different applications

n Such threads can synchronize with each other


q Locks, barriers, pipeline stages, condition variables,
semaphores, …

n Some threads can be on the critical path of execution due


to synchronization; some threads are not

n Even within a thread, some “code segments” may be on


the critical path of execution; some are not
43
Remember: Critical Sections

n Enforce mutually exclusive access to shared data


n Only one thread can be executing it at a time
n Contended critical sections make threads wait → threads
causing serialization can be on the critical path

Each thread:
loop {
    Compute              (N: non-critical section)
    lock(A)
    Update shared data   (C: critical section, lock ... unlock)
    unlock(A)
}

44
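A minimal pthreads sketch of the pattern on the slide (illustrative, not from the lecture; the local computation and the shared total are made up). The mutex plays the role of lock(A)/unlock(A): the per-thread work in N runs in parallel, while the update in C is serialized:

#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static double shared_total = 0.0;               /* the shared data */

static void *worker(void *arg)
{
    (void)arg;
    for (int iter = 0; iter < 1000; iter++) {
        double local = 0.0;
        for (int k = 0; k < 100; k++)           /* N: non-critical section, */
            local += k * 0.5;                   /* runs fully in parallel   */

        pthread_mutex_lock(&A);                 /* C: critical section,     */
        shared_total += local;                  /* one thread at a time     */
        pthread_mutex_unlock(&A);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    return 0;
}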
Remember: Barriers
n Synchronization point
n Threads have to wait until all threads reach the barrier
n Last thread arriving to the barrier is on the critical path

Each thread:
loop1 {
Compute
}
barrier
loop2 {
Compute
}

45
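A matching pthreads sketch (again illustrative, not from the lecture): every thread runs loop1, waits at the barrier, then runs loop2; the last thread to reach pthread_barrier_wait releases all the others, which is why it sits on the critical path:

#include <pthread.h>

#define NTHREADS 4
static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    (void)arg;
    double local = 0.0;

    for (int i = 0; i < 1000; i++)      /* loop1 { Compute } */
        local += i;

    pthread_barrier_wait(&barrier);     /* wait until all NTHREADS threads arrive */

    for (int i = 0; i < 1000; i++)      /* loop2 { Compute } */
        local *= 1.000001;

    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (int i = 0; i < NTHREADS; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++) pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}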
Remember: Stages of Pipelined Programs
n Loop iterations are statically divided into code segments called stages
n Threads execute stages on different cores
n Thread executing the slowest stage is on the critical path

loop {
    Compute1    (stage A)
    Compute2    (stage B)
    Compute3    (stage C)
}

46
Difficulty in Parallel Programming
n Little difficulty if parallelism is natural
q “Embarrassingly parallel” applications
q Multimedia, physical simulation, graphics
q Large web servers, databases?

n Difficulty is in
q Getting parallel programs to work correctly
q Optimizing performance in the presence of bottlenecks

n Much of parallel computer architecture is about


q Designing machines that overcome the sequential and parallel
bottlenecks to achieve higher performance and efficiency
q Making programmer’s job easier in writing correct and high-
performance parallel programs
47
We Have Already Seen
Examples

48
In Previous Two Lectures
n Lecture 17b: Parallelism and Heterogeneity
q https://www.youtube.com/watch?v=GLzG_rEDn9A&list=PL5Q2soXY2Zi-Mnk1PxjEIG32HAGILkTOF&index=18

n Lecture 18a: Bottleneck Acceleration


q https://www.youtube.com/watch?v=P8l3SMAbyYw&list=PL5Q2soXY2Zi-Mnk1PxjEIG32HAGILkTOF&index=19

49
More on Accelerated Critical Sections
n M. Aater Suleman, Onur Mutlu, Moinuddin K. Qureshi, and Yale N. Patt,
"Accelerating Critical Section Execution with Asymmetric
Multi-Core Architectures"
Proceedings of the 14th International Conference on Architectural
Support for Programming Languages and Operating
Systems (ASPLOS), pages 253-264, Washington, DC, March
2009. Slides (ppt)
One of the 13 computer architecture papers of 2009 selected
as Top Picks by IEEE Micro.

50
More on Bottleneck Identification & Scheduling
n Jose A. Joao, M. Aater Suleman, Onur Mutlu, and Yale N. Patt,
"Bottleneck Identification and Scheduling in Multithreaded
Applications"
Proceedings of the 17th International Conference on Architectural
Support for Programming Languages and Operating
Systems (ASPLOS), London, UK, March 2012. Slides (ppt) (pdf)

51
More on Utility-Based Acceleration
n Jose A. Joao, M. Aater Suleman, Onur Mutlu, and Yale N. Patt,
"Utility-Based Acceleration of Multithreaded Applications
on Asymmetric CMPs"
Proceedings of the 40th International Symposium on Computer
Architecture (ISCA), Tel-Aviv, Israel, June 2013. Slides (ppt)
Slides (pdf)

52
More on Data Marshaling
n M. Aater Suleman, Onur Mutlu, Jose A. Joao, Khubaib, and Yale N. Patt,
"Data Marshaling for Multi-core Architectures"
Proceedings of the 37th International Symposium on Computer
Architecture (ISCA), pages 441-450, Saint-Malo, France, June
2010. Slides (ppt)
One of the 11 computer architecture papers of 2010 selected
as Top Picks by IEEE Micro.

53
Computer Architecture
Lecture 19a: Multiprocessors

Prof. Onur Mutlu


ETH Zürich
Fall 2021
2 December 2021
An Example Parallel Problem:
Task Assignment to Processors
Static versus Dynamic Scheduling
n Static: Done at compile time or parallel task creation time
q Schedule does not change based on runtime information

n Dynamic: Done at run time (e.g., after tasks are created)


q Schedule changes based on runtime information

n Example: Instruction scheduling


q Why would you like to do dynamic scheduling?
q What pieces of information are not available to the static
scheduler?

56
Parallel Task Assignment: Tradeoffs
n Problem: N tasks, P processors, N>P. Do we assign tasks to
processors statically (fixed) or dynamically (adaptive)?

n Static assignment
+ Simpler: No movement of tasks.
- Inefficient: Underutilizes resources when load is not balanced
When can load not be balanced?

n Dynamic assignment
+ Efficient: Better utilizes processors when load is not balanced
- More complex: Need to move tasks to balance processor load
- Higher overhead: Task movement takes time, can disrupt
locality

57
Parallel Task Assignment: Example
n Compute histogram of a large set of values
n Parallelization:
q Divide the values across T tasks
q Each task computes a local histogram for its value set
q Local histograms merged into a global histogram at the end

58
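A minimal OpenMP sketch of this parallelization (the function name, bin count, and 8-bit value type are assumptions for illustration): each thread fills a private local histogram with no synchronization, and only the final merge touches the shared global histogram:

#include <omp.h>
#include <string.h>

#define NBINS 256

void histogram(const unsigned char *values, long n, long hist[NBINS])
{
    memset(hist, 0, NBINS * sizeof(long));

    #pragma omp parallel
    {
        long local[NBINS] = {0};             /* per-task local histogram */

        #pragma omp for nowait               /* divide the values across tasks */
        for (long i = 0; i < n; i++)
            local[values[i]]++;              /* no sharing, no locks */

        #pragma omp critical                 /* merge into the global histogram */
        for (int b = 0; b < NBINS; b++)
            hist[b] += local[b];
    }
}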
Parallel Task Assignment: Example (II)
n How to schedule tasks updating local histograms?
q Static: Assign equal number of tasks to each processor
q Dynamic: Assign tasks to a processor that is available
q When does static work as well as dynamic?

n Implementation of Dynamic Assignment with Task Queues

59
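One lightweight way to implement dynamic assignment, sketched here (illustrative, not from the lecture) with a shared atomic counter acting as a centralized task queue: each processor repeatedly claims the next unclaimed task index, so faster or less-loaded processors naturally take more tasks:

#include <stdatomic.h>

#define NTASKS 1024
static atomic_long next_task = 0;     /* centralized "queue": next unclaimed task index */

/* Run by every worker thread until all NTASKS tasks are claimed. */
void worker_loop(void (*run_task)(long))
{
    for (;;) {
        long t = atomic_fetch_add(&next_task, 1);   /* atomically claim one task */
        if (t >= NTASKS)
            break;                                  /* queue empty: done */
        run_task(t);                                /* e.g., build one local histogram */
    }
}

Static assignment would instead give each of the P threads a fixed slice of the task indices up front; that works just as well when every task takes roughly the same, predictable amount of time, which is the case the slide's question is hinting at.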
Software Task Queues
n What are the advantages and disadvantages of each?
q Centralized
q Distributed
q Hierarchical

60
Task Stealing
n Idea: When a processor’s task queue is empty it steals a
task from another processor’s task queue
q Whom to steal from? (Randomized stealing works well)
q How many tasks to steal?

+ Dynamic balancing of computation load

- Additional communication/synchronization overhead


between processors
- Need to stop stealing if no tasks to steal

61
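A simplified C sketch of task stealing (illustrative only: real runtimes use lock-free deques and a proper termination protocol; here each queue is guarded by a plain mutex and is assumed to be filled by software beforehand). The owner pops from the tail of its own queue; when that queue is empty, it steals from the head of a randomly chosen victim:

#include <pthread.h>
#include <stdlib.h>

#define NWORKERS 4
#define QCAP     1024

typedef struct {
    pthread_mutex_t lock;
    int tasks[QCAP];
    int head, tail;                   /* valid tasks live in [head, tail) */
} queue_t;

static queue_t q[NWORKERS];

void init_queues(void)
{
    for (int i = 0; i < NWORKERS; i++) {
        pthread_mutex_init(&q[i].lock, NULL);
        q[i].head = q[i].tail = 0;
    }
}

static int pop_own(queue_t *self, int *task)       /* owner takes from its tail */
{
    int ok = 0;
    pthread_mutex_lock(&self->lock);
    if (self->tail > self->head) { *task = self->tasks[--self->tail]; ok = 1; }
    pthread_mutex_unlock(&self->lock);
    return ok;
}

static int steal(int thief, int *task)             /* thief takes from a victim's head */
{
    int victim = rand() % NWORKERS;                /* randomized victim selection */
    if (victim == thief) return 0;
    int ok = 0;
    pthread_mutex_lock(&q[victim].lock);
    if (q[victim].tail > q[victim].head) { *task = q[victim].tasks[q[victim].head++]; ok = 1; }
    pthread_mutex_unlock(&q[victim].lock);
    return ok;
}

void worker(int id, void (*run_task)(int))
{
    int task;
    for (;;) {
        if (pop_own(&q[id], &task) || steal(id, &task))
            run_task(task);
        else
            break;   /* nothing local and the steal failed; a real runtime would
                        retry before concluding that no work is left */
    }
}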
Parallel Task Assignment: Tradeoffs
n Who does the assignment? Hardware versus software?

n Software
+ Better scope
- More time overhead
- Slow to adapt to dynamic events (e.g., a processor becoming
idle)

n Hardware
+ Low time overhead
+ Can adjust to dynamic events faster
- Requires hardware changes (area and possibly energy
overhead)
62
How Can the Hardware Help?
n Managing task queues in software has overhead
q Especially high when task sizes are small

n An idea: Hardware Task Queues


q Each processor has a dedicated task queue
q Software fills the task queues (on demand)
q Hardware manages movement of tasks from queue to queue
q There can be a global task queue as well → hierarchical
tasking in hardware

q Kumar et al., “Carbon: Architectural Support for Fine-Grained


Parallelism on Chip Multiprocessors,” ISCA 2007.
n Optional reading

63
Dynamic Task Generation
n Does static task assignment work in this case?

n Problem: Searching the exit of a maze

64
Programming Model vs.
Hardware Execution Model
Programming Models vs. Architectures
n Five major models
q (Sequential)
q Shared memory
q Message passing
q Data parallel (SIMD)
q Dataflow
q Systolic

n Hybrid models?

66
Shared Memory vs. Message Passing
n Are these programming models or execution models
supported by the hardware architecture?

n Does a multiprocessor that is programmed by the “shared
memory programming model” have to support a shared
address space across processors?

n Does a multiprocessor that is programmed by “message


passing programming model” have to have no shared
address space between processors?

67
Programming Models: Message Passing vs. Shared Memory
n Difference: how communication is achieved between tasks
n Message passing programming model
q Explicit communication via messages
q Loose coupling of program components
q Analogy: telephone call or letter, no shared location accessible to
all
n Shared memory programming model
q Implicit communication via memory operations (load/store)
q Tight coupling of program components
q Analogy: bulletin board, post information at a shared space

n Suitability of the programming model depends on the


problem to be solved. Issues affected by the model include:
q Overhead, scalability, ease of programming, bugs, match to
underlying hardware, …
68
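To make the two analogies concrete, a minimal MPI sketch of explicit message passing between two tasks (standard MPI calls; the value 42 is arbitrary):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                     /* "telephone call / letter": explicit send */
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {              /* explicit receive */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

In the shared memory model, the same exchange would simply be a store to a shared variable by one thread and a load by the other (the "bulletin board"), plus synchronization (e.g., a flag or lock) to ensure the consumer reads after the producer writes.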
Message Passing vs. Shared Memory Hardware
n Difference: how task communication is supported in
hardware
n Shared memory hardware (or machine model)
q All processors see a global shared address space
n Ability to access all memory from each processor
q A write to a location is visible to the reads of other processors
n Message passing hardware (machine model)
q No global shared address space
q Send and receive variants are the only method of
communication between processors (much like networks of
workstations today, i.e. clusters)

n Suitability of the hardware depends on the problem to be


solved as well as the programming model.
69
Programming Model vs. Hardware
n For most of parallel computing history, there was no separation
between programming model and hardware
q Message passing: Caltech Cosmic Cube, Intel Hypercube, Intel
Paragon
q Shared memory: CMU C.mmp, Sequent Balance, SGI Origin.
q SIMD: ILLIAC IV, CM-1

n However, any hardware can really support any


programming model
n Why?
q Application → compiler/library → OS services → hardware

70
