PDC Week 2 (Performance Metrics, Amdahl's Law)

This document discusses performance metrics for parallel programs and Amdahl's law. It defines key metrics like speedup, efficiency, and parallel runtime. It explains that Amdahl's law asserts that the maximum speedup obtainable from parallel programs is limited by the sequential fraction of the program. The document provides examples of using Amdahl's law to calculate maximum speedup for programs with different serial fractions run on 8 CPUs.


Parallel & Distributed Computing

Introduction

By: Rabia Siddiqui
Senior Lecturer
Department of Computer Engineering,
Sir Syed University of Engineering & Technology
Email: rabiasid@ssuet.edu.pk
Topics covered in today’s lecture

Performance Metrics for Parallel Programs


Amdahl’s Law

Prepared By: Rabia Siddiqui


Performance

The two key goals to be achieved with the design of parallel applications are:
Performance - the capacity to reduce the time to solve a problem as the computing
resources increase.
Scalability – the capacity to increase performance as the size of the problem
increases.
There are also some laws that try to explain and predict the potential performance of a
parallel application. The best known is Amdahl's Law.
Principle of Scalable Performance

Performance Metrics

Scalability

Speedup Law: Amdahl’s Law



Performance Metrics

There are two distinct classes of performance metrics:

Performance metrics for processors/cores – assess the performance of a processing
unit, normally by measuring the speed or the number of operations it performs in a
certain period of time.
Performance metrics for parallel applications – assess the performance of a
parallel application, normally by comparing the execution time with multiple
processing units against the execution time with just one processing unit.



Performance Metrics for Processors / Cores
Some of the best known metrics are:
MIPS – Millions of Instructions Per Second
MFLOPS – Millions of Floating-point Operations Per Second
SPECint – SPEC (Standard Performance Evaluation Corporation) benchmarks
that evaluate processor performance on integer arithmetic (first released in 1992)
Whetstone – a synthetic benchmark to assess processor performance on floating-point
operations (first released in 1972)
Dhrystone – a synthetic benchmark to assess processor performance on integer
arithmetic (first released in 1984)
Performance Metrics for Parallel Application

Parallel Runtime
Speedup
Efficiency



Parallel Run Time
The parallel run time of a program is the time required to run the program on an
n-processor computer. It is denoted by T(n).
When n = 1, T(1) denotes the runtime on a single processor, and when n = 10, T(10)
denotes the runtime on 10 parallel processors.



Speedup
Speedup is the ratio of the runtime needed by a single processor to the parallel
runtime, i.e., the ratio of the time it takes to execute a program on a single
processor to the time it takes to execute it on n processors.
It is denoted by S(n):

S(n) = T(1) / T(n)

where T(1) is the execution time with one processing unit and
T(n) is the execution time with n processing units.
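As a minimal sketch of this definition (the function name and the timings are hypothetical, not from the lecture), speedup is just the quotient of the two measured runtimes:

```python
def speedup(t1, tn):
    """Speedup S(n) = T(1) / T(n): single-processor time over parallel time."""
    return t1 / tn

# Hypothetical measurements: 120 s on one processor, 20 s on eight.
print(speedup(120.0, 20.0))  # → 6.0
```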
Efficiency

The efficiency of a program on n processors is defined as the ratio of the speedup
achieved to the number of processors needed to achieve it.
Efficiency measures the fraction of time for which a processor is usefully utilized.
It is denoted by E(n):

E(n) = S(n) / n = T(1) / (n · T(n))

where S(n) is the speedup with n processing units.

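The efficiency formula can be sketched the same way (function name and timings are again hypothetical); with the timings used above, a speedup of 6 on 8 processors gives an efficiency of 0.75:

```python
def efficiency(t1, tn, n):
    """Efficiency E(n) = S(n) / n = T(1) / (n * T(n))."""
    return t1 / (n * tn)

# Hypothetical measurements: T(1) = 120 s, T(8) = 20 s on n = 8 processors.
print(efficiency(120.0, 20.0, 8))  # → 0.75
```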


Scalability

Scalability is a measure of a parallel system's capacity to increase speedup in
proportion to the number of processors.
A parallel computing system is said to be scalable if its efficiency can be kept fixed
by simultaneously increasing the number of processors and the problem size.

The well-known Amdahl's Law dictates the achievable speedup and efficiency.



Performance Metrics for Parallel Programs
Amdahl’s Law
The parallel execution time of programs cannot be arbitrarily reduced by employing
parallel resources. As shown, the number of processors is an upper bound for the
speedup that can be obtained.
Other restrictions may come from data dependencies within the algorithm to be
implemented, which may limit the degree of parallelism.
An important restriction comes from program parts that have to be executed
sequentially.



Amdahl’s Law
When a (constant) fraction f, 0 ≤ f ≤ 1, of a parallel program must be executed
sequentially, the parallel execution time of the program is composed of the sequential
part f · T(1) and the execution time of the remaining fraction (1 − f), fully
parallelized over n processors, i.e., (1 − f)/n · T(1). The attainable speedup is
therefore

S(n) = 1 / (f + (1 − f)/n)

This estimation assumes that the best sequential algorithm is used and that the parallel
part of the program can be perfectly parallelized.
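The formula translates into a one-line function; a minimal sketch (the name `amdahl_speedup` is a hypothetical choice, not from the lecture):

```python
def amdahl_speedup(f, n):
    """Attainable speedup under Amdahl's Law: S(n) = 1 / (f + (1 - f) / n)."""
    return 1.0 / (f + (1.0 - f) / n)

# Serial fraction f = 0.2 on n = 8 processors:
print(round(amdahl_speedup(0.2, 8), 2))  # → 3.33
```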
Amdahl’s Law
Example: If 20% of a program must be executed sequentially (f = 0.2), then the
attainable speedup is limited to 1/f = 1/0.2 = 5 according to Amdahl's Law, no matter
how many processors are used.
Program parts that must be executed sequentially must therefore be taken into account,
particularly when a large number of processors is employed.

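A quick numerical sketch of this limit (the processor counts chosen here are arbitrary): with f = 0.2 the speedup creeps toward 5 as n grows but never reaches it.

```python
# With serial fraction f = 0.2, speedup approaches 1/f = 5 as n grows.
f = 0.2
for n in (8, 64, 1024, 10**6):
    s = 1.0 / (f + (1.0 - f) / n)
    print(n, round(s, 3))
```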


Example 1
70% of a program's execution time occurs inside a loop that can be executed in
parallel; the remaining 30% is serial. What is the maximum speedup we should expect
from a parallel version of the program executing on 8 CPUs?

Here f = 30/100 = 0.3 and n = 8:

S(n) = 1 / (f + (1 − f)/n) = 1 / (0.3 + 0.7/8) = 1 / 0.3875 ≈ 2.6



Example 2
80% of a program's execution time occurs inside a loop that can be executed in
parallel; the remaining 20% is serial. What is the maximum speedup we should expect
from a parallel version of the program executing on 8 CPUs?

Here f = 20/100 = 0.20 and n = 8:

S(n) = 1 / (f + (1 − f)/n) = 1 / (0.20 + 0.80/8) = 1 / 0.30 ≈ 3.33



Example 3
95% of a program's execution time occurs inside a loop that can be executed in
parallel; the remaining 5% is serial. What is the maximum speedup we should expect
from a parallel version of the program executing on 8 CPUs?

Here f = 5/100 = 0.05 and n = 8:

S(n) = 1 / (f + (1 − f)/n) = 1 / (0.05 + 0.95/8) = 1 / 0.16875 ≈ 5.9

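All three worked examples can be checked with one short sketch (the function name is hypothetical; the slides round 2.58 to 2.6 and 5.93 to 5.9):

```python
def amdahl_speedup(f, n):
    # S(n) = 1 / (f + (1 - f) / n)
    return 1.0 / (f + (1.0 - f) / n)

# Serial fractions from Examples 1-3, all on n = 8 CPUs.
for f in (0.30, 0.20, 0.05):
    print(f"f = {f:.2f}: S(8) = {amdahl_speedup(f, 8):.2f}")
```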
