PDC 1


20 BS(CS)

DEPARTMENT OF COMPUTER SCIENCE

PARALLEL & DISTRIBUTED COMPUTING
Engr. MIR MUHAMMAD
mir.juno@quest.edu.pk
Prerequisites:
 Programming in C, C++, or similar
 Basics of Data Structures
 Basics of Computer Architecture & Org.
 Basics of Operating Systems
 Basics of Network Topologies



INTRODUCTION TO
PARALLEL & DISTRIBUTED COMPUTING
 The simultaneous growth in the availability of big data
and in the number of users on the Internet places
particular pressure on the need to carry out computing
tasks “in parallel”, or simultaneously
 Parallel and distributed computing occurs across many
different topic areas in computer science, including:
 algorithms
 computer architecture
 networks
 operating systems
 software engineering



INTRODUCTION TO
PARALLEL & DISTRIBUTED COMPUTING
During the early 21st century there was
explosive growth in multiprocessor design and
other strategies for running complex
applications faster.
Parallel and distributed computing builds on
fundamental systems concepts, such as
concurrency, mutual exclusion, consistency in
state/memory manipulation, message-passing,
and shared-memory models.
Parallel Computing..!!

In parallel computing, multiple processors
perform multiple tasks assigned to them
simultaneously.
Memory in parallel systems can either be
shared or distributed.
Parallel computing provides concurrency and
saves time and money.



Parallel Computing?!!!

If one man can carry 20 bricks in 20 minutes,
how long would it take 5 men to carry 20
bricks?

Can you read two books simultaneously?


Simultaneous
Concurrent
Parallel



Distributed Computing..!!

In distributed computing we have multiple
autonomous computers which appear to the
user as a single system.
In distributed systems there is no shared
memory; computers communicate with
each other through message passing.
In distributed computing a single task is
divided among different computers.



Difference between
Parallel Computing & Distributed Computing:

PARALLEL COMPUTING                DISTRIBUTED COMPUTING
Many operations are performed     System components are located
simultaneously                    at different locations
A single computer is required     Uses multiple computers
Multiple processors perform       Multiple computers perform
multiple operations               multiple operations



Difference between
Parallel Computing & Distributed Computing:

PARALLEL COMPUTING                DISTRIBUTED COMPUTING
May have shared or                Has only distributed
distributed memory                memory
Processors communicate with       Computers communicate with
each other through a bus          each other through message
                                  passing
Improves system                   Improves system scalability,
performance                       fault tolerance and resource
                                  sharing capabilities



Introduction to Parallel Computing

• Traditionally, software has been written for serial
computation:
– To be run on a single computer having a single Central
Processing Unit (CPU);
– A problem is broken into a discrete series of instructions.
– Instructions are executed one after another.
– Only one instruction may execute at any moment in time.
Introduction to Parallel Computing

 What is Parallel Computing?
 Parallel computing was originally implemented only in
supercomputers for scientific research
 Used for scientific research and computational science
 The main focus of the discipline is developing parallel processing
algorithms and software so that programs can be divided into
small independent parts and executed simultaneously by
separate processors



Introduction to Parallel Computing

• In the simplest sense, parallel computing is the simultaneous
use of multiple compute resources to solve a single
computational problem:
– To be run using multiple CPUs
– A problem is broken into discrete parts that can be solved
concurrently
– Each part is further broken down to a series of instructions
– Instructions from each part execute simultaneously on
different CPUs
Introduction to Parallel Computing

• The compute resources might be:
– A single computer with multiple processors;
– An arbitrary number of computers connected by a network;
– A combination of both.
• The computational problem should be able to:
– Be broken apart into discrete pieces of work that can be solved
simultaneously;
– Execute multiple program instructions at any moment in time;
– Be solved in less time with multiple compute resources than with a
single compute resource.
The Universe is Parallel:
Parallel computing is an evolution of serial computing that
attempts to emulate what has always been the state of affairs in
the natural world: many complex, interrelated events happening
at the same time, yet within a temporal sequence.
Advantages of Parallel Computing over Serial Computing

 It saves time and money, as many resources working
together reduce the time and cut potential costs
 It can be impractical to solve larger problems with serial
computing
 It can take advantage of non-local resources when the
local resources are finite
 Serial computing ‘wastes’ the potential computing
power, thus parallel computing makes better use of the
hardware



Types of Parallelism

 Bit-level parallelism
 It is the form of parallel computing which is based on
increasing the processor's word size
 It reduces the number of instructions that the system must
execute in order to perform a task on large-sized data
 Example: Consider a scenario where an 8-bit processor must
compute the sum of two 16-bit integers. It must first sum up
the 8 lower-order bits, then add the 8 higher-order bits together
with the carry, thus requiring two instructions to perform the
operation. A 16-bit processor can perform the operation with
just one instruction
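
To make the example concrete, here is a minimal C sketch of the
two-step addition, with uint8_t arithmetic standing in for the 8-bit
processor's registers; the function name add16_on_8bit is illustrative
only, not from any particular instruction set:

  #include <stdint.h>
  #include <stdio.h>

  /* Sketch: adding two 16-bit integers using only 8-bit operations,
   * as an 8-bit processor would have to. */
  static uint16_t add16_on_8bit(uint16_t a, uint16_t b) {
      uint8_t a_lo = a & 0xFF, a_hi = a >> 8;   /* split the operands */
      uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

      uint8_t sum_lo = a_lo + b_lo;             /* step 1: low bytes */
      uint8_t carry  = sum_lo < a_lo;           /* carry out of the low add */
      uint8_t sum_hi = a_hi + b_hi + carry;     /* step 2: high bytes + carry */

      return ((uint16_t)sum_hi << 8) | sum_lo;
  }

  int main(void) {
      printf("%u\n", add16_on_8bit(300, 500));  /* prints 800 */
      return 0;
  }

A 16-bit processor does the same work with a single add instruction.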



Types of Parallelism

 Instruction-level parallelism
 A processor can issue and execute more than one instruction
during a single clock cycle
 Instructions that are independent of one another can be
re-ordered and grouped, and then executed concurrently
without affecting the result of the program.
 This is called instruction-level parallelism.
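
A small C fragment illustrates the idea; whether the first two
statements actually issue in the same clock cycle is up to the
hardware, so the grouping in the comments is hypothetical:

  /* Independent instructions a superscalar CPU may execute together. */
  int ilp_demo(int a, int b, int c, int d) {
      int x = a + b;   /* independent of y: may issue in the same cycle */
      int y = c + d;   /* independent of x: may issue in the same cycle */
      int z = x * y;   /* depends on x and y: must wait for both */
      return z;
  }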



Types of Parallelism

Task parallelism
Task parallelism decomposes a task into subtasks
and then allocates each subtask to a processor for
execution.
The processors execute the subtasks
concurrently.
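
As a sketch of task parallelism using POSIX threads (the two subtask
functions are invented for illustration; compile with -lpthread):

  #include <pthread.h>
  #include <stdio.h>

  /* Two independent subtasks, run concurrently on separate threads. */
  static void *load_data(void *arg) { (void)arg; puts("loading data..."); return NULL; }
  static void *render_ui(void *arg) { (void)arg; puts("rendering UI..."); return NULL; }

  int main(void) {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, load_data, NULL);   /* subtask 1 */
      pthread_create(&t2, NULL, render_ui, NULL);   /* subtask 2 */
      pthread_join(t1, NULL);                       /* wait for both */
      pthread_join(t2, NULL);
      return 0;
  }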



Why Use Parallel Computing?
• The real world is dynamic in nature,
i.e. many things happen at the same time but
at different places, concurrently. This data is
extremely large to manage.
• Real world data needs more dynamic
simulation and modeling, and for achieving
the same, parallel computing is the key.
Why Use Parallel Computing?
• Parallel computing provides concurrency and
saves time and money.
• Complex, large datasets and their management
can be organized only by using the parallel
computing approach.
• Ensures the effective utilization of resources.
• The hardware is used effectively, whereas in
serial computation only part of the hardware
is used and the rest is rendered idle.
Why Use Parallel Computing?
• Main Reasons:
– Save time and/or money: In theory, throwing
more resources at a task will shorten its time to
completion, with potential cost savings. Parallel
computers can be built from cheap, commodity
components.
– e.g. clusters and grids
Why Use Parallel Computing?
– Solve larger problems: Many problems are so large and/or
complex that it is impractical or impossible to solve them
on a single computer, especially given limited computer
memory. For example:
• "Grand Challenge"
(en.wikipedia.org/wiki/Grand_Challenge) problems
requiring PetaFLOPS and PetaBytes of computing
resources.
• Web search engines/databases processing millions of
transactions per second
Why Use Parallel Computing?
– Provide concurrency: A single compute resource
can only do one thing at a time. Multiple
computing resources can be doing many things
simultaneously. For example, the Access Grid
(www.accessgrid.org) provides a global
collaboration network where people from around
the world can meet and conduct work "virtually".
Why Use Parallel Computing?
– Use of non-local resources: Using compute
resources on a wide area network, or even the
Internet when local compute resources are scarce.
For example:

• SETI@home (setiathome.berkeley.edu): over 1.3 million
users and 3.2 million computers in nearly every country in
the world.
• Folding@home (folding.stanford.edu): uses over
450,000 CPUs globally (July 2011)
Why Use Parallel Computing?
– Limits to serial computing: Both physical and practical reasons pose
significant constraints to simply building ever faster serial computers:

• Transmission speeds - the speed of a serial computer is directly
dependent upon how fast data can move through hardware.
Absolute limits are the speed of light (30 cm/nanosecond) and the
transmission limit of copper wire (9 cm/nanosecond). Increasing
speeds necessitate increasing proximity of processing elements.
• Limits to miniaturization - processor technology is allowing an
increasing number of transistors to be placed on a chip. However,
even with molecular or atomic-level components, a limit will be
reached on how small components can be.
Why Use Parallel Computing?
• Economic limitations - it is increasingly expensive to
make a single processor faster. Using a larger number of
moderately fast commodity processors to achieve the
same (or better) performance is less expensive.
• Current computer architectures are increasingly relying
upon hardware level parallelism to improve
performance:
– Multiple execution units
– Pipelined instructions
– Multi-core
When Do We Need Parallel Computing?

 Case 1: Complete a time-consuming operation in less time
 I am an automotive engineer
 I need to design a new car that consumes less gasoline
 I’d rather have the design completed in 6 months than in 2 years
 I want to test my design using computer simulations rather than
building very expensive prototypes and crashing them

 Case 2: Complete an operation under a tight deadline
 I work for a weather prediction agency
 I am getting input from weather stations/sensors
 I’d like to predict tomorrow’s forecast today



When Do We Need Parallel Computing?

 Case 3: Perform a high number of operations per second
 I am an engineer at Amazon.com
 My Web server gets 1,000 hits per second
 I’d like my web server and databases to handle 1,000 transactions
per second so that customers do not experience bad delays



Where are we using Parallel Computing
(Applications)?
 Used to solve complex modeling problems in a spectrum of disciplines:
o Artificial intelligence
o Climate modeling
o Automotive engineering
o Cryptographic analysis
o Geophysics
o Molecular biology
o Molecular dynamics
o Physical oceanography
o Plasma physics
o Quantum physics
o Quantum chemistry
o Nuclear physics
o Solid state physics
o Structural dynamics
 Parallel Computing is currently applied to business uses as well
 data warehouses
 transaction processing



Changing times

 From 1986 – 2002, microprocessors were speeding like a rocket,
increasing in performance by an average of 50% per year

 Since then, growth has dropped to about a 20% increase per year



The Problem

 Up to now, performance increases have been
attributed to the increasing density of transistors
 But there are inherent problems
 A little Physics lesson:
 Smaller transistors = faster processors
 Faster processors = increased power consumption
 Increased power consumption = increased heat
 Increased heat = unreliable processors



An intelligent solution

 Move away from single‐core systems to multicore processors
 “core” = processing unit
 Introduction of parallelism..!!

 But …
 Adding more processors doesn’t help much if programmers
aren’t aware of them..!!
 … or don’t know how to use them
 Serial programs don’t benefit from this approach (in most
cases)



Parallel Computing
 Form of computation in which many calculations are carried out
simultaneously, operating on the principle that large problems can
often be divided into smaller ones, which are then solved
concurrently, i.e. "in parallel"

 So, we need to rewrite serial programs so that they’re parallel.

 Write translation programs that automatically convert serial
programs into parallel programs.
 This is very difficult to do.
 Success has been limited



Parallel Computing
 Example
 Compute n values and add them together.
 Serial solution:
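
The slide's listing did not survive extraction; the following is a
minimal C sketch of such a serial sum, with a stand-in body for
Compute_next_value (an assumption) so the sketch compiles:

  #include <stdio.h>

  /* Stand-in for the slides' Compute_next_value; any per-index
   * computation would do. */
  static double Compute_next_value(int i) { return (double)i; }

  int main(void) {
      int n = 200;
      double sum = 0.0;
      for (int i = 0; i < n; i++) {
          double x = Compute_next_value(i);  /* compute the next value */
          sum += x;                          /* add it to the running sum */
      }
      printf("sum = %f\n", sum);
      return 0;
  }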



Parallel Computing
 Example
 We have p cores, p much smaller than n.
 Each core performs a partial sum of approximately n/p values.
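
The parallel listing is also missing; a sketch of what each core might
run, assuming p divides n evenly and each core knows its rank (both
conventions assumed here, reusing Compute_next_value from the serial
sketch above):

  /* Each core sums its own block of n/p values. */
  double partial_sum(int my_rank, int p, int n) {
      int my_first_i = my_rank * (n / p);    /* first index for this core */
      int my_last_i  = my_first_i + (n / p); /* one past its last index */
      double my_sum = 0.0;
      for (int my_i = my_first_i; my_i < my_last_i; my_i++)
          my_sum += Compute_next_value(my_i);
      return my_sum;
  }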



Parallel Computing
 Example
 After each core completes execution of the code, its private
variable my_sum contains the sum of the values computed by
its calls to Compute_next_value.

 Ex., n = 200, then
• Serial – will take 200 additions
• Parallel (for 8 cores)
– Each core will perform n/p = 25 additions
– And the master will perform 8 receive operations and 8
more additions
– Total: 41 operations
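
A sketch of the gathering step, with plain C standing in for real
message passing: the array partial[] represents the values the master
has received, one per core:

  /* Master accumulates the p partial sums into the global sum. */
  double gather_at_master(const double partial[], int p) {
      double sum = 0.0;
      for (int core = 0; core < p; core++)
          sum += partial[core];  /* one receive + one addition per core */
      return sum;                /* p = 8: 25 + 8 + 8 = 41 operations */
  }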



Parallel Computing
 Some coding constructs can be recognized by an automatic program
generator, and converted to a parallel construct.
 However, it’s likely that the result will be a very inefficient program.
 Sometimes the best parallel solution is to step back and devise an
entirely new algorithm.
 Parallel computer programs are more difficult to write than
sequential programs
 Potential problems
 Race condition (output depending on sequence or timing of
other events; see the sketch below)
 Communication and synchronization between the different
subtasks
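
As an illustration of a race condition, a minimal pthreads sketch in
which two threads increment a shared counter without synchronization;
the final value is unpredictable because the unsynchronized
read-modify-write sequences interleave:

  #include <pthread.h>
  #include <stdio.h>

  static long counter = 0;            /* shared, unprotected state */

  static void *increment(void *arg) {
      (void)arg;
      for (int i = 0; i < 100000; i++)
          counter++;                  /* read-modify-write: not atomic */
      return NULL;
  }

  int main(void) {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, increment, NULL);
      pthread_create(&t2, NULL, increment, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      /* Expected 200000, but lost updates often make it smaller. */
      printf("counter = %ld\n", counter);
      return 0;
  }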



Limitations of Parallel Computing
 It involves challenges such as communication and synchronization
between multiple sub-tasks and processes, which are difficult to
achieve
 The algorithms must be managed in such a way that they can
be handled by the parallel mechanism
 The algorithms or programs must have low coupling and high
cohesion, but it is difficult to create such programs
 Only more technically skilled and expert programmers can code a
parallelism-based program well



Future of Parallel Computing
 The computational landscape has undergone a great transition
from serial computing to parallel computing.
 Tech giants such as Intel have already taken a step towards
parallel computing by employing multicore processors.
 Parallel computation will revolutionize the way computers
work in the future, for the better.
 With the whole world connecting to each other even more than
before, parallel computing plays a key role in helping us
stay connected.
 With faster networks, distributed systems, and multi-
processor computers, it becomes even more necessary.

