Unit 6: Programmability Issues


Unit 6

Programmability Issues
Points to be covered
• Types and levels of parallelism
• Operating systems for parallel processing; models of parallel operating systems: master-slave configuration, separate supervisor configuration, floating supervisor control
Types and Levels of Parallelism
1) Instruction Level Parallelism
2) Loop-level Parallelism
3) Procedure-level Parallelism
4) Subprogram-level Parallelism
5) Job or Program-Level Parallelism
Instruction Level Parallelism
• At this fine-grained (smallest-granularity) level, a grain typically contains fewer than 20 instructions.
• The number of candidates for parallel execution varies from 2 to thousands, with about five instructions or statements being the average degree of parallelism.
Advantages:
• There are usually many candidates for parallel execution.
• Compilers can usually do a reasonable job of finding this parallelism, as the sketch below illustrates.
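
As a minimal illustration (my example, not from the text), the four statements in compute() below are mutually independent, so a multi-issue processor can schedule them in parallel:

    /* ilp.c - four statements with no data dependences between them;
     * a k-issue (superscalar) processor can issue them in parallel. */
    #include <stdio.h>

    void compute(int r[4], int b, int c, int e, int f) {
        r[0] = b + c;    /* none of these four reads a value   */
        r[1] = e * f;    /* written by one of the others, so   */
        r[2] = b - f;    /* the hardware is free to execute    */
        r[3] = c << 1;   /* them simultaneously                */
    }

    int main(void) {
        int r[4];
        compute(r, 1, 2, 3, 4);
        printf("%d %d %d %d\n", r[0], r[1], r[2], r[3]);
        return 0;
    }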
Loop-level Parallelism
• A typical loop contains fewer than 500 instructions. If the loop iterations are independent of one another, the loop can be handled by a pipeline or by a SIMD machine (see the sketch after this list).
• Loops are the most optimized program construct for execution on a parallel or vector machine; some loops (e.g., recursive loops) are difficult to handle.
• Loop-level parallelism is still considered fine-grain computation.
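
A sketch of loop-level parallelism, expressed here with an OpenMP pragma (the tool choice is mine; the text names no particular mechanism). Each iteration touches only its own array element, so the iterations may run in parallel:

    /* loop.c - compile with: cc -fopenmp loop.c */
    #include <stdio.h>
    #define N 1000

    int main(void) {
        static double a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { b[i] = i; c[i] = 2.0 * i; }

        /* Iteration i reads b[i], c[i] and writes a[i] only, so the
         * iterations are independent and may execute in any order. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = b[i] + c[i];

        printf("a[%d] = %.1f\n", N - 1, a[N - 1]);
        return 0;
    }

By contrast, a recursive loop such as a[i] = a[i-1] + b[i] carries a dependence between iterations and cannot be parallelized this way.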
Procedure-level Parallelism
• Medium-sized grain; usually less than 2000 instructions.
• Detection of parallelism is more difficult than with smaller grains; interprocedural dependence analysis is difficult.
• The communication requirement is lower than at the instruction level.
• SPMD (single program, multiple data) is a special case.
• Multitasking belongs to this level (a sketch follows this list).
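
A minimal sketch of this level using POSIX threads (an assumed mechanism; any multitasking facility would do): each procedure becomes a medium-sized grain that the system can schedule onto a free processor.

    /* proc.c - compile with: cc proc.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    void *procedure_a(void *arg) { (void)arg; puts("procedure A"); return NULL; }
    void *procedure_b(void *arg) { (void)arg; puts("procedure B"); return NULL; }

    int main(void) {
        pthread_t ta, tb;
        pthread_create(&ta, NULL, procedure_a, NULL);  /* each call is a   */
        pthread_create(&tb, NULL, procedure_b, NULL);  /* schedulable task */
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        return 0;
    }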
Subprogram-level Parallelism
• A grain typically contains thousands of instructions.
• Multiprogramming is conducted at this level.
• No compilers are currently available to exploit medium- or coarse-grain parallelism.
Job Level
• Corresponds to the execution of essentially independent jobs or programs on a parallel computer.
• This is practical for a machine with a small number of powerful processors, but impractical for a machine with a large number of simple processors (since each processor would take too long to process a single job).
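
A sketch of job-level parallelism using POSIX fork()/exec() (my illustrative choice): two entirely independent programs run side by side with no communication between them. The job commands are placeholders.

    /* jobs.c - launch two independent jobs in parallel (POSIX) */
    #include <sys/wait.h>
    #include <unistd.h>

    static void run(char *const argv[]) {
        if (fork() == 0) {          /* child process becomes the job */
            execvp(argv[0], argv);
            _exit(127);             /* reached only if exec fails */
        }
    }

    int main(void) {
        char *job1[] = { "sleep", "1", NULL };  /* placeholder jobs */
        char *job2[] = { "sleep", "1", NULL };
        run(job1);
        run(job2);
        while (wait(NULL) > 0)      /* reap both children */
            ;
        return 0;
    }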
Hardware and software parallelism
• Hardware parallelism is defined by the machine architecture.
• It can be characterized by the number of instructions that can be issued per machine cycle. If a processor issues k instructions per machine cycle, it is called a k-issue processor.
• Conventional processors are one-issue machines.
Software Parallelism
• Software parallelism is defined by the control and data dependences of a program and is revealed in the program's flow graph; i.e., it is defined by dependencies within the code and is a function of the algorithm, programming style, and compiler optimization (see the example below).
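
A small example of my own showing how data dependence, not hardware, bounds software parallelism: each statement below reads the previous result, so even a wide-issue processor must execute them one after another.

    /* A dependence chain: software parallelism here is 1,
     * whatever the hardware's issue width k may be. */
    int chain(int x) {
        x = x + 1;   /* must complete first          */
        x = x * 2;   /* reads the result of the add  */
        x = x - 3;   /* reads the result of the mul  */
        return x;
    }

Raising this limit requires restructuring the algorithm, which is why software parallelism is a function of programming style and compiler optimization.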
The Role of Compilers
• Compilers are used to exploit hardware features to improve performance.
• Interaction between compiler and architecture design is a necessity in modern computer development. It is not necessarily the case that more software parallelism will improve performance on conventional scalar processors.
• The hardware and compiler should be designed at the same time.
Operating System For Parallel Processing
• There are basically three organizations that have been employed in the design of operating systems for multiprocessors. They are:
1) Master-slave configuration
2) Separate supervisor configuration
3) Floating supervisor configuration
• This classification applies not only to operating systems but, in general, to all parallel programming strategies.
Master Slave Configuration
• In the master-slave mode, one processor, called the master, maintains the status of all the other processors in the system and distributes work to the slave processors.
• The operating system runs only on the master processor; all other processors are treated as schedulable resources.
• A processor needing executive service must send a request to the master, which acknowledges the request and performs the service.
• This scheme is a simple extension of a uniprocessor operating system and is fairly easy to implement.
• The scheme, however, makes the system extremely susceptible to failures. (What if the master fails?)
• Many of the slaves may have to wait for the master's current work to finish before their own requests are served. A sketch follows this list.
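
A toy master-slave sketch using threads as stand-ins for processors (my construction, not a real operating system): the master's job queue is the single point every slave must go through, which also shows why a failed master stalls the whole system.

    /* master_slave.c - compile with: cc master_slave.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    #define SLAVES 3
    #define JOBS   9

    static int next_job = 0;   /* work status, owned by the master side */
    static pthread_mutex_t master_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Slaves request work from the master's queue; they never
     * schedule work themselves. */
    static int request_work(void) {
        pthread_mutex_lock(&master_lock);
        int job = (next_job < JOBS) ? next_job++ : -1;
        pthread_mutex_unlock(&master_lock);
        return job;
    }

    static void *slave(void *id) {
        for (int job; (job = request_work()) != -1; )
            printf("slave %ld runs job %d\n", (long)id, job);
        return NULL;
    }

    int main(void) {
        pthread_t t[SLAVES];
        for (long i = 0; i < SLAVES; i++)
            pthread_create(&t[i], NULL, slave, (void *)i);
        for (int i = 0; i < SLAVES; i++)
            pthread_join(t[i], NULL);
        return 0;
    }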
Separate Supervisor System
• In this approach, each processor contains its own copy of the kernel.
• Resource sharing occurs via shared memory blocks.
• Each processor services its own needs.
• If the processors access shared kernel code, that code must be reentrant (illustrated below).
• A separate supervisor system is less susceptible to failures.
• This scheme, however, demands extra resources for maintaining copies of the tables describing resource allocation, etc., for each of the processors.
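
The reentrancy requirement can be seen in a short C comparison (an illustration of mine): shared kernel code must keep its state on each caller's stack, never in shared static storage.

    /* Non-reentrant: two processors running this at once would
     * trample the shared static variable. */
    static int scratch;
    int bad_service(int x)  { scratch = x * 2; return scratch + 1; }

    /* Reentrant: all state is local to the caller's stack, so any
     * number of processors may execute this shared code at once. */
    int good_service(int x) { int local = x * 2; return local + 1; }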
Floating Supervisor Control
• The supervisor routine floats from one processor to another, although several of the processors may be executing supervisory service routines simultaneously.
• Better load balancing among processors.
• Considerable code sharing (see the sketch below).
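
A rough sketch of floating supervision (my construction): no processor is permanently the supervisor; whichever one needs a supervisory service executes the shared routine itself, serialized only around the shared tables.

    #include <pthread.h>

    static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
    static int resource_table[16];   /* shared supervisory state */

    /* Any processor (thread) may run this; the supervisor "role"
     * floats to whoever calls it. res is assumed to be in range. */
    void supervisor_service(int res, int owner) {
        pthread_mutex_lock(&table_lock);
        resource_table[res] = owner;
        pthread_mutex_unlock(&table_lock);
    }

With finer-grained locks on different tables, several processors could execute different supervisory routines simultaneously, as described above.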
• Generally, one sees a combination of the above schemes used to obtain a practical solution.
End Of Module 6
