Unit 6: Programmability Issues
Points to be covered:
1) Types and levels of parallelism
2) Operating systems for parallel processing; models of parallel operating systems: Master-slave configuration, Separate supervisor configuration, Floating supervisor control
Types and Levels of Parallelism
1) Instruction Level Parallelism
2) Loop-level Parallelism
3) Procedure-level Parallelism
4) Subprogram-level Parallelism
5) Job or Program-Level Parallelism
Instruction Level Parallelism
This is the fine-grained (smallest-granularity) level; a grain typically contains fewer than 20 instructions. The number of candidates for parallel execution varies from two to thousands, with about five instructions or statements executable in parallel on average.
Advantages:
There are usually many candidates for parallel execution.
Compilers can usually do a reasonable job of finding this parallelism.
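The limit on instruction-level parallelism is data dependence. A minimal sketch (the `independent` helper and register names are illustrative, not from any real ISA): two instructions can issue in the same cycle only if neither reads a register the other writes and they do not write the same register.

```python
# Hypothetical sketch: decide whether two instructions in a straight-line
# block may issue in the same cycle. An instruction is modeled as
# (destination_register, set_of_source_registers).

def independent(i1, i2):
    """True when i1 and i2 have no RAW, WAR, or WAW hazard between them."""
    d1, s1 = i1
    d2, s2 = i2
    return d1 != d2 and d1 not in s2 and d2 not in s1

# Three instructions: r1 = r2 + r3; r4 = r5 * r6; r7 = r1 - r4
a = ("r1", {"r2", "r3"})
b = ("r4", {"r5", "r6"})
c = ("r7", {"r1", "r4"})

print(independent(a, b))  # True: no shared registers, can issue together
print(independent(a, c))  # False: c reads r1, which a writes
```

A compiler or superscalar issue logic applies exactly this kind of check (over a whole window, not just pairs) when hunting for fine-grain parallelism.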
Loop-level Parallelism
A typical loop contains fewer than 500 instructions. If the loop's iterations are independent of one another, they can be handled by a pipeline or by a SIMD machine.
Independent loops are the program construct most readily optimized for execution on a parallel or vector machine; some loops (e.g. recursive loops) are difficult to handle.
Loop-level parallelism is still considered fine-grain computation.
Procedure-level Parallelism
Medium-sized grain; usually fewer than 2000 instructions. Detecting parallelism here is more difficult than at smaller grains, because interprocedural dependence analysis is hard.
The communication requirement is lower than at the instruction level. SPMD (single program, multiple data) execution is a special case at this level, and multitasking also belongs to it.
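Multitasking at this level can be sketched as two independent procedures run as concurrent tasks (procedure names and workloads here are purely illustrative):

```python
# Sketch of procedure-level parallelism as multitasking: two procedures
# with no data dependence between them execute concurrently.
import threading

results = {}

def procedure_a():
    # First medium-grain task: writes only its own result slot.
    results["a"] = sum(range(100))

def procedure_b():
    # Second, independent task.
    results["b"] = max(range(100))

def run_multitasked():
    tasks = [threading.Thread(target=procedure_a),
             threading.Thread(target=procedure_b)]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()
    return results
```

The interprocedural analysis the slide mentions is exactly the work of proving that `procedure_a` and `procedure_b` touch disjoint data, so they may be scheduled like this.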
Subprogram-level Parallelism
A grain at this level typically contains thousands of instructions. Multiprogramming is conducted at this level.
No compilers are available at present to automatically exploit medium- or coarse-grain parallelism.
Job or Program-Level Parallelism
Corresponds to the execution of essentially independent jobs or programs on a parallel computer.
This is practical for a machine with a small number of powerful processors, but impractical for a machine with a large number of simple processors, since each processor would take too long to process a single job.
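Job-level parallelism can be sketched by launching whole independent programs as operating-system processes. Here each "job" is a separate Python interpreter invocation (an illustrative stand-in for real batch jobs):

```python
# Sketch: at job level, the units of parallelism are entire programs.
# Each job runs in its own process; the parent only collects the outputs.
import subprocess
import sys

def run_jobs(job_sources):
    """Start every job concurrently, then gather each job's stdout."""
    procs = [subprocess.Popen([sys.executable, "-c", src],
                              stdout=subprocess.PIPE, text=True)
             for src in job_sources]
    return [p.communicate()[0].strip() for p in procs]

outputs = run_jobs(["print(2 + 2)", "print('job two')"])
```

Because the jobs share nothing, no dependence analysis is needed at all; the cost is that each processor must be powerful enough to finish a whole job in reasonable time.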
Hardware and software parallelism
Hardware parallelism is defined by the machine architecture. It can be characterized by the number of instructions that can be issued per machine cycle: if a processor issues k instructions per machine cycle, it is called a k-issue processor.
Conventional processors are one-issue machines.
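The k-issue definition gives a simple lower bound on execution time. Assuming an ideal processor with enough parallelism available, a block of n instructions needs at least ⌈n / k⌉ issue cycles:

```python
# Minimum issue cycles on an ideal k-issue processor, assuming enough
# independent instructions are available to fill every issue slot.
import math

def cycles_needed(n_instructions, k):
    return math.ceil(n_instructions / k)

# A conventional one-issue machine needs one cycle per instruction;
# a 2-issue processor can at best halve that count.
print(cycles_needed(10, 1))  # 10
print(cycles_needed(10, 2))  # 5
```

In practice the software parallelism described next decides whether those issue slots can actually be filled.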
Software Parallelism
Software parallelism is defined by the control and data dependences of programs, and is revealed in the program's flow graph. That is, it is determined by dependencies within the code, and is a function of the algorithm, programming style, and compiler optimization.
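One common way to quantify this (an assumed formulation, sketched here for illustration) is the average parallelism of the flow graph: the total number of operations divided by the length of the longest dependence chain, which is the minimum number of sequential steps.

```python
# Sketch: measure software parallelism from a dependence graph as
# (total operations) / (longest dependence chain).
# deps maps each operation to the set of operations it depends on.

def software_parallelism(deps):
    depth = {}

    def chain_depth(n):
        # Depth of an operation = 1 + deepest predecessor (memoized).
        if n not in depth:
            depth[n] = 1 + max((chain_depth(p) for p in deps[n]), default=0)
        return depth[n]

    longest = max(chain_depth(n) for n in deps)
    return len(deps) / longest

# Illustrative flow graph: four loads feed two multiplies, which feed
# one add -- 7 operations with a critical path of 3 steps.
graph = {"l1": set(), "l2": set(), "l3": set(), "l4": set(),
         "m1": {"l1", "l2"}, "m2": {"l3", "l4"}, "a": {"m1", "m2"}}
print(software_parallelism(graph))  # 7/3, about 2.33
```

A hardware design with fewer issue slots than this average will leave software parallelism unexploited; one with more will leave issue slots idle.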
The Role of Compilers
Compilers are used to exploit hardware features to improve performance. Interaction between compiler and architecture design is a necessity in modern computer development: it is not necessarily the case that more software parallelism will improve performance on conventional scalar processors, so the hardware and compiler should be designed at the same time.
Operating Systems for Parallel Processing
There are basically three organizations that have been employed in the design of operating systems for multiprocessors: the master-slave configuration, the separate supervisor configuration, and floating supervisor control.