
Debre Birhan University

School of Computing
Department of Computer Science

Introduction to Distributed Systems

Chapter 3
Processes and Processors in
Distributed Systems

Processes and Processors in
Distributed Systems
3.1 Processes and Threads
3.2 Remote Procedure Call (RPC)
• Remote Method Invocation (RMI) with Java
3.3 System Models
• Centralized model
• Client-server model
• Peer-to-peer model
• Multi-tier client-server architectures
• Processor-pool model
3.4 Processor Allocation
3.5 Scheduling in Distributed Systems
Processes
• A process is a program in execution; it forms the
basis of all computation.
• The components of a process are:
• Data
• Resources
• Status of the process
• Only one process can be running on any processor
at any instant; many other processes may be in the
ready or waiting state.
Threads
• A thread is a flow of execution through the
process code, with its own program counter,
system registers and stack.
• If there is only a single thread of control,
computation cannot proceed while the
program is waiting for input.
• The easy solution is to have at least two threads
of control – multithreading.
• Multithreading exploits parallelism to attain high
performance.
Examples of Multithreading
• Spreadsheet program
• one thread for handling interaction with the user, and
• one thread for updating the spreadsheet.
• Meanwhile, a third thread could back up the
spreadsheet to disk while the other two are doing
their work.
• Word Processor
• Editor
• Spelling Checker
• Auto Saving
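The word-processor example above can be sketched as follows. This is a minimal illustration, assuming a shared document buffer guarded by synchronized methods; all class and method names are illustrative, and a real word processor would also run a spelling-checker thread.

```java
// Minimal sketch of the word-processor example: an editor thread and an
// auto-save thread share one document buffer.
public class WordProcessor {
    private final StringBuilder document = new StringBuilder();

    public synchronized void type(String text) { document.append(text); }
    public synchronized String snapshot() { return document.toString(); }

    public static void main(String[] args) throws InterruptedException {
        WordProcessor wp = new WordProcessor();

        // Editor thread: handles (simulated) user input.
        Thread editor = new Thread(() -> wp.type("Hello, world."));

        // Auto-save thread: takes a consistent snapshot to write to disk.
        Thread autoSave = new Thread(() ->
            System.out.println("autosaved: " + wp.snapshot()));

        editor.start();
        editor.join();          // wait for the edit before auto-saving
        autoSave.start();
        autoSave.join();
    }
}
```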
Remote Procedure Call (RPC)
• Many distributed systems have been based on
explicit message exchange between processes.
• RPC allows programs to call procedures located on
other machines.
• When a process on machine A calls a procedure on
machine B, the calling process on A is suspended,
and execution of the called procedure takes place
on B.
• Major Issue: access transparency

Client and Server Stubs
• The idea behind RPC is to make a remote
procedure call look as much as possible like a local
one.
• The calling procedure should not be aware that the
called procedure is executing on a different
machine.
• The client stub packs the parameters into a message
(parameter marshaling) and requests that the
message be sent to the server.
• The server stub unpacks the parameters from the
message and then calls the server procedure in the
usual way.
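As a sketch of what marshaling looks like, the following packs a procedure name and an integer parameter into a flat byte message, the way a client stub might, and unpacks them the way a server stub would. The procedure name "increment" and the method names are illustrative, not part of any real RPC library.

```java
import java.io.*;

// Sketch of parameter marshaling: the client stub flattens the call into
// a machine-neutral byte message; the server stub reads it back out.
public class Marshal {
    static byte[] marshal(String proc, int arg) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF(proc);   // which remote procedure to call
        out.writeInt(arg);    // its parameter, in a portable format
        return buf.toByteArray();
    }

    static int unmarshalArg(byte[] msg) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(msg));
        String proc = in.readUTF();  // server stub reads the procedure name
        return in.readInt();         // ...then the parameter
    }

    public static void main(String[] args) throws IOException {
        byte[] msg = marshal("increment", 41);
        System.out.println(unmarshalArg(msg)); // prints 41
    }
}
```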
Client and Server Stubs

Fig. Principle of RPC between a client and server program.


Steps of a Remote Procedure Call
1. Client procedure calls client stub in normal way
2. Client stub builds message, calls local OS
3. Client's OS sends message to remote OS
4. Remote OS gives message to server stub
5. Server stub unpacks parameters, calls server
6. Server does work, returns result to the stub
7. Server stub packs it in message, calls local OS
8. Server's OS sends message to client's OS
9. Client's OS gives message to client stub
10. Stub unpacks result, returns to client
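The ten steps above can be sketched in-process, with the message transport (steps 3-4 and 8-9) collapsed into a direct call; all names are illustrative.

```java
// In-process sketch of the ten RPC steps, with the network replaced
// by a direct hand-off between the two stubs.
public class RpcSteps {
    // Steps 5-7: the server stub unpacks the argument, calls the server,
    // and packs the result into the reply.
    static int serverStub(int packedArg) {
        return server(packedArg);            // step 6: server does the work
    }
    static int server(int n) { return n * n; }

    // Steps 1-2 and 9-10: the client stub packs the call and later
    // unpacks the reply, so the client sees an ordinary procedure call.
    static int clientStub(int arg) {
        int reply = serverStub(arg);         // steps 3-4, 8: "transport"
        return reply;                        // step 10: return to client
    }

    public static void main(String[] args) {
        System.out.println(clientStub(7));   // prints 49
    }
}
```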

Passing Value Parameters

Fig. Steps involved in doing remote computation through RPC
Writing a Client and a Server

Fig. The steps in writing a client and a server


How to Write RMI Applications
1. Define the remote interface shared by client and server
2. Implement the interface in a server class (.java)
3. Compile the server class with javac (.class)
4. Run the rmic compiler on the server class to generate the
client stub and the server skeleton (.class)
5. Start the RMI registry
6. Start the server objects
7. Register the remote objects with the registry
8. Implement the client, which uses the stub (.java)
9. Compile the client with javac (.class)
10. Start the client
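The RMI steps above can be sketched in a single JVM. In modern Java the separate rmic step is no longer needed (stubs are generated dynamically at run time); the interface name, method, and registry port below are illustrative.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Step 1: define the remote interface shared by client and server.
interface Adder extends Remote {
    int add(int a, int b) throws RemoteException;
}

// Step 2: implement the interface in a server class.
class AdderImpl extends UnicastRemoteObject implements Adder {
    AdderImpl() throws RemoteException { super(); }
    public int add(int a, int b) { return a + b; }
}

public class RmiDemo {
    static int demo() throws Exception {
        Registry reg = LocateRegistry.createRegistry(1099); // start registry
        AdderImpl impl = new AdderImpl();
        reg.rebind("Adder", impl);                 // register the remote object

        Adder stub = (Adder) reg.lookup("Adder");  // client obtains the stub
        int sum = stub.add(2, 3);                  // remote call via the stub

        UnicastRemoteObject.unexportObject(impl, true); // let the JVM exit
        UnicastRemoteObject.unexportObject(reg, true);
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints 5
    }
}
```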
Binding a Client to a Server

Fig. Client-to-server binding
System Models
• Computers can perform various functions, and each
unit in a distributed system may be responsible for a
number of functions.

• System models
• Centralized model
• Client-server model
• Peer-to-peer model
• Multi-tier client-server architectures
• Processor-pool model

Centralized model
• All aspects of the application are hosted on one machine and users directly connect to that
machine.
• The main problem with the centralized model is that it is not easily scalable.

Client-server model
• The client-server model is a popular networked
model consisting of three components:
• Service
• Server
• Client

Peer-to-peer model
• A peer-to-peer model assumes that each machine has somewhat
equivalent capabilities, that no machine is dedicated to serving others.

Multi-tier client-server
architectures
• For certain services, it may make sense to have a
hierarchy of connectivity
• This leads us to examine multitier architectures
• A middle tier is added between the client, which
provides the user interface, and the application server.

Processor-pool model
• Use all available computing resources for running jobs
• An operating system can automatically start processes on idle machines and even migrate processes to machines with the most available
CPU cycles
• We maintain a collection of CPUs that can be dynamically assigned to processes on demand.

Processor allocation and scheduling
• Determine which process is assigned to which processor, also called load distribution.
• Two categories:
• Nonmigratory: once allocated, a process cannot move, no matter how overloaded the machine is.
• Migratory: a process can move even after execution has started.

The goals of allocation
• Maximize CPU utilization
• Minimize mean response time

Response Ratio
• Minimize the response ratio
Response ratio – the amount of time it takes to run a process on some machine, divided by how long it would take on an unloaded benchmark processor.
E.g. 1. A 1-sec job that takes 5 sec: the ratio is 5/1 = 5.
2. A 1-min job that takes 70 sec: the ratio is 70/60 = 7/6.
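The two examples above can be checked with a small computation; the method name is illustrative.

```java
// Response ratio = actual run time / run time on an unloaded
// benchmark processor (lower is better).
public class ResponseRatio {
    static double ratio(double actualSeconds, double benchmarkSeconds) {
        return actualSeconds / benchmarkSeconds;
    }

    public static void main(String[] args) {
        System.out.println(ratio(5, 1));    // 1-sec job that took 5 sec: 5.0
        System.out.println(ratio(70, 60));  // 1-min job that took 70 sec: 7/6
    }
}
```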

Design issues for processor
allocation algorithms
 Deterministic versus heuristic algorithms
 Deterministic - appropriate when everything about the process is known
 Heuristic – load is completely unpredictable
 Centralized versus distributed algorithms
 Centralized- collecting all information on a single machine
 Distributed- information is decentralized
 Optimal versus suboptimal algorithms
 Optimal- best allocation
 Suboptimal – an acceptable allocation


• Local versus global algorithms
 Local – makes decision based on local transfer policy
 Global- based on information gathered from elsewhere about the load
• Sender-initiated versus receiver-initiated algorithms
 Sender-initiated – an overloaded machine sends out
requests for help to other machines
 Receiver-initiated – an idle machine announces to other
machines that it has little work

Scheduling in Distributed Systems
• Each processor can do its local scheduling without regard to what the other processors are doing.
• So what is an issue here?
• When a group of interacting processes are running on different processors
• Independent scheduling becomes ineffective

Example
• Time-sharing with a time slice of 100 msec

• Q: Assume that A sends many messages to D. How long
does it take to complete one message exchange?
A: 200 msec, because the two jobs run out of phase.
• Solution: make processes that communicate frequently
run simultaneously.

Fig. Two jobs running out of phase with each other


Co-Scheduling
• Which takes account into interprocess communication
patterns while scheduling to ensure that all members of a
group runs at the same time.
• How is it possible?
• By using conceptual matrix [Processor ][Time Slot]
• Eg. If we have four processes (P1, P2, P3 and P4) how do
schedule them for optimum performance
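The conceptual matrix can be sketched as below, assuming two processors and that (P1, P2) and (P3, P4) are the communicating pairs; the layout and names are illustrative.

```java
// Sketch of the co-scheduling matrix: rows are time slots, columns are
// processors. Each communicating pair is placed in the same slot so
// both members of the pair run simultaneously.
public class CoSchedule {
    static final String[][] SLOT_BY_PROCESSOR = {
        { "P1", "P2" },   // slot 0: P1 and P2 run together
        { "P3", "P4" },   // slot 1: P3 and P4 run together
    };

    static String slotLine(int slot) {
        return "slot " + slot + ": " + String.join(" ", SLOT_BY_PROCESSOR[slot]);
    }

    public static void main(String[] args) {
        for (int s = 0; s < SLOT_BY_PROCESSOR.length; s++) {
            System.out.println(slotLine(s)); // slot 0: P1 P2 / slot 1: P3 P4
        }
    }
}
```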

