Module 3
[Figure: applications and servers (kernel services) running in user space]
Thread vs. Process:
• Thread: A thread cannot live independently; it lives within a process. Process: A process contains at least one thread.
• Thread: Threads are very inexpensive to create. Process: Processes are very expensive to create and involve a lot of OS overhead.
• Thread: Context switching between threads is inexpensive and fast. Process: Context switching between processes is complex, involves a lot of OS overhead, and is comparatively slower.
• Thread: If a thread expires, its stack is reclaimed by the process. Process: If a process dies, the resources allocated to it are reclaimed by the OS, and all the associated threads of the process also die.
Task Communication:
• A shared memory is an extra piece of memory that is attached to
some address spaces for their owners to use.
• As a result, all of these processes share the same memory
segment and have access to it.
• Consequently, race conditions may occur if memory accesses are not handled properly.
• The following figure shows two processes and their address
spaces.
• The yellow rectangle is a shared memory segment attached to both address spaces; both process 1 and process 2 can access this shared memory as if it were part of their own address space.
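As an illustration of the idea above, here is a minimal Python sketch (the segment size and the value written are assumptions for the example): a child process attaches to a segment created by the parent and writes into it, and the parent sees the write because both address spaces map the same memory.

```python
from multiprocessing import Process, shared_memory

def writer(name):
    # Hypothetical child task: attach to the parent's segment and write to it
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42
    shm.close()

def demo():
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        child = Process(target=writer, args=(shm.name,))
        child.start()
        child.join()
        return shm.buf[0]       # the parent sees the child's write
    finally:
        shm.close()
        shm.unlink()
```

Note that nothing here synchronizes the two processes beyond `join()`; concurrent writers to `shm.buf` could still race, which is exactly why the text warns about race conditions.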
• Round Robin Scheduling Algorithm:
• This type of algorithm is designed especially for time-sharing systems.
• It is similar to FCFS scheduling, with a preemption condition to switch between processes.
• The average waiting time under the round robin policy is often quite
long.
• Consider the following example:
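The worked example from the slide is not reproduced here, so the following sketch simulates round robin with assumed burst times of 6, 4, and 2 ms, a 2 ms time quantum, and all processes arriving together:

```python
def round_robin_waiting_times(bursts, quantum):
    """All processes arrive at time 0; returns each process's waiting
    time (completion time minus burst time)."""
    remaining = list(bursts)
    waiting = [0] * len(bursts)
    clock = 0
    ready = list(range(len(bursts)))       # the ready queue, by index
    while ready:
        i = ready.pop(0)
        run = min(quantum, remaining[i])   # run for one quantum at most
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                # preempted: back of the queue
        else:
            waiting[i] = clock - bursts[i]
    return waiting
```

For these assumed inputs the waiting times come out as 6, 6, and 4 ms (average ≈ 5.33 ms), illustrating why round-robin waiting times can be quite long.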
Preemptive Scheduling
• It is the responsibility of CPU scheduler to allot a process to CPU
whenever the CPU is in the idle state.
• The CPU scheduler selects a process from ready queue and
allocates the process to CPU.
• The scheduling which takes place when a process switches from
running state to ready state or from waiting state to ready state is
called Preemptive Scheduling
Shortest Job First (SJF) Scheduling Algorithm:
• This algorithm associates with each process the length of its next CPU burst; when the CPU is available, it is assigned to the process with the smallest next CPU burst.
• Consider the following example:
• Three processes with process IDs P1, P2, P3 and estimated completion times of 10, 5, and 7 milliseconds respectively enter the ready queue together. Calculate the waiting time and turnaround time (TAT) for each process, and the average waiting time and average turnaround time (assuming there is no I/O waiting for the processes), under the SJF (Shortest Job First) algorithm.
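The exercise above can be worked out with a short sketch; with all three processes arriving together and no I/O, SJF simply runs them in increasing burst order (P2, P3, P1):

```python
def sjf(bursts):
    """bursts maps process ID to burst time; all processes arrive
    together and there is no I/O waiting."""
    order = sorted(bursts, key=bursts.get)   # shortest burst first
    clock = 0
    waiting, turnaround = {}, {}
    for pid in order:
        waiting[pid] = clock                 # time spent in the ready queue
        clock += bursts[pid]
        turnaround[pid] = clock              # completion time from arrival
    return waiting, turnaround
```

This gives waiting times of 0, 5, and 12 ms for P2, P3, and P1 (average = 17/3 ≈ 5.67 ms) and turnaround times of 5, 12, and 22 ms (average = 39/3 = 13 ms).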
Message passing:
• Message passing can be synchronous or asynchronous.
Synchronous message passing systems require the sender and
receiver to wait for each other while transferring the message.
• In asynchronous communication, the sender and receiver do not wait for each other and can carry on their own computations while the transfer of messages is in progress.
• The advantage to synchronous message passing is that it is
conceptually less complex.
2 Message queue:
• Message queues provide an asynchronous communications
protocol, meaning that the sender and receiver of the message do
not need to interact with the message queue at the same time.
Implementations exist as proprietary software, provided as a
service, open source software, or a hardware-based solution.
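A minimal sketch of this asynchronous style using Python's standard `queue` module (the message contents and the `None` sentinel are assumptions for the example): the producer deposits messages and moves on without waiting for the consumer to take them.

```python
import queue
import threading

mq = queue.Queue()          # an unbounded message queue

def producer():
    for i in range(3):
        mq.put(i)           # sender deposits messages and continues
    mq.put(None)            # sentinel marking the end of the stream

def demo():
    threading.Thread(target=producer).start()
    received = []
    while (msg := mq.get()) is not None:   # receiver drains the queue later
        received.append(msg)
    return received
```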
• Mailbox
• Mailboxes provide a means of passing messages between tasks for
data exchange or task synchronization.
• For example, assume that a data gathering task that produces data
needs to convey the data to a calculation task that consumes the
data.
• This data gathering task can convey the data by placing it in a
mailbox and using the SEND command; the calculation task uses
RECEIVE to retrieve the data.
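The data-gathering example above can be sketched with a one-message-deep queue standing in for an RTOS mailbox (SEND/RECEIVE are modelled by `put`/`get`; the sensor reading and the doubling calculation are assumptions for the example):

```python
import queue
import threading

def demo():
    mailbox = queue.Queue(maxsize=1)   # a one-message-deep mailbox
    results = []

    def gather():                      # data-gathering task
        mailbox.put({"reading": 21})   # SEND: post the gathered data

    def calculate():                   # calculation task
        data = mailbox.get()           # RECEIVE: blocks until data arrives
        results.append(data["reading"] * 2)

    t1 = threading.Thread(target=gather)
    t2 = threading.Thread(target=calculate)
    t2.start()                         # the receiver may even start first
    t1.start()
    t1.join()
    t2.join()
    return results
```

Because RECEIVE blocks, the mailbox also synchronizes the two tasks: the calculation cannot run ahead of the data it consumes.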
• Remote Procedure Call (RPC)
• RPC is a powerful technique for constructing distributed, client-server based applications.
• The following steps take place during a RPC:
• A client invokes a client stub procedure, passing parameters in the
usual way. The client stub resides within the client’s own address
space.
• The client stub marshalls (packs) the parameters into a message. Marshalling includes converting the representation of the parameters into a standard format and copying each parameter into the message.
• The client stub passes the message to the transport layer, which
sends it to the remote server machine.
• On the server, the transport layer passes the message to a server stub, which demarshalls (unpacks) the parameters and calls the desired server routine using the regular procedure call mechanism.
• When the server procedure completes, it returns to the server stub
(e.g., via a normal procedure call return), which marshalls the
return values into a message. The server stub then hands the
message to the transport layer.
• The transport layer sends the result message back to the client
transport layer, which hands the message back to the client stub.
• The client stub demarshalls the return parameters and execution
returns to the caller.
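The steps above can be sketched in miniature; here an ordinary function call stands in for the transport layer, and JSON stands in for the marshalling format (the `add` routine and the message layout are assumptions for the example):

```python
import json

def add(a, b):                 # the actual server routine (an assumption)
    return a + b

ROUTINES = {"add": add}

def server_stub(message):
    # demarshall (unpack) the parameters and call the server routine
    request = json.loads(message)
    result = ROUTINES[request["proc"]](*request["args"])
    return json.dumps({"result": result})   # marshall the return value

def client_stub(proc, *args):
    # marshall (pack) the parameters into a standard-format message
    message = json.dumps({"proc": proc, "args": list(args)})
    reply = server_stub(message)            # stands in for the transport layer
    return json.loads(reply)["result"]      # demarshall the result
```

The caller simply writes `client_stub("add", 2, 3)` and gets 5 back, never seeing the marshalling or the transport, which is the whole point of RPC.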
Deadlock:
• In a multiprogramming environment several processes may
compete for a finite number of resources.
• A process requests resources; if the resources are not available at that time, the process enters a waiting state.
• A waiting process may never change its state again, because the resources it has requested are held by other waiting processes.
• This situation is known as deadlock.
Conditions Favouring a Deadlock Situation:
1 Mutual Exclusion: Only one process can hold a resource at a time, meaning processes must access shared resources with mutual exclusion.
2 Hold and wait: A process holds a shared resource by acquiring the lock controlling the shared access, while waiting for additional resources held by other processes.
3 No resource preemption: The operating system cannot take back a resource from a process which is currently holding it; the resource can only be released voluntarily by the process holding it.
4 Circular wait: A set of processes wait in a circular chain, where each process waits for a resource currently held by the next process in the chain, and the last process waits for a resource held by the first.
5 Ignore Deadlocks: A handling strategy, rather than a condition, in which the system design is always assumed to be deadlock free.
This is acceptable when the cost of removing a deadlock is large compared to the chance of a deadlock happening.
UNIX is an example of an OS following this principle.
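The circular-wait condition can be checked mechanically on a wait-for graph: a cycle in "who waits on whom" means deadlock. A minimal sketch (the dictionary representation of the graph is an assumption):

```python
def has_circular_wait(wait_for):
    """wait_for maps each waiting process to the process holding the
    resource it wants; a cycle in this graph is the circular-wait
    condition."""
    for start in wait_for:
        seen = set()
        cur = start
        while cur in wait_for:     # follow the chain of waits
            if cur in seen:
                return True        # we came back around: deadlock
            seen.add(cur)
            cur = wait_for[cur]
    return False
```

For example, `{"P1": "P2", "P2": "P1"}` is a deadlock, while a simple chain `{"P1": "P2"}` is not.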
Prevention of Deadlock
1 A process must request all of its required resources, and the resources should be allocated, before the process begins execution.
2 Grant resource allocation requests from processes only if the process does not currently hold any resources.
• Ensure that resource preemption (resource releasing) is possible at the operating system level.
• This can be achieved by implementing the following set of rules/guidelines in resource allocation:
• Release all the resources currently held by a process if a request made by the process for a new resource cannot be fulfilled immediately.
• Add the resources which are preempted (released) to a resource list describing the resources which the process requires to complete its execution.
• Reschedule the process for execution only when the process can get both its old resources and the new resources it has requested.
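Prevention rule 1 above (request everything up front) can be sketched as an all-or-nothing allocator: a process either gets every resource it asked for, or nothing, so the hold-and-wait condition never arises (the resource names used below are assumptions for the example).

```python
import threading

class ResourceAllocator:
    """All-or-nothing allocation: a process is granted every resource it
    requests before it starts, or nothing at all (no hold-and-wait)."""

    def __init__(self, resources):
        self.free = set(resources)
        self.lock = threading.Lock()   # protects the free-resource set

    def acquire_all(self, wanted):
        with self.lock:
            if set(wanted) <= self.free:
                self.free -= set(wanted)
                return True            # every resource granted at once
            return False               # nothing granted, so nothing is held

    def release_all(self, held):
        with self.lock:
            self.free |= set(held)     # voluntary release after use
```

A process that receives `False` holds nothing and can simply retry later, which is the price this scheme pays for prevention: resources may sit idle while a process waits for its full set.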
Semaphore:
• A semaphore is a synchronization mechanism that controls access to shared resources by multiple threads or tasks, ensuring that only a limited number can access them at any given time.
• It acts like a key that a task needs to acquire before accessing a
resource and the key must be released after usage.
• Semaphores prevent race conditions (situations where two or more operations access shared data at the same time) and ensure proper coordination among the different parts of an embedded system.
• Example: Imagine a shared printer in a multitasking embedded system; a semaphore could be used to ensure that only one task at a time can access the printer.
• When a task wants to print, it waits for the printer semaphore to become available.
• Once it acquires the semaphore, it can print.
• When the printing is complete, the task releases the semaphore, allowing another waiting task to access the printer.
Semaphores have two main operations:
1 Wait (or P): This operation decrements the semaphore's value.
• If the value becomes negative, the task blocks until the semaphore's value becomes non-negative.
2 Signal (or V): This operation increments the semaphore's value.
• It signals that a resource is available and may unblock a waiting task.
Types of semaphore:
• Binary Semaphore: These have a value of either 0 or 1 and are used for mutual exclusion, allowing only one task to access a shared resource at a time.
• Counting Semaphore: These can have values greater than 1 and are used to manage multiple resources of the same type.
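The printer example above can be sketched with Python's `threading.Semaphore` used as a binary semaphore (the task IDs and the shared `printed` list are assumptions for the example):

```python
import threading

def demo():
    printer = threading.Semaphore(1)   # binary semaphore: one printer
    printed = []

    def print_job(task_id):
        with printer:                  # wait (P): acquire before printing
            printed.append(task_id)    # critical section: exclusive access
        # leaving the with-block performs signal (V), freeing the printer

    tasks = [threading.Thread(target=print_job, args=(i,)) for i in range(3)]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()
    return printed
```

A counting semaphore would simply be constructed with a larger initial value, e.g. `threading.Semaphore(3)` for a pool of three identical printers.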
How To Choose An RTOS:
• Functional Requirements:
1 Processor support: Not all RTOSs support all kinds of processor architectures. It is essential to ensure that the RTOS supports the target processor.
2 Memory Requirements: The OS requires ROM for holding the OS files, normally stored in a non-volatile memory like flash.
• The OS also requires working memory (RAM) for loading the OS services.
3 Real Time Capabilities: It is not mandatory that the operating system for every embedded system be real time, and not all embedded operating systems exhibit real-time behaviour; check whether the application actually demands real-time capabilities.
4 Kernel and Interrupt Latency: The kernel of the OS may disable interrupts while executing certain services, and this may lead to increased interrupt latency.
5 Support for Networking and Communication: The OS kernel may provide stack implementations and driver support for a range of communication and networking interfaces.
Non Functional Requirements:
1 Cost: The total cost of developing or buying the OS and maintaining it, whether as a commercial product or a custom build, needs to be evaluated before taking a decision on the selection of the OS.
2 Development and Debugging Tools Availability: The availability of development and debugging tools is a critical decision-making factor in the selection of an OS for an embedded system.
3 Ease of Use: How easy it is to use a commercial RTOS is another important factor that needs to be considered in RTOS selection.