
A

TERM PAPER OF
PRINCIPLE OF OPERATING SYSTEM
On

TOPIC: SEMAPHORES

Submitted To: M/s. Sanjima
Submitted By:

Lovely Institute of Technology (POLY)

Jalandhar-Delhi G.T. Road (NH-1), Phagwara, Punjab (INDIA) - 144402. TEL: +91-1824-404404, Toll Free: 1800 102 4431, info@lpu.co.in



ACKNOWLEDGEMENT

With regards, I would like to thank my lecturer, Miss Sanjima, who helped me in completing my term paper on the topic “SEMAPHORES” for the subject “PRINCIPLE OF OPERATING SYSTEM”. Due to her proper guidance and under the shower of her eternal knowledge, I was able to complete my term paper comfortably, which might not have been possible without her efforts.
I must also thank my friends, who helped me in the completion of my term paper. I apologize for any errors I may have committed in it.

--------------------
Date: 12-11-2010



INDEX

Sr. No.    Topic Name
I          Introduction
II         Disadvantage
III        Implementation
IV         Tasks
V          Uses
VI         Messages
VII        References



Introduction to Semaphores

Semaphores are devices used to help with synchronization. If multiple processes share a common
resource, they need a way to use that resource without disrupting each other; each process should
be able to read from and write to that resource uninterrupted.

A semaphore will either allow or disallow access to the resource, depending on how it is set up.
One example setup would be a semaphore which allowed any number of processes to read from
the resource, but only one could ever be in the process of writing to that resource at a time.
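
As a concrete illustration of that setup, here is a minimal sketch of the classic readers-writers pattern using POSIX semaphores. The names reader, writer, write_lock, and mutex are illustrative, not part of any particular system:

#include <semaphore.h>

sem_t write_lock;             /* binary semaphore: only one writer at a time */
sem_t mutex;                  /* protects read_count */
int   read_count = 0;

void reader(void)
{
    sem_wait(&mutex);
    if (++read_count == 1)    /* first reader locks out writers */
        sem_wait(&write_lock);
    sem_post(&mutex);

    /* ... read from the shared resource ... */

    sem_wait(&mutex);
    if (--read_count == 0)    /* last reader readmits writers */
        sem_post(&write_lock);
    sem_post(&mutex);
}

void writer(void)
{
    sem_wait(&write_lock);    /* exclusive access while writing */
    /* ... write to the shared resource ... */
    sem_post(&write_lock);
}

(Both semaphores would be initialized to 1, e.g. with sem_init, before any reader or writer runs.)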

Semaphores are the classic method for restricting access to shared resources (e.g. storage) in a
multi-processing environment. They were invented by Dijkstra.

Many people prefer to use monitors instead of semaphores, because semaphores make it too easy
to accidentally write code that deadlocks.

A semaphore is a protected variable (or abstract data type) which can only be accessed using the
following operations:

acquire(Semaphore s) {
    while (s == 0);    /* busy-wait until s > 0 */
    s = s - 1;
}

release(Semaphore s) {
    s = s + 1;
}

init(Semaphore s, Int v) {
    s = v;
}

Historically, "acquire" was originally called P (from the Dutch “proberen”, to test) and is often called
"wait"; the standard Java library uses "acquire". The operation "release" was originally called V
(from the Dutch “verhogen”, to increment) and is often called "signal"; the standard Java library uses
"release".

The value of a semaphore is the number of units of the resource that are free. If there is only
one resource, a “binary semaphore” with values 0 or 1 is used.
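
Using the init operation defined above, the initial value simply encodes how many units of the resource exist. A small illustrative sketch (the semaphore names are assumed):

    init(printer_sem, 1);    /* binary semaphore: a single printer */
    init(buffer_sem, 4);     /* counting semaphore: four free buffer slots */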


The "acquire" operation busy-waits or sleeps until a resource is available whereupon it
immediately claims one. "release" is the inverse. It simply makes a resource available again after
the process has finished using it. Init is only used once, to initialize the semaphore before any
requests are made.

No other process can access the semaphore when P or V are executing.

To avoid busy waiting, a semaphore may have an associated FIFO queue of processes. If a
process attempts an acquire on a semaphore whose value is zero, the process is added to the
semaphore’s queue. When another process increments the semaphore by doing a release and
there are tasks on the queue, one is taken off and resumed.

We can implement mutual-exclusion using semaphores:

do
{

acquire (s);
critical section;
release (s);

}while(1);

Let the semaphore’s initial value be 1.

If process P1 wants to enter its critical section, it must first acquire the semaphore, decrementing
the value of s to 0. After executing its critical section, it releases the semaphore, incrementing s
back to 1.

If another process P2 wants to enter its critical section while P1 is inside, it sees the shared
semaphore value 0 when it performs the acquire operation, and continues to loop in the while
until P1 executes release(s).

So, only one process can be inside its critical section at a time.
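
The same pattern can be demonstrated with real code. The following is a minimal sketch using POSIX semaphores and pthreads; the worker function and loop count are purely illustrative:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                           /* binary semaphore guarding the critical section */

void *worker(void *arg)
{
    for (int i = 0; i < 5; i++) {
        sem_wait(&s);              /* acquire: blocks while s is 0 */
        printf("thread %ld in its critical section\n", (long)arg);
        sem_post(&s);              /* release: increments s, waking any waiter */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);            /* initial value 1, as in the example above */
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&s);
    return 0;
}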

Disadvantage

The main disadvantage here is that processes must busy-wait (loop in the while). This
continual looping is clearly a problem in a real multiprogramming system, where a single CPU is
shared among multiple processes: busy waiting wastes CPU cycles that some other process could
use productively. This type of semaphore is called a spinlock, because the process “spins” while
waiting for the lock.



Implementation

To overcome the need for busy waiting, we can modify the definition of the wait and signal
semaphore operations. When a process executes the wait operation and finds that the semaphore
value is not greater than 0, it is placed in a waiting queue associated with the semaphore and its
state is switched to the waiting state. Control is then transferred to the CPU scheduler, which
selects another process to execute.

When another process executes a signal operation, a process from the waiting queue is restarted
by a wakeup operation, which changes it from the waiting state to the ready state and places it in
the ready queue.

The operating system provides block() and wakeup(P) system calls. The block() operation
suspends the process that calls it; the wakeup(P) operation resumes the execution of a blocked
process P.
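
The same idea can be sketched in user space with a pthreads mutex and condition variable, where the condition variable plays the role of the waiting queue. This is only an illustration of the block/wakeup idea, not the kernel's actual implementation (all names are assumed):

#include <pthread.h>

typedef struct {
    int value;                      /* units of the resource remaining */
    pthread_mutex_t lock;           /* protects value */
    pthread_cond_t  queue;          /* waiters sleep here instead of spinning */
} semaphore_t;

void sem_init_val(semaphore_t *s, int v)
{
    s->value = v;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

void sem_acquire(semaphore_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)           /* no busy-wait: the caller sleeps */
        pthread_cond_wait(&s->queue, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void sem_release(semaphore_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->queue); /* wake one waiting thread */
    pthread_mutex_unlock(&s->lock);
}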

Semaphores can be either counting or binary - lwIP works with both kinds. Semaphores are
represented by the type sys_sem_t which is typedef'd in the sys_arch.h file. lwIP does not place
any restrictions on how sys_sem_t should be defined or represented internally, but typically it is
a pointer to an operating system semaphore or a struct wrapper for an operating system
semaphore.

The following functions must be defined:

- sys_sem_t sys_sem_new(u8_t count): Creates and returns a new semaphore. The count
  argument specifies the initial state of the semaphore. Returns the semaphore, or
  SYS_SEM_NULL on error.
- void sys_sem_free(sys_sem_t sem): Frees a semaphore created by sys_sem_new. Since
  these two functions provide the entry and exit point for all semaphores used by lwIP, you
  have great flexibility in how these are allocated and deallocated (for example, from the
  heap, a memory pool, a semaphore pool, etc.).
- void sys_sem_signal(sys_sem_t sem): Signals (or releases) a semaphore.
- u32_t sys_arch_sem_wait(sys_sem_t sem, u32_t timeout): Blocks the thread while
  waiting for the semaphore to be signaled. The timeout parameter specifies how many
  milliseconds the function should block before returning; if the function times out, it
  should return SYS_ARCH_TIMEOUT. If timeout=0, the function should block
  indefinitely. If the function acquires the semaphore, it should return how many
  milliseconds expired while waiting; it may return 0 if the semaphore was immediately
  available.

Note that there is another function, sys_sem_wait, in sys.c, but it is only a wrapper around the
sys_arch_sem_wait function. It is important for the semaphores to return an accurate count of
elapsed milliseconds, since lwIP uses them to schedule its timers.
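
As a hypothetical illustration, a sys_arch port could map these functions onto POSIX semaphores roughly as follows. This is a sketch under stated assumptions (a POSIX host, heap allocation), not a complete or tested lwIP port; u8_t and u32_t are normally supplied by lwIP itself:

#include <semaphore.h>
#include <stdlib.h>
#include <sys/time.h>
#include <time.h>

typedef unsigned char u8_t;          /* normally provided by lwIP */
typedef unsigned int  u32_t;
typedef sem_t *sys_sem_t;
#define SYS_SEM_NULL     NULL
#define SYS_ARCH_TIMEOUT 0xFFFFFFFFu

sys_sem_t sys_sem_new(u8_t count)
{
    sem_t *sem = malloc(sizeof(*sem));
    if (sem == NULL || sem_init(sem, 0, count) != 0) {
        free(sem);                   /* allocation or init failed */
        return SYS_SEM_NULL;
    }
    return sem;
}

void sys_sem_free(sys_sem_t sem)
{
    sem_destroy(sem);
    free(sem);
}

void sys_sem_signal(sys_sem_t sem)
{
    sem_post(sem);
}

u32_t sys_arch_sem_wait(sys_sem_t sem, u32_t timeout)
{
    struct timeval start, end;
    gettimeofday(&start, NULL);
    if (timeout == 0) {              /* 0 means wait forever, per the spec above */
        sem_wait(sem);
    } else {
        struct timespec ts;          /* sem_timedwait wants an absolute deadline */
        ts.tv_sec  = start.tv_sec + timeout / 1000;
        ts.tv_nsec = start.tv_usec * 1000L + (timeout % 1000) * 1000000L;
        if (ts.tv_nsec >= 1000000000L) { ts.tv_sec++; ts.tv_nsec -= 1000000000L; }
        if (sem_timedwait(sem, &ts) != 0)
            return SYS_ARCH_TIMEOUT;
    }
    gettimeofday(&end, NULL);
    return (u32_t)((end.tv_sec - start.tv_sec) * 1000 +
                   (end.tv_usec - start.tv_usec) / 1000);   /* elapsed ms */
}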



People new to multitasking kernels usually have difficulty grasping what kernel objects are. In
this era of object-oriented programming, it is appropriate to invoke the object paradigm: tasks,
semaphores, etc. are objects. Each has information and code associated with it. Usually, the
information is stored in a control block. For example, a semaphore has the following control
block:

struct SCB {            /* Semaphore Control Block */
    CB_PTR fl;          /* forward link */
    CB_PTR bl;          /* backward link */
    byte   cbyte;       /* control block type */
    byte   ctr;         /* signal counter */
    word   thres : 8;   /* signal threshold */
    word   tplim : 7;   /* task priority limit */
    word   tq    : 1;   /* task queue present */
};

There is a control block like this for every semaphore used by the application.

The code associated with an object is the set of services provided by the kernel for that object.
For example, signal(sem) signals a semaphore and test(sem) tests one. The principle of
information hiding applies: normally the programmer does not directly access or alter control
blocks.

Hence, a multitasking kernel provides an object-oriented environment for embedded
applications. This environment consists of objects such as tasks and semaphores, and services
such as signal() and test() provided by the kernel. The multitasking paradigm requires the
programmer to view the application as a collection of interacting objects rather than as a
sequence of operations or as a state machine. This can be a difficult adjustment to make, and may
be a reason why multitasking kernels are frequently rejected or misused.



Experience has shown the object model to be a better model for most embedded systems. The
other models do not deal well with the complexities of multiple, simultaneous events which are
typical in modern embedded systems. Flow charts are good for describing sequential processes.
State machines are good if there are a small number of possible states with well-defined
transition rules. But, neither is good for describing complex systems with many interdependent
parts. Multitasking, on the other hand, is ideal for such systems – just define a task to handle
each part. Then define how the parts interact.

A significant weakness of the sequential-process and state-machine approaches is that they
are inflexible. A good programmer can initially create a workable solution using these
approaches, but requirements invariably change, and the workable design eventually turns into
spaghetti code. In times past, this was a problem primarily in the later stages of a product's life.
However, because of the current rapid pace of high-tech markets, this result frequently occurs
before first delivery can even be made, with serious consequences for time to market and the
success of the product.

Multitasking fosters code that is structured so that it can grow and change easily. Changes are
accomplished merely by adding, deleting, or changing tasks, while leaving other tasks
unchanged. Since the code is compartmentalized into tasks, propagation of changes through the
code is minimized. Hence, multitasking provides a flexibility much needed by modern embedded
systems.

Why Tasks?
Breaking a large job into smaller tasks and then performing the tasks one by one is a technique
we all use in our daily lives. For example, to build a fence, we first set the posts, then attach the
2x4’s, nail on the slats, then paint the fence. Although these operations must be done in order, it
is not necessary to complete one operation before starting another. If desirable, we might set a
few posts, then start the next task, and so on. This divide and conquer approach is equally
applicable to writing embedded systems software. A multitasking kernel takes this one step
further by allowing the final embedded system software to actually run as multiple tasks. This
has several advantages:

1) Small tasks are easier to code, debug, and fix than is a monolithic block of software,
which, typically, must be completely designed and coded before testing can begin.
2) A multitasking kernel provides a well-defined interface between functions that are
implemented as independent tasks, thus minimizing hidden dependencies between them.
3) The uniformity provided by kernel services and interfaces is especially important if tasks
are created by different programmers.
4) A pre-emptive multitasking kernel allows tasks handling urgent events to interrupt less
urgent tasks. (Such as when the phone rings while you are watching TV.)
5) New features can easily be added by adding new tasks.
Basically, a pre-emptive multitasking environment is compatible with the way embedded
software is created and is a natural environment for that software to run in. Let's consider
an example: suppose we need to control an engine using several measured parameters and a
complex control algorithm. Also assume there is an operator interface which displays
information and allows operator control. Finally, assume that the system must communicate with
a remote host computer. Clearly there are at least three major functions:

1) Engine Control.
2) Operator Interface.
3) Host Interface.
So, the system basically looks like this:

Figure 1: Major functions for the Engine Control System.


Each of the above is sufficiently complex that it is necessary to work on it individually. Would it
not be nice if an environment already existed in which the three functions could be created and
operated independently? This can be done by using a multitasking kernel and making each
function a task.

Note that the tasks are not of equal urgency: the operator can be kept waiting for long periods of
time relative to microprocessor speeds, but the engine control task may need to respond quickly
to input changes in order to maintain smooth engine operation. The host probably falls
somewhere in between in urgency. With a pre-emptive multitasking kernel, this can easily be
accomplished merely by giving the engine control task a higher priority than the other two: the
host task requires an in-between priority to do its job well, and the operator task can operate
satisfactorily at low priority.
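
Using the kernel-style create_task() and startx() calls that appear later in this paper, the priority assignment might be sketched like this (the task names and the HI/NORM/LO priority levels are assumed for illustration):

    engCtrl = create_task(engCtrlMain, HI,   0);   /* most urgent: engine control */
    host    = create_task(hostMain,    NORM, 0);   /* in between: host interface */
    opIface = create_task(opIfaceMain, LO,   0);   /* least urgent: operator interface */
    startx(engCtrl);
    startx(host);
    startx(opIface);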



Many embedded software projects reach this point in the design and still do not pick a
commercial multitasking kernel. The reason most often given is: “Our application is too simple
to need a kernel.” This is often a big mistake. There are many hidden complexities in the above
diagram. Let us then see how a commercial kernel can deal with these complexities more
effectively than can ad hoc code.

In this regard, it is important to recognize that a commercial kernel has already been used in a
large variety of projects. Hence, many potential problems which may occur in your project have
already been anticipated and solved. Ad hoc code, by contrast, deals with problems as they arise.
It is created without careful planning, and usually fails to provide general solutions. Also, a
commercial kernel contains tested and proven code. This is of utmost importance when meeting
a tight schedule (as you most probably will in this lab).

Using Semaphores:
Continuing our example, it would be logical to divide the engine control “task” into two smaller
tasks, Hence it becomes a “process”:

The data acquisition task reads the sensors, converts readings into engineering units, and
compensates for non-linearities, temperature changes, etc. The engine drive task performs
complex control calculations (e.g. PID) and provides the final engine drive signals.

The above scheme looks workable, but how does the engine drive task know when its data is
ready? A simple way to handle this is with a semaphore:



The dataAcq task signals the dataRdy semaphore when data is ready. This causes engDrv to
run once. Then engDrv tests dataRdy again for the next signal from dataAcq; if there is no
signal, it waits. Hence, engDrv is regulated by dataAcq, as we desire. The code would look like
this:

void dataAcqMain(void)
{
    /* initialize dataAcq task */
    while (1) {                        /* infinite loop */
        /* acquire and convert data */
        signalx(dataRdy);
    }
}

void engDrvMain(void)
{
    /* initialize engDrv task */
    while (test(dataRdy, INF)) {
        /* perform control algorithm and output drive signals */
    }
}

The above would probably work fine with a simple binary (two-state) semaphore. Suppose,
however, that the engine control algorithm is so sensitive (or the data so noisy) that it is
necessary to smooth the data by running the data acquisition task more often than the engine
drive task and averaging the results. This could easily be accomplished with a counting
semaphore having a threshold of the desired number of iterations. Such a counting semaphore is
decremented by each signalx(); test() passes only when the count reaches 0, and the count is then
reset to n. dataRdy's threshold can be changed externally by another task, such as the operator
task. This permits tweaking responsiveness vs. smoothness while actually running the engine.
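
The threshold behavior just described could be sketched in portable C with pthreads as follows. This illustrates the semantics only, not the kernel's actual implementation; all names are assumed:

#include <pthread.h>

/* tsem_test() passes only after the counter has been signaled n times. */
typedef struct {
    int count;                          /* counts down from threshold to 0 */
    int threshold;                      /* n: signals required per pass */
    pthread_mutex_t lock;
    pthread_cond_t  ready;
} tsem_t;

void tsem_init(tsem_t *s, int n)
{
    s->count = s->threshold = n;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->ready, NULL);
}

void tsem_signal(tsem_t *s)             /* called by dataAcq */
{
    pthread_mutex_lock(&s->lock);
    if (--s->count <= 0)
        pthread_cond_signal(&s->ready); /* threshold reached: wake the waiter */
    pthread_mutex_unlock(&s->lock);
}

void tsem_test(tsem_t *s)               /* called by engDrv */
{
    pthread_mutex_lock(&s->lock);
    while (s->count > 0)
        pthread_cond_wait(&s->ready, &s->lock);
    s->count = s->threshold;            /* reset for the next batch */
    pthread_mutex_unlock(&s->lock);
}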

What other benefits could accrue from dividing the engine control process into dataAcq and engDrv
tasks? Suppose, for example, that all engines use the same control algorithm, but that sensors
vary from engine to engine. Then it could be desirable to have a family of dataAcq functions (e.g.
dataAcq1Main(), dataAcq2Main(), etc.) and be able to select the one needed. Why would
someone want to do this? Suppose that the sensor package is part of the engine and hence is
known only when the controller is mated to the engine. At that time, the correct dataAcq task
function could be selected and started by the operator. This way, only one version of the
controller software need be shipped. The code would look like this:

switch (sensorType) {          /* provided by operator */
case 1:
    dataAcq = create_task(dataAcq1Main, NORM, 0);
    break;
case 2:
    dataAcq = create_task(dataAcq2Main, NORM, 0);
    break;
/* ... */
}
startx(dataAcq);



Using Messages:
A message is a block of data meant to be sent to another task. It is managed with message
control block.

Returning to the engine control process, how is data passed from the dataAcq task to the engDrv
task? In a multitasking system, it is desirable to isolate tasks from each other as much as
possible. Therefore, it is not good practice to pass data through a global buffer. Such a buffer
would be accessible to both tasks simultaneously. Hence, the data could be overwritten by the
dataAcq task before the engDrv task was done using it. This is an example of a hidden
interdependency.

The preferred approach is to use messages. Messages are sent by tasks to exchanges and received
from exchanges by other tasks. For our example, the process looks like this:

The code would look like this (this code is in addition to the previous code):

void dataAcqMain(void)
{
    MCB_PTR msgOut;                       /* message handle */
    struct ENG_DATA *outPtr;              /* data template pointer */

    /* initialize */
    while (1) {
        /* acquire & convert data */
        msgOut = receive(msgPool, INF);   /* get a free message */
        outPtr = msgOut->mp;              /* get its data pointer */
        outPtr->field1 = ...;             /* load data into it */
        outPtr->field2 = ...;
        ...
        sendx(msgOut, dataXchg);          /* send it */
    }
}

void engDrvMain(void)
{
    MCB_PTR msgIn;
    struct ENG_DATA *inPtr;

    /* initialize */
    while ((msgIn = receive(dataXchg, INF))) {  /* receive a message */
        inPtr = msgIn->mp;                      /* get its data pointer */
        ... = inPtr->field1 ...;                /* process it */
        ...
        sendx(msgIn, msgPool);            /* return used msg to free pool */
    }
}

So, basically, dataAcq gets a free message, fills it with data, and sends it to dataXchg. Some time
later, engDrv gets the message from dataXchg, processes the data, then recycles the (now
“empty”) message back to the free message pool. Note that each task has exclusive access to the
data in the message while the message is within the task's domain; this eliminates one possible
problem. Another advantage of this scheme is that the tasks are not forced to be in lock step with
each other. A message may or may not be waiting at dataXchg when engDrv attempts to receive;
if not, engDrv waits. Conversely, engDrv may not be waiting at dataXchg when dataAcq sends a
message; if not, the message waits. Hence, there is flexibility in the system, which makes it less
vulnerable to breaking under stress. In fact, dataAcq can send many messages to dataXchg and
they will simply be queued up in the order received; engDrv will process them when it is allowed
to run. This is how the previously suggested averaging over many samples (i.e. messages) would
be implemented. The code would look like this:

int i;

while (test(dataRdy, INF)) {
    i = 0;
    while ((msg = receive(dataXchg, NO_WAIT)) != NULL) {
        i++;
        /* add message data to buffer */
        sendx(msg, msgPool);
    }
    /* divide buffer by i */
    /* perform control algorithm and output drive signals */
}



Observe that the threshold of the dataRdy semaphore (see previous section) controls the number
of samples per average. How about that for a nifty implementation? Note also that if there were
no need to perform averaging, then the dataRdy semaphore would be superfluous — the
dataXchg exchange, alone, would be sufficient to regulate the engDrv task.

References

1. http://en.wikipedia.org/wiki/Semaphores
2. http://www.google.co.in/#q=semaphores+in+os+doc&hl=en&ei=wObcTIGUAozWvQOdwpWGCg&start=30&sa=N&fp=7f6343a71b3fc92
3. http://developer.apple.com/library/mac/#documentation/UserExperience/Conceptual/AppleHIGuidelines/XHIGMOSXEnvironment/XHIGMOSXEnvironment.html
4. http://www.orafaq.com/maillist/oracle-l/2000/12/01/0241.htm
5. http://www.2dix.com/doc-2010/semaphore-in-operating-system-doc.php

