TERM PAPER OF
PRINCIPLE OF OPERATING SYSTEM
On
TOPIC: SEMAPHORES
With regards, I would like to thank my lecturer, Miss Sanjima, who helped me in completing my term paper on the topic “SEMAPHORES” for the subject “PRINCIPLE OF OPERATING SYSTEM”. Due to her proper guidance and knowledge I was able to complete my term paper comfortably, which would not have been possible without her efforts.
I must also thank my friends who helped me in the completion of my term paper, and I apologize for any errors I may have committed in it.
--------------------
Date: 12-11-2010
Semaphores are devices used to help with synchronization. If multiple processes share a common
resource, they need a way to be able to use that resource without disrupting each other. You want
each process to be able to read from and write to that resource uninterrupted.
A semaphore will either allow or disallow access to the resource, depending on how it is set up.
One example setup would be a semaphore which allowed any number of processes to read from
the resource, but only one could ever be in the process of writing to that resource at a time.
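As a sketch of the read/write setup just described, the classic readers-writers pattern can be built from two semaphores and a reader count. The following is only an illustration using POSIX semaphores; the names reader, writer, wrt and read_count are assumptions, not part of any particular system:

#include <semaphore.h>

sem_t mutex;                 /* protects read_count; initialized to 1 */
sem_t wrt;                   /* exclusive access for writers; initialized to 1 */
int read_count = 0;          /* number of readers currently inside */

void reader(void)
{
    sem_wait(&mutex);
    if (++read_count == 1)
        sem_wait(&wrt);      /* first reader locks out writers */
    sem_post(&mutex);
    /* ... read from the shared resource ... */
    sem_wait(&mutex);
    if (--read_count == 0)
        sem_post(&wrt);      /* last reader lets writers back in */
    sem_post(&mutex);
}

void writer(void)
{
    sem_wait(&wrt);          /* only one writer, and no readers, at a time */
    /* ... write to the shared resource ... */
    sem_post(&wrt);
}

Here mutex and wrt would each be initialized to 1 with sem_init() before any reader or writer runs.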
Semaphores are the classic method for restricting access to shared resources (e.g. storage) in a
multi-processing environment. They were invented by Dijkstra.
Many people prefer to use monitors instead of semaphores, because semaphores make it too easy
to accidentally write code that deadlocks.
A semaphore is a protected variable (or abstract data type) which can only be accessed using the following operations, both of which must execute atomically:

acquire(Semaphore s)
{
    while (s == 0)
        ;              /* busy-wait until s > 0 */
    s = s - 1;         /* claim one unit of the resource */
}

release(Semaphore s)
{
    s = s + 1;         /* return one unit of the resource */
}
Historically, "acquire" was originally called P (from the Dutch “proberen”, to test) and is often called "wait"; the standard Java library uses "acquire". The operation "release" was originally called V (from the Dutch “verhogen”, to increment) and is often called "signal"; the standard Java library uses "release".
The value of a semaphore is the number of units of the resource that are free. If there is only one resource, a “binary semaphore” with values 0 or 1 is used. Each process then structures its code as follows:
do {
    acquire(s);
    /* critical section */
    release(s);
} while (1);
If process P1 wants to enter its critical section, it first has to acquire the semaphore; the value of s is decremented to 0. After executing its critical section it releases the semaphore, and the value of s is incremented back to 1.
If another process P2 wants to enter its critical section while P1 is inside its own, P2 performs the acquire operation, finds the shared semaphore value to be 0, and continues to loop in the while until P1 executes release(s).
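The same pattern can be shown concretely with POSIX semaphores, where sem_wait corresponds to acquire and sem_post to release. This is only a minimal sketch; the two threads stand in for processes P1 and P2:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s;            /* binary semaphore guarding the critical section */
static int shared = 0;     /* the shared resource */

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);      /* acquire: blocks while s == 0 */
        shared++;          /* critical section */
        sem_post(&s);      /* release: s becomes 1 again */
    }
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    sem_init(&s, 0, 1);    /* initial value 1: the resource is free */
    pthread_create(&p1, NULL, worker, NULL);
    pthread_create(&p2, NULL, worker, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("shared = %d\n", shared);   /* 200000 with proper mutual exclusion */
    sem_destroy(&s);
    return 0;
}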
Disadvantage
The main disadvantage of this definition is that it requires busy waiting (looping in the while). This continual looping is clearly a problem in a real multiprogramming system, where a single CPU is shared among multiple processes: busy waiting wastes CPU cycles that some other process could use productively. A semaphore implemented this way is called a spinlock, because the waiting process keeps "spinning" on the semaphore value.
To overcome the need for busy waiting, we can modify the definition of the wait and signal semaphore operations. When a process executes the wait operation and finds that the semaphore value is not greater than 0, it is placed in a waiting queue associated with the semaphore and its state is switched to the waiting state. Control is then transferred to the CPU scheduler, which selects another process to execute.
When another process executes a signal operation, a process from the waiting queue is restarted by a wakeup operation. The wakeup operation changes the process from the waiting state to the ready state, and the process is then placed in the ready queue.
The operating system provides block() and wakeup(P) system calls. The block() operation suspends the process that calls it; the wakeup(P) operation resumes the execution of the blocked process P.
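Application code cannot call block() and wakeup() directly, but the same no-busy-wait behaviour can be sketched in user space on top of a POSIX mutex and condition variable, which put the caller to sleep instead of spinning. The my_sem_* names below are illustrative, not a standard API:

#include <pthread.h>

typedef struct {
    int value;                 /* number of free units of the resource */
    pthread_mutex_t lock;
    pthread_cond_t nonzero;    /* signaled when value becomes > 0 */
} my_sem_t;

void my_sem_init(my_sem_t *s, int count)
{
    s->value = count;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void my_sem_acquire(my_sem_t *s)           /* P / wait */
{
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);   /* sleep, no busy wait */
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void my_sem_release(my_sem_t *s)           /* V / signal */
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);      /* wake one waiting thread */
    pthread_mutex_unlock(&s->lock);
}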
Semaphores can be either counting or binary - lwIP works with both kinds. Semaphores are
represented by the type sys_sem_t which is typedef'd in the sys_arch.h file. lwIP does not place
any restrictions on how sys_sem_t should be defined or represented internally, but typically it is
a pointer to an operating system semaphore or a struct wrapper for an operating system
semaphore.
sys_sem_t sys_sem_new(u8_t count): Creates and returns a new semaphore. The count
argument specifies the initial state of the semaphore. Returns the semaphore, or
SYS_SEM_NULL on error.
void sys_sem_free(sys_sem_t sem): Frees a semaphore created by sys_sem_new. Since
these two functions provide the entry and exit point for all semaphores used by lwIP, you
have great flexibility in how these are allocated and deallocated (for example, from the
heap, a memory pool, a semaphore pool, etc).
void sys_sem_signal(sys_sem_t sem): Signals (or releases) a semaphore.
u32_t sys_arch_sem_wait(sys_sem_t sem, u32_t timeout): Blocks the thread while
waiting for the semaphore to be signaled. The timeout parameter specifies how many
milliseconds the function should block before returning; if the function times out, it
should return SYS_ARCH_TIMEOUT. If timeout=0, then the function should block
indefinitely. If the function acquires the semaphore, it should return how many
milliseconds expired while waiting for the semaphore. The function may return 0 if the
semaphore was immediately available.
Note that there is another function sys_sem_wait in sys.c, but it is a wrapper for the
sys_arch_sem_wait function. Please note that it is important for the semaphores to return an
accurate count of elapsed milliseconds, since they are used to schedule timers in lwIP. See the
timer section below for more information.
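As a rough usage sketch of the functions described above (assuming the by-value sys_sem_t signatures given here; thread creation and error handling are omitted, and the names data_ready, producer and consumer are illustrative):

static sys_sem_t data_ready;

void app_init(void)
{
    data_ready = sys_sem_new(0);     /* created with count 0: nothing ready yet */
    /* a SYS_SEM_NULL check would go here */
}

void producer(void)
{
    /* ... make some data available ... */
    sys_sem_signal(data_ready);      /* release one unit */
}

void consumer(void)
{
    u32_t waited = sys_arch_sem_wait(data_ready, 500);   /* wait up to 500 ms */
    if (waited == SYS_ARCH_TIMEOUT) {
        /* no signal arrived within 500 ms */
    } else {
        /* acquired; 'waited' is the number of ms spent blocking (0 if immediate) */
    }
}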
There is a control block for every semaphore used by the application; an illustrative sketch of such a control block is given below.
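The exact layout is kernel-specific; the following is only a generic illustration, and the field names are assumptions rather than any particular kernel's definition:

typedef struct {
    int count;                 /* current semaphore value */
    int threshold;             /* count at which a test passes (for threshold semaphores) */
    struct task *wait_list;    /* queue of tasks blocked on this semaphore */
} SCB;                         /* semaphore control block */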
The code associated with an object is the set of services provided by the kernel for that object; for example, signal(sem) signals a semaphore and test(sem) tests a semaphore. The principle of information hiding applies: normally the user does not directly access or alter control blocks.
A significant weakness of the sequential process and the state machine approaches is that they are inflexible. A good programmer can initially create a workable solution using these approaches, but requirements invariably change, and the workable design eventually turns into spaghetti code. In times past, this was a problem primarily in the later stages of product life. However, because of the current rapid pace of high-tech markets, this result frequently occurs before first delivery can even be made, with serious consequences for time to market and the success of the product.
Multitasking fosters code that is structured so that it can grow and change easily. Changes are
accomplished merely by adding, deleting, or changing tasks, while leaving other tasks
unchanged. Since the code is compartmentalized into tasks, propagation of changes through the
code is minimized. Hence, multitasking provides a flexibility much needed by modern embedded
systems.
Why Tasks?
Breaking a large job into smaller tasks and then performing the tasks one by one is a technique
we all use in our daily lives. For example, to build a fence, we first set the posts, then attach the
2x4’s, nail on the slats, then paint the fence. Although these operations must be done in order, it
is not necessary to complete one operation before starting another. If desirable, we might set a
few posts, then start the next task, and so on. This divide and conquer approach is equally
applicable to writing embedded systems software. A multitasking kernel takes this one step
further by allowing the final embedded system software to actually run as multiple tasks. This
has several advantages:
1) Small tasks are easier to code, debug, and fix than is a monolithic block of software,
which, typically, must be completely designed and coded before testing can begin.
2) A multitasking kernel provides a well defined interface between functions that are
implemented as independent tasks, thus minimizing hidden dependencies between them.
3) The uniformity provided by kernel services and interfaces is especially important if tasks
are created by different programmers.
4) A pre-emptive multitasking kernel allows tasks handling urgent events to interrupt less
urgent tasks. (Such as when the phone rings while you are watching TV.)
5) New features can easily be added by adding new tasks.
Basically, a pre-emptive, multitasking environment is compatible with the way embedded
software is created and is a natural environment for the same software to run in. Let’s consider
an example: Suppose we need to control an engine using several measured parameters and a
complex control algorithm. Also, assume there is an operator interface which displays
information and allows operator control. Finally, assume that the system must communicate with
a remote host computer. Clearly there are at least three major functions:
1) Engine Control.
2) Operator Interface.
3) Host Interface.
So, the system basically consists of these three tasks running under the multitasking kernel.
Note that the tasks are not of equal urgency: The operator can be kept waiting for long periods of
time relative to microprocessor speeds, but the engine control task may need to respond quickly
to input changes in order to maintain smooth engine operation. The host probably falls
somewhere in between in urgency. With a pre-emptive multitasking kernel, this can be easily
accomplished merely by giving the engine control task a higher priority than the other two tasks. The
host task requires an in-between priority to do its job well, and the operator task can operate
satisfactorily at low priority.
In this regard, it is important to recognize that a commercial kernel has already been used in a
large variety of projects. Hence, many potential problems which may occur in your project have
already been anticipated and solved. Ad hoc code, by contrast, deals with problems as they arise.
It is created without careful planning, and usually fails to provide general solutions. Also, a
commercial kernel contains tested and proven code. This is of utmost importance when meeting
a tight schedule (as you most probably will in this lab).
Using Semaphores:
Continuing our example, it would be logical to divide the engine control “task” into two smaller tasks, a data acquisition task and an engine drive task; together they form the engine control “process”:
The data acquisition task reads the sensors, converts readings into engineering units, and
compensates for non-linearities, temperature changes, etc. The engine drive task performs
complex control calculations (eg. PID) and provides the final engine drive signals.
The above scheme looks workable, but how does the engine drive task know when its data is
ready? A simple way to handle this is with a semaphore:
void dataAcqMain(void)
{
    /* initialize dataAcqTask; then, each cycle, read and convert sensor data */
    signalx(dataRdy);        /* data is ready for engDrv */
}
void engDrvMain(void)
{
    /* initialize engDrvTask; each cycle, test(dataRdy), then compute and drive */
}
The above would probably work fine with a simple binary (two-state) semaphore. Suppose,
however, that the engine control algorithm is so sensitive (or that the data is so noisy) that it is
necessary to smooth the data by running the data acquisition task more often than the engine
drive task, and averaging results? This could easily be accomplished with a counting semaphore
having a threshold of the desired number of iterations. A counting semaphore is decremented by
each signalx(). test() passes only when the count reaches 0. Then the count is reset to n.
dataRdy’s threshold can be externally changed by another task such as the operator task. This
permits tweaking responsiveness vs. smoothness while actually running the engine.
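The once-per-n behaviour of such a threshold semaphore can be approximated with an ordinary counting semaphore by having the engine drive task acquire it n times per cycle. The sketch below uses POSIX semaphores purely as an illustration; N_SAMPLES and the two cycle functions are assumptions, not the kernel API discussed in the text:

#include <semaphore.h>

#define N_SAMPLES 4            /* how many acquisitions per drive cycle */
static sem_t dataRdy;          /* counting semaphore, initialized to 0 */

void dataAcqCycle(void)
{
    /* read sensors, convert, store one sample ... */
    sem_post(&dataRdy);        /* one more sample is available */
}

void engDrvCycle(void)
{
    for (int i = 0; i < N_SAMPLES; i++)
        sem_wait(&dataRdy);    /* block until N_SAMPLES samples have been signaled */
    /* average the samples, run the control algorithm, output drive signals */
}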
What other benefits could accrue from dividing the engine control process into dataAcq and engDrv
tasks? Suppose, for example, that all engines use the same control algorithm, but that sensors
vary from engine to engine. Then it could be desirable to have a family of dataAcq functions (eg.
dataAcq1Main(), dataAcq2Main(), etc.) and be able to select the one needed. Why would
someone want to do this? Suppose that the sensor package is part of the engine and hence is
known only when the controller is mated to the engine. At that time, the correct dataAcq task
function could be selected and started by the operator. This way, only one version of the
controller software need be shipped. The code would look like this:
switch (sensor_type) {            /* sensor_type: illustrative, chosen by the operator */
case 1: dataAcqFn = dataAcq1Main; break;
case 2: dataAcqFn = dataAcq2Main; break;
/* ... one case per dataAcq variant ... */
}
startx(dataAcq);                  /* start the dataAcq task with the selected main function */
Returning to the engine control process, how is data passed from the dataAcq task to the engDrv
task? In a multitasking system, it is desirable to isolate tasks from each other as much as
possible. Therefore, it is not good practice to pass data through a global buffer. Such a buffer
would be accessible to both tasks simultaneously. Hence, the data could be overwritten by the
dataAcq task before the engDrv task was done using it. This is an example of a hidden
interdependency.
The preferred approach is to use messages. Messages are sent by tasks to exchanges and received from exchanges by other tasks. For our example, the dataAcq task sends data messages to an exchange, dataXchg, from which the engDrv task receives them.
The code would look like this: (This code is additional to the previous code.)
void dataAcqMain(void) {
    MCB_PTR outPtr;
    /* initialize */
    while (1) {
        outPtr = ...;               /* get a free message from the message pool */
        outPtr->field2 = ...;       /* fill it with converted sensor data */
        sendx(outPtr, dataXchg);    /* send it to the data exchange */
    }
}
void engDrvMain(void) {
    MCB_PTR msgIn;
    /* initialize */
    while (1) {
        msgIn = ...;                /* receive the next message from dataXchg */
        sendx(msgIn, msgPool);      /* after processing, recycle the now-empty message */
    }
}
So, basically, dataAcq gets a free message, fills it with data, and sends it to dataXchg. Some time later, engDrv gets the message from dataXchg, processes the data, then recycles the (now “empty”) message back to the free message pool. Note that each task has exclusive access to the data in the message while the message is within the task’s domain, which eliminates one possible problem. Another advantage of this approach is that engDrv can accept however many data messages are waiting and average them, as in the following fragment:
MCB_PTR msg;
int i = 0;
while (1) {
    msg = ...;                 /* try to receive a message from dataXchg without waiting */
    if (msg) {
        i++;                   /* count this sample and add it into the buffer */
        sendx(msg, msgPool);   /* recycle the empty message */
    } else {
        break;                 /* no more messages waiting */
    }
}
/* divide the buffer by i to get the average */