
Module 3: RTOS AND IDE (Integrated Development Environment) FOR EMBEDDED SYSTEM DESIGN


What Is an OS?
• An Operating System (OS) is an interface between the computer user
and the computer hardware.
• An operating system is software that performs all the basic tasks
like file management, memory management, process
management, handling input and output, and controlling
peripheral devices such as disk drives and printers.
• Some popular Operating Systems include Linux Operating
System, Windows Operating System, VMS, OS/400, etc.
Definition:
• An operating system is a program that acts as an interface between
the user and the computer hardware and controls the execution of
all kinds of programs
Following are some of the important functions of an operating
system:
• Memory Management
• Processor Management
• Device Management
• File Management
• Security
• Control over system performance
• Coordination between other software and users
• An Operating System provides services to both the users and to
the programs.
• It provides programs an environment to execute.

Following are a few common services provided by an operating
system:
• Program execution
• I/O operations
• File System manipulation
• Communication
• Error Detection
• Resource Allocation
• Protection
Basic Functions of Operation System:
The various functions of operating system are as follows:
1 Process Management:
• A program does nothing unless its instructions are executed by a
CPU.
• A process is a program in execution.
• A time-shared user program such as a compiler is a process.
• A word-processing program being run by an individual user on a
PC is a process.
– The OS is responsible for the following activities of process
management.
– Creating & deleting both user & system processes.
– Suspending & resuming processes.
– Providing mechanism for process synchronization.
2 Main Memory Management:
The main memory is central to the operation of a modern
computer system.
Main memory is a large array of words or bytes, ranging in size
from hundreds of thousands to billions of bytes.
Main memory stores the quickly accessible data shared by the
CPU & I/O devices.
• The main memory is generally the only large storage device that
the CPU is able to address & access directly.
• For example, for the CPU to process data from disk, that data
must first be transferred to main memory.
• The OS is responsible for the following activities in connection
with memory management.
– Keeping track of which parts of memory are currently
being used & by whom.
– Deciding which processes are to be loaded into
memory when memory space becomes available.
– Allocating & de allocating memory space as needed.
3 File Management:
File management is one of the most important components of an OS.
A computer can store information on several different types of physical
media; magnetic tape, magnetic disk & optical disk are the most
common media.
Each medium is controlled by a device, such as a disk drive or tape
drive, that has unique characteristics.
• The OS is responsible for the following activities of file
management.
– Creating & deleting files.
– Creating & deleting directories.
– Supporting primitives for manipulating files &
directories.
– Mapping files into secondary storage.
4 I/O System Management:
• One of the purposes of an OS is to hide the peculiarities
(characteristics,behaviour or features) of specific hardware devices
from the user.
• For example, in UNIX the peculiarities of I/O devices are hidden
from the bulk of the OS itself by the I/O subsystem.
• The I/O subsystem consists of:
– A memory-management component that includes
buffering (temporarily stores data between devices),
– Caching (keeps frequently accessed data for faster retrieval),
– & spooling (manages the queuing of jobs for slower
devices, e.g. a printer).
– Only the device driver knows the peculiarities of the
specific device to which it is assigned.
5 Secondary Storage Management:
• The main purpose of computer system is to execute programs.
These programs with the data they access must be in main memory
during execution.
• As the main memory is too small to accommodate all data &
programs, & because the data it holds are lost when power is
lost, the computer system must provide secondary storage to
back up main memory.
• The operating system is responsible for the
following activities of disk management.
– Free space management.
– Storage allocation.
– Disk scheduling
• Because secondary storage is used frequently, it must be used
efficiently.
Networking:
A distributed system is a collection of processors that don't share
memory, peripheral devices, or a clock.
Each processor has its own local memory & clock, and the processors
communicate with one another through various communication lines,
such as high-speed buses or networks.
The processors in the system are connected through communication
networks, which are configured in a number of different ways.
Protection or security:
If a computer system has multiple users & allows the concurrent
execution of multiple processes, then the various processes must be
protected from one another's activities.
For that purpose, mechanisms ensure that files, memory segments,
CPU & other resources can be operated on by only those processes
that have gained proper authorization from the OS.
Monolithic Operating Systems

+------------------------------------------+
|               Applications               |
+------------------------------------------+
| Monolithic kernel with all operating     |
| system services running in kernel space  |
+------------------------------------------+
• In monolithic kernel architecture, all Kernel services run in the
kernel space.
• Here all kernel modules run within the same memory space under
a single kernel thread.
• The tight integration of kernel modules in monolithic kernel
architecture allows the effective utilisation of the low level features
of the underlying systems.
• The major drawback of monolithic kernel is that any error or
failure in any one of the kernel modules leads to the crashing of the
entire kernel application.
• Examples: LINUX, MS-DOS, SOLARIS
Microkernel model

+---------------------------+---------------------------+
| Servers (kernel services  |       Applications        |
| running in user space)    |                           |
+---------------------------+---------------------------+
| Microkernel with essential services like memory       |
| management, process management, timer system etc.     |
+-------------------------------------------------------+
• The microkernel design incorporates only the essential set of
operating system services into the kernel.
• The rest of the operating system services are implemented in
programs known as servers, which run in user space.
• This provides a highly modular design and an OS-neutral
abstraction to the kernel.
• Memory management, timer systems and interrupt handlers are
the essential services which form part of the microkernel.
• Examples: QNX, Minix 3
Advantages:
If a problem is encountered in any of the services which run as
server applications, the same can be reconfigured and restarted
without the need for restarting the entire OS.
Types of Operating systems
Depending on the type of kernel and kernel services, operating
systems are classified into different types.
General Purpose Operating System (GPOS):
The operating systems which are deployed in general computing
systems are referred to as GPOS.
The kernel of such an OS is more generalised, and it contains all kinds
of services required for executing generic applications.
GPOS are often quite non-deterministic in behaviour.
Their services can inject random delays into application software and
may cause slow responsiveness of an application at unexpected
times. Examples: Windows XP, MS-DOS
Real Time Operating System(RTOS):
A Real-Time Operating System implements policies and rules concerning time-
critical allocation of system resources.
An RTOS decides which application should run in which order and how much time
needs to be allocated for each application.
Examples: QNX, Windows CE, MicroC/OS-II
• The disadvantages of real time system are:
• a. A real-time system is considered to function correctly only if it returns
the correct result within the time constraints.
• b. Secondary storage is limited or missing; instead, data is usually stored
in short-term memory or ROM.
• c. Advanced OS features are absent.
• Real-time systems are of two types:
• Hard real-time systems: A hard real-time system guarantees that critical
tasks complete on time; missing a deadline is treated as a failure of the
system.
• Soft real-time systems: A less restrictive type of real-time system,
where a critical task gets priority over other tasks and retains that priority
until it completes.
Task :
– Task is a piece of code or program that is
separate from another task and can be executed
independently of the other tasks.
– Multiple tasks are not executed at the same time;
instead they are executed in pseudo-parallel (i.e.,
by rapidly switching between different tasks on a
single processor).
– An Operating System decides which task to execute in
case there are multiple tasks to be executed.
– The operating system maintains information about
every task and information about the state of each task.
– The information about a task is recorded in a
data structure called the task context.
– When a task is executing, it uses the processor and the
registers available for all sorts of processing.
– When a task leaves the processor for another task to
execute before it has finished its own, it should resume
at a later time from where it stopped and not from the
first instruction.
Task States
• In an operating system there are always multiple tasks.
• At a time only one task can be executed.
• This means that there are other tasks which are waiting their turn to
be executed.
• Depending upon execution a task may be classified into the
following three states:
1 Running state - Only one task can actually be using the processor
at a given time that task is said to be the “running” task and its state is
“running state”.
– No other task can be in that same state at the same time
2 Ready state - Tasks that are not currently using the processor but
are ready to run are in the “ready” state.
There may be a queue of tasks in the ready state.
3 Waiting state - Tasks that are neither in the running nor the ready
state but are waiting for some event external to themselves to occur
before they can go for execution are in the "waiting" state.
Process: A process or task is an instance of a program in execution.
The execution of a process must progress in a sequential manner:
at any time, at most one instruction is executed on its behalf.
Process state: As a process executes, it changes state. The state of a
process is defined by the current activity of that process.
Each process may be in one of the following states.
• New: The process is being created.
• Ready: The process is waiting to be assigned to a
processor.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur.
• Terminated: The process has finished execution.
• Many processes may be in ready and waiting state at the same
time.
• But only one process can be running on any processor at any
instant.
Process scheduling:
• Scheduling is a fundamental function of OS.
• When a computer is multiprogrammed, it has multiple processes
competing for the CPU at the same time.
• If only one CPU is available, then a choice has to be made
regarding which process to execute next.
• This decision making process is known as scheduling and the part
of the OS that makes this choice is called a scheduler.
Scheduling queues:
As processes enter the system, they are put into a job queue.
This queue consists of all processes in the system.
The processes that are residing in main memory and are ready &
waiting to execute are kept on a list called the ready queue.
Process control block:
• Each process is represented in the OS by a process control block.
It is also known as task control block
– Process state: The state may be the new, ready, running,
waiting or terminated state.
– Program counter: It indicates the address of the
next instruction to be executed for this process.
• CPU registers: The registers vary in number & type depending
on the computer architecture.
• They include accumulators, index registers, stack pointers & general-
purpose registers.
• CPU scheduling information:
• This information includes process priority, pointers to scheduling
queues & any other scheduling parameters.
Memory management information:
This may include information such as the value of registers, the
page tables(maps virtual address used by programs to physical
address) or the segment tables,(maps logical address used by
programs to physical address) depending upon the memory system
used by the operating system.
Accounting information:
This information includes the amount of CPU and real time used,
time limits, account number, job or process numbers and so on.
I/O Status Information:
This information includes the list of I/O devices allocated to this
process, a list of open files and so on.
Threads :
• A thread is the primitive that can execute code.
• A thread is a single sequential flow of control within a process.
• A process can have many threads of execution.
• Different threads which are part of a process share the same address
space, meaning they share the data memory and code memory.
• Threads maintain their own thread status (CPU register
values), program counter (PC) and stack.
• The memory model for a process and its associated threads is
shown in the diagram below.
+------------------------------+
| Stack memory for Thread 1    |  \
| Stack memory for Thread 2    |   } Stack memory for the process
+------------------------------+  /
| Data memory for process      |
+------------------------------+
| Code memory for process      |
+------------------------------+
POSIX Threads (Portable Operating System Interface)
• POSIX Threads, usually referred to as Pthreads, is an execution
model that exists independently from a language, as well as a
parallel execution model.
• It allows a program to control multiple different flows of work that
overlap in time.
• The POSIX standard library for thread creation and management is
'Pthreads'.
• The 'Pthreads' library defines the set of POSIX thread creation and
management functions in the C language.
Thread Pre-emption:
• Thread pre-emption is the act of pre-empting the currently
running thread (stopping the currently running thread
temporarily).
• Thread pre-emption is performed for sharing the CPU time
among all the threads.
• The execution switching among threads is known as 'thread
context switching'.
• A thread falls into one of the following types.
1 User Level Thread:
• User-level threads do not have kernel/operating system support;
they exist solely in the running process.
• Even if a process contains multiple user-level threads, the OS treats
it as a single thread and will not switch the execution among the
different threads.
• It is the responsibility of the process to schedule each thread as
and when required.
2 Kernel/System level Thread:
• Kernel level threads are individual units of execution which the
OS treats as separate threads.
• The OS interrupts the execution of the currently running kernel
thread and switches the execution to another Kernel thread based
on the scheduling policies implemented by the OS.
• For user level threads the execution switching(thread context
switching) happens only when the currently executing user level
thread is voluntarily blocked.
Thread                                        | Process
----------------------------------------------|----------------------------------------------
Thread is a single unit of execution and is   | Process is a program in execution and
part of a process                             | contains one or more threads
A thread cannot live independently; it lives  | A process contains at least one thread
within the process                            |
Threads are very inexpensive to create        | Processes are very expensive to create;
                                              | involves many OS overheads
Context switching is inexpensive and fast     | Context switching is complex, involves a
                                              | lot of OS overhead and is comparatively
                                              | slower
If a thread expires, its stack is reclaimed   | If a process dies, the resources allocated
by the process                                | to it are reclaimed by the OS and all the
                                              | associated threads of the process also die
Task Communication :
• A shared memory is an extra piece of memory that is attached to
some address spaces for their owners to use.
• As a result, all of these processes share the same memory
segment and have access to it.
• Consequently, race conditions may occur if memory accesses are
not handled properly
• The following figure shows two processes and their address
spaces.
• The yellow rectangle is a shared memory attached to both address
spaces and both process 1 and process 2 can have access to this
shared memory as if the shared memory is part of its own address
space.
• Round Robin Scheduling Algorithm:
• This type of algorithm is designed only for time-sharing
systems.
• It is similar to FCFS scheduling, with a preemption condition to
switch between processes.
• The average waiting time under the round-robin policy is often
quite long.
• Consider the following example:
Preemptive Scheduling
• It is the responsibility of CPU scheduler to allot a process to CPU
whenever the CPU is in the idle state.
• The CPU scheduler selects a process from ready queue and
allocates the process to CPU.
• The scheduling which takes place when a process switches from
running state to ready state or from waiting state to ready state is
called Preemptive Scheduling
Shortest Job First (SJF) Scheduling Algorithm:
• This algorithm associates with each process the length of its next
CPU burst; when the CPU is available, it is assigned to the process
with the smallest next CPU burst.
• Consider the following example:
• Three processes with process IDs P1, P2, P3 and estimated
completion times 10, 5 and 7 milliseconds respectively enter the
ready queue together. Calculate the waiting time and Turn Around
Time (TAT) for each process, and the average waiting time and Turn
Around Time (assuming there is no I/O waiting for the processes)
in the SJF (Shortest Job First) algorithm.
Message passing:
• Message passing can be synchronous or asynchronous.
Synchronous message passing systems require the sender and
receiver to wait for each other while transferring the message.
• In asynchronous communication the sender and receiver do not
wait for each other and can carry on their own computations
while transfer of messages is being done.
• The advantage to synchronous message passing is that it is
conceptually less complex.
2 Message queue:
• Message queues provide an asynchronous communications
protocol, meaning that the sender and receiver of the message do
not need to interact with the message queue at the same time.
Implementations exist as proprietary software, provided as a
service, open source software, or a hardware-based solution.
• Mail box
• Mailboxes provide a means of passing messages between tasks for
data exchange or task synchronization.
• For example, assume that a data gathering task that produces data
needs to convey the data to a calculation task that consumes the
data.
• This data gathering task can convey the data by placing it in a
mailbox and using the SEND command; the calculation task uses
RECEIVE to retrieve the data.
• Remote Procedure Call (RPC)
• RPC is a powerful technique for constructing distributed, client-server
based applications.
• The following steps take place during a RPC:
• A client invokes a client stub procedure, passing parameters in the
usual way. The client stub resides within the client’s own address
space.
• The client stub marshalls(pack) the parameters into a message.
Marshalling includes converting the representation of the
parameters into a standard format, and copying each parameter into
the message.
• The client stub passes the message to the transport layer, which
sends it to the remote server machine.
• On the server, the transport layer passes the message to a server
stub, which demarshalls(unpack) the parameters and calls the
desired server routine using the regular procedure call mechanism.
• When the server procedure completes, it returns to the server stub
(e.g., via a normal procedure call return), which marshalls the
return values into a message. The server stub then hands the
message to the transport layer.
• The transport layer sends the result message back to the client
transport layer, which hands the message back to the client stub.
• The client stub demarshalls the return parameters and execution
returns to the caller.
Deadlock:
• In a multiprogramming environment several processes may
compete for a finite number of resources.
• A process requests resources; if the resources are not available at
that time, the process enters the wait state.
• A waiting process may never change its state, because the resources
it requested are held by other waiting processes.
• This situation is known as deadlock.
Conditions Favouring deadlock situation:
1 Mutual Exclusion: The criteria that only one process can hold a
resource at a time meaning processes should access shared resources
with mutual exclusion.
2 Hold and wait : The condition in which process holds a shared
resource by acquiring the lock controlling the shared access and
waiting for additional resources held by other processes .
3 No resource Pre-emption: The criteria that operating system
cannot take back a resource from a process which is currently
holding it and the resource can only be released voluntarily by the
process holding it.
4 Circular wait :A process is waiting for a resource which is
currently held by another process which in turn is waiting for a
resource held by the first process .
5 Ignore Deadlocks: Always assume that the system design is
deadlock-free.
This is acceptable when the cost of removing a deadlock is
large compared to the chance of a deadlock happening.
UNIX is an example of an OS following this principle.
Prevention of Deadlock
1 A process must request all its required resources, and the resources
should be allocated, before the process begins execution.
2 Grant resource allocation requests from processes only if the
process does not currently hold a resource.
• Ensure that resource preemption (resource releasing) is possible at
the operating system level.
• This can be achieved by implementing the following set of
rules/guidelines in resource allocation:
• Release all the resources currently held by a process if a request
made by the process for a new resource cannot be fulfilled
immediately.
• Add the resources which are preempted (released) to a resource list
describing the resources which the process requires to complete its
execution.
• Reschedule the process for execution only when the process gets
its old resources and the new resources which are requested by the
process.
Semaphore:
• A semaphore is a synchronization mechanism that controls access to shared
resources by multiple threads or tasks, ensuring that only a limited
number can access them at any given time.
• It acts like a key that a task needs to acquire before accessing a
resource, and the key must be released after usage.
• Semaphores prevent race conditions (two or more operations on shared
data at the same time) and ensure proper coordination among different
parts of an embedded system.
• Example: Imagine a shared printer in a multitasking embedded
system; a semaphore could be used to ensure that only one task at a
time can access the printer.
• When a task wants to print, it waits for the printer semaphore to
become available.
• Once it acquires the semaphore, it can print.
• When the printing is complete, the task releases the
semaphore, allowing another waiting task to access the printer.
Semaphores have two main operations:
1 Wait (or P): This operation decrements the semaphore's value.
• If the value becomes negative, the task blocks until the semaphore's
value becomes non-negative.
2 Signal (or V): This operation increments the semaphore's value.
• It signals that a resource is available and may unblock a waiting
task.
Types of semaphore:
• Binary Semaphore: These have a value of either 0 or 1 and are
used for mutual exclusion, allowing only one task to access a
shared resource at a time.
• Counting Semaphore: These can have values greater than 1 and
are used to manage multiple resources of the same type.
How To Choose An RTOS:
• Functional Requirements:
1 Processor support: It is not necessary that all RTOSs support all
kinds of processor architectures. It is essential to ensure processor
support by the RTOS.
2 Memory Requirements: The OS requires ROM for
holding the OS files, and it is normally stored in a non-volatile
memory like FLASH.
• The OS also requires working memory (RAM) for loading the OS
services.
3 Real-Time Capabilities: It is not mandatory that the operating
system for all embedded systems be real-time, and not all embedded
operating systems exhibit real-time behaviour.
4 Kernel and Interrupt Latency: The kernel of the OS may disable
interrupts while executing certain services, and this may lead to
increased interrupt latency.
5 Support for Networking and Communication:
The OS kernel may provide stack implementations and driver support
for a bunch of communication interfaces and networking.
Non Functional Requirements:
1 Cost: The total cost of developing or buying the OS and
maintaining it, in terms of commercial product and custom build,
needs to be evaluated before taking a decision on the selection of an OS.
2 Development and Debugging Availability: The availability of
development and debugging tools is a critical decision making factor
in the selection of an OS for Embedded system.
3 Ease of Use: How easy it is to use a commercial RTOS is another
important factor that needs to be considered in RTOS selection.
