
Operating Systems NOTES

(CSE23304)
1.1 OPERATING SYSTEM

An operating system acts as an intermediary between the user of a
computer and the computer hardware. The purpose of an operating
system is to provide an environment in which a user can execute
programs in a convenient and efficient manner.
An operating system is software that manages the computer
hardware. The hardware must provide appropriate mechanisms to
ensure the correct operation of the computer system and to prevent
user programs from interfering with the proper operation of the
system.
• An Operating system is a program that controls the execution
of application programs and acts as an interface between the
user of a computer and the computer hardware.
• A more common definition is that the operating system is the
one program running at all times on the computer (usually
called the kernel), with all else being application programs.
• An operating system is concerned with the allocation of
resources and services, such as memory, processors, devices,
and information. The operating system correspondingly
includes programs to manage these resources, such as a traffic
controller, a scheduler, a memory-management module, I/O
programs, and a file system.

Fig 1.1 Conceptual view of a computer system

• Every computer must have an operating system to run other
programs. The operating system coordinates the use of the
hardware among the various system programs and application
programs for the various users. It simply provides an environment
within which other programs can do useful work.
• The operating system is a set of special programs that run on
a computer system and allow it to work properly. It performs
basic tasks such as recognizing input from the keyboard,
keeping track of files and directories on the disk, sending output
to the display screen, and controlling peripheral devices.

• An OS is designed to serve two basic purposes:

1. It controls the allocation and use of the computing system's
resources among the various users and tasks.

2. It provides an interface between the computer hardware and
the programmer that simplifies and makes feasible the coding,
creation, and debugging of application programs.

• The operating system must support the following tasks:

1. Provide facilities to create and modify program and data
files using an editor.
2. Provide access to compilers for translating user programs
from high-level language to machine language.
3. Provide a loader program to move the compiled program
code into the computer's memory for execution.
4. Provide routines that handle the details of I/O
programming.


2.1 Operating System Services

An operating system provides services to programs and to the users of
those programs. It provides an environment for the execution of
programs. The services provided differ from one operating system to
another, but they make the programming task easier.
One set of operating-system services provides functions that are helpful to the
user:
• User interface - Almost all operating systems have a user interface (UI),
which varies between a command-line interface (CLI), a graphical user
interface (GUI), and a batch interface.
The common services provided by the operating system are listed below.
1. Program execution
2. I/O operation
3. File system manipulation
4. Communications
5. Error detection
2.3 System Calls:
System calls provide an interface between a process and the operating system.
System calls allow user-level processes to request services from the operating system
that the process itself is not allowed to perform.
For example, for I/O a process makes a system call telling the operating system to read or
write a particular area, and this request is satisfied by the operating system.

Types of System Calls


Process control
File management
Device management
Information maintenance
Communications
Protection

The kernel is a computer program at the core of a computer's operating system and generally
has complete control over everything in the system.
The kernel is also responsible for preventing and mitigating conflicts between different
processes.

Processes: Process Concepts

Process: A process is a program in execution. A process is more than the program code, which
is sometimes known as the text section. It also includes the current activity, as represented by
the value of the program counter and the contents of the processor's registers. A process
generally also includes the process stack, which contains temporary data (such as function
parameters, return addresses, and local variables), and a data section, which contains global
variables. A process may also include a heap, which is memory that is dynamically allocated
during process run time.

Fig: Structure of a process in memory

As a process executes, it changes state:

• new: The process is being created
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur
• ready: The process is waiting to be assigned to a processor
• terminated: The process has finished execution

These names are arbitrary, and they vary across operating systems. The states that they represent
are found on all systems, however. Certain operating systems also more finely delineate process
states. It is important to realize that only one process can be running on any processor at any
instant.

Process Control Block

Each process is represented in the operating system by a process control block (PCB), also
called a task control block. It contains many pieces of information associated with a specific
process, including:
• Process state - The state may be new, ready, running, waiting, halted, and so on.
• Program counter - The counter indicates the address of the next instruction to be executed
for this process.
• CPU registers - The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-purpose
registers, plus any condition-code information.
• CPU-scheduling information - This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
• Memory-management information - This information may include the values of the base
and limit registers and the page tables or the segment tables, depending on the memory
system used by the operating system.
• Accounting information - This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
• I/O status information - This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.

………………………………

Thread
The process model discussed so far has implied that a process is a program that performs a single
thread of execution. For example, when a process is running a word-processor program, a single
thread of instructions is being executed.
This single thread of control allows the process to perform only one task at a time. Thus, the user
cannot simultaneously type in characters and run the spell checker. Most modern operating
systems have extended the process concept
to allow a process to have multiple threads of execution and thus to perform more than one task
at a time. This feature is especially beneficial on multicore systems, where multiple threads can
run in parallel. A multithreaded word processor could, for example, assign one thread to manage
user input while another thread runs the spell checker. On systems that support threads, the PCB
is expanded to include information for each thread. Other changes throughout
the system are also needed to support threads.

Process Scheduling
The objective of multiprogramming is to have some process running at all times so as to
maximize CPU utilization. The objective of time sharing is to switch a CPU core among
processes so frequently that users can interact with each program while it is running. To meet
these objectives, the process scheduler selects an available process (possibly from a set of
several available processes) for program execution on the CPU. As processes enter the system,
they are put into a job queue, which consists of all processes in the system. The processes that
are residing in main memory and are ready and waiting to execute are kept on a list called the
ready queue. This queue is generally stored as a linked list. A ready-queue header contains
pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to
the next PCB in the ready queue.

For a system with a single CPU core, there will never be more than one process running at a
time, whereas a multicore system can run multiple processes at one time. If there are more
processes than cores, excess processes will have to wait until a core is free and can be
rescheduled. The number of processes currently in memory is known as the degree of
multiprogramming.
Balancing the objectives of multiprogramming and time sharing also requires taking the general
behaviour of a process into account. In general, most processes can be described as either I/O
bound or CPU bound. An I/O-bound process is one that spends more of its time doing I/O than
it spends doing computations. A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.
Scheduling Queues:
As processes enter the system, they are put into a ready queue, where they are ready and waiting
to execute on a CPU's core. This queue is generally stored as a linked list; a ready-queue header
contains pointers to the first PCB in the list, and each PCB includes a pointer field that points to
the next PCB in the ready queue.
The system also includes other queues. When a process is allocated a CPU core, it executes for
a while and eventually terminates, is interrupted, or waits for the occurrence of a particular event,
such as the completion of an I/O request. Suppose the process makes an I/O request to a device
such as a disk.
Since devices run significantly slower than processors, the process will have to wait for the I/O
to become available. Processes that are waiting for a certain event to occur — such as completion
of I/O — are placed in a wait queue.
A common representation of process scheduling is a queueing diagram, such as that in Figure
3.5. Two types of queues are present: the ready queue and a set of wait queues. The circles
represent the resources that serve the queues, and the arrows indicate the flow of processes in the
system.
A new process is initially put in the ready queue. It waits there until it is selected for execution,
or dispatched. Once the process is allocated a CPU core and is executing, one of several events
could occur:
• The process could issue an I/O request and then be placed in an I/O wait queue.
• The process could create a new child process and then be placed in a wait queue while it
awaits the child’s termination.
• The process could be removed forcibly from the core, as a result of an interrupt or having its
time slice expire, and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state
and is then put back in the ready queue. A process continues this cycle until it terminates, at
which time it is removed from all queues and has its PCB and resources deallocated.

CPU Scheduling

A process migrates among the ready queue and various wait queues throughout its lifetime. The
role of the CPU scheduler is to select from among the processes that are in the ready queue and
allocate a CPU core to one of them.
The CPU scheduler must select a new process for the CPU frequently. An I/O-bound process
may execute for only a few milliseconds before waiting for an I/O request. Although a CPU-
bound process will require a CPU core for longer durations, the scheduler is unlikely to grant the
core to a process for an extended period. Instead, it is likely designed to forcibly remove the CPU
from a process and schedule another process to run.
Therefore, the CPU scheduler executes at least once every 100 milliseconds, although typically
much more frequently.
Some operating systems have an intermediate form of scheduling, known as swapping, whose
key idea is that sometimes it can be advantageous to remove a process from memory (and from
active contention for the CPU) and thus reduce the degree of multiprogramming. Later, the
process can be reintroduced into memory, and its execution can be continued where it left off.
This scheme is known as swapping because a process can be “swapped out” from memory to
disk, where its current status is saved, and later “swapped in” from disk back to memory, where
its status is restored. Swapping is typically only necessary when memory has been
overcommitted and must be freed up.

Context Switch

Switching the CPU core to another process requires performing a state save of the current
process and a state restore of a different process; this task is known as a context switch.
Context-switch time is pure overhead, because the system does no useful work while switching.
Switching speed varies from machine to machine, depending on the memory speed, the number
of registers that must be copied, and the existence of special instructions (such as a single
instruction to load or store all registers).
A typical speed is a few microseconds.
Context-switch times are highly dependent on hardware support. For instance, some processors
provide multiple sets of registers. A context switch here simply requires changing the pointer to
the current register set. Of course, if there are more active processes than there are register sets,
the system resorts to copying register data to and from memory, as before. Also, the more
complex the operating system, the greater the amount of work that must be done during
a context switch.

Interprocess Communication
Processes executing concurrently in the operating system may be either independent
processes or cooperating processes. A process is independent if it does not share data with any
other processes executing in the system. A process is cooperating if it can affect or be affected
by the other processes executing in the system. Clearly, any process that shares data with other
processes is a cooperating process.
There are several reasons for providing an environment that allows process cooperation:
• Information sharing. Since several applications may be interested in the same piece of
information (for instance, copying and pasting), we must provide an environment to allow
concurrent access to such information.
• Computation speedup. If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Notice that such a
speedup can be achieved only if the computer has multiple processing cores.
• Modularity. We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.
Cooperating processes require an interprocess communication (IPC) mechanism that will
allow them to exchange data— that is, send data to and receive data from each other. There are
two fundamental models of interprocess communication: shared memory and message
passing. In the shared-memory model, a region of memory that is shared by the cooperating
processes is established. Processes can then exchange information by reading and writing data
to the shared region. In the message-passing model, communication takes place by means of
messages exchanged between the cooperating processes.

Both of the models just mentioned are common in operating systems, and many systems
implement both. Message passing is useful for exchanging smaller amounts of data, because no
conflicts need be avoided. Message passing is also easier to implement in a distributed system
than shared memory.
(Although there are systems that provide distributed shared memory, we do not consider them in
this text.) Shared memory can be faster than message passing,
since message-passing systems are typically implemented using system calls and thus require
the more time-consuming task of kernel intervention.
In shared-memory systems, system calls are required only to establish shared memory
regions. Once shared memory is established, all accesses are treated as routine memory accesses,
and no assistance from the kernel is required.
4.1 Threads

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter (PC), a
register set, and a stack. It shares with other threads belonging to the same process its code
section, data section, and other operating-system resources, such as open files and signals. A
traditional process has a single thread of control. If a process has multiple threads of control, it
can perform more than one task at a time. Figure 4.1 illustrates the difference between a
traditional single-threaded process and a multithreaded process.
Most software applications that run on modern computers and mobile devices are multithreaded.
An application typically is implemented as a separate process with several threads of control.
Below we highlight a few examples of multithreaded applications:
• An application that creates photo thumbnails from a collection of images may use a separate
thread to generate a thumbnail from each separate image.
• A web browser might have one thread display images or text while another thread retrieves
data from the network.
• A word processor may have a thread for displaying graphics, another thread for responding to
keystrokes from the user, and a third thread for performing spelling and grammar checking in
the background.

4.1.2 Benefits
The benefits of multithreaded programming can be broken down into four
major categories:
1. Responsiveness. Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy operation, thereby
increasing responsiveness to the user. This quality is especially useful in designing user
interfaces.
For instance, consider what happens when a user clicks a button that results in the
performance of a time-consuming operation. A single-threaded application would be
unresponsive to the user until the operation had been completed. In contrast, if the time-
consuming operation is performed in a separate, asynchronous thread, the application
remains responsive to the user.

2. Resource sharing. Processes can share resources only through techniques such as shared
memory and message passing. Such techniques must be explicitly arranged by the
programmer. However, threads share the memory and the resources of the process to
which they belong by default. The benefit of sharing code and data is that it allows an
application to have several different threads of activity within the same address space.

3. Economy. Allocating memory and resources for process creation is costly. Because threads
share the resources of the process to which they belong, it is more economical to create and
context-switch threads. Empirically gauging the difference in overhead can be difficult, but in
general thread creation consumes less time and memory than process creation. Additionally,
context switching is typically faster between threads than between processes.

4. Scalability. The benefits of multithreading can be even greater in a multiprocessor


architecture, where threads may be running in parallel on different processing cores. A single-
threaded process can run on only one processor, regardless of how many are available.

Multicore Programming
A recent trend in system design has been to place multiple computing cores on a single
processing chip, where each core appears as a separate CPU to the operating system. We refer
to such systems as multicore, and multithreaded programming provides a mechanism for more
efficient use of these multiple computing cores and improved concurrency. A concurrent system
supports more than one task by allowing all the tasks to make progress. In contrast, a parallel
system can perform more than one task simultaneously.

Programming Challenges:
The trend toward multicore systems continues to place pressure on system designers and
application programmers to make better use of the multiple computing cores.
In general, five areas present challenges in programming for multicore systems:
1. Identifying tasks. This involves examining applications to find areas that can be divided
into separate, concurrent tasks. Ideally, tasks are independent of one another and thus can
run in parallel on individual cores.

2. Balance. While identifying tasks that can run in parallel, programmers must also ensure that
the tasks perform equal work of equal value. In some instances, a certain task may not
contribute as much value to the overall process as other tasks. Using a separate execution
core to run that task may not be worth the cost.
3. Data splitting. Just as applications are divided into separate tasks, the data accessed and
manipulated by the tasks must be divided to run on separate cores.
4. Data dependency. The data accessed by the tasks must be examined for dependencies
between two or more tasks. When one task depends on data from another, programmers
must ensure that the execution of the tasks is synchronized to accommodate the data
dependency.
5. Testing and debugging. When a program is running in parallel on multiple cores, many
different execution paths are possible. Testing and debugging such concurrent programs is
inherently more difficult than testing and debugging single-threaded applications.

Types of Parallelism

In general, there are two types of parallelism: data parallelism and task parallelism.
Data parallelism focuses on distributing subsets of the same data across multiple computing
cores and performing the same operation on each core. Consider, for example, summing the
contents of an array of size N. On a single-core system, one thread would simply sum the
elements [0] . . . [N − 1].On a dual-core system, however, thread A, running on core 0, could sum
the elements [0] . . . [N∕2 − 1] while thread B, running on core 1, could sum the elements [N∕2] .
. . [N − 1]. The two threads would be running in parallel on separate computing cores.
Task parallelism involves distributing not data but tasks (threads) across multiple computing
cores. Each thread is performing a unique operation. Different threads may be operating on the
same data, or they may be operating on different data. Consider again our example above. In
contrast to that situation, an example of task parallelism might involve two threads, each
performing a unique statistical operation on the array of elements. The threads again are
operating in parallel on separate computing cores, but each is performing a unique operation.
Multithreading Models
User threads are supported above the kernel and are managed without kernel support, whereas
kernel threads are supported and managed directly by the operating system. Virtually all
contemporary operating systems (including Windows, Linux, and macOS) support kernel
threads. Ultimately, a relationship must exist between user threads and kernel threads.

1 Many-to-One Model: The many-to-one model maps many user-level threads to one kernel
thread. Thread management is done by the thread library in user space, so it is efficient. However,
the entire process will block if a thread makes a blocking system call. Because only one thread
can access the kernel at a time, multiple threads are unable to run in parallel on multicore systems.

2 One-to-One Model
The one-to-one model maps each user thread to a kernel thread. It provides more concurrency
than the many-to-one model by allowing another thread to run when a thread makes a blocking
system call. It also allows multiple threads to run in parallel on multiprocessors. The only
drawback to this model is that creating a user thread requires creating the corresponding kernel
thread, and a large number of kernel threads may burden the performance of a system. Linux,
along with the family of Windows operating systems, implement the one-to-one model.
3 Many-to-Many Model
The many-to-many model multiplexes many user-level threads to a smaller or equal number of
kernel threads. The number of kernel threads may be specific to either a particular application or
a particular machine.

Difference between User Level Thread and Kernel Level Thread

1. Implemented by: User threads are implemented by users, whereas kernel threads are
implemented by the operating system (OS).
2. Recognition: The operating system does not recognize user-level threads, whereas kernel
threads are recognized by the operating system.
3. Implementation: Implementation of user threads is easy, whereas implementation of
kernel-level threads is complicated.
4. Context switch time: Context switch time is less for user-level threads and more for
kernel-level threads.
5. Hardware support: A context switch between user threads requires no hardware support,
whereas hardware support is needed for kernel threads.
6. Blocking operation: If one user-level thread performs a blocking operation then the entire
process will be blocked, whereas if one kernel thread performs a blocking operation then
another thread can continue execution.
7. Multithreading: Multithreaded applications built on user-level threads cannot take
advantage of multiprocessing, whereas kernels can be multithreaded.
