2.1 Computer-System Operation


A modern, general-purpose computer system consists of a CPU and a number of device controllers that
are connected through a common bus that provides access to shared memory (Figure 2.1).

Each device controller is in charge of a specific type of device (for example, disk drives, audio devices,
and video displays).

The CPU and the device controllers can execute concurrently, competing for memory cycles.

To ensure orderly access to the shared memory, a memory controller is provided whose function is to
synchronize access to the memory.

(What is a bootstrap program?)

For a computer to start running (for instance, when it is powered up or rebooted), it needs to have an
initial program to run.

This initial program, or bootstrap program, tends to be simple. It initializes all aspects of the system, from
CPU registers to device controllers to memory contents. The bootstrap program must know how to load
the operating system and to start executing that system. To accomplish this goal, the bootstrap program
must locate and load into memory the operating-system kernel. The operating system then starts
executing the first process, such as "init," and waits for some event to occur. The occurrence of an event
is usually signaled by an interrupt from either the hardware or the software. Hardware may trigger an
interrupt at any time by sending a signal to the CPU, usually by way of the system bus. Software may
trigger an interrupt by executing a special operation called a system call (also called a monitor call).

(What happens when the CPU is interrupted?) Write about interrupts.

There are many different types of events that may trigger an interrupt: for example, the completion of
an I/O operation, division by zero, invalid memory access, and a request for some operating-system
service. For each such interrupt, a service routine is provided that is responsible for dealing with the
interrupt. When the CPU is interrupted, it stops what it is doing and immediately transfers execution to
a fixed location. The fixed location usually contains the starting address where the service routine for
the interrupt is located.

The interrupt service routine executes; on completion, the CPU resumes the interrupted computation.

Interrupts are an important part of a computer architecture. Each computer design has its own interrupt
mechanism, but several functions are common. The interrupt must transfer control to the appropriate
interrupt service routine. The straightforward method for handling this transfer would be to invoke a
generic routine to examine the interrupt information; the routine, in turn, would call the interrupt-
specific handler. However, interrupts must be handled quickly, and, given that there is a predefined
number of possible interrupts, a table of pointers to interrupt routines can be used instead. The
interrupt routine is then called indirectly through the table, with no intermediate routine needed.
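The table-of-pointers scheme can be sketched in a few lines of Python. This is an illustrative model only; the handler names and interrupt numbers below are invented, not taken from any real architecture:

```python
# Minimal sketch of an interrupt vector: a table of pointers to
# interrupt-specific service routines, indexed by interrupt number.
# All names and numbers here are illustrative.

def io_complete_handler():
    return "handled I/O completion"

def divide_by_zero_handler():
    return "handled division by zero"

# The "fixed locations": one handler pointer per predefined interrupt.
vector_table = {
    0: divide_by_zero_handler,
    14: io_complete_handler,
}

def raise_interrupt(number):
    # The CPU transfers control indirectly through the table,
    # with no intermediate dispatch routine needed.
    handler = vector_table[number]
    return handler()
```

Dispatch goes straight from the interrupt number to its handler, which is exactly the indirection the table provides.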

Usually, other interrupts are disabled while an interrupt is being processed, so any incoming interrupts
are delayed until the operating system is done with the current one; then, interrupts are enabled. If they
were not thus disabled, the processing of the second interrupt while the first was being serviced would
overwrite the first one's data, and the first would be a lost interrupt.

A higher-priority interrupt will be taken even if a lower-priority interrupt is active, but interrupts at the
same, or lower levels are masked, or selectively disabled, so that lost or unnecessary interrupts are
prevented.

2.2 I/O Structure

(Write about SCSI)

A general-purpose computer system consists of a CPU and a number of device controllers that are
connected through a common bus. Each device controller is in charge of a specific type of device.
Depending on the controller, there may be more than one attached device. For instance, the Small
Computer Systems Interface (SCSI) controller, found on many small to medium-sized computers, can
have seven or more devices attached to it. A device controller maintains some local buffer storage and a
set of special-purpose registers. The device controller is responsible for moving the data between the
peripheral devices that it controls and its local buffer storage. The size of the local buffer within a device
controller varies from one controller to another, depending on the particular device being controlled.
For example, the size of the buffer of a disk controller is the same as or a multiple of the size of the
smallest addressable portion of a disk, called a sector, which is usually 512 bytes.

2.2.1 I/O Interrupts

(Synchronous and Asynchronous I/O)


To start an I/O operation, the CPU loads the appropriate registers within the device
controller. The device controller, in turn, examines the contents of these registers to
determine what action to take. For example, if it finds a read request, the controller will
start the transfer of data from the device to its local buffer. Once the transfer of data is
complete, the device controller informs the CPU that it has finished its operation. It
accomplishes this communication by triggering an interrupt. This situation will occur, in
general, as the result of a user process requesting I/O. Once the I/O is started, two
courses of action are possible. In the simplest case, the I/O is started; then, at I/O
completion, control is returned to the user process. This case is known as synchronous
I/O. The other possibility, called asynchronous I/O, returns control to the user program
without waiting for the I/O to complete. The I/O then can continue while other system
operations occur.
Waiting for I/O completion may be accomplished in one of two ways. Some computers
have a special wait instruction that idles the CPU until the next interrupt. Machines that
do not have such an instruction may have a wait loop:
Loop: jmp Loop
This tight loop simply continues until an interrupt occurs, transferring control to another
part of the operating system. Such a loop might also need to poll any I/O devices that do
not support the interrupt structure; instead, they simply set a flag in one of their
registers and expect the operating system to notice that flag. If the CPU always waits for
I/O completion, at most one I/O request is outstanding at a time. Thus, whenever an I/O
interrupt occurs, the operating system knows exactly which device is completing I/O.
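The two courses of action above can be modeled with a worker thread standing in for the device controller. This is only a sketch; `device_transfer` and the short delay are illustrative stand-ins, not a real driver interface:

```python
# Sketch of synchronous vs. asynchronous I/O. A worker thread stands in
# for a device controller transferring data into its local buffer.
import threading
import time

def device_transfer(buffer):
    time.sleep(0.05)          # stand-in for the data transfer
    buffer.append("data")     # controller fills its local buffer

def synchronous_read():
    # Synchronous I/O: control returns only after the I/O completes.
    buffer = []
    worker = threading.Thread(target=device_transfer, args=(buffer,))
    worker.start()
    worker.join()             # wait, like the wait instruction or wait loop
    return buffer

def asynchronous_read():
    # Asynchronous I/O: control returns immediately; the transfer
    # continues while other work proceeds. The later join stands in
    # for the completion interrupt.
    buffer = []
    worker = threading.Thread(target=device_transfer, args=(buffer,))
    worker.start()
    return buffer, worker
```

In the synchronous case the caller is blocked for the whole transfer; in the asynchronous case it gets control back at once and learns of completion later.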
DMA STRUCTURE
Direct memory access (DMA) is a method that allows an input/output (I/O) device to send or
receive data directly to or from the main memory, bypassing the CPU to speed up memory
operations. The process is managed by a chip known as a DMA controller (DMAC).
STORAGE STRUCTURE

Ideally, we would want the programs and data to reside in main memory permanently. This
arrangement is not possible for the following two reasons:

1. Main memory is usually too small to store all needed programs and data permanently.

2. Main memory is a volatile storage device that loses its contents when power is turned off or
otherwise lost.

Thus, most computer systems provide secondary storage as an extension of main memory. The main
requirement for secondary storage is that it be able to hold large quantities of data permanently. The
most common secondary-storage device is a magnetic disk, which provides storage of both programs
and data. Most programs (web browsers, compilers, word processors, spreadsheets, and so on) are
stored on a disk until they are loaded into memory. Many programs then use the disk as both a source
and destination of the information for their processing. Hence, the proper management of disk storage
is of central importance to a computer system.

In a larger sense, however, the storage structure that we have described, consisting of registers, main
memory, and magnetic disks, is only one of many possible storage systems. There are also cache
memory, CD-ROM, magnetic tapes, and so on. Each storage system provides the basic functions of
storing a datum, and of holding that datum until it is retrieved at a later time. The main differences
among the various storage systems lie in speed, cost, size, and volatility.

1. Main memory: Main memory refers to physical memory that is internal to the computer. The


word main is used to distinguish it from external mass storage devices such as disk drives. Other
terms used to mean main memory include RAM and primary storage. The computer can manipulate
only data that is in main memory.

2. Magnetic disks: A magnetic disk is a storage device that uses a magnetization process to write,
rewrite and access data. It is covered with a magnetic coating and stores data in the form of tracks,
spots and sectors. Hard disks, zip disks and floppy disks are common examples of magnetic
disks.

STORAGE HIERARCHY

 One of the important components in a computer system is storage. In this
post we will focus on the storage hierarchy.

 The wide variety of storage systems can be organized in a hierarchy according to speed and
cost. The higher levels of the hierarchy are expensive, but they are fast.
 As we move down the hierarchy, the cost per bit generally decreases, whereas the access
time and storage capacity generally increase.

 Semiconductor memory is a faster and cheaper type of memory. The top 3 levels of
memory in the hierarchy are constructed using semiconductor memory.
 In addition to having differing speed and cost, the various storage systems are either
volatile or non-volatile.

 Volatile storage loses its contents when the power supply to the device is off.
 In the absence of expensive battery and generator backup systems, data must be written to
non-volatile storage for permanent storage.

 In the hierarchy shown in the figure, the storage systems above the electronic disk are volatile,
whereas those below are non-volatile.

 During normal operation, the electronic disk stores data in a large DRAM array, which is
volatile. Many electronic disk devices contain a hidden magnetic hard disk and a battery for
backup power.

 If external power is interrupted, the electronic disk controller copies the data from RAM
to the magnetic disk. When external power is restored, the controller copies the data back into
RAM.
 The design of a computer memory system must balance all these factors.
 It uses only as much expensive memory as necessary, while providing as much
inexpensive, non-volatile memory as possible.

 A cache can be installed to improve performance where a large gap in access time exists
between two components.

2.4.1 Caching

Caching is an important principle of computer systems. Information is normally kept in some storage system (such as main

memory). As it is used, it is copied into a faster storage system—the cache—on a temporary basis. When we need a

particular piece of information, we first check whether it is in the cache. If it is, we use the information directly from the

cache; if it is not, we use the information from the main storage system, putting a copy in the cache under the assumption

that there is a high probability that it will be needed again.

Extending this view, internal programmable registers, such as index registers, provide a high-speed cache for main memory.

The programmer (or compiler) implements the register-allocation and register-replacement algorithms to decide which

information to keep in registers and which to keep in main memory. There are also caches that are implemented totally in

hardware. For instance, most systems have an instruction cache to hold the next instructions expected to be executed.

Without this cache, the CPU would have to wait several cycles while an instruction was fetched from main memory. For

similar reasons, most systems have one or more high-speed data caches in the memory hierarchy.

Since caches have limited size, cache management is an important design problem. Careful selection of the cache size and

of a replacement policy can result in 80 to 99 percent of all accesses being in the cache, yielding extremely high

performance.
Hardware Protection and Types of Hardware
Protection
We know that a computer system contains hardware such as the processor, monitor, RAM,
and many more, and one thing the operating system ensures is that these devices
cannot be directly accessed by the user.
Basically, hardware protection is divided into 3 categories: CPU protection, memory
protection, and I/O protection. These are explained below.
1. CPU Protection:
CPU protection means that we cannot give the CPU to a process forever; it
should be for some limited time, otherwise other processes will not get a chance
to execute. For that, a timer is used, which
gives a process a certain amount of time; after the timer
expires, a signal is sent to the process to leave the CPU. Hence a process will
not hold the CPU for too long.
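The timer mechanism can be modeled as a tick counter that preempts a process after its time slice. `TIME_SLICE` and the step lists below are illustrative, not real scheduler parameters:

```python
# Sketch of timer-based CPU protection: a process may not hold the CPU
# past its time slice. A simulated tick counter stands in for the timer.

TIME_SLICE = 3  # ticks a process may run before the timer fires

def run_with_timer(process_steps):
    # Run at most TIME_SLICE steps; then the timer "interrupt" forces
    # the process to yield the CPU.
    executed = []
    for tick, step in enumerate(process_steps):
        if tick == TIME_SLICE:
            return executed, "preempted by timer"
        executed.append(step)
    return executed, "completed"
```

A long process is cut off after three steps, while a short one runs to completion, so no single process can monopolize the CPU.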

2. Memory Protection:
Memory protection addresses the situation when two or more
processes are in memory and one process may access another process's memory.
To prevent this situation, we use two registers:
1. Base register
2. Limit register
The base register stores the starting address of the program and the limit register
stores the size of the process, so when a process wants to access memory,
it is checked whether it can or cannot access that memory.
We must provide memory protection at least for the interrupt vector and the
interrupt service routines of the operating system. In general, however, we want to
protect the operating system from access by user programs and, in addition, to
protect user programs from one another. This protection must be provided by the
hardware. It can be implemented in several ways.
The operating system, executing in monitor mode, is given unrestricted access to
both monitor and users' memory. This provision allows the operating system to
load users' programs into users' memory, to dump out those programs in case of
errors, to access and modify parameters of system calls, and so on.
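The base/limit check described above can be sketched as follows; the register values are arbitrary examples, and the strings stand in for what the hardware actually does (granting the access or trapping to the operating system):

```python
# Sketch of base/limit memory protection: every user-mode address is
# checked against the base and limit registers; addresses outside
# [base, base + limit) trap to the operating system.

BASE = 3000    # base register: start of this process's memory
LIMIT = 1200   # limit register: size of this process's memory

def check_access(address, mode="user"):
    if mode == "monitor":
        return "access granted"          # OS has unrestricted access
    if BASE <= address < BASE + LIMIT:
        return "access granted"
    return "trap: addressing error"      # hardware traps to the OS
```

A user-mode access just past `BASE + LIMIT` traps, while the same address in monitor mode is allowed, which mirrors the unrestricted access the operating system is given.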
3. I/O Protection:
When we ensure I/O protection, the following cases will never occur
in the system:
1. Terminating the I/O of another process
2. Viewing the I/O of another process
3. Giving priority to a particular process's I/O
We know that when an application process wants to access any I/O device, it
should be done through a system call so that the operating system can monitor the
task.

2.5.1 Dual-Mode Operation

To ensure proper operation, we must protect the operating system and all other programs and their data from

any malfunctioning program. Protection is needed for any shared resource. The approach taken is to provide

hardware support to allow us to differentiate among various modes of execution. At the very least, we

need two separate modes of operation: user mode and monitor mode (also called supervisor mode, system

mode, or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate

the current mode: monitor (0) or user (1). With the mode bit, we are able to distinguish between an execution

that is done on behalf of the operating system, and one that is done on behalf of the user.

At system boot time, the hardware starts in monitor mode. The operating system is then loaded, and starts

user processes in user mode. Whenever a trap or interrupt occurs, the hardware switches from user mode to

monitor mode (that is, changes the state of the mode bit to 0). Thus, whenever the operating system gains

control of the computer, it is in monitor mode. The system always switches to user mode (by setting the mode

bit to 1) before passing control to a user program. The dual mode of operation provides us with the means for

protecting the operating system from errant users, and errant users from one another. We accomplish this

protection by designating some of the machine instructions that may cause harm as privileged instructions.

The hardware allows privileged instructions to be executed in only monitor mode. If an attempt is made to

execute a privileged instruction in user mode, the hardware does not execute the instruction, but rather treats

the instruction as illegal and traps to the operating system. The lack of a hardware-supported dual mode can

cause serious shortcomings in an operating system. For instance, MS-DOS was written for the Intel 8088
architecture, which has no mode bit, and therefore no dual mode. A user program running awry can wipe out

the operating system by writing over it with data, and multiple programs are able to write to a device at the

same time, with possibly disastrous results. More recent and advanced versions of the Intel CPU, such as the

80486, do provide dual-mode operation. As a result, more recent operating systems, such as Microsoft

Windows NT, and IBM OS/2, take advantage of this feature and provide greater protection for the operating

system.
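The mode bit and the privileged-instruction trap can be modeled in a few lines. The instruction names in `PRIVILEGED` are illustrative, not a real instruction set:

```python
# Sketch of dual-mode operation: a mode bit (monitor = 0, user = 1)
# and a set of privileged instructions that execute only in monitor mode.

MONITOR, USER = 0, 1
PRIVILEGED = {"set_timer", "io_control", "halt"}

def execute(instruction, mode_bit):
    if instruction in PRIVILEGED and mode_bit == USER:
        # Hardware refuses the instruction and traps to the OS.
        return "trap: illegal instruction"
    return f"executed {instruction}"
```

An ordinary instruction runs in either mode, but a privileged one attempted in user mode is treated as illegal and traps, which is the protection the mode bit buys.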

OS SERVICES
An Operating System provides services to both the users and to
the programs.

 It provides programs an environment to execute.


 It provides users the services to execute the programs in a
convenient manner.
Following are a few common services provided by an operating
system −

 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection
Program execution
Operating systems handle many kinds of activities, from user programs to system
programs like the printer spooler, name servers, file servers, etc. Each of these activities is
encapsulated as a process.
A process includes the complete execution context (code to execute, data to
manipulate, registers, OS resources in use). Following are the major activities of an
operating system with respect to program management −

 Loads a program into memory.


 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.

I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.

 I/O operation means read or write operation with any file or any specific I/O device.
 Operating system provides the access to the required I/O device when required.

File system manipulation


A file represents a collection of related information. Computers can store files on the
disk (secondary storage), for long-term storage purpose. Examples of storage media
include magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of
these media has its own properties like speed, capacity, data transfer rate and data
access methods.
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories. Following are the major
activities of an operating system with respect to file management −

 Program needs to read a file or write a file.


 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.

Communication
In the case of distributed systems, which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages
communications between all the processes. Multiple processes communicate with one
another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −

 Two processes often require data to be transferred between them


 Both the processes can be on one computer or on different computers, but are connected
through a computer network.
 Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.

Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or
in the memory hardware. Following are the major activities of an operating system with
respect to error handling −

 The OS constantly checks for possible errors.


 The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management
In a multi-user or multi-tasking environment, resources such as main memory,
CPU cycles, and file storage must be allocated to each user or job. Following are the
major activities of an operating system with respect to resource management −

 The OS manages all kinds of resources using schedulers.


 CPU scheduling algorithms are used for better utilization of CPU.

Protection
Considering a computer system having multiple users and concurrent execution of
multiple processes, the various processes must be protected from each other's
activities.
Protection refers to a mechanism or a way to control the access of programs,
processes, or users to the resources defined by a computer system. Following are the
major activities of an operating system with respect to protection −

 The OS ensures that all access to system resources is controlled.


 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.
Architecture of an Operating System

Function of an Operating System

An operating system performs each of the following functions:

1. Process management:- Process management helps OS to create and


delete processes. It also provides mechanisms for synchronization and
communication among processes.

2. Memory management:- Memory management module performs the


task of allocation and de-allocation of memory space to programs in
need of this resources.

3. File management:- It manages all the file-related activities such as


organization, storage, retrieval, naming, sharing, and protection of files.

4. Device Management: Device management keeps tracks of all devices.


The module responsible for this task is known as the I/O controller.
It also performs the task of allocation and de-allocation of the devices.

5. I/O System Management: One of the main objectives of any OS is to hide

the peculiarities of hardware devices from the user.

6. Secondary-Storage Management: Systems have several levels of


storage which includes primary storage, secondary storage, and cache
storage. Instructions and data must be stored in primary storage or
cache so that a running program can reference it.
7. Security:- Security module protects the data and information of a
computer system against malware threats and unauthorized access.

8. Command interpretation: This module interprets commands given

by the user and activates system resources to process those commands.

9. Networking: A distributed system is a group of processors which do not


share memory, hardware devices, or a clock. The processors
communicate with one another through the network.

10. Job accounting: Keeping track of the time and resources used by

various jobs and users.

11. Communication management: Coordination and assignment of

compilers, interpreters, and other software resources among the various
users of the computer systems.

Interprocess communication is the mechanism provided by the operating system that


allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or transferring of
data from one process to another.
The models of interprocess communication are as follows:

Shared Memory Model


Shared memory is the memory that can be simultaneously accessed by multiple
processes. This is done so that the processes can communicate with each other. All
POSIX systems, as well as Windows operating systems use shared memory.
Advantage of Shared Memory Model
Memory communication is faster on the shared memory model as compared to the
message passing model on the same machine.
Disadvantages of Shared Memory Model
Some of the disadvantages of shared memory model are as follows:

 All the processes that use the shared memory model need to make sure that they are not
writing to the same memory location.
 Shared memory model may create problems such as synchronization and memory
protection that need to be addressed.

Message Passing Model


Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored on the queue until their recipient
retrieves them. Message queues are quite useful for interprocess communication and
are used by most operating systems.
Advantage of Messaging Passing Model
The message passing model is much easier to implement than the shared memory
model.
Disadvantage of Messaging Passing Model
The message passing model has slower communication than the shared memory model
because the connection setup takes time.
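The message-passing model can be sketched with an in-process queue standing in for the operating system's message queue. Real systems use kernel IPC primitives (pipes, System V or POSIX message queues); `mailbox`, `sender`, and `receiver` here are illustrative names:

```python
# Sketch of the message-passing model: processes exchange messages
# through a queue without sharing memory. queue.Queue stands in for
# the OS-managed message queue.
import queue

mailbox = queue.Queue()

def sender():
    # Message is stored on the queue until the recipient retrieves it.
    mailbox.put("event occurred")

def receiver():
    # Recipient retrieves the message; sender and receiver never touch
    # each other's memory.
    return mailbox.get()
```

The sender and receiver never see each other's data directly, which is what makes the model easier to reason about than shared memory, at the cost of the copy through the queue.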
Virtual Machines in Operating System
Virtual Machine abstracts the hardware of our personal computer such as CPU, disk
drives, memory, NIC (Network Interface Card) etc, into many different execution
environments as per our requirements, hence giving us a feel that each execution
environment is a single computer. For example, VirtualBox.
When we run different processes on an operating system, it creates an illusion that
each process is running on a different processor having its own virtual memory, with the
help of CPU scheduling and virtual-memory techniques. There are additional features of
a process that cannot be provided by the hardware alone like system calls and a file
system. The virtual machine approach does not provide these additional functionalities
but it only provides an interface that is identical to the underlying hardware. Each process is
provided with a virtual copy of the underlying computer system.
We can create a virtual machine for several reasons, all of which are fundamentally
related to the ability to share the same basic hardware yet also support different
execution environments, i.e., different operating systems simultaneously.

The main drawback with the virtual-machine approach involves disk systems. Let us
suppose that the physical machine has only three disk drives but wants to support
seven virtual machines. Obviously, it cannot allocate a disk drive to each virtual
machine, because virtual-machine software itself will need substantial disk space to
provide virtual memory and spooling. The solution is to provide virtual disks.
Users are thus given their own virtual machines. After which they can run any of the
operating systems or software packages that are available on the underlying machine.
The virtual-machine software is concerned with multi-programming multiple virtual
machines onto a physical machine, but it does not need to consider any user-support
software. This arrangement can provide a useful way to divide the problem of designing
a multi-user interactive system, into two smaller pieces.
Advantages:
1. There are no protection problems because each virtual machine is completely
isolated from all other virtual machines.
2. Virtual machine can provide an instruction set architecture that differs from real
computers.
3. Easy maintenance, availability and convenient recovery.
Disadvantages:
1. When multiple virtual machines are simultaneously running on a host computer,
one virtual machine can be affected by other running virtual machines, depending
on the workload.
2. Virtual machines are not as efficient as a real machine when accessing the hardware.
