SPOS EndSem 2023 (FlyHigh Services)


Q1. Explain Different Functions of Loaders → A loader is a utility program which takes object code as input, prepares it for execution, and loads the executable code into memory. Thus the loader is actually responsible for initiating the execution process. The loader is responsible for the activities of allocation, linking, relocation and loading. 1) It allocates space for the program in memory by calculating the size of the program; this activity is called allocation. 2) It resolves the symbolic references (code/data) between the object modules by assigning all the user-subroutine and library-subroutine addresses; this activity is called linking. 3) There are some address-dependent locations in the program, and such address constants must be adjusted according to the allocated space; this activity of the loader is called relocation. 4) Finally, it places all the machine instructions and data of the corresponding programs and subroutines into memory, so the program becomes ready for execution; this activity is called loading.

Q2. Static vs Dynamic Link Libraries → Static link libraries: For a static library, the actual code is extracted from the library by the linker and the final executable is built at compilation time. Compatibility issues do not arise. The executable's size is larger. Examples: all the .lib files.

Dynamic link libraries: For a dynamic link library, the code need not be extracted and copied; rather, it is linked with the executable code at run time. Compatibility issues may arise. The executable is compact in size. Examples: all the .dll files.

Q3. Compile-and-Go Loader → In this type of loader, each instruction is read line by line, its machine code is obtained, and it is directly put into main memory at some known address. That is, the assembler runs in one part of memory, and the assembled machine instructions and data are put directly into their assigned memory locations. After completion of the assembly process, the loader contains the instruction with which the location counter is set to the start of the newly assembled object program. Advantage: 1) This scheme is simple to implement, because the assembler occupies one part of memory and the loader simply loads the assembled machine instructions into memory. Disadvantages: 1) Some portion of memory is occupied by the assembler, which is simply a wastage of memory; since this scheme combines assembler and loader activities, the combined program occupies a large block of memory. 2) There is no production of a .obj file; the source code is directly converted to executable form. Hence, even though there is no modification in the source program, it needs to be assembled and executed each time, which becomes a time-consuming activity.

Q4. Explain Overlay Structure → • Definition of overlay: An overlay is a part of the program that is currently required for the execution of the program, and it has the same load origin as other parts of the program. • Definition of overlay structure: A program containing multiple overlays is called an overlay-structured program. There may be some part of the program that needs to be loaded permanently in memory for the execution of other parts of the program; such a permanently resident part of the program is called the root. Other parts of the program can be loaded into memory as required. The program which controls the loading of the overlays when required is called the overlay manager; this overlay manager is linked with the root. • At the initial stage of program execution, the root is loaded into memory and given control for the execution of the program. It then calls the overlay manager to control the loading of the overlays that are required during program execution. The overlay manager loads the program segments that are currently required for execution, overwriting previously loaded overlays that use the same load origin. • Example: the overlay structure is used in the assembler.
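The four loader functions in Q1 can be sketched with a toy simulation. This is an illustrative example only, not a real loader: the object-module format, the symbol table and the "memory" list are all invented for the sketch.

```python
# Toy illustration of the four loader functions: allocation, linking,
# relocation and loading. The object-module format here is invented.
MEMORY = [0] * 32          # simulated main memory
ORIGIN = 10                # load origin chosen during allocation

module = {
    "size": 4,
    # each word: (value, is_address_constant, external_symbol or None)
    "code": [(5, False, None),        # plain instruction/data
             (2, True,  None),        # address constant, needs relocation
             (0, False, "PRINT"),     # external reference, needs linking
             (7, False, None)],
}
symbols = {"PRINT": 25}    # addresses of library/user subroutines

# 1) Allocation: reserve `size` words starting at ORIGIN.
assert module["size"] <= len(MEMORY) - ORIGIN

loaded = []
for value, is_addr, ext in module["code"]:
    if ext is not None:
        value = symbols[ext]     # 2) Linking: resolve the external symbol
    elif is_addr:
        value += ORIGIN          # 3) Relocation: adjust the address constant
    loaded.append(value)

# 4) Loading: place the machine words into their allocated locations.
MEMORY[ORIGIN:ORIGIN + module["size"]] = loaded
print(MEMORY[ORIGIN:ORIGIN + 4])   # [5, 12, 25, 7]
```

Note how each word is touched by at most one of linking or relocation before it is placed, mirroring the order of activities described above.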
Study material provided by: Vishwajeet Londhe

Q5. General Loader Scheme → In this loader scheme, the source program is converted to an object program by some translator (assembler). The loader accepts these object modules and puts the machine instructions and data in executable form at their assigned memory locations. The loader occupies some portion of main memory. Advantages: 1. The program need not be retranslated each time it is run: when the source program is first translated, an object program is generated, and if the program is not modified, the loader can use this object program to convert it to executable form. 2. There is no wastage of memory, because the assembler is not kept in memory; instead, the loader occupies some portion of memory, and since the loader is smaller than the assembler, more memory is available to the user. 3. It is possible to write source programs as multiple programs in multiple languages, because the source programs are always first converted to object programs, and the loader accepts these object modules and converts them to executable form.

Q6. Absolute Loader → An absolute loader is a kind of loader in which object files are created with absolute addresses; the loader accepts these files and places them at the specified locations in memory. This type of loader is called absolute because no relocation information is needed; rather, it is supplied by the programmer or assembler. The starting address of every module is known to the programmer, and this starting address is stored in the object file, so the task of the loader becomes very simple: place the executable form of the machine instructions at the locations mentioned in the object file. In this scheme, the programmer or assembler should have knowledge of memory management. The resolution of external references and the linking of different subroutines are issues which need to be handled by the programmer. The programmer should take care of two things: first, specification of the starting address of each module to be used. Advantages: 1. It is simple to implement. 2. This scheme allows multiple programs, or source programs written in different languages; if there are multiple programs written in different languages, the respective language assembler converts each to the target language, and a common object file can be prepared with all the address resolution. 3. The task of the loader becomes simpler, as it simply obeys the instructions regarding where to place the object code in main memory. 4. The process of execution is efficient. Disadvantages: 1. In this scheme it is the programmer's duty to adjust all the intersegment addresses and manually do the linking activity; for that, it is necessary for the programmer to know memory management. 2. If any modification is done to some segments, the starting addresses of the immediately following segments may change; the programmer has to take care of this and update the corresponding starting addresses on any modification of the source.

Q7. Real Time OS → Time constraint is the key parameter in real time systems. They control autonomous systems such as robots, satellites, air traffic control and hydroelectric dams. • When the user gives an input to the system, it must be processed within the time limit and the result sent back; a real time system fails if it does not give the result within the time limit. • Real time systems are of two types: hard real time and soft real time. In a hard real time system, a critical task is completed within the time limit; all delays in the system are fixed and time-bounded. Existing general-purpose operating systems do not support hard real time functions, since a real time task cannot be kept waiting for the kernel for long. • A soft real time system is the less restrictive type: it cannot guarantee that it will be able to meet deadlines under all conditions. Examples of soft real time systems are digital telephones and digital audio.
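The absolute loader described in Q6 reduces to copying each record to the address stored in the object file, with no relocation or linking. A minimal sketch; the object-record format is assumed for illustration:

```python
# Minimal absolute-loader sketch: each object record already carries the
# absolute load address supplied by the programmer/assembler, so the
# loader performs no relocation and no linking. Record format is invented.
memory = [0] * 64

object_file = [
    (10, [111, 112, 113]),   # (absolute start address, machine words)
    (30, [201, 202]),
]

for start, words in object_file:
    memory[start:start + len(words)] = words   # just place the words

print(memory[10:13], memory[30:32])   # [111, 112, 113] [201, 202]
```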
Q8. Relocating Loaders → When a single subroutine is changed, all the subroutines need to be reassembled, and the tasks of allocation and linking must be carried out once again. To avoid this rework, a new class of loaders, called relocating loaders, was introduced. • Example: The Binary Symbolic Subroutine (BSS) loader used in the IBM 7094 machine is a relocating loader. • In the BSS loader there are many procedure segments and one common data segment. The assembler reads the source program and assembles each procedure segment independently. This information, along with the intersegment references, is passed on to the loader. Advantages: 1. Using a branch instruction to the corresponding subroutine, the desired subroutine can be accessed; thus the four functions of the loader, i.e. allocation, linking, relocation and loading, are performed automatically by the loader. 2. Using relocation bits, it can be identified which instructions need to be relocated and which can be placed directly. 3. Using the transfer vector, linking of external subroutines with the main procedure can be done; thus the transfer vector of the relocating loader helps in resolving external references. 4. In this loader scheme, the assembler provides additional information, such as the length of the entire program and the length of the transfer vector, to the loader; using this information, the allocation problem is solved. Disadvantages: 1. The transfer-vector links are useful for transferring control to external subroutines, but the vector table is not well suited for loading or storing external data (i.e. data present in another program segment). 2. Due to the transfer vector, the size of the object program in memory increases. 3. The relocating loader handles shared procedure segments, but it cannot handle shared data segments.

Q9. Direct Linking Loaders → • The direct linking loader is a general relocatable loader. • This type of relocatable loader allows the programmer multiple procedure segments and multiple data segments; hence, procedures and data from other segments can be referred to freely by the program. This loader scheme performs the translation of each source program independently. • The source program is read by the assembler, and the assembler submits the following information to the loader: i) the length of the program/segment; ii) a list of symbols in the segment which may be referenced by other segments, with the relative addresses of these symbols within the segment; iii) a list of all the symbols not defined in the segment but referred to by other segments; iv) information about the locations of address constants in the segment, along with a description of how to revise them; v) the translated machine code along with the assigned relative addresses. Advantages of the direct linking loader: 1) The direct linking loader allows the programmer multiple procedure segments and multiple data segments; hence external procedure and data references can be resolved by the direct linking loader. Disadvantages of the direct linking loader: 1) The loading process of the direct linking loader is extremely time consuming, because all the modules need to be allocated, relocated, linked and then loaded. 2) This type of loader requires a lot of space to perform.

Q10. Real Time OS → Time constraint is the key parameter in real time systems. They control autonomous systems such as robots, satellites, air traffic control and hydroelectric dams. • When the user gives an input to the system, it must be processed within the time limit and the result sent back; a real time system fails if it does not give the result within the time limit. • Real time systems are of two types: hard real time and soft real time. In a hard real time system, a critical task is completed within the time limit; all delays in the system are fixed and time-bounded. Existing general-purpose operating systems do not support hard real time functions, since a real time task cannot be kept waiting for the kernel for long. • A soft real time system is the less restrictive type: it cannot guarantee that it will be able to meet deadlines under all conditions. Examples of soft real time systems are digital telephones and digital audio.
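The relocation-bit idea from Q8 can be shown concretely: one bit per assembled word says whether the load origin must be added before the word is placed. An illustrative sketch, not the actual BSS object format:

```python
# Sketch of relocation bits: bit i == 1 means word i holds an address
# that must be adjusted by the load origin; 0 means load it unchanged.
load_origin = 100
words           = [7, 3, 50, 9]   # assembled assuming origin 0
relocation_bits = [0, 0, 1,  0]   # only words[2] is an address

relocated = [w + load_origin if bit else w
             for w, bit in zip(words, relocation_bits)]
print(relocated)   # [7, 3, 150, 9]
```

The same program can thus be loaded at any origin simply by re-running the adjustment with a different `load_origin`.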
Q11. Explain Operating System → An Operating System (OS) is a software program that manages the hardware and software resources of a computer. It acts as an intermediary between the computer's user and the computer hardware. The OS provides a user interface, such as a command line or a graphical user interface, through which users can interact with the computer.

There are several types of OS, including: Single-user, single-tasking OS: allows only one user to execute one task at a time. Multi-user, single-tasking OS: allows multiple users to access the computer simultaneously, but only one task can be executed at a time. Single-user, multi-tasking OS: allows one user to execute multiple tasks at the same time. Multi-user, multi-tasking OS: allows multiple users to execute multiple tasks at the same time.

The functions of an OS include: Memory management: managing and allocating memory to different processes. Process management: creating, scheduling and terminating processes. File management: managing and organizing files on the computer. I/O management: managing input and output operations. Security: protecting the system and data from unauthorized access.

The services provided by an OS include: Program execution: allowing programs to be executed on the computer. I/O operations: providing a way for programs to access and communicate with input/output devices. File system manipulation: providing a way for programs to access and manipulate files. Communication: allowing communication between different processes or programs. Error detection and handling: detecting and handling errors that occur during the execution of a program.

Q12. Views of OS → 1. The process view: this view focuses on how the OS manages and schedules processes. It examines how the OS creates and terminates processes, how it allocates and manages memory for processes, and how it handles inter-process communication. 2. The file view: this view concerns how the OS manages and accesses files. It examines how the OS organizes files on disk, how it manages file permissions and access control, and how it handles file input/output operations. 3. The resource manager view: this view examines how the OS allocates and manages resources such as memory, I/O devices, and other hardware components. It looks at how the OS coordinates access to shared resources, how it resolves conflicts and manages contention for resources, and how it provides isolation and protection between different processes and users. 4. The security view: this view examines how the OS implements security measures to protect the system and data from unauthorized access, viruses and malware. It looks at how the OS authenticates users and processes, how it encrypts and decrypts data, how it detects and responds to security threats, and how it enforces security policies. 5. The virtualization view: this view examines how the OS provides a virtualized environment, how it creates and manages virtual machines, how it allocates resources to virtual machines, and how it isolates them.

Q13. Client-Server and Peer-to-Peer → Client-server model: 1. The client-server model firmly distinguishes the roles of the client and the server. 2. Under this model, the client requests services that are provided by the server. 3. Single point of failure. 4. Need for dedicated application and database servers. 5. Resources may be removed at any time. 6. Storage and bandwidth must be provided by the host. 7. Provides good security. Peer-to-peer model → 1. The peer-to-peer model doesn't have such strict roles; in fact, all nodes in the system are considered peers and thus may act as clients, servers, or both. 2. No single point of failure. 3. No need for dedicated application and database servers. 4. Resources are not removed from the network until they are no longer being requested. 5. Storage and bandwidth are distributed and provided by the entire network. 6. Provides poor security.
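The request/response roles in the client-server model of Q13 can be demonstrated with a minimal TCP echo pair. This is a toy sketch: the echo protocol and message format are invented, and the server handles exactly one request.

```python
import socket
import threading

def start_echo_server():
    # Server socket bound to an ephemeral port (port 0 lets the OS choose).
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)          # receive the client's request
        conn.sendall(b"echo: " + data)  # the server provides the service
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def client_request(port, message):
    # The client's only job is to request the service and await the reply.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.sendall(message)
    reply = cli.recv(1024)
    cli.close()
    return reply

port = start_echo_server()
print(client_request(port, b"hello"))   # b'echo: hello'
```

In a peer-to-peer arrangement, by contrast, each node would run both halves of this code, acting as client or server depending on who initiates the exchange.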
Q14. Batch Operating System → A batch system processes a collection of jobs, called a batch. A batch is a sequence of user jobs. • A job is a predefined sequence of commands, programs and data combined into a single unit. • Each job in the batch is independent of the other jobs in the batch. A user can define a job control specification by constructing a file with a sequence of commands. • Jobs with similar needs are batched together to speed up processing. Card readers and tape drives are the input devices in batch systems; the output devices are tape drives, card punches and line printers. • The primary function of the batch system is to service the jobs in a batch one after another without requiring the operator's intervention. There is no need for human/user interaction with a job while it runs, since all the information required to complete the job is kept in files. Some computer systems only did one thing at a time: they had a list of instructions to carry out, and these would be carried out one after the other. This is called a serial system. The mechanics of development and preparation of programs in such environments are quite slow, with numerous manual operations involved in the process. • A batch monitor is used to implement a batch processing system. The batch monitor is also called the kernel; the kernel resides in one part of the computer's main memory. Advantages of batch systems: 1. Much of the work of the operator is moved to the computer. 2. Increased performance, since a job can start as soon as the previous job finishes. Disadvantages of batch systems: 1. Turnaround time can be large from the user's standpoint. 2. Program debugging is difficult. 3. There is the possibility of a job entering an infinite loop. 4. A job could corrupt the monitor, thus affecting pending jobs. 5. Due to the lack of a protection scheme, one batch job can affect pending jobs.

Q15. Multiprogramming OS → The CPU remains idle in a batch system: at any time either the CPU or an I/O device is idle. To keep the CPU busy, more than one program/job must be loaded for execution; thus multiprogramming increases CPU utilization. • Resource management is the main aim of a multiprogramming operating system. The file system, command processor, I/O control system and transient area are the essential components of a single-user operating system. • A multiprogramming operating system divides the transient area to store multiple programs and provides resource management to the operating system. The concurrent execution of programs improves the utilization of system resources. A program in execution is called a "process", "job" or "task". • When two or more programs are in memory at the same time, sharing the processor, this is referred to as a multiprogramming operating system. Advantages: 1. CPU utilization is high. 2. It increases the degree of multiprogramming. Disadvantages: 1. CPU scheduling is required. 2. Memory management is also required.

Q16. Time Sharing OS → In an interactive system, many users directly interact with the computer from terminals connected to it. Processor time shared among multiple users simultaneously is termed time-sharing. Time sharing is a logical extension of the multiprogramming OS. • Time sharing is a method that allows multiple users to share resources at the same time; multiple users in various locations can use a specific computer system at a time. • Time sharing is essentially a rapid time-division multiplexing of the processor time among several processes. The processor switching is so frequent that it almost seems each process has its own dedicated processor. • A time sharing OS is designed to provide a quick response to sub-requests made by users. The processor time is shared between multiple users at a time: the processor allows each user program to execute for a small time quantum. Moreover, time sharing systems use multiprogramming and multitasking. • The operating system provides immediate feedback to the user, and response time can be in seconds.
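The time-slicing described in Q16 can be simulated with a simple round-robin loop: each process runs for at most one quantum, then the processor switches to the next ready process. The process names and burst times are invented for the example:

```python
from collections import deque

# Round-robin simulation of time sharing: each process gets a fixed
# time quantum, then the processor switches to the next ready process.
quantum = 2
ready = deque([("P1", 5), ("P2", 3), ("P3", 1)])   # (name, remaining time)
timeline = []

while ready:
    name, remaining = ready.popleft()
    ran = min(quantum, remaining)
    timeline.append((name, ran))       # this slice of processor time
    if remaining - ran > 0:
        ready.append((name, remaining - ran))  # back to the ready queue

print(timeline)
# [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```

With a small enough quantum and fast switching, each user's process appears to run continuously, which is exactly the "dedicated processor" illusion described above.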
Q17. Distributed OS → Definition: A distributed system is a collection of autonomous hosts that are connected through a computer network; it is a collection of independent computers that appears to its users as a single coherent system. Each host executes components and operates a distribution middleware. • The middleware enables the components to coordinate their activities, and users perceive the system as a single, integrated computing facility. A distributed computer system consists of multiple software components that reside on multiple computers but run as a single system. The computers in a distributed system can be physically close together and connected by a local network. A distributed system can consist of any number of possible configurations, such as mainframes, personal computers, workstations, minicomputers and so on. • Distributed operating systems depend on networking for their operation. A distributed OS runs on and controls the resources of multiple machines; it provides resource sharing across the boundaries of a single computer system and looks to users like a single-machine OS.

Q18. Process vs Program → Process: 1. A process is an active entity. 2. A process is a sequence of instruction executions. 3. A process exists for a limited span of time. 4. A process is a dynamic entity. Program: 1. A program is a passive entity. 2. A program contains the instructions. 3. A program exists at a single place and continues to exist. 4. A program is a static entity.

Q19. Process Control Block (PCB) → The operating system keeps an internal data structure to describe each process it manages. When the OS creates a process, it creates this process descriptor; in some operating systems it is called the Process Control Block (PCB). The information stored in a PCB can include: 1. Process state: the current state of the process, such as whether it is running, blocked, or waiting. 2. Program counter: the memory address of the next instruction to be executed by the process. 3. CPU registers: the contents of the CPU registers used by the process. 4. Memory management information: the base and limit registers, which define the memory boundaries for the process, as well as any page or segment tables used for memory management. 5. System resources: information about any system resources the process is currently using, such as I/O devices or semaphores.

Q20. Process States → In an operating system, the process state refers to the current condition or status of a process. The process state can change as the process is executed by the system. The various states a process can be in are: 1. New: the process is being created and is not yet ready to be executed. 2. Running: the process is currently being executed by the CPU. 3. Waiting: the process is waiting for an event, such as input/output or a resource, to occur before it can continue execution. 4. Ready: the process is ready to be executed by the CPU but is currently waiting for the CPU to become available. 5. Terminated: the process has completed execution or has been terminated by the operating system.

Q21. Thread vs Process → Thread: 1. A thread is also called a lightweight process. 2. The operating system is not required for thread switching. 3. One thread can read, write or even completely clean another thread's stack. 4. All threads can share the same set of open files and child processes. 5. If one thread is blocked and waiting, a second thread in the same task can run. 6. Uses fewer resources. Process: 1. A process is also called a heavyweight process. 2. An operating system interface is required for process switching. 3. Each process operates independently of the other processes. 4. In multiple processing, each process executes the same code but has its own memory and file resources. 5. If one server process is blocked, another server process cannot execute until the first process is unblocked. 6. Uses more resources.
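The PCB fields of Q19 and the five-state model of Q20 can be combined into one small sketch. The field names are simplified for illustration, and the allowed transitions follow the state descriptions in Q20:

```python
from dataclasses import dataclass, field

# Simplified PCB with the five-state model (Q20). Only legal transitions
# are allowed; e.g. a process must be Ready before it can Run.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

@dataclass
class PCB:
    pid: int
    state: str = "new"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal: {self.state} -> {new_state}")
        self.state = new_state

p = PCB(pid=1)
for s in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    p.move_to(s)
print(p.state)   # terminated
```

A real PCB carries far more (scheduling priority, open-file table, accounting data), but the idea is the same: the OS updates this one record on every state change of the process.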
Q22. Thread States → A thread, also known as a lightweight process, is a separate execution path that can run concurrently with other threads in a process. The thread life cycle refers to the series of states that a thread goes through from its creation to its termination. The thread life cycle can vary depending on the operating system, but in general it includes the following states: 1. New: the thread is being created and is not yet ready to be executed. 2. Ready: the thread is ready to be executed by the CPU but is currently waiting for the CPU to become available. 3. Running: the thread is currently being executed by the CPU. 4. Waiting: the thread is waiting for an event, such as input/output or a resource, to occur before it can continue execution. 5. Terminated: the thread has completed execution or has been terminated by the operating system.

Q23. Long Term vs Short Term vs Medium Term → Long term: 1. It is the job scheduler. 2. Its speed is less than that of the short term scheduler. 3. It controls the degree of multiprogramming. 4. Absent or minimal in time sharing systems. 5. It selects processes from the pool and loads them into memory for execution. 6. Process state change: New to Ready. 7. It selects a good mix of I/O-bound and CPU-bound processes.

Short term: 1. It is the CPU scheduler. 2. Its speed is very fast. 3. Less control over the degree of multiprogramming. 4. Minimal in time sharing systems. 5. It selects from among the processes that are ready to execute. 6. Process state change: Ready to Running. 7. It selects a new process for the CPU quite frequently.

Medium term: 1. It is swapping. 2. Its speed is in between the other two. 3. It reduces the degree of multiprogramming. 4. Time sharing systems use a medium term scheduler. 5. A process can be reintroduced into memory and its execution continued.

Q24. What is Deadlock and Conditions → A deadlock is a condition in an operating system where two or more processes are unable to proceed because each is waiting for one of the others to do something. In other words, a deadlock is a state in which two or more processes are blocked, each waiting for a resource that the other process has locked. This results in a situation where no process can continue to execute, and the system becomes unresponsive. There are four main conditions that must be met for a deadlock to occur: 1. Mutual exclusion: at least one resource must be held in a non-shareable mode, meaning that only one process can use the resource at a time. 2. Hold and wait: a process must be holding at least one resource while waiting for another resource that is currently held by another process. 3. No preemption: resources cannot be taken away from a process, meaning that a process can only release a resource voluntarily. 4. Circular wait: there must be a circular chain of processes, where each process is waiting for a resource held by the next process in the chain.

Q25. Internal Fragmentation and External Fragmentation → Internal fragmentation: 1. Internal fragmentation is the area occupied by a process that cannot be used by the process. 2. First fit and best fit memory allocation do not suffer from internal fragmentation. 3. In fixed partitioning, there is inefficient use of memory due to internal fragmentation. 4. Paging suffers from internal fragmentation. 5. Segmentation does not suffer from internal fragmentation.

External fragmentation: 1. External fragmentation exists when the total free memory is enough for a new process but is not contiguous and so cannot satisfy the request. 2. First fit and best fit memory allocation suffer from external fragmentation. 3. In dynamic partitioning, there is inefficient use of the processor due to the need for compaction to counter external fragmentation. 4. Paging does not suffer from external fragmentation. 5. Segmentation suffers from external fragmentation.
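The circular-wait condition of Q24 can be checked mechanically on a wait-for graph: if following the "waits for" edges from a process leads back to that process, the chain is circular and deadlock is present. A minimal sketch with an invented graph:

```python
# Wait-for graph: process -> process it is waiting on (None = not waiting).
# A cycle in this graph is exactly the circular-wait condition (Q24, 4).
waits_for = {"P1": "P2", "P2": "P3", "P3": "P1", "P4": None}

def in_cycle(start):
    seen, p = set(), start
    while p is not None and p not in seen:
        seen.add(p)
        p = waits_for.get(p)
    return p == start            # walked back to where we began?

print(in_cycle("P1"), in_cycle("P4"))   # True False
```

Here P1, P2 and P3 form a circular chain and are deadlocked, while P4 holds no one up and is not blocked.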
Q26.Dead Lock recovery ➔ Deadlock recovery Q27.Memory Partitioning Variable size → The
is the process of resolving a deadlock situation use of unequal size partitions provides a
in an operating system. The goal of deadlock degree of flexibility to fixed partitioning. In
recovery is to release the resources that are dynamic partitioning, the partitions are of
causing the deadlock, allowing the processes variable length and number. In noncontiguous
to continue execution. There are several ways memory allocation, a program is divided into
to recover from a deadlock, including: blocks that the system may place in
1.Terminating one or more processes: This is nonadjacent slots in main memory. This
the simplest and most straightforward allocation method do not suffer from internal
approach to deadlock recovery. By terminating fragmentation, because a process partition is
a process that is involved in the deadlock, the exactly the size of the process. Operating
resources it holds are freed and can be system maintain the table which contains the
allocated to the other processes. However, this memory areas allocated to process and free
approach can lead to data loss and can have memory. Memory management unit use this
negative impacts on the overall system. information for allocating processes.
2.Rolling back: This approach involves undoing
Q28.Contiguous allocation and Non-
the actions of one or more processes that have
Contiguous allocation: →Contiguous
led to the deadlock. By rolling back the actions
allocation: 1.Program execution take place
of the processes, the resources they hold can
without overhead. 2.Swapped-in processes are
be freed and allocated to the other processes.
This approach is less destructive than terminating processes, but it can be more complex to implement. 3.Timeout mechanisms: This approach involves setting a time limit for each process to acquire a resource. If a process cannot acquire a resource within the time limit, it is terminated. This approach can prevent deadlocks from occurring in the first place and can be useful in real-time systems. 4.Resource allocation graph: This approach involves using a graph data structure to represent the resources and processes in the system. By analyzing the graph, it is possible to detect and resolve a deadlock. This approach can be more efficient than other methods, but it can be more complex to implement. 5.Priority-based resource allocation: This approach involves assigning a priority level to each process and each resource. When a deadlock occurs, the process with the highest priority is allocated the resource it needs, while the process with the lowest priority is terminated or rolled back.

placed in the original area. 3.Suffers from internal fragmentation. 4.Allocates a single area of memory for a process. 5.Wastage of memory. Non-contiguous allocation: 1.Address translation is an overhead. 2.Swapped-in processes can be placed anywhere in memory. 3.Only paging suffers from internal fragmentation. 4.Allocates more than one block of memory for a process. 5.No wastage of memory.

Q29.Compaction → Compaction solves the problem of external fragmentation. The operating system moves all the free holes to one side of main memory and creates one large block of free space. It must be performed on each new allocation of a process to memory, or on completion of a process. The system must also maintain relocation information. All free blocks are brought together as one large block of free space. Compaction requires dynamic relocation. Compaction has a cost, and selecting an optimal compaction strategy is difficult. One method of compaction is swapping out the processes that are to be moved within memory and swapping them back in at different memory locations. Compaction is not always possible.
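The compaction step described in Q29 can be illustrated with a minimal simulation. This is only a sketch: `memory` is a hypothetical word-addressed layout where each cell holds a process id or None for a free hole, and relocation bookkeeping is left out.

```python
# Sketch of memory compaction: slide every allocated cell toward the low
# end of memory so that all free holes merge into one large block.
# `memory` is a hypothetical layout; None marks a free hole.

def compact(memory):
    allocated = [cell for cell in memory if cell is not None]
    holes = len(memory) - len(allocated)
    # All occupied cells end up adjacent; a real OS would also have to
    # update the relocation information of every moved process here.
    return allocated + [None] * holes

memory = ['A', None, 'B', None, None, 'C', None, 'A']
print(compact(memory))  # ['A', 'B', 'C', 'A', None, None, None, None]
```

After compaction the scattered one-cell holes have become a single four-cell free block, which is exactly what makes a previously unsatisfiable large request succeed.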
Q30.First, Best, Worst Fit →First fit: In the first fit algorithm, the memory manager begins at the start of the memory space and looks for the first available block of memory that is large enough to accommodate the requested allocation. If the first available block is not large enough, the memory manager continues to search through the remaining memory blocks until it finds one that is suitable. This algorithm is simple and fast, but it can lead to fragmentation of memory over time, as smaller blocks of memory may be left unused.

Best fit: In the best fit algorithm, the memory manager searches through all available memory blocks to find the block that best fits the requested allocation. This means that the memory manager looks for the block that is closest in size to the requested allocation without being smaller than it. This algorithm tends to result in less fragmentation of memory, as it tries to use the most efficient amount of memory possible. However, it can be slower, as it has to search through all available memory blocks.

Worst fit: In the worst fit algorithm, the memory manager searches through all available memory blocks to find the block that is the largest, and assigns the requested allocation to that block. This algorithm can result in a lot of fragmentation of memory, as it may leave large blocks of memory unused. It can also lead to poor performance, as the memory manager may have to search through a large number of memory blocks to find the largest one.

Q31.Segmentation →Segmentation is a memory management technique that divides a computer's memory into different segments, each of which is used for a specific purpose. This allows for more efficient use of memory by allocating specific areas for specific types of data or programs. In a segmented memory system, each program is divided into multiple segments, such as code, data, and stack segments. The code segment contains the program's instructions, the data segment contains the program's variables and data, and the stack segment contains the program's runtime stack. Each segment is assigned a unique base address and a length, allowing the operating system to keep track of the location and size of each segment. Segmentation provides the following benefits: 1.Protection: Each segment has its own access rights, and the operating system can prevent programs from accessing memory segments that they are not authorized to use. 2.Sharing: Programs can share segments that contain common data or code. 3.Reusability: Because segments containing common data or code can be shared, the amount of memory required is reduced.

Q32.Virtual Memory →Virtual memory is a memory management technique that allows a computer to use more memory than is physically available by combining physical memory with disk storage. This allows a computer to run multiple programs simultaneously and to use more memory than the amount of physical memory available. The virtual memory system creates a virtual address space for each program, which is separate from physical memory. The virtual addresses used by the program are translated into physical addresses by the memory management unit (MMU) in the computer's central processing unit (CPU). When a program requests more memory than is available in physical memory, the operating system uses a technique called paging to temporarily transfer some of the data from physical memory to disk storage. This data is stored in a paging file, also known as a swap file. When the program needs the data again, it is transferred back to physical memory. This process is known as swapping or page swapping. Virtual memory thus lets the operating system use disk storage as an extension of physical memory.
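The three placement strategies of Q30 can be sketched as one function over a list of free hole sizes. This is a simplified model: the hole sizes and the request value below are made-up examples, and real allocators track addresses as well as sizes.

```python
# Sketch of first-fit, best-fit, and worst-fit placement over a list of
# free hole sizes. Returns the index of the chosen hole, or None if no
# hole is large enough.

def place(holes, request, strategy):
    # Candidate holes are those at least as large as the request.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == 'first':
        return min(candidates, key=lambda c: c[1])[1]  # lowest index that fits
    if strategy == 'best':
        return min(candidates)[1]                      # smallest hole that fits
    if strategy == 'worst':
        return max(candidates)[1]                      # largest hole overall
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
print(place(holes, 212, 'first'))  # 1 (500 is the first hole >= 212)
print(place(holes, 212, 'best'))   # 3 (300 is the closest fit >= 212)
print(place(holes, 212, 'worst'))  # 4 (600 is the largest hole)
```

The same request lands in three different holes, which is the whole trade-off: first fit is fastest, best fit minimizes the leftover sliver, worst fit keeps the leftover sliver as large as possible.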
Q33.Thrashing →Thrashing is a situation that occurs in virtual memory systems when the operating system is constantly swapping data between disk storage and physical memory. This happens when the system is running low on physical memory and the operating system is unable to find enough free memory to run the programs that are currently in use. Several factors can cause thrashing: Insufficient physical memory: If the system does not have enough physical memory to run the programs that are currently in use, the operating system will have to constantly swap data between disk storage and physical memory. Memory leaks: Some programs may consume more memory than they need, causing the system to run low on memory. High fragmentation: If the system has a lot of small, unused blocks of memory, the operating system may not be able to find a large enough block of memory to run a program, causing it to constantly swap data between disk storage and physical memory.

Q35.Swapping →Swapping is a memory management technique used by the operating system to temporarily transfer data from physical memory to disk storage when the computer is running low on physical memory. The transferred data is stored in a swap file, also known as a paging file, on the hard disk. When the operating system detects that physical memory is running low, it begins to look for data that is no longer being used by a program. This data is transferred to the swap file, and the physical memory is freed up for other programs to use. When the program needs the data again, the operating system transfers it back to physical memory from the swap file. This process is known as swapping or page swapping. Swapping is typically used in conjunction with virtual memory, which allows a computer to use more memory than is physically available by combining physical memory with disk storage. When a program requests more memory than is available in physical memory, the operating system uses swapping to temporarily transfer some of the data from physical memory to disk storage.

Q34.Explain TLB →TLB stands for Translation Lookaside Buffer. It is a memory management technique used to speed up the translation of virtual memory addresses to physical memory addresses. The TLB is a small, high-speed cache that stores recently used virtual-to-physical address translations. When a program accesses memory, the CPU first checks the TLB to see if the required translation is already stored there. If it is, the CPU can quickly access physical memory without having to perform the translation again. This speeds up memory access, as the CPU does not have to repeat the translation every time. If the translation is not in the TLB, the CPU must perform it by consulting the page table, which is stored in main memory. This process is slower than accessing the TLB, so it is important that the TLB is large enough to store the most frequently used translations.
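The TLB lookup path described in Q34 can be sketched as a small dictionary cache sitting in front of a page table. All page and frame numbers here are made-up values, and real TLBs are fixed-size hardware caches with eviction, which this sketch omits.

```python
# Sketch of a TLB in front of a page table: the TLB is checked first,
# and only on a miss is the (slower) in-memory page table consulted.

page_table = {0: 5, 1: 9, 2: 3}   # hypothetical page -> frame mapping
tlb = {}                          # cache of recently used translations
stats = {'hit': 0, 'miss': 0}

def translate(page):
    if page in tlb:               # TLB hit: fast path, no page-table walk
        stats['hit'] += 1
    else:                         # TLB miss: walk the page table, cache result
        stats['miss'] += 1
        tlb[page] = page_table[page]
    return tlb[page]

for page in [0, 1, 0, 0, 2, 1]:
    translate(page)
print(stats)  # {'hit': 3, 'miss': 3}
```

Repeated accesses to the same page hit in the TLB, which is why locality of reference makes the TLB effective even though it only holds a handful of entries.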
