OSY Chapter 6


Q1. Describe Direct Access -4mk


Direct access, also known as relative access, is a method of accessing files that
allows programs to read and write records rapidly in no particular order.
Here are the key points about direct access:
1. Fixed Length Logical Records: A file is composed of fixed-length
logical records, enabling rapid read and write operations without
following a specific sequence.
2. Disk Model: Direct access is based on the disk model of a file, which
allows random access to any file block. This means you can directly
access any block, such as block 14 or block 53, without having to go
through other blocks sequentially.
3. Numbered Sequence of Blocks: Files are viewed as a numbered
sequence of blocks or records. This numbering allows direct access to
specific blocks.
4. Immediate Access: This method is used for immediate access to large
amounts of information, making it suitable for applications like
databases.
5. Database Access: When a query concerning a particular subject arrives,
the system computes which block contains the answer and reads that
block directly to provide the desired information.
6. Read and Write Operations: The read n operation is used to read the
nth block from the file, while the write n operation is used to write to
that block. The block numbers provided by the user to the operating
system are relative block numbers.
7. Relative Block Number: A relative block number is an index relative to
the beginning of the file. The first relative block of a file is 0, the next is
1, and so on.
8. Absolute Disk Address: The actual absolute disk address of the block is
different from the relative address. The use of relative block numbers
allows the operating system to decide where the file should be placed
and helps prevent the user from accessing portions of the file system
that may not be part of their file.
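The mapping from a relative block number to a byte offset can be sketched in a few lines. This is an illustrative Python sketch, not an OS API: the record size and file name are assumptions made up for the example.

```python
# Direct (relative) access: read or write the nth fixed-length record by
# seeking straight to its byte offset, with no need to pass earlier records.
RECORD_SIZE = 64  # bytes per fixed-length logical record (assumed)

def read_record(path: str, n: int) -> bytes:
    """Read relative record n (the first record is n = 0)."""
    with open(path, "rb") as f:
        f.seek(n * RECORD_SIZE)   # jump directly to relative block n
        return f.read(RECORD_SIZE)

def write_record(path: str, n: int, data: bytes) -> None:
    """Overwrite relative record n in place."""
    with open(path, "r+b") as f:
        f.seek(n * RECORD_SIZE)
        f.write(data.ljust(RECORD_SIZE, b"\x00"))  # pad to record size
```

Note how the user only supplies the relative number n; the operating system translates the resulting offset into an absolute disk address.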
Q2. Describe Sequential access -4mk
Sequential access is a method of accessing information from a file where
data is processed in a specific order, one record after another. This is a
common access mode used by editors and compilers. Here are the key
points about sequential access:
1. Order of Processing: Information is processed in a specific sequence,
one record after another. This is typical for applications like editors and
compilers that access files in a linear fashion.
2. Read Operation: When reading from a file, the operation reads the next
portion of the file and automatically advances the file pointer to the next
position. This ensures that data is read in the correct order.
3. Write Operation: Writing to a file in sequential access mode involves
appending new information to the end of the file. The file pointer is
advanced to the end of the newly written material, maintaining the
sequence.
4. Resetting the File: A file can be reset to the beginning, allowing the
process to start over from the first record.
5. Skipping Records: In some operating systems, it is possible to skip
forward or backward by a specified number of records (n), providing
some flexibility in accessing different parts of the file.
Overall, sequential access is straightforward and efficient for
applications that process data in a linear order.
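The five operations above can be sketched as a small simulation. The class below is purely illustrative (there is no such OS API); it just mirrors read-next, append, reset, and skip on an in-memory list of records.

```python
# Sequential access simulation: a file pointer advances one record at a
# time; writes append at the end; reset rewinds; skip moves by n records.
class SequentialFile:
    def __init__(self, records):
        self.records = list(records)  # the file's records, in order
        self.pos = 0                  # the file pointer

    def read_next(self):
        """Read the next record and advance the file pointer."""
        rec = self.records[self.pos]
        self.pos += 1
        return rec

    def write(self, rec):
        """Append at the end of the file; pointer moves past the new data."""
        self.records.append(rec)
        self.pos = len(self.records)

    def reset(self):
        """Rewind to the first record."""
        self.pos = 0

    def skip(self, n):
        """Skip forward (or backward, with negative n) by n records."""
        self.pos += n
```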
Q3. Contagious File Allocation -6mk

The contiguous allocation method is a file storage technique that requires


each file to occupy a set of contiguous addresses on the disk. Here is a
detailed explanation:
1. Linear Ordering: Disk addresses define a linear ordering on the disk.
This means that the blocks of a file are stored sequentially on the disk.
2. File Allocation: The contiguous allocation of a file is defined by the disk
address of the first block and its length. If a file is 'n' blocks long and
starts at location 'b', it will occupy blocks b, b+1, b+2, ..., b+n-1.
3. Directory Entry: The directory entry for each file indicates the address
of the starting block and the length of the area allocated for the file.
This helps in locating the file on the disk.
4. Access Methods: Contiguous allocation supports both sequential and
direct access. For sequential access, the blocks are read one after
another. For direct access, to access block 'i' of a file starting at block 'b',
the system can directly access block b+i.
5. Space Allocation: The main difficulty with contiguous allocation is
finding space for a new file. If a file to be created is 'n' blocks long, the
system must search the free space list for 'n' free contiguous blocks.
This can be challenging, especially as the disk becomes fragmented over
time.
In summary, while contiguous allocation is efficient for accessing files, it can
be problematic in terms of finding contiguous free space for new files, leading
to potential fragmentation issues.
Advantages of Contiguous File Allocation Method:
1. Supports both sequential and direct access methods.
2. Contiguous allocation is the best form of allocation for sequential
files, since multiple consecutive blocks can be read in at a time.
3. It is also easy to retrieve a single block from a file. For example, if a file
starts at block ‘b’ and the ith block of the file is wanted, its location on
secondary storage is simply b+i.
4. Reading all blocks belonging to each file is very fast.
5. Provides good performance.

Disadvantages of Contiguous File Allocation Method:


1. Suffers from external fragmentation.
2. Very difficult to find contiguous blocks of space for new files.
3. Also, with pre-allocation it is necessary to declare the size of the file at
the time of creation, which is often difficult to estimate.
4. Compaction may be required and it can be very expensive.
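The space-allocation difficulty mentioned above can be made concrete with a first-fit scan. This is a hedged sketch: the free-space list is assumed to be a simple bitmap (True = free block), which is only one possible representation.

```python
# First-fit search for n contiguous free blocks in a free-block bitmap.
# Returns the starting block b of the first run found, or -1 if the disk
# is too fragmented to hold the file contiguously.
def find_contiguous(free, n):
    run_start, run_len = 0, 0
    for i, is_free in enumerate(free):
        if is_free:
            if run_len == 0:
                run_start = i     # a new run of free blocks begins here
            run_len += 1
            if run_len == n:
                return run_start  # found n contiguous free blocks
        else:
            run_len = 0           # run broken by an allocated block
    return -1                     # external fragmentation: no run fits
```

Once a start block b is returned, block i of the file simply lives at b+i, which is why direct access is so cheap under this scheme.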

Q4. Linked Allocation


1. Block-Based Allocation: Allocation is done on the basis of individual
blocks. Each block contains a pointer to the next block in the chain.

2. Scattered Disk Blocks: In linked allocation, the disk blocks that make
up a file can be scattered anywhere on the disk. This means that the
blocks do not need to be contiguous.
3. Directory Pointers: The directory entry for a file contains pointers to
the first and the last blocks of the file. This helps in locating the start
and end of the file.
4. Creating a New File: To create a new file, a new entry is simply added
to the directory. This entry will include pointers to the blocks that will
store the file's data.
5. Linked Allocation: The method involves linking each block to the next
block in the sequence. This is typically done using pointers stored
within each block.
6. No External Fragmentation: Since only one block is needed at a time,
there is no external fragmentation.
7. Dynamic File Size: The size of a file does not need to be declared when
it is created. The file can grow dynamically as long as there are free
blocks available on the disk.
8. Sequential Access: This method is primarily used for files that are
accessed sequentially.
9. Pointer Storage: Linked allocation requires additional space to store
pointers in each block. These pointers link the blocks together in the
correct order.
10. Clusters for Allocation: To reduce the overhead of storing pointers,
clusters (groups of blocks) are sometimes used instead of individual
blocks. However, this can lead to internal fragmentation, where there is
unused space within the allocated clusters.
In summary, linked allocation is a method of file storage where each block
points to the next block, allowing files to be stored in non-contiguous blocks
on the disk. This method avoids external fragmentation and allows files to
grow dynamically, but it is best suited for sequential access and requires
additional space for storing pointers.
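The pointer-chasing described above can be sketched with a table of "next block" pointers (similar in spirit to a file-allocation table). The block numbers and the table below are made up for illustration; -1 marks the end of a file's chain.

```python
# One file's chain of scattered disk blocks: 9 -> 16 -> 1 -> 10 -> 25.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def file_blocks(first):
    """Follow the pointers from the first block to the last."""
    blocks = []
    b = first
    while b != -1:
        blocks.append(b)
        b = next_block[b]  # each block stores the address of the next
    return blocks
```

Walking the chain this way also shows why linked allocation suits sequential access: reaching the ith block requires following i pointers from the start.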

Chapter 3
Q1. Define Process and PCB

A process is defined as a program under execution, which competes for
CPU time and other resources. In short, a process is a program in
execution. A process is also called a job, task, or unit of work.

Process Control Block (PCB)


A Process Control Block (PCB) is a data structure used by the operating
system to store all the information about a process. Each process is
represented by a PCB, which contains several important fields:
Components of PCB
• Process Number (PID):
  - Each process has a unique identifier called the Process
    Identification Number (PID).
  - This ensures that no two processes have the same ID.
• Priority:
  - Each process is assigned a certain level of priority.
  - Priority indicates the preference of one process over another for
    execution.
  - It can be set by users, system managers, or the operating system
    itself.
• Process State:
  - Displays the current status of the process (e.g., new, ready,
    running, waiting, or halted).
• Program Counter:
  - Indicates the address of the next instruction to be executed.
• CPU Registers:
  - These are various registers (like accumulators and stack pointers)
    that hold data and addresses for processing.
  - They must be saved during interrupts so the process can resume
    correctly.
• CPU Scheduling Information:
  - Contains details on process priority, scheduling queues, and other
    relevant scheduling parameters.
• Memory Management Information:
  - Includes details like base and limit registers, page tables, or
    segment tables that the operating system uses for memory
    management.
• Accounting Information:
  - Tracks how much CPU time and real time the process has used,
    along with time limits and job numbers.
• I/O Status Information:
  - Lists the I/O devices allocated to the process, as well as open file
    information.
• File Management:
  - Contains information about all open files for the process and their
    access rights.
• Pointer:
  - Points to another PCB, which helps in managing the scheduling
    list of processes.
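The fields above can be mirrored in a tiny data-structure sketch. The field names and types below are illustrative assumptions, not any real kernel's PCB layout.

```python
# A minimal PCB as a data structure, one field per item listed above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    pid: int                          # unique process number
    priority: int                     # scheduling preference
    state: str = "new"                # new/ready/running/waiting/halted
    program_counter: int = 0          # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # file management info
    cpu_time_used: float = 0.0        # accounting information
    next_pcb: Optional["PCB"] = None  # pointer for scheduling lists
```

The `next_pcb` pointer is what lets the OS chain PCBs together into ready or waiting queues.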

Kernel Level Threads


Kernel Level Threads are threads managed directly by the operating
system's kernel. The key features include:
• Managed by the Kernel: The kernel handles all aspects of thread
  management, including creation, scheduling, and context switching,
  so applications do not need to manage threads themselves.
• Multithreading Capability: Applications can be programmed to
  support multiple threads within a single process, allowing for
  concurrent execution of tasks.
• Context Information: The kernel maintains information about both the
  overall process and individual threads, enabling efficient scheduling and
  management.
• Performance: While kernel threads provide strong management and
  resource allocation, they can be slower to create and manage than user-
  level threads due to the overhead involved in kernel operations.
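A small illustration: on most platforms, Python's `threading` module wraps kernel-level threads (e.g., POSIX threads), so each `start()` below asks the kernel to create and schedule a thread. The worker function and data are made up for the example.

```python
# Kernel-managed threads in practice: creation, scheduling, and context
# switching of these threads are handled by the kernel, not the program.
import threading

results = []
lock = threading.Lock()

def worker(n):
    with lock:                 # the kernel may interleave threads arbitrarily
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()                  # kernel creates and schedules each thread
for t in threads:
    t.join()                   # wait until the kernel retires each thread
```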

Simplified Explanation to Avoid Circular Wait:


• Resource Ordering:
  - To prevent a situation where processes wait on each other, we
    can assign a specific order to resources.
  - Each resource is given a priority number. For example:
    r1 = 1, r2 = 2, r3 = 3, r4 = 4
  - This means that if a process (let's call it Process P) wants to use
    resources r1 and r3, it should first request r1 and then r3.
    This helps maintain order and prevents conflicts.
• Releasing Resources:
  - Another rule is that whenever a process requests a resource (let’s
    say rj), it must first release any resources that have a lower
    priority than rj.
  - In simpler terms, if you want a resource that is higher in the
    priority list, you have to let go of any resources that are lower in
    priority.
Considerations:
• Although this method helps avoid circular waits, it may make things
  more complicated and can lead to inefficient use of resources.
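Resource ordering is exactly the lock-ordering discipline used in practice: if every process acquires locks in increasing rank, a circular wait can never form. The helper below is an illustrative sketch, not a standard API; the ranks mirror the r1..r4 numbering above.

```python
# Circular-wait prevention by resource ordering: always acquire locks
# in ascending rank, regardless of the order the caller names them.
import threading

r1, r2, r3, r4 = (threading.Lock() for _ in range(4))
rank = {id(r1): 1, id(r2): 2, id(r3): 3, id(r4): 4}

def acquire_in_order(*locks):
    """Acquire the given locks in ascending rank, never out of order."""
    for lock in sorted(locks, key=lambda l: rank[id(l)]):
        lock.acquire()
    return locks

# Process P wants r1 and r3: it always takes r1 (rank 1) before r3 (rank 3),
# even if it asked for them as acquire_in_order(r3, r1).
```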
