Computer Systems Unit 2 - Fill The Blanks
Process
1. An operating system (OS) is software that controls computer hardware and ____________.
2. A process is the execution of a ____________.
3. Spawning is the technique of producing child processes from a parent ____________.
4. PCB is the ____________ for each process.
5. It consists of the process state, program counter, ____________-management information, and CPU registers.
6. The types of process scheduling queues are the job queue, ready queue, and ____________ queues.
7. Throughout its execution, a process can generate numerous new processes via the create-process system ____________.
8. The process that generates new processes is termed as the ____________ process.
9. The resulting processes are called its ____________.
10. Most operating systems assign a unique process identifier (PID), consisting of an integer, to each ____________.
11. Whenever a process generates a new process, there are two execution options: The parent process proceeds to execute alongside its
____________.
12. The parent awaits the termination of some or all of ____________ children.
13. Also, the address space of the new process has two options: the child process is an exact replica of the parent process, or a new ____________ is loaded into the child.
14. In UNIX, the fork() system call is used to create new processes; the exec() system call is used after a fork to replace the ____________ space of the process with a new program.
15. A process ends when it finishes executing its last statement and requests deletion from the OS using the ____________ system call.
16. The process performs the final statement and requests the OS to determine the next ____________ (exit).
17. Output data is transmitted from the child process to the parent process via ____________.
18. The OS deallocates the process's ____________.
19. A parent may terminate the execution of one of its children for several reasons, including that the child has exceeded the resources assigned to it or the child process is ____________.
20. If the parent process is exiting, then the OS prohibits the continuation of its child processes. This is termed ____________ termination.
Answers
1. Software
2. Program
3. Process
4. Information block
5. Memory
6. Device
7. Call
8. Parent
9. Child
10. Process
11. Offspring
12. Its
13. Program
14. Memory
15. Exit()
16. Step
17. Wait()
18. Resources
19. No longer necessary
20. Cascading
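Worked example (for reference). A minimal C sketch of the fork()/exec()/wait()/exit() flow covered by the questions in this section, assuming a POSIX system; the program passed to exec ("ls") is purely illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                 /* create-process system call */
        if (pid < 0) {
            perror("fork");                 /* fork failed */
            exit(1);
        } else if (pid == 0) {              /* child: replace its address space */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");               /* reached only if exec fails */
            exit(1);
        } else {                            /* parent: wait for its child */
            int status;
            waitpid(pid, &status, 0);
            printf("child %d terminated\n", (int)pid);
        }
        return 0;                           /* process ends; the OS deallocates its resources */
    }

The parent could instead continue executing alongside the child simply by not calling waitpid().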
Threads
1. A thread is a basic unit of ____________ within a process.
2. Threads share the same ____________ and ____________ space as the process that created them.
3. Threads within the same process can communicate via ____________ variables.
4. The main advantages of using threads are improved ____________ and ____________ sharing.
5. Threads within a process share the same ____________ and ____________ space.
6. Threads within a process can execute ____________ with one another.
7. Threads enhance the responsiveness of a system by allowing ____________ execution.
8. Multithreading can lead to ____________ conditions when multiple threads try to access the same resource simultaneously.
9. Mutexes and ____________ are used to prevent race conditions in multithreaded environments.
10. A thread can be in states like ____________, ____________, ____________, and ____________.
11. When a thread finishes its task, it enters the ____________ state.
12. The process in which a single thread is used to execute multiple tasks is known as ____________ threading.
13. Thread ____________ allows a thread to relinquish the CPU so that other threads can execute.
14. Synchronization mechanisms like ____________ ensure that only one thread can access a resource at a time.
15. Thread ____________ involves a thread waiting for a signal or notification from another thread.
16. Deadlock occurs when multiple threads are unable to proceed because each is waiting for a resource held by another in a ____________
loop.
17. A semaphore is a synchronization construct that controls access to a ____________ of resources.
18. The process of creating multiple threads within a process is called ____________.
19. Threads are lighter than processes and have ____________ overhead in terms of resource consumption.
20. Thread ____________ involves the termination of a thread's execution.
Answers
1. Execution
2. Resources, Address
3. Shared
4. Responsiveness, Resource
5. Memory, Address
6. Concurrently
7. Concurrent
8. Race
9. Semaphores
10. Ready, Running, Blocked, Terminated
11. Terminated
12. Cooperative
13. Yielding
14. Locks
15. Waiting
16. Circular
17. Pool
18. Multithreading
19. Lower
20. Termination
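Worked example (for reference). A minimal C sketch using POSIX threads (pthreads, an assumed library since the questions above do not name one) showing thread creation, a shared variable protected by a mutex, and termination via join; compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                          /* shared variable */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);                /* prevents a race condition */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;                                  /* thread enters the terminated state */
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);      /* both threads share the address space */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);                       /* wait for both threads to terminate */
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);
        return 0;
    }

Without the mutex, the two threads would race on counter and the final value would be unpredictable.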
Unix Signals
1. In Unix, when a registered signal handler is triggered, the ____________ stops the usual execution flow.
2. Unix signals, like SIGINT for interrupt, SIGSEGV for segmentation violation, can be sent to a process using the ____________ command or
the kill system call.
3. The signal function in Unix allows configuring a ____________ in response to a specific signal.
4. A process can take action according to a signal using a ____________ handler triggered whenever the signal arrives.
5. Signal handling in Unix is used for constructing various types of IPC, including interaction between parent and child processes and
among processes in a ____________.
6. In multi-threaded processes, signals can be utilized for synchronization and coordination among ____________.
7. Table 2.1 lists various default signals in Unix, including SIGHUP, SIGINT, SIGQUIT, and ____________.
8. Every signal in Unix is mapped to a specific ____________ that it triggers by default.
9. The default actions for signals include stopping the process, disregarding the signal, dumping ____________, halting the process, and
resuming a stopped process.
10. To send signals to a program or script in Unix, users can press ____________ on their keyboard or use the kill command with syntax like
"kill -signal pid."
Answers
1. Kernel
2. Kill
3. Signal handler
4. Signal
5. Pipeline
6. Threads
7. SIGFPE
8. Action
9. Core
10. Ctrl+C or INTERRUPT key
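Worked example (for reference). A small C sketch of registering a handler for SIGINT (delivered by Ctrl+C or "kill -INT pid"), using the standard sigaction interface; the printed message is illustrative.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void handle_sigint(int signum) {
        (void)signum;
        got_sigint = 1;                    /* handlers should only set a flag */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handle_sigint;     /* configure the handler for SIGINT */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);

        while (!got_sigint)
            pause();                       /* suspend until a signal is delivered */

        printf("caught SIGINT, exiting cleanly\n");
        return 0;
    }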
Inter-Process Communication
1. IPC stands for Inter-Process ____________.
2. IPC allows communication and data exchange between ____________ in an operating system.
3. In Unix, pipes are a form of IPC, enabling ____________ communication between processes.
4. Shared memory is a type of IPC where processes share a common ____________ area.
5. Message Passing is a mechanism in IPC where processes communicate through exchanging ____________.
6. Semaphores are used in IPC to control access to shared ____________.
7. IPC mechanisms are crucial for synchronization and coordination between ____________.
8. Mutexes are employed in IPC to prevent ____________ conditions when multiple processes access shared resources.
9. In the client-server model, IPC is essential for communication between ____________ and ____________ processes.
10. Signals in IPC are used for ____________ between processes.
11. A critical section in IPC is a part of code where shared resources are ____________ by only one process at a time.
12. Race conditions in IPC occur when multiple processes attempt to access and ____________ shared resources simultaneously.
13. In IPC, pipes have a ____________ end for writing and a ____________ end for reading.
14. Message queues are a form of IPC where processes exchange ____________ messages.
15. In IPC, a deadlock can occur when two or more processes are ____________ for resources held by each other.
16. IPC is essential for communication between parent and ____________ processes.
17. In IPC, synchronization constructs like ____________ and ____________ are used to coordinate processes.
18. IPC mechanisms like ____________ are used to prevent multiple processes from accessing a shared resource simultaneously.
19. Shared memory in IPC allows processes to communicate by reading and writing data in a ____________ region.
20. In IPC, the client sends a ____________ to request a service from the server.
Answers
1. Communication
2. Processes
3. Unidirectional
4. Memory
5. Messages
6. Resources
7. Processes
8. Race
9. Client, Server
10. Signaling
11. Accessed
12. Modify
13. Write, Read
14. Asynchronous
15. Waiting
16. Child
17. Locks, Semaphores
18. Mutexes
19. Shared
20. Request
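Worked example (for reference). A C sketch of unidirectional pipe IPC between a parent and a child process, assuming a POSIX system; the message text is illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                         /* fd[0] = read end, fd[1] = write end */
        char buf[64];

        if (pipe(fd) == -1)
            return 1;

        if (fork() == 0) {                 /* child: the writer */
            close(fd[0]);                  /* close the unused read end */
            const char *msg = "hello from the child";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                      /* parent: the reader */
        read(fd[0], buf, sizeof buf);
        printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }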
CPU Scheduling
1. CPU scheduling is a vital aspect of ____________ management in an operating system.
2. The goal of CPU scheduling is to optimize CPU ____________ among processes.
3. A ____________ scheduler selects a process from the ready queue and assigns the CPU.
4. CPU scheduling algorithms aim to enhance ____________ and ____________ utilization.
5. In preemptive scheduling, a process can be forcibly ____________ from the CPU.
6. Non-preemptive scheduling allows a process to complete its ____________ time on the CPU.
7. The ____________ switch time is the time taken to switch the CPU from one process to another.
8. Round Robin is a widely used ____________ scheduling algorithm.
9. In Priority scheduling, the process with the ____________ priority is given the CPU first.
10. Shortest Job First (SJF) scheduling selects the process with the ____________ burst time.
11. First-Come-First-Serve (FCFS) scheduling follows the principle of ____________ come, first served.
12. In Multilevel Queue scheduling, processes are divided into ____________ priority levels.
13. The Aging technique in scheduling prevents ____________ processes from waiting indefinitely.
14. CPU ____________ is the ratio of non-idle CPU time to total time.
15. In priority scheduling, processes with ____________ priorities can potentially starve.
16. In Priority scheduling, a ____________ value is assigned to each process to determine its priority.
17. Multilevel Feedback Queue scheduling allows processes to ____________ between queues.
18. In SJF scheduling, the ____________ burst time is unpredictable, leading to difficulties in implementation.
19. ____________ scheduling algorithms are those that can adapt their behavior based on the characteristics of the workload.
20. Priority Inversion is a phenomenon where a low-priority process holds a resource needed by a ____________ process.
21. CPU ____________ is the time a process spends actively executing instructions on the CPU.
22. ____________ is the time it takes for a process to move from the new state to the ready state.
23. The ____________ time is the time a process spends waiting in the ready queue.
24. In ____________ scheduling, the process with the highest priority is selected first.
25. In ____________ scheduling, each process gets a small unit of CPU time in turn.
26. In ____________ scheduling, the ready queue is treated as a circular queue.
27. The ____________ time is the total time elapsed from the submission of a process to its completion.
28. In ____________ scheduling, the process with the smallest remaining burst time is selected.
29. The turnaround time is the total time taken by a process, including both ____________ and ____________ time.
30. In Multilevel Queue scheduling, processes in the ____________ priority queue are executed first.
Answers
1. Process
2. Utilization
3. Short-term
4. Throughput, CPU
5. Removed
6. Burst
7. Context
8. Time-sharing
9. Highest
10. Shortest
11. First
12. Distinct
13. Low-priority
14. Utilization
15. Lower
16. Priority
17. Move
18. Next
19. Adaptive
20. Higher-priority
21. Execution
22. Waiting
23. Waiting
24. Priority
25. Round-robin
26. Round-robin
27. Turnaround
28. Shortest Remaining Time First (preemptive SJF)
29. Waiting, Execution
30. Highest
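Worked example (for reference). A C sketch that computes the waiting and turnaround times defined above for non-preemptive FCFS; the three burst times (24, 3, 3 ms) are made-up illustration values.

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};                   /* CPU burst times, all arriving at t = 0 */
        int n = 3;
        int waiting = 0, total_wait = 0, total_turnaround = 0;

        for (int i = 0; i < n; i++) {
            int turnaround = waiting + burst[i];    /* turnaround = waiting + execution time */
            printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
            total_wait += waiting;
            total_turnaround += turnaround;
            waiting += burst[i];                    /* later processes also wait for this burst */
        }
        printf("average waiting=%.2f average turnaround=%.2f\n",
               (double)total_wait / n, (double)total_turnaround / n);
        return 0;
    }

Running the same jobs shortest-first (SJF order 3, 3, 24) would lower the average waiting time, which is the motivation behind questions 10 and 28.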
Memory Management
1. Memory management is a crucial aspect of ____________ systems.
2. The primary goal of memory management is to provide an efficient ____________ space to processes.
3. In a multiprogramming environment, ____________ is shared among multiple processes.
4. The OS uses a ____________ to keep track of the status of each memory location (free, allocated, etc.).
5. Contiguous memory allocation involves allocating a ____________ block of memory to a process.
6. Paging is a memory management scheme that divides memory into ____________ blocks.
7. A page table is used in paging to map ____________ addresses to physical addresses.
8. The ____________ size and frame size must be powers of 2 in a paging system.
9. Fragmentation can occur in memory, leading to ____________ of available memory.
10. Virtual memory allows processes to execute even when they are larger than the ____________ memory size.
11. Thrashing occurs when the OS spends more time ____________ pages than executing processes.
12. In segmentation, a process is divided into ____________ segments, each with a different purpose.
13. The Translation Lookaside Buffer (TLB) is a cache for storing ____________ page table entries.
14. In demand paging, pages are only brought into ____________ when they are needed.
15. Page replacement algorithms like FIFO and LRU are used to decide which ____________ to replace.
16. The working set model helps determine the ____________ set of pages a process needs.
17. A memory leak occurs when a program fails to ____________ memory it no longer needs.
18. The OS uses the base register and the ____________ register to implement relocation and protection in memory.
19. Compaction is a technique used to reduce ____________ fragmentation in memory.
20. The buddy system is a memory allocation technique based on ____________ block sizes.
Answers
1. Operating
2. Address
3. Memory
4. Memory table
5. Contiguous
6. Equal-sized
7. Logical
8. Page
9. Wastage
10. Physical
11. Swapping
12. Disjoint
13. Recently-accessed
14. Memory
15. Pages
16. Current
17. Release
18. Limit
19. External
20. Power of 2
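Worked example (for reference). A C sketch of relocation and protection with base and limit registers (question 18): a logical address is legal only if it is below the limit, and the physical address is base + logical. The register values are made up for illustration.

    #include <stdio.h>

    /* Returns 0 and fills *physical on success, -1 on an addressing error (trap). */
    int translate(unsigned base, unsigned limit, unsigned logical, unsigned *physical) {
        if (logical >= limit)
            return -1;                     /* beyond the limit register: trap to the OS */
        *physical = base + logical;        /* relocate by adding the base register */
        return 0;
    }

    int main(void) {
        unsigned base = 300040, limit = 120900, phys;   /* illustrative values */
        if (translate(base, limit, 1024, &phys) == 0)
            printf("logical 1024 -> physical %u\n", phys);
        if (translate(base, limit, 200000, &phys) != 0)
            printf("logical 200000 -> addressing error\n");
        return 0;
    }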
Paging
1. Paging is a memory management scheme that breaks ____________ into fixed-sized blocks called pages.
2. Each page in paging has a ____________ number that is used for mapping to physical memory.
3. The size of a page in paging is determined by the ____________ of the computer architecture.
4. A ____________ table is used to keep track of the mapping between logical and physical addresses in paging.
5. The translation of logical addresses to physical addresses involves using the ____________ table.
6. The page-table ____________ register holds the starting address of the page table in memory.
7. Paging eliminates external fragmentation, but ____________ fragmentation may still occur.
8. One advantage of paging is that it allows for ____________ loading of processes into memory.
9. When a process references a page that is not currently in memory, a ____________ occurs.
10. The ____________ algorithm is a common page replacement algorithm that replaces the oldest page in memory.
Answers
1. Memory
2. Page
3. Hardware
4. Page table
5. Page
6. Base
7. Internal
8. Dynamic
9. Page fault
10. FIFO (First-In-First-Out)
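Worked example (for reference). A C sketch simulating FIFO page replacement (questions 9 and 10): each reference that misses in the resident frames is a page fault, and FIFO evicts the oldest resident page. The reference string and the frame count are made-up illustration values.

    #include <stdio.h>

    int main(void) {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};   /* page reference string */
        int n = sizeof refs / sizeof refs[0];
        int frames[3] = {-1, -1, -1};                  /* three empty frames */
        int next = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < 3; f++)
                if (frames[f] == refs[i])
                    hit = 1;                           /* page already resident */
            if (!hit) {
                frames[next] = refs[i];                /* page fault: load the page, */
                next = (next + 1) % 3;                 /* evicting the oldest (FIFO) */
                faults++;
            }
        }
        printf("page faults: %d out of %d references\n", faults, n);
        return 0;
    }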