Advanced Operating Systems (CS5500)

HW #4, Fall 2015


1. Explain the difference between preemptive and nonpreemptive scheduling.

In nonpreemptive scheduling, once a process is given the CPU it keeps it until it terminates or blocks (for example, on I/O); the running task cannot be forced off the processor. In preemptive scheduling, the running task can be interrupted, for example when a higher-priority process arrives or its time quantum expires, and it resumes later after that process has run. FCFS is nonpreemptive; round robin and shortest-remaining-time-first are preemptive.

2. Which of the following scheduling algorithms could result in starvation?


a. First-come, first-served
b. Shortest job first
c. Round robin
d. Priority

b. Shortest job first: a long process can starve if shorter jobs keep arriving ahead of it.

d. Priority: a low-priority process can starve if higher-priority processes keep arriving (aging is the standard remedy).

3. CPU scheduling algorithms such as SJF (shortest job first) depend on precise prediction of CPU burst times. One such prediction algorithm is exponential averaging. Check out the following implementation and answer the questions. (20 points)
#include <stdio.h>

float absolute(float n) {
    if (n < 0) return -n;
    return n;
}

float eval_error(float *pred, float *actual, int size) {
    float err = 0;
    int i;
    for (i = 0; i < size; i++) {
        // add each error value
        err += absolute(pred[i] - actual[i]);
    }
    return err;
}

int main() {
    float predicted_burst[100];
    float actual_burst[] = {20, 8, 7};
    float alpha = 0.2;
    int num_burst = 3;
    int i;

    predicted_burst[0] = 10;
    for (i = 0; i < num_burst; i++) {
        // exponential averaging update of the predicted burst
        predicted_burst[i+1] = alpha * predicted_burst[i] + (1 - alpha) * actual_burst[i];
        printf("%f ", predicted_burst[i+1]); // Line P
    }
    printf("error = %f\n", eval_error(predicted_burst, actual_burst, num_burst)); // Line X
    return 0;
}
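For reference, the listing above is plain C and can be compiled and run as-is with any C compiler; for example (the file name hw4_q3.c is just a placeholder):

gcc hw4_q3.c -o hw4_q3
./hw4_q3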
(a) What output will be displayed to the screen by Line P?
Put down all three values in order.

18.000000 10.000000 7.600000

(b) What output will be displayed to the screen by Line X?

error = 23.000000
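As a quick check of (a) and (b), using only the values already in the listing (predicted_burst[0] = 10, alpha = 0.2, actual bursts 20, 8, 7): Line P applies predicted_burst[i+1] = alpha * predicted_burst[i] + (1 - alpha) * actual_burst[i], i.e. this particular implementation weights the previous prediction by alpha and the latest actual burst by (1 - alpha):

predicted_burst[1] = 0.2 * 10 + 0.8 * 20 = 18.0
predicted_burst[2] = 0.2 * 18 + 0.8 * 8  = 10.0
predicted_burst[3] = 0.2 * 10 + 0.8 * 7  = 7.6

Line X then sums the absolute differences between predicted_burst[0..2] and the actual bursts:

|10 - 20| + |18 - 8| + |10 - 7| = 10 + 10 + 3 = 23.0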
4. Consider the following four processes of varying arrival times and CPU burst times.
* Assumptions
When multiple processes arrive at the same time, the process with the smaller index goes first to the ready queue.
If a process goes back to the ready queue after running at the same time that new processes arrive, the new processes go first to the ready queue.
Process    Arrival Time    CPU Burst Time
P1
P2
P3
P4

(a) Draw a Gantt chart that illustrates the execution of these processes using the preemptive SJF (shortest-remaining-time-first) scheduling algorithm. Calculate its average turnaround time, average waiting time, and average response time. When a new process has the same remaining time as the running process, the running process keeps running. When there are multiple processes with the same remaining time in the ready queue, the one that came first to the ready queue is selected.

Gantt chart (preemptive SJF): segments P1, P3, P1, P2, P4; time marks 0, 1, 5, 13, 20.

Average waiting time = 4.25
Average turnaround time = 9
Average response time = 3.5

(b) Draw a Gantt chart that illustrates the execution of these processes using the RR (quantum = 3) scheduling algorithm. Calculate its average turnaround time, average waiting time, and average response time.

| P1 | P2 | P3 | P1 | P4 | P2 | P4 | P4 |
0    3    6    8    11   14   16   19   20

Average waiting time = 6.25
Average turnaround time = 11.25
Average response time = 2.5

5. Check out the following real-time CPU scheduling information and answer each question.

There are two processes, P1 and P2. The periods for P1 and P2 are 50 and 80, respectively. The processing times are t1 = 30 for P1 and t2 = 25 for P2. The deadline for each process requires that it complete its CPU burst by the start of its next period.
(a) Draw a Gantt chart when rate-monotonic scheduling is used. Does this algorithm satisfy all the deadlines? Yes/No

No
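One way the rate-monotonic timeline works out, assuming both processes are released at t = 0 (P1 has the higher priority because it has the shorter period):

P1: 0-30    P2: 30-50    P1: 50-80

At t = 50, P1's second instance preempts P2, which has completed only 20 of its 25 units. When P2's deadline arrives at t = 80 it still needs 5 units, so rate-monotonic scheduling misses P2's deadline.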
(b) Draw a Gantt chart when earliest-deadline-first (EDF) scheduling is used. Does this algorithm satisfy all the deadlines? Yes/No

Yes
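A sketch of the EDF timeline under the same assumption (both released at t = 0): P1 (deadline 50) runs 0-30 and P2 (deadline 80) runs 30-50. When P1's second instance arrives at t = 50 its deadline is 100, later than P2's deadline of 80, so P2 keeps the CPU for 50-55 and finishes in time; P1 then runs 55-85, well before its deadline of 100. The pattern continues, and since the total utilization is 30/50 + 25/80 = 0.9125 <= 1, EDF meets every deadline.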
