
VM Page Replacement

Hank Levy

12/11/2013

All Paging Schemes Depend on Locality


Processes tend to reference pages in localized patterns.

Temporal locality:
locations referenced recently are likely to be referenced again.

Spatial locality:
locations near recently referenced locations are likely to be referenced soon.

The goal of a paging system is to:
stay out of the way when there is plenty of available memory
not bring the system to its knees when there is not.
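Both kinds of locality can be made concrete with a small sketch; the page size and word counts below are illustrative assumptions, not from the slides:

```python
# Assume 4 KB pages holding 512 eight-byte words (illustrative numbers).
PAGE_WORDS = 512

def pages_touched(refs):
    """Map each word-granularity reference to the page it falls on."""
    return [r // PAGE_WORDS for r in refs]

# Spatial locality: a sequential walk over 2048 words touches long runs
# of the same page, and only 4 distinct pages in total.
sequential = pages_touched(range(2048))
distinct_pages = len(set(sequential))

# Temporal locality: a loop re-reads the same few words over and over,
# so the very same page is referenced again and again.
loop = pages_touched([0, 1, 2, 3] * 100)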


Demand Paging


Demand Paging refers to a technique where program pages are loaded from disk into memory as they are referenced. Each reference to a page not previously touched causes a page fault. The fault occurs because the reference found a page table entry with its valid bit off. As a result of the page fault, the OS allocates a new page frame and reads the faulted page from the disk. When the I/O completes, the OS fills in the PTE, sets its valid bit, and restarts the faulting process.
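The fault-handling sequence above can be sketched with a toy page table; the `PTE` class and free-frame list are illustrative stand-ins, not a real OS API:

```python
# Minimal demand-paging sketch (assumed structures, not a real OS API).
class PTE:
    def __init__(self):
        self.valid = False   # page not yet brought into memory
        self.frame = None    # physical frame number once loaded

def reference(page_table, free_frames, vpn, faults):
    pte = page_table[vpn]
    if not pte.valid:                 # valid bit off -> page fault
        faults.append(vpn)
        pte.frame = free_frames.pop() # OS allocates a new page frame
        # ... OS reads the faulted page from disk into pte.frame ...
        pte.valid = True              # fill in the PTE, set its valid bit
    return pte.frame                  # process restarts; reference succeeds

page_table = {v: PTE() for v in range(4)}
free_frames = [3, 2, 1, 0]
faults = []
for vpn in [0, 1, 0, 2, 1]:           # only first touches fault
    reference(page_table, free_frames, vpn, faults)
```

Only the first reference to each page faults; repeats find the valid bit set and proceed without OS involvement.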


Paging

Demand paging:
don't load a page until absolutely necessary
commonly used in most systems
doing things one at a time can be slower than batching them.

Prepaging:
anticipate the fault before it happens; overlap the fetch with computation
hard to predict the future, but some simple schemes (hints from the programmer or from program behavior) can work:
vm_advise
larger virtual page size
sequential pre-paging from mapped files

High Level

Imagine that when a program starts, it has:
no pages in memory
a page table with all valid bits off
The first instruction to be executed faults, loading the first page. Instructions fault until the program has enough pages to execute for a while; it then continues until the next page fault. Faults are expensive, so once the program is running they should not occur frequently, assuming the program is well behaved (has good locality).


Page Replacement


When a fault occurs, the OS loads the faulted page from disk into a page of memory. At some point, the process has used all of the page frames it is allowed to use. When this happens, the OS must replace a page for each page faulted in. That is, it must select a page to throw out of primary memory to make room. How it does this is determined by the page replacement algorithm. The goal of the replacement algorithm is to reduce the fault rate by selecting the best victim page to remove.


Finding the Best Page


A good property:
if you put more memory on the machine, then your page fault rate should go down; increasing the size of the resource pool helps everyone.

The best page to toss out is the one you'll never need again:
that way, no faults.

Never is a long time, so picking the one closest to never is the next best thing.
Replacing the page that won't be used for the longest period of time absolutely minimizes the number of page faults.


Optimal Algorithm

The optimal algorithm, called Belady's algorithm, has the lowest fault rate for any reference string.

Basic idea: replace the page that will not be used for the longest time in the future.
Basic problem: phone calls to psychics are expensive.
Basic use: gives us an idea of how well any implementable algorithm is doing relative to the best possible algorithm:
compare the fault rate of any proposed algorithm to Optimal
if Optimal does not do much better, then your proposed algorithm is pretty good
if your proposed algorithm doesn't do much better than Random, go home.
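Belady's algorithm cannot be implemented online, but it is easy to simulate offline once the whole reference string is known; a minimal sketch:

```python
def opt_faults(refs, nframes):
    """Belady's OPT: evict the resident page whose next use is farthest away."""
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p in frames:
            continue                      # hit: no work needed
        faults += 1
        if len(frames) < nframes:
            frames.append(p)              # free frame available
        else:
            def next_use(q):              # distance to q's next reference
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(p)
    return faults
```

On the reference stream A B C A B D A D B C with 3 frames, for example, this yields 5 faults.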

Evaluating Replacement Policies


[Figure: execution time vs. number of physical page frames, with curves for Random, LRU, and Opt. With few frames, forget it: every policy does poorly. In the middle range, you can expect the policy to have some effect. With lots of frames, it doesn't matter so much what you do.]

Effective Access Time (EAT) = (1-p)*Tm + p*Td
p = probability that a reference faults
Tm = time to access main memory
Td = time to service a page fault
Execution time = (roughly) #memory refs * EAT
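Plugging illustrative numbers into the formula shows why the fault probability p dominates; the 100 ns memory access and 8 ms fault-service time below are assumptions, not figures from the slides:

```python
# Illustrative timing assumptions (not from the slides).
Tm = 100e-9    # 100 ns main-memory access
Td = 8e-3      # 8 ms to service a page fault from disk

def eat(p):
    """Effective access time for fault probability p."""
    return (1 - p) * Tm + p * Td

# Even one fault per 100,000 references almost doubles the
# average access time, because Td is ~80,000x larger than Tm.
slowdown = eat(1e-5) / eat(0)
```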

FIFO

FIFO is an obvious algorithm and simple to implement.

Basic idea: maintain a list or queue of pages in the order in which they were paged into memory. On replacement, remove the one brought in the longest time ago.

Why might it work?
Maybe the one brought in longest ago is one we're not using now.

Why might it not work?
Maybe it's not. We have no real information to tell us whether it's being used or not.

[Figure: the head of the queue is the page that was faulted a long time ago; the tail is the page that was faulted recently.]

FIFO suffers from Belady's anomaly:
the fault rate might actually increase when the algorithm is given more memory, a bad property.
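The anomaly is easy to demonstrate in code with the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5 (a standard example, not from the slides): FIFO takes 9 faults with 3 frames but 10 faults with 4 frames.

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count faults for FIFO replacement with nframes page frames."""
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            frames.discard(queue.popleft())   # evict the oldest arrival
        frames.add(p)
        queue.append(p)
    return faults

trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
f3 = fifo_faults(trace, 3)   # 9 faults
f4 = fifo_faults(trace, 4)   # 10 faults: more memory, more faults
```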

An Example of Optimal and FIFO in Action


Reference stream is A B C A B D A D B C (3 page frames).

OPTIMAL: 5 faults.
A, B, and C fault in. On the fault for D, toss C (the resident page whose next use is farthest away; C is not needed until the very end). On the final fault for C, toss A or D (neither is referenced again).

FIFO: 7 faults.
A, B, and C fault in. On the fault for D, toss A (the oldest). A, B, and C then each fault again, each time tossing the page that has been resident longest.
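The counts in this example can be checked by simulating both policies on the same stream; a small sketch (the victim-selection helpers are illustrative):

```python
def simulate(refs, nframes, choose_victim):
    """Count faults; frames[] is kept in arrival order for FIFO's benefit."""
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            frames.remove(choose_victim(frames, refs, i))
        frames.append(p)
    return faults

def fifo_victim(frames, refs, i):
    return frames[0]                      # oldest arrival

def opt_victim(frames, refs, i):
    future = refs[i + 1:]                 # farthest next use wins
    return max(frames,
               key=lambda q: future.index(q) if q in future else len(future) + 1)

stream = list("ABCABDADBC")
opt_count = simulate(stream, 3, opt_victim)    # 5 faults
fifo_count = simulate(stream, 3, fifo_victim)  # 7 faults
```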

Least Recently Used (LRU)



Basic idea: we can't look into the future, but let's look at past experience to make a good guess.

LRU: on replacement, remove the page that has not been used for the longest time in the past.

Implementation: to really implement this, we would need to time-stamp every reference, or maintain a stack that's updated on every reference. That would be too costly. So we can't implement this exactly, but we can try to approximate it.
Why is an approximate solution totally acceptable?

[Figure: a stack of pages ordered from least recently used to most recently used.] Our bet is that pages which you used recently are ones which you will use again (principle of locality) and, by implication, those that you didn't, you won't.
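Exact LRU is cheap to simulate in software even though it is too costly to do per-reference in hardware; a sketch of the stack idea above, using a list ordered from least to most recently used:

```python
def lru_faults(refs, nframes):
    """Count faults for exact LRU; frames[] ordered LRU -> MRU."""
    frames, faults = [], 0
    for p in refs:
        if p in frames:
            frames.remove(p)      # hit: move the page to the MRU end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)     # evict the least recently used page
        frames.append(p)
    return faults

stream = list("ABCABDADBC")
lru_count = lru_faults(stream, 3)   # 5 faults: matches Optimal on this stream
```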

Using the Reference Bit

Various LRU approximations use the PTE reference bit:
keep a counter for each page
at regular intervals, for every page:
if the ref bit = 0, increment its counter
if the ref bit = 1, zero its counter
then zero the reference bit.
The counter will thus contain the number of intervals since the last reference to the page, and the page with the largest counter is the least recently used one.

If we don't have a reference bit, we can simulate one using the VALID bit and taking a few extra faults; we therefore want its impact to be low when there is plenty of memory.

LRU Clock (Not Recently Used)



Basic idea: reflect the passage of time in the actual data structures and sweeping method.

Arrange all physical pages in a big circle (a clock). A clock hand is used to select a good LRU candidate:
sweep through the pages in circular order, like a clock
if the ref bit is off, it's a good victim
else, turn the ref bit off and try the next page.

The arm moves quickly when pages are needed, and overhead is low when there is plenty of memory. If memory is big, the accuracy of the information degrades; one fix is to add additional hands.
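The sweep can be sketched in a few lines; the list-of-dicts page representation is illustrative:

```python
def clock_evict(pages, hand):
    """Sweep from `hand`; clear set ref bits, evict the first page with ref=0.
    Returns (victim index, new hand position)."""
    n = len(pages)
    while True:
        if pages[hand]["ref"]:
            pages[hand]["ref"] = False     # give the page a second chance
            hand = (hand + 1) % n
        else:
            return hand, (hand + 1) % n    # good LRU candidate found

pages = [{"ref": True}, {"ref": False}, {"ref": True}]
victim, hand = clock_evict(pages, 0)       # clears P0's bit, evicts P1
```

Note the bounded sweep: once every set bit has been cleared, the next pass is guaranteed to find a victim.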

Fixed Space Vs. Variable Space

In a multiprogramming system, we need a way to allocate memory to the competing processes. The question is: how do we determine how much memory to give to each process?

In a fixed-space algorithm, each process is given a limit of pages it can use; when it reaches its limit, it satisfies new faults by replacing its own pages. This is called local replacement:
some processes may do well while others suffer.

In variable-space algorithms, each process can grow or shrink dynamically, displacing other processes' pages. This is global replacement:
one process can ruin it for the rest.

Working Set Model

Peter Denning defined the working set of a program as a way to model the dynamic locality of a program in execution.

Definition: WS(t,w) = {pages i such that i was referenced in the interval (t-w, t)}
t is a time; w is the working set window, a backward-looking interval measured in references.
So, a page is in the WS only if it was referenced in the last w references.
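The definition translates almost directly into code; treating times as 1-based reference counts is an assumption for the sketch:

```python
def working_set(refs, t, w):
    """WS(t, w): the set of pages referenced in the last w references
    ending at time t (times are 1-based reference counts)."""
    return set(refs[max(0, t - w):t])

trace = list("AABBBCCA")
ws = working_set(trace, t=8, w=4)   # last 4 refs are B C C A
```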

Working Set Size


[Figure: references plotted over time; the window of the last w references ending at time t defines the working set.]

The working set size is the number of pages in the working set, i.e., the number of pages touched in the interval (t-w, t). The working set size changes with program locality:
during periods of poor locality, you reference more pages
so, within that period of time, you will have a larger working set size.

For some parameter w, we could keep the working sets of each process in memory. Don't run a process unless its working set is in memory.

WS

But we have two problems:
how do we select w?
how do we know when the working set changes?

So, the working set is not used in practice.

Page Fault Frequency



PFF is a variable-space algorithm that uses a more ad hoc approach. Basic idea: monitor the fault rate for each process:
if the fault rate is above a high threshold, give the process more memory (it should fault less, but it doesn't always)
if the rate is below a low threshold, take away memory (it should fault more, but it doesn't always).

[Figure: a process's fault rate over time, oscillating between the high and low thresholds.]

It is hard to tell the difference between a change in locality and a change in the size of the working set.
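A minimal sketch of the PFF control loop; the threshold values are illustrative assumptions, not tuned numbers from the slides:

```python
# Illustrative thresholds, in faults per reference (assumptions).
HIGH, LOW = 0.10, 0.01

def adjust_frames(frames, fault_rate):
    """Grow the allocation above the high threshold, shrink below the low one."""
    if fault_rate > HIGH:
        return frames + 1
    if fault_rate < LOW:
        return max(1, frames - 1)
    return frames                      # within band: leave it alone

frames = 4
frames = adjust_frames(frames, 0.20)   # faulting heavily -> grow to 5
frames = adjust_frames(frames, 0.001)  # barely faulting  -> shrink to 4
```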

What do you do to pages?

If the page is dirty, you have to write it out to disk:
record the disk block number for the page in the PTE.

If the page is clean, you don't have to do anything:
just overwrite the page with new data
make sure you know where the old copy of the page came from.

We want to avoid THRASHING:
when a paging algorithm breaks down, most of the OS's time is spent ferrying pages to and from disk, and no time is spent doing useful work
the system is OVERCOMMITTED: it has no idea which pages should be resident in order to run effectively
solutions include: swapping out processes, buying more memory.

Page Traces

A page trace is a sequence of page frame numbers (PFNs) generated during the execution of a given program. Uses:
page trace experiments are often performed to analyze the performance of a paging memory system
to determine the occurrence of page hits and faults
to determine the hit ratio of the memory management system.
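A hit ratio can be measured by replaying a trace against a simulated memory; the sketch below assumes LRU replacement, which the trace experiment itself does not dictate:

```python
def hit_ratio(trace, nframes):
    """Replay a page trace against an LRU-managed memory of nframes frames
    and return the fraction of references that hit."""
    frames, hits = [], 0               # frames ordered LRU -> MRU
    for p in trace:
        if p in frames:
            hits += 1
            frames.remove(p)           # refresh recency on a hit
        elif len(frames) == nframes:
            frames.pop(0)              # miss with full memory: evict LRU
        frames.append(p)
    return hits / len(trace)

ratio = hit_ratio([1, 2, 1, 3, 1, 2], 2)   # 2 hits in 6 references
```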
