Virtual Memory Policies

Allocation of Page Frames
• How many page frames should be allocated to a process?
  – Should allocate according to the size of the working set of the process
  – Remember that the working set is the set of pages that the process has referenced in the last N seconds or last N references (not easily known at startup)
  – Allocating a process fewer pages than its working set can quickly lead to page faults
  – One way to prevent many page faults is to not schedule a process unless it can be allocated enough page frames for its working set
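The working set defined above can be sketched as a small function; the reference string and window size below are illustrative, not from the slides.

```python
# Sketch: computing a process's working set over its last n references.
from collections import deque

def working_set(references, n):
    """Return the set of distinct pages touched in the last n references."""
    window = deque(references[-n:], maxlen=n)
    return set(window)

refs = [1, 2, 3, 2, 1, 4, 2, 5]
print(working_set(refs, 4))  # distinct pages among the last 4 references
```

A real kernel cannot replay the full reference string; it approximates this with reference bits sampled over an interval, which is why the working set is "not easily known at startup."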

Allocation of Page Frames
• Local vs. global allocation
  – Local – when a process needs more pages, evict its own pages (not another process’)
    • Can lead to wasted memory if a process’ working set decreases in size
    • When a process thrashes, it will not cause other processes to thrash
  – Global – on a page fault, evict a page from any process’ page set
    • Better for a dynamic working set, which may grow or shrink
    • Domino-style thrashing may occur – one process page faults, evicting another process’ pages, and when that process runs, it will evict yet another process’ pages, etc.
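The local/global distinction comes down to where the victim page is chosen from. A minimal sketch, with an illustrative frame table (the global victim rule here is a simplified stand-in for true global LRU):

```python
# frames maps each process to its resident pages, least recently used first.
frames = {"A": [1, 2, 3], "B": [7, 8]}

def evict_local(proc):
    """Local policy: the victim is the faulting process's own LRU page."""
    return proc, frames[proc].pop(0)

def evict_global():
    """Global policy: the victim may come from any process. As a stand-in
    for global LRU, take the oldest page of the largest resident set."""
    proc = max(frames, key=lambda p: len(frames[p]))
    return proc, frames[proc].pop(0)

print(evict_local("B"))  # B gives up its own page 7
print(evict_global())    # the largest resident set (A) loses page 1
```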

Allocation Policies
1) Periodically determine number of running processes and allocate each an equal share (hybrid of local & global)

2) Allocate number of pages proportional to size of process
• Waste and thrashing may still occur

Allocation of Page Frames
• Page fault frequency – measure of the page fault rate of a process
  – Too low – take pages from that process
  – Too high – allocate additional page frames to the process
  – Too many processes have a high page fault frequency – swap out one or more of these processes (which process to swap out is determined by the scheduling algorithm)
• May also use process priority to affect page allocation (thereby affecting page replacement)
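The page fault frequency rule above acts like a feedback controller on each process's frame allocation. A minimal sketch; the thresholds and the process record are assumed values for illustration:

```python
# Assumed fault-rate bounds (faults per memory reference).
LOW, HIGH = 0.01, 0.10

def adjust_allocation(proc):
    """Grow, shrink, or keep a process's frame allocation based on its
    measured fault rate over the last measurement interval."""
    rate = proc["faults"] / proc["references"]
    if rate < LOW:
        proc["frames"] -= 1   # too few faults: reclaim a frame
    elif rate > HIGH:
        proc["frames"] += 1   # too many faults: grant a frame
    return proc["frames"]

p = {"frames": 8, "faults": 50, "references": 200}  # rate = 0.25
print(adjust_allocation(p))  # rate above HIGH, so allocation grows to 9
```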

Relationship of Allocation to Replacement Policy
• FIFO and LRU can be run either globally or locally on the process’ pages
• Working set makes sense only as a local policy, or to initialize the number of pages given to a process
Policies to Prevent Thrashing
• Adopt a local allocation policy
• Do not schedule a process unless its working set of pages is in memory (implies prepaging)
• Use the page fault frequency approach

Page Size
• Want to set page size to reduce internal fragmentation
• Smaller page size leads to more pages, which leads to a larger page table
• What is the optimal page size?
  – p = page size = ?
  – s = average process size, e = size of each page table entry
  – s/p = average number of pages per process
  – (s/p)·e = space taken up by the average process in the page table
  – p/2 = average wasted memory in the last page of a process due to internal fragmentation
  – overhead = (s/p)·e + p/2
• Find the optimal page size by taking the derivative and setting it to 0: p = √(2se)
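The minimization above, written out (using the same s, e, p as the slide):

```latex
\mathrm{overhead}(p) = \frac{s e}{p} + \frac{p}{2}
\qquad
\frac{d}{dp}\,\mathrm{overhead} = -\frac{s e}{p^{2}} + \frac{1}{2} = 0
\;\Rightarrow\; p^{2} = 2 s e
\;\Rightarrow\; p = \sqrt{2 s e}
```

For example, with s = 1 MB = 2^20 bytes and e = 8 bytes, p = √(2 · 2^20 · 8) = √(2^24) = 4096 bytes, i.e. a 4 KB page.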

Page Fault Handling
• If present/absent bit = 0, the page is either not addressable by the process (protected addresses) or not in memory
• Consult the backing store map to distinguish between the cases
  – Absent from the map means an illegal address
  – Otherwise, the backing store map indicates where to find the page on disk

Backing Store
• Swap area on disk where a page that is evicted is placed
• How should space in the swap area be allocated to a process?
1) When the process is started, reserve a chunk in swap space equal to the process size
  – Either initialize the chunk with a copy of the entire process, or load the entire process into memory and let it be paged out
  – Disadvantage: the size of the process may grow. Better to have separate swap areas for text, data, and stack
2) Do not reserve space in the swap area in advance
  – Allocate disk space for each page when it is swapped out (de-allocate when swapped in)
  – Disadvantage: a disk address is needed for each page (instead of per process)

Backing Store Map: 2 Implementations
• Store disk address in PTE
  – Problems:
    • Can be wasteful since the page table is stored in memory
    • PTE structure is influenced by hardware, so hardware cooperation is needed to store disk addresses
    • Disk addresses are big (device & block number) and OS-specific
    • Would need a disk address for every page not in memory, so it is more difficult to compress page tables
• Use a separate data structure
  – Associates addressable regions (text, stack, data) of the virtual address space with a starting block number on disk
  – Each region of the virtual address space is stored contiguously on disk
  – The backing store map data structure stored in memory has only one entry per region
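The second implementation keeps one entry per region; because each region is contiguous on disk, that single entry locates every page in the region. A sketch with illustrative region sizes and block numbers:

```python
# One entry per region: start virtual page, page count, start disk block.
regions = {
    "text":  {"vstart": 0,    "pages": 16, "disk_block": 1000},
    "data":  {"vstart": 16,   "pages": 8,  "disk_block": 1016},
    "stack": {"vstart": 1016, "pages": 4,  "disk_block": 1024},
}

def disk_block_for(vpage):
    """Translate a virtual page number to its disk block via the region map."""
    for r in regions.values():
        if r["vstart"] <= vpage < r["vstart"] + r["pages"]:
            return r["disk_block"] + (vpage - r["vstart"])
    return None  # not in any region: illegal address

print(disk_block_for(18))   # page 18 is in the data region
print(disk_block_for(500))  # not mapped -> illegal address
```

This also illustrates the page fault handling rule from the previous slide: a lookup that misses every region signals an illegal address rather than a page on disk.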

OS Involvement with Paging
• Process creation
  – OS determines program and data size
  – Allocates and initializes page table
  – Allocates swap area on disk to store the page table when the process is swapped out
  – Records page table and swap area in the process table
  – Possibly preloads pages
• Process scheduled for execution
  – Reset MMU for the process
  – Flush TLB
  – Copy page table or start/end addresses to hardware registers
• Process termination
  – Release page table, page frames, and disk swap space

OS Involvement with Paging
• Page fault
  – Determine which virtual address caused the page fault
  – Compute which page is needed and locate it on disk
  – Possibly evict a page to make room in main memory
  – Back up the program counter to point to the faulting instruction so that it can be executed again (but this time, the needed page is in memory)

Unix Paging Policy
• Demand paging
• Page replacement algorithm
  – Maintain a certain number of free page frames (within a min/max range)
  – Swap out pages when the number of free pages is below the min
  – Some Unix variants use a two-handed clock
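The two-handed clock mentioned above keeps two hands a fixed distance apart sweeping the frame list: the front hand clears reference bits, and the back hand evicts pages whose bit is still clear when it arrives. A simplified sketch; the frame count, bits, and hand spread are illustrative:

```python
N = 8  # number of page frames (illustrative)
referenced = [True, False, True, False, False, True, False, False]
SPREAD = 2  # distance between the two hands (an assumed tuning parameter)

def sweep(start, steps):
    """Advance both hands `steps` positions; return frame indices evicted."""
    evicted = []
    for i in range(steps):
        front = (start + i) % N
        back = (front - SPREAD) % N
        referenced[front] = False   # front hand clears the reference bit
        if not referenced[back]:    # back hand evicts if still clear
            evicted.append(back)
    return evicted

evicted = sweep(2, 4)
print(evicted)
```

The spread between the hands controls how long a page has to get re-referenced before it becomes an eviction candidate.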

Linux Paging Policy
• Demand paging
• Maintain a certain range of free page frames
• Each process on a 32-bit machine is given 3 GB of virtual address space, with 1 GB reserved for page tables and other kernel data
• 3-level page table
• Kernel is never paged out
• Buddy system for memory partitioning
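The buddy system mentioned above satisfies a request from the smallest power-of-two block that fits, splitting larger blocks in half and keeping the unused halves ("buddies") on per-size free lists. A minimal allocation sketch, with sizes in page units and an illustrative pool:

```python
def buddy_alloc(free_lists, request):
    """Allocate the smallest power-of-two block >= request pages.
    free_lists[k] holds start addresses of free blocks of 2**k pages."""
    order = 0
    while (1 << order) < request:
        order += 1
    # find the smallest available order that can satisfy the request
    o = order
    while o < len(free_lists) and not free_lists[o]:
        o += 1
    if o == len(free_lists):
        return None  # out of memory
    addr = free_lists[o].pop()
    while o > order:                       # split down toward the request,
        o -= 1                             # returning the upper half of
        free_lists[o].append(addr + (1 << o))  # each split to its free list
    return addr

# one free 16-page block at order 4; orders 0..4
free_lists = [[], [], [], [], [0]]
addr = buddy_alloc(free_lists, 3)  # a 3-page request rounds up to 4 pages
print(addr, free_lists)
```

Freeing is the mirror image (not shown): a freed block is merged with its buddy whenever the buddy is also free, restoring larger blocks.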
Windows Paging Policy
• Demand paging without pre-paging
• Maintain a certain number of free page frames
• For a 32-bit machine, each process has 4 GB of virtual address space
• Backing store – disk space is not assigned to a page until it is paged out
• Uses working sets (per process)
  – Consists of pages mapped into memory that can be accessed without a page fault
  – Has a min/max size range that changes over time
    • If a page fault occurs and working set < min, add the page
    • If a page fault occurs and working set > max, evict a page from the working set and add the new page
    • If too many page faults, then increase the size of the working set
• When evicting pages,
  – Evict from large processes that have been idle for a long time before small active processes
  – Consider the foreground process last
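The min/max working-set rule above can be sketched as a fault handler. The thresholds, the FIFO victim choice, and the grow-when-in-range behavior are illustrative assumptions, not Windows internals:

```python
WS_MIN, WS_MAX = 4, 8  # assumed per-process working-set bounds

def on_page_fault(working_set, page):
    """Apply the min/max working-set policy on a fault for `page`."""
    if len(working_set) > WS_MAX:
        working_set.pop(0)        # above max: evict one page (FIFO here),
        working_set.append(page)  # then add the new page
    else:
        working_set.append(page)  # at or below max: just grow
    return working_set

ws = [1, 2, 3]
print(on_page_fault(ws, 9))  # below min, so page 9 is simply added
```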


								