
Virtual Memory
Virtual Memory Management in Mach
Labels and Event Processes in Asbestos


Ingar Arntzen

Introduction


Virtual Memory


Decouples processes from the physical address space
Virtual address -> Mapping function -> Physical address



Two problems – One solution
 

Memory Management


Process footprint grows to fill available memory
Processes need to be isolated from other processes
Resources need to be protected from processes

Isolation & Protection
 



Papers: Focus on different issues

Memory Management


Problem: Not enough main memory


Copy data between main memory and disk
Only parts of a process are needed to execute
Process footprint -> pages
On-demand pagein




Virtual Memory / Paging
  
[Diagram: the MMU maps a virtual address to a physical address through the page table (one entry per virtual page); entries marked X are absent from main memory.]
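As a rough illustration of the diagram above, the C sketch below shows what a single-level lookup does on every reference: split the virtual address into page number and offset, index the page table, and fault if the entry is absent. The sizes, the pte_t layout and the page_fault() stub are assumptions made for the example, not taken from any particular machine.

    #include <stdint.h>

    #define PAGE_SIZE  4096u
    #define PAGE_SHIFT 12
    #define NUM_PAGES  (1u << 20)     /* one entry per virtual page: 32-bit / 4 KB */

    typedef struct {
        uint32_t frame   : 20;        /* physical page frame number          */
        uint32_t present : 1;         /* 0 = absent ("X" in the diagram)     */
    } pte_t;

    static pte_t page_table[NUM_PAGES];

    /* Stand-in for the real page-in path: would fetch the page from disk
     * and return the frame it was loaded into. */
    static uint32_t page_fault(uint32_t vpage) { (void)vpage; return 0; }

    /* What the MMU does on every reference: split the virtual address into
     * page number and offset, look the page up, fault if it is absent. */
    uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpage  = vaddr >> PAGE_SHIFT;
        uint32_t offset = vaddr & (PAGE_SIZE - 1);

        if (!page_table[vpage].present) {
            page_table[vpage].frame   = page_fault(vpage);
            page_table[vpage].present = 1;
        }
        return ((uint32_t)page_table[vpage].frame << PAGE_SHIFT) | offset;
    }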

Importance


Memory Management & CPU Utilization
 

Interleave I/O-bound processes
Effective memory management => more processes may run concurrently


Decreases the probability that all processes block at the same time



Future


Processes will continue to fight for physical memory

Isolation & Protection


Isolate processes from each other


Mapping: process address spaces map onto disjoint physical address spaces



Resource protection


Mapping: references are protected by access control bits.

Page table entry fields: Page frame #, Present/Absent, Protection, Referenced, Modified
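A possible C layout for such an entry is sketched below, mirroring the fields listed above; the field widths and the split of Protection into read/write bits are assumptions for illustration only.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical 32-bit page table entry mirroring the fields above. */
    typedef struct {
        uint32_t frame      : 20;  /* page frame #                        */
        uint32_t present    : 1;   /* present/absent                      */
        uint32_t prot_read  : 1;   /* protection: read allowed            */
        uint32_t prot_write : 1;   /* protection: write allowed           */
        uint32_t referenced : 1;   /* set by HW on any access             */
        uint32_t modified   : 1;   /* set by HW on a write ("dirty" bit)  */
    } pte_t;

    /* A write is legal only if the page is present and writable;
     * otherwise the MMU raises a fault and the OS decides what to do. */
    bool write_allowed(const pte_t *pte)
    {
        return pte->present && pte->prot_write;
    }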

Research


Tradeoff: pagesize


Too small


huge pagetable, frequent page faults


Too big


less concurrency, costly pagein


Typical pagesizes: 0.5 – 64 KB


Page Replacement Algorithms


NRU, FIFO, CLOCK, LRU, …
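Of the algorithms listed, CLOCK (second chance) is a common compromise between FIFO and LRU. The sketch below assumes a fixed array of frames with a referenced bit maintained by the MMU; the frame-table layout and names are illustrative.

    #include <stdbool.h>

    #define NUM_FRAMES 64            /* assumed number of physical frames */

    struct frame {
        int  vpage;                  /* virtual page held in this frame   */
        bool referenced;             /* set by the MMU when accessed      */
    };

    static struct frame frames[NUM_FRAMES];
    static int clock_hand = 0;       /* next frame the "clock" will test  */

    /* CLOCK (second chance): sweep past recently used frames, clearing
     * their referenced bits, and evict the first unreferenced one found. */
    int choose_victim(void)
    {
        for (;;) {
            struct frame *f = &frames[clock_hand];
            if (!f->referenced) {
                int victim = clock_hand;
                clock_hand = (clock_hand + 1) % NUM_FRAMES;
                return victim;       /* evict this frame */
            }
            f->referenced = false;   /* give it a second chance */
            clock_hand = (clock_hand + 1) % NUM_FRAMES;
        }
    }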


Research cont.


Effective Pagetable implementation


Huge pagetable


Pagesize 4KB, 32-bit address space => 2^20 ≈ 1M entries (roughly 4 MB at 4 bytes per entry)
Evaluated on every memory reference




Fast pagetable lookup


More than 1 memory reference per instruction



Design options


Hardware registers


+ fast, - expensive, - context-switch penalty



Main memory


+ cheap, - extra memory ref, - steals precious memory



Something in between

Research cont. cont.


Optimizations


Multilevel pagetable


Pageout unused parts of the pagetable itself


Caching pagetable entries in hardware


TLB exploits locality. Very effective!


Inverted Page table


One entry per physical page frame (physical page frame -> virtual page)
+ Smaller, - Expensive to search


Software control of hardware


TLB cache management, pagefault handling
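A rough C sketch of how a TLB and a two-level pagetable fit together on a lookup: a TLB hit needs no pagetable reference at all, while a miss costs a walk of both levels (or a trap, if the TLB is software managed). All sizes, the TLB structure and the handle_fault() stub are simplifying assumptions, not a description of a real MMU.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define PAGE_SHIFT 12
    #define TLB_SIZE   64

    /* Tiny, fully associative TLB caching vpage -> frame translations. */
    struct tlb_entry { uint32_t vpage, frame; bool valid; };
    static struct tlb_entry tlb[TLB_SIZE];

    /* Two-level pagetable: 10-bit top index, 10-bit second-level index.
     * A NULL top-level slot means the whole 4 MB region is unused and its
     * second-level table need not exist (or can be paged out). */
    static uint32_t *top_level[1024];

    /* Hypothetical fault path: would page in and install a translation. */
    static uint32_t handle_fault(uint32_t vpage) { (void)vpage; return 0; }

    uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpage  = vaddr >> PAGE_SHIFT;
        uint32_t offset = vaddr & 0xFFFu;

        /* 1. TLB hit: translation without touching the pagetable at all. */
        for (int i = 0; i < TLB_SIZE; i++)
            if (tlb[i].valid && tlb[i].vpage == vpage)
                return (tlb[i].frame << PAGE_SHIFT) | offset;

        /* 2. TLB miss: walk both levels (two extra memory references). */
        uint32_t *second = top_level[vpage >> 10];
        uint32_t  frame  = second ? second[vpage & 0x3FFu]
                                  : handle_fault(vpage);

        /* 3. Refill one slot (simple replacement) so the next access hits. */
        tlb[vpage % TLB_SIZE] = (struct tlb_entry){ vpage, frame, true };

        return (frame << PAGE_SHIFT) | offset;
    }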


Research cont. cont. cont.


Result
  

Multitude of hardware solutions
Multitude of software designs

Discussion


What are the best solutions?
Where to draw the line between HW and SW?
User-level meddling in HW business?



=> Context of paper 1

Virtual MM in Mach


Problem


Portability


Strong dependencies between HW and OS



Mach goals


Virtual Memory Management


…on top of diverse HW architectures
   

Few HW assumptions
Clean HW/SW separation
Easy to port
No performance loss



Approach


Experiences with building and porting Mach

Mach Virtual Memory
  

Microkernel OS
Integrated message passing and virtual memory


Send = memory remap (cheap!)


Threads may…

 

Allocate and de-allocate virtual memory
Share address spaces
Copy address spaces
Pagein and pageout
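For a concrete feel of the first of these operations, the sketch below allocates and releases a region of the calling task's address space through the Mach-derived interface as it appears on e.g. macOS; header names and the flags argument vary between Mach versions (the original interface took a boolean "anywhere" rather than VM_FLAGS_ANYWHERE).

    #include <mach/mach.h>
    #include <stdio.h>

    int main(void)
    {
        vm_address_t addr = 0;
        vm_size_t    size = 64 * 1024;           /* 64 KB region */

        /* Ask the kernel to map fresh, zero-filled memory anywhere in
         * this task's address space. */
        kern_return_t kr = vm_allocate(mach_task_self(), &addr, size,
                                       VM_FLAGS_ANYWHERE);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "vm_allocate failed: %d\n", kr);
            return 1;
        }
        printf("allocated %lu bytes at %p\n",
               (unsigned long)size, (void *)addr);

        /* Give the region back. */
        vm_deallocate(mach_task_self(), addr, size);
        return 0;
    }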

Implementation


Address Map (PageTable)


Ordered, linked list of refs to Memory Objects (e.g. files)
 

+ Only entries for used addresses
- More searching

[Diagram: Address Map (Page Table) -> Memory Object List -> Main Memory]
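The C sketch below illustrates the idea of such an address map: only used ranges have entries, each pointing at the memory object that backs it, so a lookup is a search along the ordered list rather than an array index. The entry fields and the linear search are illustrative assumptions, not Mach's actual map structures.

    #include <stddef.h>
    #include <stdint.h>

    struct mem_object;                     /* backing store (e.g. a file), opaque here */

    /* One map entry covers a contiguous range of virtual addresses and
     * names the memory object that backs it. */
    struct map_entry {
        uintptr_t          start, end;     /* [start, end) of the mapping  */
        struct mem_object *object;         /* backing store for this range */
        size_t             object_offset;  /* offset of 'start' in object  */
        struct map_entry  *next;           /* next entry, sorted by start  */
    };

    /* Only used ranges have entries, so resolving an address means
     * searching the ordered list for the covering mapping. */
    struct map_entry *lookup(struct map_entry *map, uintptr_t vaddr)
    {
        for (struct map_entry *e = map; e != NULL; e = e->next) {
            if (vaddr < e->start)
                break;                     /* list is sorted: no entry covers vaddr */
            if (vaddr < e->end)
                return e;                  /* found the covering mapping */
        }
        return NULL;                       /* unmapped address: fault */
    }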



Pagefault handling


User level Pager Services


Message Passing between Kernel and Pager
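A sketch of what that exchange could look like; the message types, field names and transport functions below are invented for illustration, since the real external-pager interface defines its own message formats.

    #include <stdint.h>

    /* Hypothetical messages exchanged between kernel and user-level pager. */
    struct pagein_request { uint64_t object_id; uint64_t offset; uint32_t length; };
    struct pagein_reply   { uint64_t object_id; uint64_t offset; uint8_t  data[4096]; };

    /* Hypothetical transport: in Mach this would be a message send/receive
     * on a port, not a direct function call. */
    extern int  recv_request(struct pagein_request *req);
    extern void send_reply(const struct pagein_reply *rep);
    extern void read_backing_store(uint64_t object_id, uint64_t offset,
                                   uint8_t *buf, uint32_t length);

    /* User-level pager loop: the kernel asks for missing pages, the pager
     * fetches them from its backing store and hands the data back. */
    void pager_loop(void)
    {
        struct pagein_request req;
        while (recv_request(&req)) {
            struct pagein_reply rep = { req.object_id, req.offset, {0} };
            read_backing_store(req.object_id, req.offset, rep.data, req.length);
            send_reply(&rep);
        }
    }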

Evaluation


Ported to



VAX, IBM RT PC, SUN3, …
UnixPT, InvertedPT, PT/Segment
Clean separation


HW-dependent and HW-independent parts


TLB not required


But may be used by Mach


Performance


Comparison of UNIX and Mach on different architectures
Mach equal or better


Conclusion


Clean separation is possible and has no cost!


Labels and Event Processes


Isolation and Protection in WebServices
 

Stateful services with many concurrent users
Isolation between users (not processes)
Execute user requests in isolated address spaces
Restrict information flow
   



Goals
 

Avoid leaking private user data
User data only communicated to privileged system parts
Principle of least privilege
Application-specific policies

Labels


Basic Idea


Restrict access to communication primitives, send & recv
 

A can talk to B, if B is equally privileged
If B receives a message from A, this may restrict B’s ability to speak with others



Labels define send & recv privileges relative to domains (compartments)


Kernel support


operations + checking of privileges



Applications define information flow policies
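A heavily simplified C sketch of the send/receive rule described above: each process carries a taint (send) label and a clearance (receive) label with one level per compartment, sending requires the receiver's clearance to dominate the sender's taint, and receiving raises the receiver's taint. The two-label scheme, level encoding and names are assumptions for illustration; real Asbestos labels are richer.

    #include <stdbool.h>

    #define NUM_COMPARTMENTS 8

    /* One small integer level per compartment. */
    struct label { int level[NUM_COMPARTMENTS]; };

    struct proc {
        struct label send;   /* taint the process carries            */
        struct label recv;   /* clearance: taint it is allowed to see */
    };

    /* A may send to B only if, in every compartment, A's taint does not
     * exceed B's clearance. */
    bool can_send(const struct proc *a, const struct proc *b)
    {
        for (int c = 0; c < NUM_COMPARTMENTS; c++)
            if (a->send.level[c] > b->recv.level[c])
                return false;
        return true;
    }

    /* Receiving a message contaminates the receiver: its send label rises
     * to the least upper bound, which may later restrict who it can talk to. */
    void on_receive(struct proc *b, const struct proc *a)
    {
        for (int c = 0; c < NUM_COMPARTMENTS; c++)
            if (a->send.level[c] > b->send.level[c])
                b->send.level[c] = a->send.level[c];
    }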

Event Processes
 

Execute user requests in isolated address spaces?

Problem
 

Address spaces are associated with processes
Forking 1 process per user does not scale!


One reason: Huge pagetables



Threads scale better, but provide no isolation



Solution


Isolate data from multiple users within one address space
Event Process Abstraction


Event handler executes in the context of a given user

Implementation


Base Process
 

Address space divided between event processes

Event processes


Context





Receive Ports
Communication privileges (Labels)
Private user data

 

Bind to private ports
Scheduled by kernel within private context


(On incoming message)
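A rough C sketch of that dispatch step: the incoming message's destination port selects the event process, and its handler runs with that event process's private data and labels. All types and names here are invented for illustration; in Asbestos the kernel performs this dispatch and the label checks.

    #include <stddef.h>

    struct label;                       /* communication privileges, as in the earlier sketch */

    /* One event process: a lightweight per-user context inside the base
     * process's single address space. */
    struct event_process {
        int           port;             /* private receive port it is bound to */
        struct label *labels;           /* its communication privileges        */
        void         *user_data;        /* private per-user state              */
        void        (*handler)(void *user_data, const void *msg, size_t len);
    };

    #define MAX_EP 1024
    static struct event_process *by_port[MAX_EP];   /* port -> event process */

    /* On an incoming message: find the event process bound to the target
     * port and run its handler inside that private context. The kernel
     * would also enforce the label checks before delivery. */
    void dispatch(int port, const void *msg, size_t len)
    {
        if (port < 0 || port >= MAX_EP || by_port[port] == NULL)
            return;                                  /* no one bound: drop */
        struct event_process *ep = by_port[port];
        ep->handler(ep->user_data, msg, len);
    }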

Evaluation


Experiments on a web service


Memory consumption


Extra 1.5 pages (4 KB) per event process!
Modest overheads on throughput and latency
Throughput decreases with increasing number of event processes




Cost of Isolation (Labels)
 

Database costs due to label storage growth



Importance
 

Virtual address space is too big
How do we implement small virtual address spaces for lightweight processes?


								