					CHAPTER 8: MAIN MEMORY

   Objectives
      To provide a detailed description of various ways of organizing memory hardware
      To discuss the memory-management technique of paging
Background
    The CPU fetches instructions from memory according to the value of the program counter.
    The instruction is then decoded, and operands may be fetched from memory.
    Execution results may be stored back in memory.
    The memory unit sees only a stream of memory addresses.
 Background: Basic Hardware
  Main memory and the registers are the only storage that the CPU can access directly.
  Cache: fast memory between the CPU and main memory.
  Each process has a separate memory space. Its legal address range is determined by a base register and a limit register. (Figure 8.1)
  Any attempt by a program executing in user mode to access OS memory or other users’ memory results in a trap to the OS. (Figure 8.2)
 Background: Address Binding
  Input queue: the processes on disk that are waiting to be brought into memory for execution.
  The normal procedure is to select one of the processes in the input queue and load it into memory.
  The binding of instructions and data to memory addresses can be done at any of three different stages: compile time, load time, and execution time. (Figure 8.3)
Background : Address Binding
  Compile time: If you know at compile time where the
   process will reside in memory, then absolute code can
   be generated.
  Load time: If it is not known at compile time where
   the process will reside in memory, then the compiler
   must generate relocatable code. Final binding is
   delayed until load time.
  Execution time: If the process can be moved during
   its execution from one memory segment to another,
   then binding must be delayed until run time.
Background: Logical vs. Physical Address Space
    An address travels from the CPU to memory.
    Logical address vs. physical address
       Logical address – generated by the CPU.
       Physical address – the address seen by the memory unit.
    Logical (virtual) and physical addresses are the same in the compile-time and load-time address-binding schemes; they differ in the execution-time address-binding scheme.
Background: Memory-Management Unit
(MMU)
 The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).
 In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
 The user program deals with logical addresses; it never sees the real physical addresses.
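The relocation-register mapping can be sketched in a few lines. The register value below (14000) is a hypothetical example, not taken from this text:

```python
# Minimal sketch of dynamic relocation with a relocation register.
# The MMU adds the register value to every logical address on its way
# to memory; 14000 is an assumed example value.
RELOCATION_REGISTER = 14000

def to_physical(logical_address):
    """Map a CPU-generated logical address to a physical address."""
    return logical_address + RELOCATION_REGISTER

print(to_physical(346))  # logical 346 -> physical 14346
```

The user program only ever manipulates values like 346; the physical address 14346 exists only inside the memory system.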
Background:
Dynamic relocation using a relocation register
Background: Dynamic Loading
   Should the entire program and data of a process be in
    physical memory for the process to execute?
   Dynamic loading: routine is not loaded until it is
    called.
   Discussion
      Better memory-space utilization: unused routine is
       never loaded.
      Useful when large amounts of code are needed to
       handle infrequently occurring cases.
      No special support from the operating system is
       required.
      The OS can help programmer by providing library
       routines to implement dynamic loading.
Background: Dynamic Linking

   Static linking vs. dynamic linking
   Linking is postponed until execution time.
   A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
   The stub replaces itself with the address of the routine and executes the routine.
   The operating system needs to check whether the routine is in the process’s memory space.
   Dynamic linking is particularly useful for libraries.
Swapping
   A process can be swapped temporarily out of memory
    to a backing store, and then brought back into
    memory for continued execution.
   Backing store – fast disk large enough to
    accommodate the copies of all memory images for all
    users; must provide direct access to these memory
    images.
   Roll out, roll in – swapping variant used for priority-
    based scheduling algorithms; lower-priority process
    is swapped out so higher-priority process can be
    loaded and executed.
Swapping: Schematic View
Swapping
 The system maintains a ready queue consisting
  of all processes whose memory images are on
  the backing store or in memory.
 When the CPU scheduler decides to execute a
  process, it calls the dispatcher, which checks
  whether the next process in the queue is in
  memory.
 If it is not and there is no free memory region,
  the dispatcher swaps out a process currently in
  memory and swaps in the desired process.
Swapping: Performance

   The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
      10,000 KB / 40,000 KB per second = 1/4 second = 250 ms
      with an average latency of 8 ms: 250 + 8 = 258 ms
      swap out plus swap in: 258 × 2 = 516 ms
   Standard swapping requires too much swapping time and provides too little execution time to be a reasonable solution. Modified versions of swapping are found on many systems, e.g., UNIX and Windows.
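The swap-time estimate above can be reproduced directly from the stated figures (10,000 KB image, 40,000 KB/s transfer rate, 8 ms average latency):

```python
# Worked swap-time calculation using the numbers from the slide.
process_kb = 10_000          # size of the process image to swap
transfer_rate_kb_s = 40_000  # disk transfer rate
latency_ms = 8               # average disk latency

transfer_ms = process_kb / transfer_rate_kb_s * 1000  # pure transfer time
one_way_ms = transfer_ms + latency_ms                 # swap out (or in) once
total_ms = 2 * one_way_ms                             # swap out + swap in

print(transfer_ms, one_way_ms, total_ms)  # 250.0 258.0 516.0
```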
Contiguous Memory Allocation
   The main memory must accommodate
      operating system

      various user processes

   The main memory is usually divided into two
    partitions:
      One for the resident operating system, usually held
       in low memory along with the interrupt vector
      The other for user processes, held in high
       memory
   For contiguous memory allocation, each process is
    contained in a single contiguous section of memory.
Contiguous Memory Allocation:
Memory Mapping and Protection
   Why protect memory?
     To protect the OS from user processes
     To protect user processes from each other
   How is memory protected?
     Relocation registers and limit registers protect
      user processes from each other, and from
      changing operating-system code and data.
     The relocation register contains the value of the
      smallest physical address; the limit register
      contains the range of logical addresses – each
      logical address must be less than the limit register.
Contiguous Memory Allocation:
Hardware Support for Relocation and Limit Registers
Contiguous Memory Allocation:
Memory Allocation
   Main memory is divided into a number of fixed-sized
    partitions at system generation time. A process may
    be loaded into a partition of equal or greater size.
      Equal-size partitions
      Unequal-size partitions
   Equal-size partitions:
      A program may be too big to fit into any partition.
      Main-memory utilization is extremely inefficient.
Contiguous Memory Allocation:
Memory Allocation
   Dynamic Storage Allocation Algorithms (unequal
    size)
      First Fit: allocate the first hole that is big enough

      Best Fit: allocate the smallest hole that is big
       enough
      Worst Fit: allocate the largest hole

   Comparisons of these three algorithms
      Both first fit and best fit are better than worst fit in
       terms of decreasing time and storage utilization.
      First fit and best fit cannot be clearly ranked in
       storage utilization, but first fit is generally faster.
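The three placement policies can be sketched as functions over a list of free-hole sizes. The hole sizes and request below are hypothetical:

```python
def first_fit(holes, size):
    """Index of the first hole big enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole big enough, or None (scans entire list)."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the largest hole, if it is big enough, or None."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= size else None

holes = [100, 500, 200, 300, 600]   # assumed free-hole sizes in KB
print(first_fit(holes, 212))  # 1  (first hole >= 212 is the 500 KB one)
print(best_fit(holes, 212))   # 3  (smallest sufficient hole: 300 KB)
print(worst_fit(holes, 212))  # 4  (largest hole: 600 KB)
```

Note that best fit and worst fit must scan the whole list, which is why first fit is generally faster.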
Contiguous Memory Allocation:
Fragmentation
   External Fragmentation – total memory space exists to
    satisfy a request, but it is not contiguous.
   Internal Fragmentation – memory is internal to a
    partition, but is not being used.
   Reduce external fragmentation by compaction
      Shuffle memory contents to place all free memory
       together in one large block.
      Compaction is possible only if relocation is dynamic,
       and is done at execution time.
   Another solution to external fragmentation is to permit
    the logical address space of processes to be
    noncontiguous, e.g., paging and segmentation.
     Paging
   Paging is a memory-management scheme that
    permits the physical address space of a process to
    be noncontiguous.
     Divide physical memory into fixed-sized blocks called
      frames.
     Divide logical memory into blocks of the same size
      called pages.
     Divide the backing store into blocks of the same size,
      called clusters.
     To run a program of size n pages:
        find n free frames in physical memory;
        load the program from the backing store.
Paging
How to translate a logical address to a physical address
 Logical address = (page number, page offset)
    Page number (p) – used as an index into the
     page table.
    Page offset (d) – combined with the base address
     to define the physical memory address that is
     sent to the memory unit.
 The page table contains the base address of each page
  in physical memory.
 Physical address = (frame number, page offset)
Paging
Paging: Page size

   How to partition logical addresses?
      The page size is selected as a power of 2.
      Typical page sizes range from 512 B to 16 MB.
      An m-bit logical address
         = (m−n)-bit page number + n-bit page offset
      where page size = 2^n
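Because the page size is a power of 2, the split needs no division, only a shift and a mask. A minimal sketch:

```python
# Split an m-bit logical address into (page number, page offset)
# for a page size of 2**n.
def split(logical, n):
    page = logical >> n                 # high m-n bits
    offset = logical & ((1 << n) - 1)   # low n bits
    return page, offset

# With a page size of 4 bytes (n = 2), logical address 13
# is offset 1 within page 3.
print(split(13, 2))  # (3, 1)
```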
Paging: Page size

   Example: page size 4; the page table maps pages 0–3
    to frames 5, 6, 1, 2.
   Logical address 0 (page 0, offset 0)  (5 × 4) + 0 =
    physical address 20
   Logical address 1  (5 × 4) + 1 = 21
   Logical address 2  (5 × 4) + 2 = 22
   Logical address 3  (5 × 4) + 3 = 23
   Logical address 4  (6 × 4) + 0 = 24
   Logical address 5  (6 × 4) + 1 = 25

   Logical address 13 (page 3, offset 1)  (2 × 4) + 1 =
    physical address 9
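The worked example above, as code: page size 4, with the page table mapping pages 0 through 3 to frames 5, 6, 1, 2.

```python
PAGE_SIZE = 4
page_table = [5, 6, 1, 2]   # page number -> frame number, as in the example

def translate(logical):
    """Physical address = frame base + offset within the page."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(translate(0))   # (5*4)+0 = 20
print(translate(4))   # (6*4)+0 = 24
print(translate(13))  # (2*4)+1 = 9
```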
Paging: Page size

   How to select page sizes?
     Internal fragmentation

     Page table size

     Disk I/O

     Generally page sizes have grown over time as
      processes, data, and main memory have become
      larger.
Paging: The OS Concern
   What should the OS be aware of?
      Which frames are allocated?
      Which frames are available?
      How many total frames are there?
      All this information is kept in the frame table, which
       has one entry for each physical frame, indicating
       whether it is free or allocated and, if allocated, to
       which page of which process.
   How are frames allocated for a newly arrived process?
      If a process requires n pages, n frames must be available
       and allocated to the process.
      The first page is loaded into the first allocated frame, and
       the frame number is put in the page table for this process.
      The next page is loaded into the next frame, its frame
       number is put into the next entry of the page table, and so on.
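The frame table and the allocation loop above can be sketched as follows. The frame count and process names are assumptions for illustration:

```python
# Frame table sketch: one entry per physical frame; None means free,
# otherwise (process id, page number) records the owner.
NUM_FRAMES = 8                       # assumed small machine
frame_table = [None] * NUM_FRAMES

def allocate(pid, n_pages):
    """Give n free frames to a process and return its page table."""
    free = [f for f, owner in enumerate(frame_table) if owner is None]
    if len(free) < n_pages:
        raise MemoryError("not enough free frames")
    page_table = []
    for page, frame in enumerate(free[:n_pages]):
        frame_table[frame] = (pid, page)   # record the owner
        page_table.append(frame)           # next page -> next free frame
    return page_table

print(allocate("P1", 3))  # on an empty frame table: [0, 1, 2]
```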
Paging: The OS Concern
Paging: Hardware Support

   How to implement the page table
      A set of dedicated, fast registers (only if the page
       table is reasonably small)
      Main memory
      Main memory plus a translation look-aside
       buffer (a special, small, fast-lookup hardware
       cache)
Paging: Hardware Support

   The page table is kept in main memory.
   A page-table base register (PTBR) points to the page
    table.
   In this scheme every data/instruction access requires
    two memory accesses:
      One for the page-table entry
      One for the data/instruction itself
     Paging: Hardware Support
   The two-memory-access problem can be solved by the
    use of a special, small, fast-lookup hardware cache,
    called a translation look-aside buffer (TLB).
      The TLB is associative, high-speed memory with two
       parts: key and value.
      The TLB contains a few page-table entries. When a
       logical address is generated, its page number is
       presented to the TLB. If the page number is found,
       its frame number is immediately returned and used
       to access memory.
      On a TLB miss, a memory reference to the page table
       must be made before accessing memory. In addition,
       we add the page number and frame number to the TLB.
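The hit/miss behavior can be sketched with a dictionary standing in for the associative hardware. The page-table values are the same illustrative ones used earlier in this chapter:

```python
# TLB sketch: a small cache of (page -> frame) entries in front of the
# full page table. Real TLBs are associative hardware with a bounded
# number of entries; this dict ignores eviction for simplicity.
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # example mapping
tlb = {}

def lookup(page):
    if page in tlb:                 # TLB hit: no extra memory access
        return tlb[page], "hit"
    frame = page_table[page]        # TLB miss: consult the page table
    tlb[page] = frame               # cache the entry for next time
    return frame, "miss"

print(lookup(2))  # (1, 'miss') - first reference must walk the page table
print(lookup(2))  # (1, 'hit')  - second reference is served from the TLB
```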
Paging: Hardware Support
   Hit ratio: the percentage of times that a page
    number is found in the TLB.
   Effective Access Time (EAT), with 20 ns to search the
    TLB and 100 ns per memory access:
         hit ratio = 0.80: EAT = 0.80 × 120 + 0.20 × 220 = 140 ns
         hit ratio = 0.98: EAT = 0.98 × 120 + 0.02 × 220 = 122 ns
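The EAT figures above follow from one formula: a hit costs one TLB search plus one memory access, a miss costs one TLB search plus two memory accesses.

```python
# Effective access time, using the slide's 20 ns TLB search
# and 100 ns memory access.
def eat(hit_ratio, tlb_ns=20, mem_ns=100):
    hit_time = tlb_ns + mem_ns        # 120 ns: TLB + one memory access
    miss_time = tlb_ns + 2 * mem_ns   # 220 ns: TLB + table + data access
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(eat(0.80))  # ~140 ns
print(eat(0.98))  # ~122 ns
```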
Paging: Memory Protection
   Memory protection is implemented by associating
    protection bits with each frame.
      Read-write or read-only
      Or finer-grained: read-write, read-only,
       execute-only, and so on
   A valid-invalid bit is attached to each entry in the page
    table:
      “valid” indicates that the associated page is in the
       process’ logical address space, and is thus a legal
       page.
      “invalid” indicates that the page is not in the
       process’ logical address space.
Paging: Memory Protection
Valid (v) or invalid (i) bit in a page table
Example: a system with a 14-bit address space (addresses 0–16383)
and 2 KB pages; a process uses only addresses 0–10468
Paging: Shared Pages
   In a time-sharing environment, suppose
      40 users each execute a text editor, and one text
       editor process consists of 150 KB of code and 50 KB of data.
      Without sharing:
         (150 KB + 50 KB) × 40 = 200 KB × 40 = 8000 KB
      With code sharing:
         50 KB × 40 + 150 KB = 2000 KB + 150 KB =
            2150 KB
   Some other heavily used programs can also be shared
    – compilers, window systems, run-time libraries,
    database systems, and so on.
   The shared code must be reentrant (it never changes
    during execution).
Paging: Shared Pages
     Segmentation: User’s View of a Program

   A program has many segments.
      A segment can be a method, procedure, function,
       or one of various data structures.
      Every segment has a name.
      Every segment is of variable length; its length is
       intrinsically defined by the purpose of the segment
       in the program.
      Elements within a segment are identified by their
       offset from the beginning of the segment.
Segmentation: User’s View of a Program
Segmentation

   Segmentation is a memory-management scheme that
    supports this user view of memory.
   A logical address space is a collection of segments.
   Each segment has a name and a length.
   An address specifies both the segment name and the
    offset within the segment.
   In the implementation, the address specifies the
    segment number rather than the name.
Segmentation
   When a program is compiled, the compiler automatically
    constructs segments.
   A C compiler creates separate segments for:
      The code
      Global variables
      The heap, from which memory is allocated
      The stacks used by each thread
      The standard C library
   Libraries linked in during compile time might be
    assigned separate segments.
   The loader takes all these segments and assigns
    them segment numbers.
Segmentation: Hardware
 How are logical addresses <segment-
  number, offset> mapped to physical addresses?
 Each entry of the segment table has a segment
  base and a segment limit:
     The segment base contains the starting physical
      address where the segment resides in memory.
     The segment limit specifies the length of the
      segment.
Segmentation: Hardware
Segmentation: Hardware
   A process has 5 segments, numbered 0 through 4.
   For the logical space, segment table, and physical
    memory layout, see the next slide.
   Address mapping
      Segment 0, byte 1222 => invalid (beyond the segment limit)
      Segment 2, byte 53   => 4300 + 53
      Segment 3, byte 852  => 3200 + 852
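The mapping above can be sketched as a base-plus-limit check. The bases (4300 for segment 2, 3200 for segment 3) come from the example; the limits are assumed values chosen so that byte 1222 of segment 0 traps:

```python
# Segment-table sketch: segment number -> (base, limit).
# Bases follow the example; limits are assumed for illustration.
segment_table = {0: (1400, 1000), 2: (4300, 400), 3: (3200, 1100)}

def translate(segment, offset):
    """Physical address = base + offset; offsets past the limit trap."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise ValueError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(3, 852))   # 3200 + 852 = 4052
# translate(0, 1222) would raise: 1222 exceeds segment 0's limit
```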
Segmentation: Hardware
Homework
 8.3
 8.4

				