                             CMPT 300
              Introduction to Operating Systems


             Memory: Mono and multiprogramming
© Janice Regan, CMPT 300, May 2007
 Memory Hierarchy
 Very fast access, expensive, limited, volatile
   Registers
   Cache
   RAM
   Disk (internal hard disks)
   Backup media (DVD, tape, USB memory, … )
 Relatively inexpensive, slower access,
   persistent, abundant
Memory manager
 Efficiently keeps track of
   The parts of available memory in use
   The parts of available memory not in use
   The particular memory used by each process
 Manages
   Allocation/de-allocation of memory
   Bases its management on a model of memory
    understandable to the programmer
   (address space / memory map)

Simple system:
 Mono-programming with direct
    memory access
     One process in memory (RAM only) at a time
     One process completes, then the next process
      runs
     All programs are loaded beginning at the
      same known address in physical memory


Direct access
 Mono-programming
   When we compile a program, we need to
    refer to addresses in instructions inside the
    compiled file.
   All programs use the same set of addresses
    (the actual addresses of the physical
    memory). This means that only one program
    can run at a time, or address conflicts occur
   A program can access all available memory
    (possible for a process to damage the OS)
Example
 Multiple programs at once
 Addressing is a problem

 [Diagram: two programs, each compiled with absolute addresses 0–7, one containing "Jmp to 7" and the other "Jump to 3". When both are loaded together into one physical memory (addresses 0–F), the absolute jump targets no longer point where the programmers intended.]
Simple system:
 Mono-programming with direct
    memory access
     One process in memory (RAM only) at a time
     One process completes, then the next process
      runs
     All programs are loaded beginning at the
      same known address in physical memory
      (blue on next slide)
Memory use: Direct Access
 [Diagram: memory maps for three system types, addresses increasing downward from address 0. Embedded systems: read-only memory reserved for the OS at the low addresses, RAM above. Older mainframes: the process (code, data, stack/heap) in RAM, plus a region reserved for the OS. Early PCs: read-only memory reserved for device drivers (BIOS …), RAM for the process, and a region reserved for the OS.]
Variations: direct access
 Even with such a simple model of the system
    there are many possible variations
     The OS may occupy the beginning or the end of the
      block of RAM (illustrated in the memory map)
     The process may be one large image, or the image
      may be broken into regions for different purposes,
      data, process instructions, dynamic memory for
      execution of the program
     The OS may be stored in one part of the RAM, or
      may be separated into more than one section, say
      drivers, and the remaining OS
Swapping: the simplest system
 Even in the simple one-partition, direct access
  system it is possible to multi-program if we use
  swapping.
 When we consider programs A and B sharing this
  system, swapping requires that
    the entire contents of memory for A are saved to disk
    the context of program A is saved
    the context of program B is loaded
    the memory image of B (previously saved) is loaded
     into memory
Problems with direct access
 No security: it is possible to access all of memory,
    even the memory holding the OS
     Can corrupt the OS
     Prevents dividing the memory between multiple
      programs
 Cannot run many programs ‘at the same time’.
   Swapping helps but is a slow way to service multiple
    programs
Other approaches
 Any more complicated (realistic) approach
    requires relocation of process images at locations
    other than the beginning of a single partition and
    requires the use of memory abstraction.
        Multiprocessing with fixed partitioning
        Multiprocessing with dynamic partitioning
        Multiprocessing with paging
        Multiprocessing with segmentation
        Multiprocessing with virtual memory and paging
        Multiprocessing with virtual memory and
         segmentation
Memory abstraction
 Create a ‘model of memory’ easy for the
    programmer to understand. It should
     Allow multiple processes to share memory.
     Make sure each process can use only the area of
      memory allocated to it (protect other areas of
      memory from the process)
     Remove the need for using absolute hardware
      addresses
     Make it easier to transparently use different types of
      memory (RAM, ROM, registers, …)
Address Space: memory map
 An address space maps all available
  memory in the system to a single
  contiguous list of addresses
 A memory map shows us which parts of
  address space (which list of memory
  addresses) are reserved for the OS and
  which parts of address space are in use
  by particular processes
Address spaces
 The abstraction of an address space can
  be used to describe physical memory
  directly. This is referred to as a physical
  address space
 We will use the term ‘address space’ to
  refer to a logical address space, an
  abstraction providing a list of addresses
  that transparently access all memory in
  the system (can access more than RAM)
Logical / Physical
 Addresses used and generated by the
  CPU (to interface with the logical memory
  model the programmer is using) are
  logical addresses
 Addresses used in the memory address
  register to actually access a particular
  location in physical memory are physical
  addresses.
Basic Idea: A map of memory
 The user of the system needs to be able to refer
    to parts of available memory to
     Access or save information
     Refer to a particular location within memory
 The memory model (address space) may refer
    transparently to any combination of
     RAM / ROM memory
     Registers
     Extended portions of memory stored on disk

Address space / Memory map
 [Diagram: an address space with address 0 at the top and addresses increasing downward; the region reserved for the OS (read-only memory) may sit at the low addresses or at the high addresses.]
Other approaches
 Any more complicated (realistic) approach
    requires relocation of process images at locations
    other than the beginning of a single partition.
        Multiprocessing with fixed partitioning
             Fixed size partitions (equal or varying sizes)
             Variable size partitions
        Multiprocessing with dynamic partitioning
        Multiprocessing with paging
        Multiprocessing with segmentation
        Multiprocessing with virtual memory and paging
        Multiprocessing with virtual memory and
         segmentation
Memory map: multiple processes
 [Diagram: a memory map with the operating system at the top, followed by partitions holding processes P1, P2, P3, and P4.]
Types of addresses
 Logical address: a reference to a memory
  location in a logical address space
 Relative address: address is expressed as
  a location relative to some point
  (reference address) in physical memory
 Physical address, absolute address in
  physical memory

Address Binding:
Mapping logical to physical
 Compile time: Create absolute addresses that
  refer directly to physical memory. You must
  know where your program will begin in physical
  memory before compiling
 Load time: Compiler generates relocatable code.
  The starting address is determined at load time.
 Execution time: The process can be loaded at
  different memory locations at different times
  during its execution. Needs special hardware
Relocation: at load time
 Programs are compiled with relative addresses (often
  relative to location 0 of physical memory)
 When the program is loaded all addresses in the code
  are incremented by an offset to the start of the memory
  block being used. (Done by the loader at the time the
  program is loaded)
 When the program is executed the addresses in the
  program all refer to the location at which the program is
  running
 Can we relocate again to a different location after
  swapping?
        Not generally, there are problems (next slide)
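The load-time scheme above can be sketched in a few lines. This is a minimal, hypothetical model (the instruction format and helper name are illustrative, not from the lecture) in which each "instruction" carries a relative address and the loader rewrites it exactly once, by adding the offset of the memory block chosen at load time:

```python
# Minimal sketch of load-time relocation (hypothetical instruction format).
# Each instruction is (opcode, relative_address); the loader rewrites the
# address field exactly once, when the program is placed in memory.

def load_program(instructions, base):
    """Return a relocated copy: every relative address gets `base` added."""
    return [(op, addr + base) for op, addr in instructions]

# A program compiled relative to address 0:
program = [("LOAD", 4), ("JMP", 2)]

# The loader places it at physical address 100:
image = load_program(program, base=100)
print(image)  # [('LOAD', 104), ('JMP', 102)]

# Relocating the already-loaded image again (e.g. after a swap to a new
# base) would require knowing which memory words hold addresses --
# information the loader no longer has, which is why load-time
# relocation happens only once.
```

This makes the pointer problem on the next slide concrete: once the program runs and stores computed physical addresses in its own data, the loader can no longer find and fix them.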
Relocation: example / problem
 After the first instruction in the program has been
    executed relocation may break code
        Consider a pointer (one of the variables in the code)
        When it is set by the code it points to a particular location in
         memory. The address in the pointer is the physical address of
         the desired variable.
        When the code is relocated, the start address of the code is
         moved from A to B.
        All the addresses that are part of the instructions are updated by
         the loader to be consistent with the new location of the process
         image in physical memory
        However, when the pointer is used it points to the physical
         location of the variable relative to A, not relative to B and the
         code breaks
 Fix: use dynamic relocation with special hardware (the MMU)
Relocation: at execution time
 Programs are compiled with relative addresses
  (often relative to location 0 of physical memory)
 When the program is loaded can use a base
  register to indicate where the program starts in
  physical memory
 Each relative address encountered in the
  program must be shifted to its physical address
  by adding the base register address before the
  instruction containing the address is executed.
  Addresses are managed by the MMU
Protection
 All parts of a process image should be protected
  from all other processes.
 Can use an additional register, the limit register,
  to help do this.
     The size of the process's address space is placed in the limit register
     Each time an address is calculated (base register
      added to relative address) the relative address is
      compared to the value in the limit register
         If the relative address is > the value in the limit register, the OS
          knows the memory that would be accessed belongs to
          another process, and can prevent the access
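The base/limit check can be modelled in a short sketch. In real hardware this check happens inside the MMU on every memory reference; the function below is only a software illustration of the same logic:

```python
# Simplified software model of MMU base/limit translation.
# base  = physical address where the process image starts
# limit = size of the process's address space

def translate(relative_addr, base, limit):
    """Map a relative address to a physical one, or trap on a violation."""
    if relative_addr >= limit:          # outside this process's space
        raise MemoryError("protection fault: address out of bounds")
    return base + relative_addr         # shift into physical memory

print(translate(10, base=4000, limit=500))   # 4010
# translate(600, base=4000, limit=500) would raise MemoryError
```

Note the check uses >= here because the limit holds the size of the space, so valid relative addresses run from 0 to limit − 1.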
Other approaches
 Now let's consider other approaches, with no swapping
     Multiprocessing with fixed partitioning
       Fixed size partitions (equal or varying sizes)
       Variable size partitions
     Multiprocessing with dynamic partitioning
     Multiprocessing with paging
     Multiprocessing with segmentation
     Multiprocessing with virtual memory and
      paging
     Multiprocessing with virtual memory and
      segmentation
Multiprocessing: fixed partitions
 Partitions of equal size
   Any program whose image size is <= the
    partition size can be loaded into any
    available partition
   Internal fragmentation will occur, since
    not all processes will fill their partitions
   Some processes may be too large for
    the partition size: this requires overlays; the
    programmer breaks the program into units
    that each fit in one partition and alternates
    them in the process's partition
 [Diagram: memory holding the operating system plus equal-size partitions containing P1 through P5.]
Multiprocessing: fixed partitions
 Partitions of different sizes
   Any program whose image size is <= the size of
    a partition can be loaded into that partition
   Now we need to choose how to assign a process to a
    particular partition
   It makes sense to place a process in the smallest
    partition into which it will fit, either
     the smallest partition that is physically large enough, or
     the smallest such partition that is presently available
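The two assignment policies above can be sketched with one hypothetical helper (the function and its parameters are illustrative): given the fixed partition sizes and which are currently free, it picks a partition for a process.

```python
# Sketch: assigning a process to a fixed partition of varying size.
# partitions: list of (size, free) pairs; returns the chosen index, or None.

def pick_partition(partitions, process_size, must_be_free=True):
    """Choose the smallest partition >= process_size.

    must_be_free=True  -> smallest *available* partition that fits
    must_be_free=False -> smallest partition that fits, even if occupied
                          (the process then waits in that partition's queue)
    """
    candidates = [
        (size, i) for i, (size, free) in enumerate(partitions)
        if size >= process_size and (free or not must_be_free)
    ]
    return min(candidates)[1] if candidates else None

parts = [(4, True), (8, False), (16, True)]
print(pick_partition(parts, 6))                      # 2 (the 8-unit one is busy)
print(pick_partition(parts, 6, must_be_free=False))  # 1 (queue for the 8-unit one)
```

The two calls show the trade-off on the following slides: waiting for the best-sized partition minimizes internal fragmentation, while taking any free partition that fits keeps partitions busy.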
Fixed partitions, varying sizes
 [Diagram: a memory map with the operating system at the top, followed by partitions of varying sizes holding P1, P2, P3, and P4.]
Fixed partitions: variable sizes
 Each process is placed in a queue for the
     smallest partition that it fits into
         FIFO queue for each size of process
 Advantages:
   Minimum internal fragmentation, optimal use of
    memory in each partition
   Always know which partition will be used by a
    particular process, making relocation easier
   Know, from the size of the process, which partition's offset
    should be used (with 1 partition of each size), so the
    compiler can produce absolute addresses or the OS
    (loader) can take care of relative addressing
Fixed partitions: variable sizes
 Each process is placed in a queue for the
     smallest partition that it fits into
      FIFO queue for each size of process
 Disadvantages:
   If there are no jobs in a particular size range,
    a partition is left idle.
      Jobs smaller than the ideal size range for the partition
       could execute and use the idle slot, improving the
       system's efficiency
   A job can only swap in or out of one partition.
   Partitions may be idle while jobs are waiting
Fixed partitions varying size: 1 FIFO
 [Diagram: a single FIFO queue of processes feeding a memory map with the operating system at the top and partitions of varying sizes holding P1 through P4.]
Fixed partitions varying size: 1 FIFO
  All processes put in a single FIFO queue
  When a process is to be loaded it is
     loaded into the smallest available partition
     that will hold it
      OS (loader) must be able to take care of
          relative addressing once a partition has been
          chosen (cannot swap to another partition later)
  Advantage:
    no idle partitions when small jobs are waiting
Fixed partitions varying size: 1 FIFO
  All processes are placed in a single FIFO queue
  When a process is to be loaded it is loaded into
   the smallest available partition that will hold it
  Disadvantages:
    Increased internal fragmentation
    Large jobs may need to wait for small jobs
      using the large partition to finish
    Compiler cannot produce absolute addresses
    Load time relocation, can only swap into or
      out of a single partition (even if >1 with that size)
Multiprocessing: dynamic partitions
 All processes are placed in a single FIFO queue
 When a process is to be loaded it is loaded into a
  portion of memory of exactly the right size to hold it
  (minimal internal fragmentation)
 The portion of memory may start at any location in
  memory
 Leads to external fragmentation. Can compact, but this
  is slow and not generally done
 Must know how much memory is needed so the ‘right’
  part of memory can be chosen
 How do we choose the right part of memory? Need a
  placement algorithm
Multiprocessing: dynamic partitions
 [Diagram: successive memory snapshots. The OS occupies the top of memory; P1, P2, and P3 are loaded one after another into dynamically sized partitions; later P1 finishes and P4 is placed in part of the freed space, leaving a free fragment.]
Memory management of
dynamic partitions
 As with disk storage, we can keep track
    of free and used partitions (blocks)
    using a bitmap or a linked list
 Same advantages and disadvantages
Dynamic partitions:
 Over time, fragmentation of free space will
    occur
     Can compact: expensive
     Can use linked lists holding free-partition
         information.
             Sort links in the list with respect to address (in
              address space)
             Using a doubly linked list allows for easier
              aggregation of adjacent free partitions (works
              better for the buddy system), which will be in
              adjacent links
dynamic partitions: placement
 Best fit: choose from among the available blocks of
  memory the smallest one the new process will fit into
   worst performer, must search entire memory space
      each time
   Leaves smallest free blocks between processes
 First fit: scan memory from address 0 and find the first
  block of memory big enough to hold the new process
   usually best and fastest
   Leads to less frequent allocation at end of memory
      space (a chance for larger blocks to become
      available there)
dynamic partitions: placement
 Next fit: begin to scan memory from the location
    of the last placement and find the first
    block of memory big enough to hold the new
    process
        Not as good as first fit; breaks up potential large
         blocks at the end of the memory space
 Quick fit: (works particularly well with the buddy
    system)
     Keep separate lists of available partitions in different
      size ranges.
     To find a partition, go to the list for the next-highest
      partition size and take the first partition on it
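The first-fit and best-fit strategies above can be sketched over a simple free list. This is a hypothetical model in which free memory is a list of (start, size) holes, assumed sorted by start address:

```python
# Sketch of placement algorithms over a free list of (start, size) holes,
# sorted by start address.

def first_fit(holes, size):
    """First hole, scanning from address 0, big enough for the request."""
    for start, hole in holes:
        if hole >= size:
            return start
    return None

def best_fit(holes, size):
    """Smallest hole that fits; must search the whole list every time."""
    fits = [(hole, start) for start, hole in holes if hole >= size]
    return min(fits)[1] if fits else None

holes = [(0, 5), (20, 12), (50, 8)]
print(first_fit(holes, 8))  # 20 -- first hole >= 8
print(best_fit(holes, 8))   # 50 -- the tightest fit (8 into 8)
```

The example shows why best fit performs worst in practice: it scans every hole and deliberately leaves the smallest possible slivers of free memory behind.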
dynamic partitions: ‘buddy’ system
 Memory blocks are available in sizes 2^N, from 2^L
  to 2^U
 When all the memory is available it forms a single
  partition of size 2^U. Only one partition in the list
 If the process fits in half of the memory, break the memory
  into two partitions
     Put both halves in the list of partitions
     Consider the first of the pair.
     Repeat this step until a partition of the correct size is produced
 When you have an area of the correct size, put your
    process into the first of the pair of partitions
 [Diagram: P1 occupying the first partition of the final split.]
dynamic partitions: ‘buddy’ system
  Subsequently, find the first block in the list of partitions
   >= the size needed
     Repeat the previous two steps
  [Diagram: memory split into buddy partitions holding P1, P3, P2, and P4, with free partitions between them.]
  When multiple processes finish and leave a possible
   larger block free, aggregate pairs into single blocks
     Process P3 finishes; the empty partition between P3 and P2 is
      added to the freed partition to create one twice as large
     This recreates a partition that was previously split
     The larger partition replaces the one previously in the list
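The split-and-merge behaviour described on these two slides can be sketched as a small allocator. This is a simplified sketch, assuming power-of-two request sizes and free lists kept as sets of block addresses per order (class and method names are illustrative):

```python
# Sketch of a buddy allocator: memory of size 2**u units, blocks split in
# halves until one matches the (power-of-two) request; on free, a block
# merges with its buddy whenever that buddy is also free.

class Buddy:
    def __init__(self, u, l=0):
        self.l, self.u = l, u                      # block orders l .. u
        self.free = {k: set() for k in range(l, u + 1)}
        self.free[u].add(0)                        # whole memory is one free block

    def alloc(self, size):
        k = max(self.l, (size - 1).bit_length())   # order: smallest 2**k >= size
        j = k
        while j <= self.u and not self.free[j]:    # find the first big-enough block
            j += 1
        if j > self.u:
            return None                            # no block large enough
        addr = min(self.free[j])
        self.free[j].remove(addr)
        while j > k:                               # split down to the needed order
            j -= 1
            self.free[j].add(addr + 2 ** j)        # second half becomes free
        return addr                                # process goes in the first half

    def free_block(self, addr, size):
        k = max(self.l, (size - 1).bit_length())
        while k < self.u:
            buddy = addr ^ (2 ** k)                # buddy address differs in bit k
            if buddy not in self.free[k]:
                break                              # buddy busy: cannot merge
            self.free[k].remove(buddy)             # merge the pair
            addr = min(addr, buddy)
            k += 1
        self.free[k].add(addr)

b = Buddy(4)           # 16 units of memory
print(b.alloc(4))      # 0 -- 16 split into 8+8, then 8 into 4+4
print(b.alloc(4))      # 4 -- the buddy of the first block
```

The XOR trick in free_block is what makes aggregation cheap: a block of size 2^k and its buddy differ only in bit k of their addresses, so the buddy can be located without searching the list.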
Memory management
Contiguous allocation
 Fixed partitions
   More internal fragmentation
   Large processes must be built with overlays
 Dynamic partitions
   External fragmentation
   More complicated OS
             To minimize external fragmentation
             To deal with dynamic relocation (can also be
              resolved with additional hardware, MMU)
How many processes?
 CPU utilization will increase with more
  processes
 How many processes are needed before we can
  assume most CPU time will be used?
 Simplest approximation: if each of n processes is
  blocked (e.g. waiting for I/O) a fraction p of the time,

       CPU utilization = 1 − p^n

 For a better approximation use queuing theory
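The approximation above is easy to evaluate: the CPU is idle only when all n processes happen to be blocked at once, which (treating the processes as independent) has probability p^n.

```python
# CPU utilization under the simple multiprogramming model:
# each of n processes is blocked a fraction p of the time, independently,
# so the CPU is idle only when all n are blocked at once (probability p**n).

def cpu_utilization(p, n):
    return 1 - p ** n

# With 80% I/O wait, adding processes helps a lot:
for n in (1, 2, 4, 8):
    print(n, round(cpu_utilization(0.8, n), 3))
# 1 0.2
# 2 0.36
# 4 0.59
# 8 0.832
```

Even for heavily I/O-bound processes, a handful of them in memory is enough to keep the CPU mostly busy, which is the motivation for multiprogramming in the first place.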
Swapping
 No matter how many processes we have in memory at
  one time (within reason) there will be times when all
  processes are blocked
 When all processes are blocked the CPU is idle. This is
  inefficient.
 To avoid this inefficiency we can introduce swapping.
        When all processes are blocked move one or more out of
         memory and move a process ready to execute into the memory
         that is made available
 Later the process swapped out will be reloaded into
    memory
        Need to be able to dynamically relocate the process
        Next time the process is loaded into memory it may not be
         loaded at the same location
dynamic partitions: swapping
 [Diagram: successive memory snapshots showing processes P1–P5 swapped in and out of dynamic partitions below the OS; a swapped-out process (P3) is later reloaded at a different location in memory.]
Maintenance of free memory
 Bitmap: one bit for each unit of memory
   Fast lookup for fixed partitions (can use 1 bit per
     partition)
   Expensive for dynamic partitions (use one bit per
     word)
      Large table, expensive to search, difficult to search for large
       blocks of words
 Linked list: one link for each partition (each
    contiguous group of words) that is free
     Link contains location and length
     More links as memory becomes fragmented
     How to aggregate links when they contain adjacent
      partitions? Use a doubly linked list
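The cost of the bitmap approach for dynamic partitions comes from the search: finding a free block of k units means scanning for a run of k consecutive 0 bits. A minimal sketch (the bitmap is stored here as a plain list of bits for clarity; real implementations pack bits into words):

```python
# Sketch: tracking free memory with a bitmap, one bit per allocation unit.
# 0 = free, 1 = in use.  Finding room for a process means scanning for a
# run of n consecutive free bits -- the expensive part of this scheme.

def find_free_run(bitmap, n):
    """Return the index of the first run of n free units, or None."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n:
                return run_start
        else:
            run_len = 0
    return None

bitmap = [1, 1, 0, 0, 1, 0, 0, 0, 1]
print(find_free_run(bitmap, 2))  # 2
print(find_free_run(bitmap, 3))  # 5
print(find_free_run(bitmap, 4))  # None
```

A linked list of (location, length) holes avoids this scan for large requests, at the cost of more bookkeeping as memory fragments.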
Choosing partition size
 Static partitions
   Must be chosen when the system is configured
   Based on user statistics for similar systems
 Dynamic partitions
   Size of the memory image of the process
      But what about dynamically allocated variables?
          Leave space in each process's image for dynamic
           variables
          Have a shared heap area for dynamic allocation by
           running processes (is the heap swapped?)
Growing data segments
 [Diagram: memory map with the operating system followed by processes P1–P4; each process image contains a code segment and a data segment holding all local variables plus dynamically allocated variables.]
Partition size
 What about dynamically allocated
    variables?
     Leave space in each process's image for
         dynamic variables
             What happens if you need more than the
              allocated heap space?
             Waste space if you do not use your dynamic
              allocation
Growing data segments
 [Diagram: memory map with the operating system, processes P1–P3 (each with a code segment and stack), and a shared heap area below the processes.]

				