					Chapter 11:
I/O management and disk scheduling

        CS 472 Operating Systems
   Indiana University – Purdue University
                Fort Wayne


I/O management and disk scheduling
 We have already discussed I/O techniques
   Programmed I/O
   Interrupt-driven I/O
   Direct memory access (DMA)
 Also discussed the value of logical I/O, where the
 OS hides most of the details of device I/O in
 system service routines
   Processes see logical devices in general terms such
   as read, write, open, close, lock, unlock

Other issues . . .
 I/O buffering
 Physical disk organization
 Need for efficient disk access
 Disk scheduling policies
 RAID




I/O buffering
 Can be block-oriented or stream-oriented
 Block-oriented buffering
   Information is stored in fixed-size blocks
   Transfers are made a block at a time
   Used for disks and tapes




I/O buffering
 Stream-oriented
   Transfer information as a stream of bytes
   Used for . . .
        Terminals
        Printers
        Communication ports
        Mouse and other pointing devices
        Most other devices that are not secondary storage
   Line-at-a-time I/O
        Typical for user input from scroll-mode terminals (DOS command prompt)
           • Carriage return signals the end of line
        Output to the terminal or printer is one line at a time
   Byte-at-a-time I/O
        Used for input from forms-mode terminals (each keystroke significant)
        Example – editing a Word document

I/O buffering
I/O is a problem under paged virtual memory
   The target page of any I/O operation must be present in a
   page frame until the transfer is complete
   Otherwise, there can be single-process deadlock
Example of single-process deadlock
   Suppose process P is blocked waiting for I/O event to
   complete
   Then suppose the target page for the I/O is swapped out
   to disk
   The I/O operation is subsequently blocked waiting for the
   target page to be swapped in
   This won’t happen until P runs and causes a page fault

I/O buffering
 Resulting OS complications
    The target page of any I/O operation must be locked in memory
    until the transfer is complete
    But then, a process with pending I/O on any page may not be
    suspended
 Solution: Do I/O through a system I/O buffer in main
 memory assigned to the I/O request
    System buffer is locked in memory frame
    Input transfer is made to the buffer
    Block moved to user space when needed
 This decouples the I/O transfer from the address space of
 the application

I/O buffering for throughput
 With I/O buffers, an application can process the
 data from one I/O request while awaiting another
 Time needed to process a block of data . . .
     without buffering = C + T
     with buffering = M + max{ C, T }
 where:
    C = computation time
    T = I/O memory/disk transfer time
    M = memory/memory transfer time (buffer to user)
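The timing model above can be checked with a few lines of arithmetic. This is a minimal sketch; the values chosen for C, T, and M are illustrative assumptions, not figures from the slides.

```python
# Per-block processing time model from the slide (all times in ms).
def time_without_buffering(C, T):
    return C + T          # compute and transfer happen one after the other

def time_with_buffering(C, T, M):
    return M + max(C, T)  # the next transfer overlaps with computation

# Illustrative values: C = compute, T = disk transfer, M = buffer-to-user copy.
C, T, M = 4.0, 10.0, 0.5
print(time_without_buffering(C, T))   # 14.0
print(time_with_buffering(C, T, M))   # 10.5
```

As long as M is small, buffering hides the shorter of C and T behind the longer one.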


Double buffering
 Uses two system buffers instead of one
 A process can transfer data to or from one buffer
 while the operating system empties or fills the
 other buffer




Circular buffering
 More than two buffers are used
   A list of system buffers is arranged in a circle
 Appropriate in applications where there are bursts
 of I/O requests




Physical disk organization

[Figure: disk platter surface, showing sectors, a track, and the read-write head]
Physical disk organization

[Figure: disk arm (boom) and a cylinder, the set of tracks at one arm position across all platters]
Physical disk organization
 To read or write, the disk head must be positioned
 on the desired track and at the beginning of the
 desired sector
 Seek time is the time it takes to position the head
 on the desired track
 Rotational delay or rotational latency is the
 additional time it takes for the beginning of the
 sector to reach the head once the head is in
 position
 Transfer time is the time for the sector to pass
 under the head
Physical disk organization
 Access time
 = seek time + rotational latency + transfer time
 Efficiency of a sequence of disk accesses
 strongly depends on the order of the requests
 Adjacent requests on the same track avoid
 additional seek and rotational latency times
 Loading a file as a unit is efficient when the file
 has been stored on consecutive sectors on the
 same cylinder of the disk

Example:
Two single-sector disk requests
 Assume
   average seek time = 10 ms
   average rotational latency = 3 ms
   transfer time for 1 sector = 0.01875 ms
 Adjacent sectors on same track
   access time = 10 + 3 + 2*(0.01875) ms = 13.0375 ms
 Random sectors
   access time = 2*(10 + 3 + 0.01875) ms = 26.0375 ms
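The two cases reduce to simple arithmetic over the slide's assumed times. The sketch below just reproduces that calculation:

```python
# The slide's two-request example (all times in ms).
seek, latency, transfer = 10.0, 3.0, 0.01875

# Adjacent sectors on the same track: one seek, one rotational
# latency, then both sectors pass under the head back to back.
adjacent = seek + latency + 2 * transfer

# Random sectors: each request pays full seek + latency + transfer.
random_access = 2 * (seek + latency + transfer)

print(adjacent)       # 13.0375
print(random_access)  # 26.0375
```

Nearly all of the doubled cost in the random case is the second seek and rotational latency; the transfer time itself is negligible.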

Disk scheduling policies
 Each policy assumes
   a queue of waiting disk requests exists
   disk requests enter the queue in random order
 Policies we will consider are:
     random             FIFO
     PRI                SSTF
     SCAN               C-SCAN
     N-Step-SCAN        FSCAN


Disk scheduling policies
 Random – Just a benchmark for comparison

 FIFO
   Next disk request has been in queue the longest
   Same as random if disk requests arrive in random
    order (often true when many processes issue requests)
   Fair to all processes


Disk scheduling policies
 PRI
   PRIority given to requests, based on process class
          Interactive
          Batch
          Etc.
   Scheduling largely outside of disk management control
          Goal is not to optimize disk use but to meet other objectives
   Short batch jobs may have higher priority
          This provides good interactive response time


Disk scheduling policies
 SSTF (Shortest Service Time First)
   From requests currently in the queue, choose the
   request that minimizes movement of the arm
       Read/write head movement
   Always chooses minimum seek time
   Resolves ties in a fair manner
       Both inward and outward
   Doesn’t guarantee minimum total arm movement
   Starvation possible
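The SSTF selection rule can be sketched in a few lines: repeatedly serve whichever pending request is closest to the current head position. The starting track and request list are illustrative assumptions, not from the slides.

```python
# Minimal SSTF sketch: always pick the pending request with the
# smallest seek distance from the current head position.
def sstf(head, requests):
    pending, order = list(requests), []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head))  # shortest seek
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

print(sstf(100, [55, 58, 39, 18, 90, 160, 150, 38, 184]))
# [90, 58, 55, 39, 38, 18, 150, 160, 184]
```

Note how the head drifts greedily toward nearby clusters: a request far from the current cluster (here, track 18's neighbors) is served late, and under a steady arrival stream it could starve.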

Disk scheduling policies
 SCAN
   Arm moves in one direction only until it reaches the
   last request in that direction
   Then the arm reverses and repeats
   Avoids starvation
 C-SCAN
   Like SCAN, but in one direction only
   Then returns arm to the opposite side and repeats
   Reduces maximum wait in the queue near the disk
   edge
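The service orders produced by SCAN and C-SCAN can be sketched as below. Following the slide's wording, the arm reverses (or jumps back) at the last pending request rather than the physical disk edge, and the initial sweep direction is assumed to be toward higher track numbers; the request list is illustrative.

```python
# SCAN: sweep toward higher tracks, then reverse and sweep back.
def scan(head, requests):
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)
    return up + down

# C-SCAN: sweep toward higher tracks only, then return to the far
# side and sweep in the same direction again.
def c_scan(head, requests):
    up = sorted(t for t in requests if t >= head)
    low = sorted(t for t in requests if t < head)
    return up + low

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print(scan(100, reqs))    # [150, 160, 184, 90, 58, 55, 39, 38, 18]
print(c_scan(100, reqs))  # [150, 160, 184, 18, 38, 39, 55, 58, 90]
```

The difference shows up at the low end: under SCAN, track 18 is served last after a long inward sweep, while C-SCAN reaches it right after the return jump, which is what evens out waiting times near the edges.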

Disk scheduling policies
 N-step-SCAN
   Divide queue into N-request segments
   Use SCAN on each
   New requests are added to the rear of the queue to
   form a new N-request segment
   Reduces maximum waiting time in a high-volume
   situation
   Causes head to move more frequently from one
   cylinder to the next

Disk scheduling policies
 FSCAN
   Like N-step-SCAN but with two queues
   One queue fills while the other is processed using
   SCAN




Disk Scheduling Algorithms

[Table: summary comparison of the disk scheduling algorithms]
RAID
 Redundant Array of Independent Disks
 OS views N physical disk drives as a single
 logical drive
 Data are distributed across the physical drives of
 an array
 Redundant disk capacity can be used to store
 error correction information



RAID Level 0 (non-redundant)
 Multiple parallel independent disks
 Increased block transfer rate for consecutive strips




  strip = one unit of data
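The round-robin striping of RAID 0 can be sketched as a mapping from a logical strip number to a (disk, strip-on-disk) pair. The array size is an assumption for illustration.

```python
# RAID 0 sketch: stripe logical strips round-robin across n_disks.
def raid0_map(logical_strip, n_disks):
    return logical_strip % n_disks, logical_strip // n_disks

# With 4 disks, consecutive strips land on different disks, so a
# large sequential transfer can be served by all disks in parallel.
print([raid0_map(s, 4) for s in range(8)])
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```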
RAID Level 1 (mirrored)
 Two copies of Level 0 disks (mirror each other)
 A single read request is served by the disk with the
 minimum access time
 A write is done in parallel to both disks
   Write access time is maximum of both write times
 Data redundancy but twice the cost
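The read/write timing rules above reduce to a min and a max over the two copies. A small sketch, with made-up access times in ms:

```python
# RAID 1 sketch: a read is served by whichever copy is faster;
# a write runs in parallel but finishes with the slower copy.
def mirrored_read(t_copy_a, t_copy_b):
    return min(t_copy_a, t_copy_b)

def mirrored_write(t_copy_a, t_copy_b):
    return max(t_copy_a, t_copy_b)

print(mirrored_read(12.0, 9.5))    # 9.5
print(mirrored_write(12.0, 9.5))   # 12.0
```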




RAID
 Levels 2 through 6 exist
 Read if interested: pp. 520-523



