ISBN 9781439079201 (PPT, Chapter 12)

Understanding Operating Systems
Sixth Edition

Chapter 12
System Management
                 Learning Objectives

    After completing this chapter, you should be able to
    describe:
•   The tradeoffs to be considered when attempting to
    improve overall system performance
•   The roles of system measurement tools such as
    positive and negative feedback loops
•   Two system monitoring techniques
•   The fundamentals of patch management
•   The importance of sound accounting practices by
    system administrators
Understanding Operating Systems, Sixth Edition             2
        Evaluating an Operating System
• Most OSs were designed to work with a certain
  piece of hardware, a category of processors, or
  specific groups of users.
• Although most evolved over time to operate multiple
  systems, most still favor some users and some
  computing environments over others.
• To evaluate an OS, you need to know:
    –   Its design goals and history;
    –   How it communicates with its users;
    –   How its resources are managed;
    –   What tradeoffs were made to achieve its goals.

      Evaluating an Operating System
• An operating system’s strengths and weaknesses
  need to be weighed in relation to:
    – Who will be using the operating system;
    – On what hardware;
    – For what purpose.




        Cooperation Among Components
• The performance of any one resource depends on
  the performance of the other resources in the
  system.
• If you managed an organization’s computer system
  and were allocated money to upgrade it, where
  would you put the investment to best use?
    –   A faster CPU
    –   Additional processors
    –   More disk drives
    –   A RAID system
    –   New file management software
• Or, if you bought a new system, what characteristics
  would you look for that would make it more efficient
  than the old one?
        Cooperation Among Components
                    (cont’d)
• Any system improvement can be made only after
  extensive analysis of:
    –   The needs of the system’s resources;
    –   The system’s requirements;
    –   The needs of its managers and users.
• Whenever changes are made to a system, often
  you’re trading one set of problems for another.
• The key is to consider the performance of the entire
  system and not just the individual components.



         Role of Memory Management
• Memory management schemes were discussed in
  Chapters 2 and 3.
• If you increase memory or change to another
  memory allocation scheme, you must consider the
  actual operating environment in which the system
  will reside.
• There’s a trade-off between memory use and CPU
  overhead.
• As the memory algorithms grow more complex, the
  CPU overhead increases and overall performance
  can suffer.
• Some OSs perform remarkably better with additional
  memory.
       Role of Processor Management
• Processor management was covered in Chapters
  4, 5, and 6.
• Let’s say you decide to implement a
  multiprogramming system to increase your
  processor’s utilization.
    – You’d have to remember that multiprogramming
      requires a great deal of synchronization between:
        • The Memory Manager;
        • The Processor Manager;
        • The I/O devices.




       Role of Processor Management
                  (cont'd.)
   – The tradeoff:
       • Better use of the CPU versus increased overhead;
       • Slower response time;
       • Decreased throughput.
• Problems to watch for:
   – A system could reach a saturation point if the CPU is
     fully utilized but is allowed to accept additional jobs.
       • This would result in higher overhead and less time to
         run programs.




       Role of Processor Management
                  (cont'd.)
• Problems to watch for:
   – Under heavy loads, the CPU time required to manage
     I/O queues (which under normal circumstances don’t
     require a great deal of time) could dramatically
     increase the time required to run jobs.
   – With long queues forming at the channels, control
     units, and I/O devices, the CPU could be idle waiting
     for processes to finish their I/O.
• Likewise, increasing the number of processors
  necessarily increases the overhead required to
  manage multiple jobs among multiple processors.
• The payoff can be faster turnaround time.
          Role of Device Management
• Device management, covered in Chapter 7, offers
  several ways to improve I/O device utilization,
  including:
    – Blocking, buffering, and rescheduling I/O requests to
      optimize access time.
• Tradeoffs
    – Each of these options also increases CPU overhead
      and uses additional memory space.
• Blocking
    – Reduces the number of physical I/O requests (good).
    – But it’s the CPU’s responsibility to block and later
      deblock the records, and that’s overhead (bad).
  Role of Device Management (cont'd.)
• Buffering
    – Helps the CPU match slower I/O device speeds and
      vice versa, but it requires memory space for the
      buffers, either dedicated space or a temporarily
      allocated section of main memory
        • This reduces the level of processing that can take
          place.
    – Tradeoff
        • Reduced multiprogramming versus better use of I/O
          devices.




  Role of Device Management (cont'd.)
• Rescheduling requests
    – A technique that can help optimize I/O times;
    – It’s a queue reordering technique.
    – It’s also an overhead function so the speed of both
      the CPU and the I/O device must be weighed against
      the time it would take to execute the reordering
      algorithm.




  Role of Device Management (cont'd.)
• Let’s assume that a system consisting of CPU1 and
  Disk Drive A has to access Track 1, Track 9, Track
  1, and then Track 9 and the arm is already located
  at Track 1.
• Without reordering, Drive A requires approximately
  35 ms for each access:

   35 + 35 + 35 = 105 ms (Figure 12.2)




  Role of Device Management (cont'd.)

• Example: without reordering
    – CPU 1 and disk drive A
        • Access track 1, track 9, track 1, track 9
        • Arm already located at track 1




  Role of Device Management (cont'd.)
• After reordering (which requires 30 ms), the arm can
  perform both accesses on Track 1 before traveling,
  in 35 ms, to Track 9 for the other two accesses,
  resulting in a speed nearly twice as fast:

   30 + 35 = 65 ms (Figure 12.3)




  Role of Device Management (cont'd.)

• Example: after reordering
    – Arm performs both accesses on Track 1 before
      traveling to Track 9 (35 ms)




  Role of Device Management (cont'd.)
• However, when the same situation is faced by CPU
  1 and the much faster Disk Drive C, we find the disk
  will again begin at Track 1 and make all four
  accesses in 15 ms (5 + 5 + 5), but when it stops to
  reorder these accesses (which requires 30 ms), it
  takes 35 ms (30 + 5) to complete the task.
• Therefore, reordering requests is not always
  warranted.
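The two drive comparisons above can be checked with a short sketch (our own illustration, not code from the text), modeling each drive by a single track-to-track travel time and a flat 30 ms reordering cost:

```python
# Hypothetical model of the slides' seek-time arithmetic: the arm starts at
# track 1, each track-to-track move costs a fixed travel time, and reordering
# the request queue costs a fixed 30 ms up front.

def access_time(tracks, travel_ms, start=1):
    """Total travel time to visit the tracks in the given order."""
    total, pos = 0, start
    for t in tracks:
        if t != pos:
            total += travel_ms
            pos = t
    return total

requests = [1, 9, 1, 9]          # arrival order; arm already at track 1
reordered = sorted(requests)     # [1, 1, 9, 9] -- both track-1 accesses first

# Drive A: 35 ms per track-to-track move
print(access_time(requests, 35))         # 105 ms without reordering
print(30 + access_time(reordered, 35))   # 65 ms with the 30 ms reorder cost

# Drive C: 5 ms per move -- reordering (30 + 5 = 35 ms) loses to 15 ms
print(access_time(requests, 5))          # 15 ms without reordering
print(30 + access_time(reordered, 5))    # 35 ms with reordering
```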




  Role of Device Management (cont'd.)
• Remember that when the system is configured, the
  reordering algorithm is either always on or always
  off.
• It can’t be changed by the systems operator without
  reconfiguration, so the initial setting, on or off, must
  be determined by evaluating the system based on
  average system performance.




             Role of File Management
• The discussion of file management in Chapter 8
  looked at how secondary storage allocation
  schemes help the user organize and access the files
  on the system.
• Almost every factor discussed in that chapter can
  affect overall system performance.
• File organization is an important consideration:
    – If a file is stored noncontiguously and has several
      sections residing in widely separated cylinders of a
      disk pack, sequentially accessing all of its records
      could be a time-consuming task.



             Role of File Management
    – Such a case would suggest that the files should be
      compacted (defragmented) so each section of the file
      resides near the others.
        • Recompaction takes CPU time and makes the files
          unavailable to users while it’s being done.
• Another file management issue that could affect
  retrieval time is the location of a volume’s directory.
    – Some systems read the directory into main memory
      and hold it there until the user terminates the session.




             Role of File Management
    – Looking at Figure 12.1:
        • The first retrieval would take 35 ms when the system
          retrieves the directory for Drive A and loads it into
          memory.
        • Every subsequent access would be performed at the
          CPU’s much faster speed without the need to access
          the disk.
    – This poses a problem if the system crashes before
      any modifications have been recorded permanently in
      secondary storage.
    – Similarly, the location of a volume’s directory on the
      disk might make a significant difference in the time it
      takes to access it.


             Role of File Management
        • If the directories are stored on the outermost track, then
          the disk drive arm has to travel farther to access each
          file than it would if the directories were kept in the
          center tracks.
• File management is closely related to the device on
  which the files are stored.
• Different schemes offer different flexibility, but the
  trade-off for increased file flexibility is increased
  CPU overhead.




     Role of File Management (cont'd.)

• File management is related to the device where the
  files are stored




         Role of Network Management
• The discussion of network management in Chapters
  9 and 10 examined the impact of adding networking
  capability to the OS and the overall effect on
  system performance.
• The Network Manager:
    – Routinely synchronizes the load among remote
      processors;
    – Determines message priorities;
    – Tries to select the most efficient communication paths
      over multiple data communication lines.



         Role of Network Management
        • When an application program requires data from a disk
          drive at a different location, the Network Manager
          attempts to provide this service seamlessly.
        • When networked devices (printers, plotters, disk drives)
          are required, the Network Manager has the
          responsibility of allocating and deallocating the required
          resources correctly.
• In addition, the Network Manager allows a network
  administrator to monitor the use of individual
  computers and shared hardware, and ensure
  compliance with software license agreements.



         Role of Network Management
• The Network Manager also simplifies the process of
  updating data files and programs on networked
  computers by coordinating changes through a
  communications server instead of making the
  changes on each individual computer.




       Measuring System Performance
• Total system performance can be defined as the
  efficiency with which a computer system meets its
  goals – how well it serves its users.
• System efficiency is not easily measured because
  it’s affected by three major components:
        • User programs
        • Operating system programs
        • Hardware
• In addition, system performance can be very
  subjective and difficult to quantify.
    – How can anyone objectively gauge ease of use?



                 Measurement Tools
• Throughput
    – A composite measure that indicates the productivity
      of the system as a whole.
    – Usually measured under steady-state conditions and
      reflects quantities such as:
        • The number of jobs processed per day;
        • The number of online transactions handled per hour.
    – Can also be a measure of the volume of work
      handled by one unit of the computer system.
        • Isolating a single unit is useful when analysts are
          looking for bottlenecks in the system.


                 Measurement Tools
• Capacity
    – Bottlenecks tend to develop when resources reach
      their capacity (maximum throughput level).
        • Thrashing is a result of a saturated disk.
    – Bottlenecks also occur when main memory has been
      overcommitted and the level of multiprogramming has
      reached a peak point.
        • The working sets for the active jobs can’t be kept in
          main memory, so the Memory Manager is continuously
          swapping pages between main memory and secondary
          storage.



                 Measurement Tools
• Capacity (cont’d)
    – Throughput and capacity can be monitored by either
      hardware or software.
        • Bottlenecks can be detected by measuring the queues
          forming at each resource.
        • When a queue starts to grow rapidly, this is an
          indication that the arrival rate is greater than, or close
          to, the service rate and the resource is saturated
          (a feedback loop).
        • Once a bottleneck is detected, the appropriate action
          can be taken to resolve the problem.
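The queue-growth rule of thumb above can be expressed as a tiny utilization check; this is our own illustration, and the 0.9 threshold is an arbitrary value chosen for the sketch, not one from the text:

```python
# Hypothetical saturation check: a resource is flagged when its arrival rate
# approaches or exceeds its service rate (utilization rho near or above 1.0),
# which is exactly when its queue starts to grow rapidly.

def is_saturated(arrival_rate, service_rate, threshold=0.9):
    """Return True when utilization (rho) nears or passes capacity."""
    rho = arrival_rate / service_rate
    return rho >= threshold

print(is_saturated(arrival_rate=8, service_rate=10))   # False: rho = 0.8
print(is_saturated(arrival_rate=12, service_rate=10))  # True: queue grows
```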



                 Measurement Tools
• Response time (Online Interactive Users)
    – An important measure of system performance.
    – The interval required to process a user’s request:
        • From when the user presses the key to send the
          message until the system indicates receipt of the
          message.
• Turnaround time (Batch Jobs)
    – The time from the submission of a job until its output
      is returned to the user.
    – Whether in an online or batch context, this measure
      depends on both the workload being handled by the
      system at the time of the request and the type of job
      or request being submitted.
            Measurement Tools (cont'd.)
• Resource utilization
    – A measure of how much each unit is contributing to
      the overall operation.
    – Usually given as a percentage of time that a resource
      is actually in use.
        •   CPU busy 60 percent of the time
        •   The line printer busy 90 percent of the time
        •   Terminal usage?
        •   Seek mechanism on a disk?
    – This data helps determine whether there is balance
      among the units of a system or whether a system is
      I/O-bound or CPU-bound.
          Measurement Tools (cont'd.)
• Availability
    – Indicates the likelihood that a resource will be ready
      when a user needs it.
        • For online users, it may mean the probability that a port
          is free or a terminal is available when they attempt to
          log on.
        • For those already on the system, it may mean the
          probability that one or several specific resources will be
          ready when their programs make requests.
    – A unit will be operational and not out of service when
      a user needs it.



          Measurement Tools (cont'd.)
• Availability (cont’d)
    – Influenced by two factors:
    – Mean time between failures (MTBF)
        • The average time that a unit is operational before it
          breaks down.
    – Mean time to repair (MTTR)
        • The average time needed to fix a failed unit and put it
          back in service.




          Measurement Tools (cont'd.)
• If you buy a terminal with an MTBF of 4,000 hours (a
  number given by the manufacturer), and you plan to
  use it for 4 hours a day for 20 days a month (or 80
  hours per month), then you would expect it to fail
  about every 50 months (4,000 / 80).
• Assuming the MTTR is 2 hours:
               Availability (A) = MTBF / (MTBF + MTTR)

               Availability = 4,000 / (4,000 + 2) = 0.9995
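The slide's numbers can be verified with a few lines of Python (a sketch of the arithmetic only, using the example's values):

```python
# Reproduce the slide's availability example: a terminal with a 4,000-hour
# MTBF, used 80 hours per month, with a 2-hour mean time to repair.

MTBF = 4000                 # hours between failures (from the manufacturer)
MTTR = 2                    # hours to repair a failed unit
hours_per_month = 4 * 20    # 4 hours a day for 20 days

months_between_failures = MTBF / hours_per_month
availability = MTBF / (MTBF + MTTR)

print(months_between_failures)    # 50.0 months between expected failures
print(round(availability, 4))     # 0.9995
```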

          Measurement Tools (cont'd.)

• Reliability
    – Similar to availability.
    – Measures the probability that a unit will not fail
      during a given time period (t).
    – It’s a function of MTBF:

               R(t) = e^(–(1/MTBF)(t))

    – Example: with MTBF = 4,000 hours and t = 10
      minutes (1/6 hour), R(t) ≈ 0.9999584.
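A sketch of the reliability formula; the 10-minute interval is our assumption, chosen because it reproduces the slide's example value with the 4,000-hour MTBF:

```python
import math

# Reliability R(t) = e^(-t / MTBF): the probability a unit does not fail
# during an interval of length t, given its mean time between failures.

def reliability(t_hours, mtbf_hours):
    return math.exp(-t_hours / mtbf_hours)

# Assumed interval: 10 minutes (1/6 hour) on a unit with a 4,000-hour MTBF.
print(round(reliability(10 / 60, 4000), 6))   # about 0.999958
```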


          Measurement Tools (cont'd.)
• Performance measures can’t be taken in isolation
  from the workload being handled by the system
  unless you’re simply fine-tuning a specific portion of
  the system.
• Overall system performance varies from time to
  time, so it’s important to define the actual working
  environment before making generalizations.




                    Feedback Loops
• To prevent the processor from spending more time
  doing overhead than executing jobs, the OS must
  continuously monitor the system and feed this
  information to the Job Scheduler.
• The Scheduler can either allow more jobs to enter
  the system or prevent new jobs from entering until
  some of the congestion has been relieved.
• This mechanism is a feedback loop, and it can be
  either negative or positive.




            Feedback Loops (cont'd.)
• Negative feedback loop
    – Monitors the system and, when it becomes too
      congested, signals the Job Scheduler to slow down
      the arrival rate of the processes (Figure 12.4).
    – A negative feedback loop monitoring I/O devices
      would inform the Device Manager that Printer 1 has
      too many jobs in its queue, causing the Device
      Manager to direct all newly arriving jobs to Printer 2,
      which isn’t as busy.
    – The negative feedback helps stabilize the system and
      keeps queue lengths close to expected mean values.
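One way to picture the Device Manager's response (a hypothetical sketch of our own, not an actual OS interface) is a router that sends each newly arriving job to the least-busy printer queue:

```python
# Hypothetical negative-feedback routing: when Printer 1's queue is long,
# newly arriving jobs are steered to the less-busy printer, which keeps
# queue lengths near their expected mean values.

printer_queues = {"Printer 1": 12, "Printer 2": 3}

def route_job(queues):
    """Send the new job to the printer with the shortest queue."""
    target = min(queues, key=queues.get)   # least-busy printer
    queues[target] += 1                    # the job joins that queue
    return target

print(route_job(printer_queues))   # Printer 2 -- the least-busy queue
```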


            Feedback Loops (cont'd.)
• Positive feedback loop
    – Monitors the system, and when the system becomes
      underutilized, causes the arrival rate to increase
      (Figure 12.5).
    – Used in paged virtual memory systems
    – Must be used cautiously because positive loops are
      more difficult to implement than negative loops.




            Feedback Loops (cont'd.)
• Positive feedback loop (cont’d)
    – How it works:
        • The positive feedback loop informs the Job Scheduler
          that the CPU is underutilized.
        • The Scheduler allows more jobs to enter the system to
          give more work to the CPU.
        • As more jobs enter, the amount of main memory
          allocated to each job decreases.
        • If too many jobs are allowed to enter the system, the
          result can be an increase in page faults
            – This may cause overall system performance to deteriorate.
    – The monitoring mechanisms for positive feedback
      loops must be designed with great care.
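A hypothetical sketch of such a cautious monitor: admit more jobs only while the CPU is underutilized and the page-fault rate stays low. Both thresholds are invented for illustration, not taken from the text:

```python
# Cautious positive feedback: an idle CPU alone is not enough to admit more
# jobs, because over-admission shrinks each job's memory share and drives up
# page faults (thrashing), which makes performance worse, not better.

def admit_more_jobs(cpu_utilization, page_fault_rate,
                    low_cpu=0.5, max_faults=0.1):
    """Admit another job only if the CPU is idle AND paging is healthy."""
    return cpu_utilization < low_cpu and page_fault_rate < max_faults

print(admit_more_jobs(0.35, 0.02))   # True: CPU idle, few page faults
print(admit_more_jobs(0.35, 0.30))   # False: thrashing risk wins
print(admit_more_jobs(0.90, 0.02))   # False: CPU already busy
```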
            Feedback Loops (cont'd.)
• Positive feedback loop (cont’d)
    – An algorithm for a positive feedback loop should
      monitor the effect of new arrivals in two places:
        • The Processor Manager’s control of the CPU;
        • The Device Manager’s read and write operations.
    – Both areas experience the most dynamic changes,
      which can lead to unstable conditions.
    – Such an algorithm should check to see whether the
      arrival produces the anticipated result and whether
      system performance is actually improved.



            Feedback Loops (cont'd.)
• Positive feedback loop (cont’d)
    – If the arrival causes performance to deteriorate, then
      the monitoring algorithm could cause the OS to adjust
      its allocation strategies until a stable mode of
      operation has been reached again.




                 Patch Management
• The systematic updating of the operating system
  and other system software.
• A patch is a piece of programming code that
  replaces or changes code that makes up the
  software.
• There are three primary reasons for the emphasis
  on software patches for sound system
  administration:
    – The need for vigilant security precautions against
      constantly changing system threats;
    – The need to assure system compliance with
      government regulations regarding privacy and
      financial accountability;
                 Patch Management
    – The need to keep systems running at peak efficiency.
• The task of keeping computing systems patched
  correctly has become a challenge because of the
  complexity of the entire system (the OS, network,
  various platforms, remote users) and the speed with
  which software vulnerabilities are exploited by
  worms, viruses, and other system assaults.
• Overall responsibility lies with the CIO, the CSO, the
  network administrator, or individual users.
• It is only through rigorous patching that the system’s
  resources can reach top performance, and its
  information can be best protected.
          Patch Management (cont'd.)

• Manual and automatic patch technologies
    – Among the top eight technologies used by organizations




              Patching Fundamentals
• While the installation of the patch is the most public
  event, there are several essential steps that take
  place before that happens:
    –   Identify the required patch;
    –   Verify the patch’s source and integrity;
    –   Test the patch in a safe environment;
    –   Deploy the patch throughout the system;
    –   Audit the system to gauge the success of the patch
        deployment.
• All changes to the OS or other critical system
  software must be undertaken in an environment that
  makes regular system backups and tests restoration
  from backups.
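The steps above can be sketched as an ordered pipeline. This is purely illustrative Python of our own, with placeholder steps rather than real patching operations:

```python
# Hypothetical patch-cycle driver: the backup always runs first, then the
# five essential steps execute in order. Each step is only a label here.

PATCH_STEPS = [
    "identify the required patch",
    "verify the patch's source and integrity",
    "test the patch in a safe environment",
    "deploy the patch throughout the system",
    "audit the deployment",
]

def run_patch_cycle(steps):
    """Back up the system, then perform each patching step in order."""
    log = ["back up the system"]   # backups precede any critical change
    for step in steps:
        log.append(step)           # placeholder for the real work
    return log

for entry in run_patch_cycle(PATCH_STEPS):
    print("->", entry)
```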
      Patching Fundamentals (cont'd.)
• Patch availability
    – Identify the criticality of the patch.
        • If the patch is critical, it should be applied ASAP.
        • If the patch is not critical, you might choose to delay
          installation until a regular patch cycle begins.
• Patch integrity
    – Authentic patches will have a digital signature or
      patch validation tool.
    – Before applying a patch, validate the digital signature
      used by the vendor to send the new software.



      Patching Fundamentals (cont'd.)
• Patch testing
    – Before installation on a live system, test the new
      patch on a sample system or an isolated machine
      (development system) to verify its worth.
    – Tests
        • Test to see if the system restarts after the patch is
          installed.
        • Check to see if the patched software performs its
          assigned tasks.
            – The tested system should resemble the complexity of the
              target system as closely as possible.
        • Test the contingency plans to uninstall the patch and
          recover the old software if it becomes necessary to do
          so.
      Patching Fundamentals (cont'd.)
• Patch deployment
    – Single-user computer
        • Install the software and reboot the computer.
    – Multiplatform system (many users)
        • Exceptionally complicated task
        • Maintain an accurate inventory of all hardware and
          software on those computers that need the patch.
        • On a large network, this information can be gleaned
          from network mapping software that surveys the
          network and takes a detailed inventory of the system.
        • Because it’s impossible to use the system during the
          patching process, schedule the patch deployment when
          system use is low (evenings or weekends).
      Patching Fundamentals (cont'd.)
• Audit finished system
    – Confirm that the resulting system meets expectations:
        • Verify that all computers are patched correctly and
          perform fundamental tasks as expected.
        • Verify that no users had unexpected or unauthorized
          versions of software that may not accept the patch.
        • Verify that no users are left out of the deployment.
    – This process should include documentation of the
      changes made to the system and the success or
      failure of each stage of the process.
    – Get feedback from the users to verify the
      deployment’s success.

                   Software Options
• Patches can be installed manually, one at a time, or
  via software that’s written to perform the task
  automatically.
• Deployment software falls into two groups:
    – Those programs that require an agent (agent-based
      software);
    – Those programs that do not (agentless software).
• If the deployment software uses an agent (software
  that assists in patch installation):
    – The agent software must be installed on every target
      computer system before patches can be deployed.

                   Software Options
    – On a very large or dynamic system, this can be a
      daunting task.
    – For administrators of large, complex networks,
      agentless software may offer some time-saving
      efficiencies.




              Timing the Patch Cycle
• While critical system patches must be applied
  immediately, less-critical patches can be scheduled
  at the convenience of the systems group.
    – These patch cycles can be based on calendar events
      or vendor events.
• The advantage of having routine patch cycles is that
  they allow for thorough review of the patch and
  testing cycles before deployment.




                  System Monitoring
• Several techniques for measuring the performance
  of a working system have been developed as
  computer systems have evolved; they can be
  implemented using either hardware or software
  components.
• Hardware monitors are more expensive but they
  have the advantage of having a minimum impact on
  the system because they’re outside of it and
  attached electronically.
    – Examples: Hard-wired counters, clocks, comparative
      elements.



           System Monitoring (cont’d)
• Software monitors are relatively inexpensive.
    – Because they become part of the system, they can
      distort the results of the analysis.
    – The software must use the resources it’s trying to
      monitor.
    – Software tools must be developed for each specific
      system, so it’s difficult to move them from system to
      system.
• In early systems, performance was measured simply
  by timing the processing of specific instructions.
    – The systems analyst might have calculated the
      number of times an ADD instruction could be done in
      one second.

           System Monitoring (cont'd.)
    – They might have measured the processing time of a
      typical set of instructions.
    – These measurements monitored only the CPU speed
      because in those days the CPU was the most
      important resource, so the remainder of the system
      was ignored.
• Today, system measurements must include the
  other hardware units as well as the OS, compilers,
  and other system software.
• These measurements can be made in a variety of
  ways.
• Some are made using real programs, usually
  production programs that are used extensively by
  the users of the system, which are run with different
  configurations of CPUs, OSs, and other components.
           System Monitoring (cont'd.)
• The results are called benchmarks and are useful
  when comparing systems that have gone through
  extensive changes.
• Benchmarks are often used by vendors to
  demonstrate to prospective clients the specific
  advantages of a new CPU, OS, compiler, or piece of
  hardware.
• Benchmark results are highly dependent upon:
        • The system’s workload;
        • The system’s design and implementation;
        • The specific requirements of the applications loaded
          on the system.
           System Monitoring (cont'd.)
• Performance data is usually obtained in a rigorously
  controlled environment so results will probably differ
  in real-life operation.
• Benchmarks offer valuable comparison data.
    – A place to begin a system evaluation.
• If it’s not possible to experiment with the system
  itself, a simulation model can be used to measure
  performance.
• A simulation model is a computerized abstraction of
  what is represented in reality.
    – The amount of detail built into the model is dictated by
      time and money.
                         Accounting
• The accounting function pays the bills and keeps
  the system financially operable.
• Most computer resources are paid for by the users.
• In a single-user environment, it’s easy to calculate
  the cost of the system.
• In a multiuser environment, computer costs are
  usually distributed among users based on how much
  each one uses the system’s resources.




                 Accounting (cont'd.)
• To do this distribution, the OS must be able to:
    –   Set up user accounts
    –   Assign passwords
    –   Identify which resources are available to each user
    –   Define quotas for available resources (disk space or
        maximum CPU time allowed per job).
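The per-user record implied by the list above can be sketched as a small data structure. This is a minimal illustration only; the field names and values are assumptions, not from any real OS accounting module, and a real system would store a salted password hash rather than the placeholder used here:

```python
from dataclasses import dataclass, field

@dataclass
class UserAccount:
    """Minimal per-user record an accounting module might keep."""
    username: str
    password_hash: str                  # placeholder; never store plaintext passwords
    disk_quota_kb: int                  # maximum disk space allowed, in kilobytes
    cpu_quota_seconds: int              # maximum CPU time allowed per job
    resources: set = field(default_factory=set)  # resources this user may access

    def may_use(self, resource: str) -> bool:
        return resource in self.resources

alice = UserAccount("alice", "not-a-real-hash", disk_quota_kb=50_000,
                    cpu_quota_seconds=60, resources={"printer", "compiler"})
```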




                 Accounting (cont'd.)
• Pricing policies vary from system to system. Typical
  measurements include some or all of the following:
    – Total amount of time spent between job submission
      and completion. In interactive environments this is the
      time from logon to logoff (connect time).
    – CPU time is the time spent by the processor
      executing the job.
    – Main memory usage is represented in units of time,
      bytes of storage, or bytes of storage multiplied by
      units of time.
        • Depends on the configuration of the OS.
        • A job that requires 200K for 4 seconds followed by
          120K for 2 seconds could be billed for 6 seconds of
          main memory usage, for 320K of memory usage, or for
          a combination expressed in kilobyte-seconds.
                 Accounting (cont'd.)
• Pricing policy measurements (cont’d)
        • A job that requires 200K for 4 seconds followed by
          120K for 2 seconds could be billed for 6 seconds of
          main memory usage, for 320K of memory usage, or for
          a combination expressed in kilobyte-seconds:

          [(200 × 4) + (120 × 2)] = 1,040 kilobyte-seconds of memory usage

    – Secondary storage used during program execution,
      like main memory use, can be given in units of time,
      or space or both.
    – Secondary storage used during the billing period is
      usually given in terms of the number of disk tracks
      allocated.
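The kilobyte-seconds figure from the memory-usage example above can be computed directly. This is a sketch of the arithmetic only; the function name and the phase representation are assumptions for illustration:

```python
def memory_usage_kb_seconds(phases):
    """Each phase is (kilobytes_resident, seconds); the billing unit is KB * seconds."""
    return sum(kb * seconds for kb, seconds in phases)

# The job from the example: 200K for 4 seconds, then 120K for 2 seconds.
usage = memory_usage_kb_seconds([(200, 4), (120, 2)])  # 1040 kilobyte-seconds
```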

                 Accounting (cont'd.)
• Pricing policy measurements (cont’d)
    – Use of system software includes utility packages,
      compilers, and/or databases.
    – Number of I/O operations is usually grouped by
      device class (line printer, terminal, disks).
    – Time spent waiting for I/O completion.
    – Number of input records read usually grouped by type
      of input device.
    – Number of output records printed usually grouped by
      type of output device.
    – Number of page faults is reported in paging systems.



                 Accounting (cont'd.)
• Pricing policies are sometimes used as a way to
  achieve specific operational goals.
• By varying the price of system services, users can
  be convinced to distribute their workload to the
  system manager’s advantage.
    – By offering reduced rates during off-hours, some
      users might be persuaded to run long jobs in batch
      mode inexpensively overnight instead of interactively
      during peak hours.
• Pricing incentives can also be used to encourage
  users to access more plentiful and cheap resources
  rather than those that are scarce and expensive.
    – By putting a high price on printer output, users might
      be encouraged to order a minimum of printouts.
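The off-hours incentive described above amounts to a rate table keyed by time of day. Here is a minimal sketch; the specific rates and the 22:00–06:00 off-peak window are hypothetical, not from the text:

```python
PEAK_RATE = 0.10      # dollars per CPU-second during peak hours (hypothetical)
OFF_PEAK_RATE = 0.03  # discounted overnight rate (hypothetical)

def job_cost(cpu_seconds, hour_of_day):
    """Charge less between 22:00 and 06:00 to shift long jobs off-peak."""
    off_peak = hour_of_day >= 22 or hour_of_day < 6
    rate = OFF_PEAK_RATE if off_peak else PEAK_RATE
    return round(cpu_seconds * rate, 2)

daytime = job_cost(300, hour_of_day=14)   # 300 CPU-seconds at the peak rate
overnight = job_cost(300, hour_of_day=2)  # the same job run overnight
```

With these assumed rates, the same job costs roughly a third as much overnight, which is exactly the incentive a system manager would use to redistribute the workload.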
                 Accounting (cont'd.)
• Should the system give each user billing information
  at the end of each job or at the end of each online
  session?
    – Depends on the environment
        • Some systems only give information on resource
          usage.
        • Other systems also calculate the price of the most
          costly items (CPU utilization, disk storage use,
          supplies) at the end of each job.
    – This gives the user an up-to-date report of expenses
      and of the amount left in the user’s account.
• The advantage of maintaining billing records online
  is that the status of each user can be checked
  before the user’s job is allowed to enter the READY
  state.
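The advantage described above can be sketched as an admission check against an online billing record. This is an illustration under assumed names and values, not the mechanism of any particular OS:

```python
class BillingRecord:
    """Online billing record consulted before a job enters the READY state."""
    def __init__(self, balance):
        self.balance = balance

    def can_admit(self, estimated_cost):
        # Admit the job only if the account can cover its estimated cost.
        return self.balance >= estimated_cost

    def charge(self, actual_cost):
        """Debit the account at job completion; return the remaining balance."""
        self.balance -= actual_cost
        return self.balance

rec = BillingRecord(balance=20.0)
admitted = rec.can_admit(estimated_cost=15.0)  # True: job may enter READY
remaining = rec.charge(15.0)                   # 5.0 left in the account
```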
                 Accounting (cont'd.)
• The disadvantage is overhead.
    – When billing records are kept online and an
      accounting program is kept active:
        • Memory space is used
        • CPU processing is increased.
• One compromise is to defer the accounting program
  until off-hours, when the system is lightly loaded.




                          Summary
• The OS is more than the sum of its parts: it is the
  orchestrated cooperation of every piece of hardware
  and every piece of software.
• When one part of the system is favored, it’s often at
  the expense of the others.
• The system’s managers must make sure they’re
  using the appropriate measurement tools and
  techniques to verify the effectiveness of the system
  before and after modification and then evaluate the
  degree of improvement.

