
                                 UCLA CS130 – Software Engineering
                                          Winter, 2002
                                         Lecture Notes
                                        January 23, 2002


Announcements


Today’s Lecture

1.   Risk Management, SQA, CM
     1.1. Risk Management
     1.2. Software Quality Assurance
     1.3. Software Configuration Management
2.   Risk Management
     2.1. Definition – Techniques to help software team understand and manage uncertainty.
     2.2. Risk involves two characteristics:
           • Uncertainty – risk may or may not occur
           • Loss – unwanted losses if risk occurs
     2.3. Strategies
           2.3.1. Reactive – something’s already gone wrong, and the software team has to
                   devise strategies on the spot. Disrupts work, and increases likelihood of
                   failure due to unanticipated risks.
           2.3.2. Proactive – plan for risks before they occur. Strategies begin before
                   technical work starts. Steps are:
                   • Identify potential risks, probability of occurrence, and impact
                   • Rank risks by importance (usually cost)
                   • Develop plan to manage risk. Emphasizes risk avoidance
                   • Develop contingency plans
                   Risk management ideas discussed today make up a proactive strategy
     2.4. Categories of Risk
           2.4.1. Project risks – threaten the project plan. Affect project schedule and cost
                   (both may increase if risks occur). Include funding, staffing, organization,
                   resource, customer, and requirements problems
           2.4.2. Technical risks – threaten quality and timeliness of the SW. The problem is
                   harder to solve than we thought. Include design, implementation,
                   verification, interface, and technology-maturity problems.
           2.4.3. Business risks – threaten viability of the SW to be built. Include:
                   • Building a good product that nobody needs
                   • Product no longer fits business strategy
                   • Sales people don’t understand product, so can’t sell it
                   • Loss of management support (change in focus, change in people)
                   • Losing funding or personnel commitment
                   A different risk categorization – complementing the classification above:
                   • Known – can be identified after careful examination of project plan


               • Predictable – can be extrapolated from previous experience
               • Unknown – just that
               Risks can be:
               • Generic
               • Product-specific
2.5.   Risk Identification
       2.5.1. Systematic attempt to specify threats to project plan. Knowing risks enables
               risk avoidance, control.
       2.5.2. Risk Item Checklist – one method for identifying risks. Checklist items
               categories include:
               • Product size – e.g., is the product too big for us to build w/resources,
                   staff we have?
                • Business impact – management/marketplace constraints (e.g., market
                    windows, launch windows, strategies).
               • Customer characteristics – e.g., is the customer sophisticated enough to
                   specify product, use it? Can customer communicate effectively
                   w/developers?
               • Process definition – is the process appropriate? Does everyone know
                   what to do, when? Do people use the process?
               • Development environment – tool availability, quality
               • Technology to be built – is the product too complex, is the technology
                   mature enough to use?
               • Staff size/experience – do we understand the problem domain and the
                   technology to be used?
               Although many detailed checklists appear in the literature, a relatively short
               checklist can be used effectively to assess risk.
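
               To make this concrete, a short checklist can be kept as plain data and walked
               mechanically. The Python sketch below is illustrative only: the category names
               come from the list above, but the question wording and the identify_risks
               helper are invented, not from the text.

                   # A minimal sketch of a risk-item checklist as data. Category names
                   # come from the notes; questions and structure are illustrative.
                   RISK_CHECKLIST = {
                       "Product size": "Is the product too big for our staff and resources?",
                       "Business impact": "Are there market/launch-window or strategy constraints?",
                       "Customer characteristics": "Can the customer specify the product and communicate with developers?",
                       "Process definition": "Is the process appropriate, known, and actually used?",
                       "Development environment": "Are needed tools available and of adequate quality?",
                       "Technology to be built": "Is the technology mature enough? Is the product too complex?",
                       "Staff size/experience": "Do we understand the problem domain and the technology?",
                   }

                   def identify_risks(concerns):
                       """Return the checklist questions for categories flagged as concerns."""
                       return [f"{cat}: {q}" for cat, q in RISK_CHECKLIST.items()
                               if concerns.get(cat)]

                   print(identify_risks({"Product size": True, "Staff size/experience": True}))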
2.6.   Risk Projection
       Devise Risk Table as shown below:

        Risk                                          Category  Probability  Impact  RMMM
        Size estimate too low                         PS        50%          2         *
        Larger number of simultaneous users           PS        30%          3         *
          than planned
        System engineers not available when needed    ST        40%          2         *
        Missed marketing opportunities                BU        35%          2         *
        High requirements change traffic              PS        90%          2         *
        Customer will revoke funding                  CU        10%          1         *
        …

        * The RMMM column of each row contains a pointer to the Risk Mitigation,
          Monitoring, and Management Plan.
        Impact values: 1 = catastrophic, 2 = critical, 3 = marginal, 4 = negligible
        Categories come from the generic checklist just discussed (PS = product size,
        ST = staff, BU = business impact, CU = customer characteristics).
       Question – why do we list negligible risks?


          A: To keep a record of what we’ve done. Suppose someone in the future asks
       “Have you considered …?”. Even if the impact is negligible, we can show them that
       this risk has been considered and its impact analyzed.

        Once risks have been identified and impacts determined, we establish a cut-off point
        (based on expected impact). Risks above the cut-off point must be managed; those
        below it need not be.

        Risk probabilities can also start off as level values (e.g., impossible, unlikely,
        probable, and frequent). Numerical values can then be assigned to these levels.
        Previous experience or consensus can be used to obtain numerical values.

2.7.   Assessing Risk Impact
       2.7.1. Risk impact depends on:
                • Nature of risk – what type of problem do we encounter (e.g., poorly-defined
                    requirements, customer pulling funding, immature technology)
                • Scope – combines severity of risk with how much of the project will be
                    affected
               • Timing – when might risk occur, and for how long (e.g., unavailability
                   of required engineers, development tools, component being produced by
                   another organization).
       2.7.2. To determine overall consequences of risk:
               • Determine most likely or average value of occurrence for each risk
               • Using figure 6.1 in text, determine impact for each component. (Draw
                   figure 6.1 skeleton below)
                Figure 6.1 skeleton – rows are impact levels, columns are the four risk
                components:

                Components (columns):
                • Performance – is the product fit for use; does it perform as required?
                • Support – can the product be maintained/enhanced?
                • Cost – can the SW be built within budget?
                • Schedule – can the development schedule be maintained?

                Impact levels (rows):
                • Catastrophic – mission will fail if risk occurs
                • Critical – mission success is questionable if risk occurs
                • Marginal – secondary mission will be degraded if risk occurs
                • Negligible – inconvenience or nonoperational impact if risk occurs

               • Complete risk table as above
                • Risk Exposure, RE, is the expected value of impact: RE = P * C, where P is
                    the probability the risk occurs and C is the cost to the project if it
                    does (see the sketch below).
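
        Putting risk projection together: once probabilities and impact costs are
        estimated, ranking and the cut-off are mechanical. A minimal Python sketch
        follows; the risks and dollar costs are invented, and only the RE = P * C rule
        and the cut-off idea come from the notes.

            # Rank risks by exposure RE = P * C and apply a management cut-off.
            # Risks and dollar costs are hypothetical.
            risks = [
                # (description, probability of occurrence, cost C if risk occurs)
                ("Size estimate too low",            0.50, 40_000),
                ("High requirements change traffic", 0.90, 25_000),
                ("Customer will revoke funding",     0.10, 200_000),
            ]
            CUTOFF = 15_000  # exposure above this must be actively managed

            for desc, p, c in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
                re_ = p * c
                print(f"{desc}: RE = ${re_:,.0f} -> "
                      f"{'manage' if re_ > CUTOFF else 'below cut-off'}")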
2.8. Risk Assessment
      2.8.1. Performed after identification, impact projection
      2.8.2. During assessment:
               2.8.2.1. Examine accuracy of risk projection estimates
               2.8.2.2. Rank identified risks
               2.8.2.3. Identify risk aversion/control methods
       2.8.3. Risk Referent Level – needed to make assessment useful. Establishes
                tolerance for pain
               2.8.3.1. Risk components (performance, support, cost, schedule) represent
                         risk referent levels. There’s a level of performance degradation,
                         cost overrun, support difficulty, or schedule slippage that will
                         cause the project to be terminated. If a combination of risks causes
                         one of the referent levels to be exceeded, work will stop.
 2.9. Risk Refinement – as more information is gathered through the project, risks are
       refined into more detailed risks. More detailed risks are easier to understand,
       monitor, mitigate, and manage.
       2.9.1. Condition-Then-Consequence statements can be used: “Given <condition>,
                then there is concern that possibly <consequence>.” Conditions and
                consequences can be refined as more information becomes available.
2.10. Risk Mitigation, Monitoring, and Management
      2.10.1. Risk analysis strategies presented have one goal – assisting project team in
               developing strategy for dealing with risk
       2.10.2. Risk avoidance always the best strategy. Achieved by developing a risk
                mitigation plan.
      2.10.3. Risk monitoring occurs as project proceeds. Monitor:
               • Risk factors
               • Effectiveness of risk mitigation steps
      2.10.4. Risk Management and Contingency Planning
               2.10.4.1. Assume mitigation has failed and risk has become real
                2.10.4.2. Contingency plans describe what will be done for each risk if it
                          occurs.
       2.10.5. RMMM costs money and effort – the cost of RMMM must be balanced against
                Risk Exposure. If cost of RMMM < RE, then do it; else don’t (sketch below).
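
       A tiny worked example of this rule, with invented numbers:

           # The RMMM decision rule from the notes: mitigate only when mitigation
           # costs less than the exposure it removes. Numbers are invented.
           def worth_mitigating(rmmm_cost, risk_exposure):
               return rmmm_cost < risk_exposure

           print(worth_mitigating(8_000, 0.9 * 25_000))   # True  -> do it
           print(worth_mitigating(30_000, 0.9 * 25_000))  # False -> accept the risk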


3.   Software Quality Assurance
     3.1. SQA encompasses:
            3.1.1. Defining quality (may be part of institutional policies)
            3.1.2. Creating activities ensuring all workproducts exhibit required quality (again,
                    may be part of institutional policies)
            3.1.3. Performing defined activities
            3.1.4. Monitoring quality using metrics
           SQA is an umbrella activity
     3.2. Definition of quality – conformance to:
          • explicitly stated functional and performance requirements – SW requirements are
              foundation from which quality is measured
          • explicitly documented development standards – development follows specified
              standards. If standards are not followed, the expected result is poor quality
              software
          • implicit characteristics that are expected of all professionally-developed software
              – e.g., ease of use, maintainability.
     3.3. Technical staff, SQA group both perform quality activities
            3.3.1. SW engineers – use the specified development process, use well-founded
                    design/implementation techniques, conduct formal technical reviews,
                    conduct appropriate testing.
            3.3.2. SQA group – independent group that assists SW engineers in achieving
                    required quality. Recommended activities address quality planning,
                    oversight, record keeping, analysis, reporting. Activities of the SQA
                    group include:
                    • Preparing project SQA plan – address evaluations, audits and reviews,
                        applicable standards, error reporting and tracking procedures, SQA
                        group documents that will be produced, type and amount of feedback to
                        be provided to SW engineers, managers.
                    • Participate in developing the SW process description – SQA group
                        reviews the project plan to determine whether the selected process is
                        likely to result in quality SW. Also reviews the project plan to
                        ensure compliance with organizational and externally-imposed
                        standards and policies.
                    • Reviewing SW engineering activities – verify compliance with defined
                        SW process
                    • Audit specific SW workproducts – verify compliance with required
                        documentation content, standards
                    • Ensure deviations in SW workproducts are handled according to
                        documented procedure.
                    • Record noncompliances and report to senior management
                    Some SQA groups (e.g., JPL’s) are also involved in research/technology
                    infusion.
     3.4. Software Reviews – SQA activity involving SW engineers and the SQA group. Goal is
           to identify and remove faults in software workproducts at various points during
           SW development, to minimize the number of faults remaining in the fielded
           product. Formal reviews also serve as a training ground, and promote backup of
           information and project continuity.
            3.4.1. SW error propagation:


               3.4.1.1. Without reviews, SW passes through all stages with no way of
                        identifying/removing faults prior to testing phases. Typical fault
                        density might be 12 faults per 1000 LOC.
                3.4.1.2. With reviews, faults are identified and removed at each phase of
                         the development. Fewer faults pass through to the next phase.
                         Typical fault density might be 3 faults per 1000 LOC.
 [Figure – error propagation with no reviews: faults generated in preliminary design,
 detailed design, and code pass forward essentially undetected (0–20% per phase) and are
 amplified along the way (x1.5, x3); roughly 94 faults enter the test phases, and after
 integration, validation, and system test (about 50% detection each) roughly 12 residual
 faults remain.]

 [Figure – error propagation with reviews: 50–70% of faults are detected and removed at
 each development phase, so only about 24 faults enter the test phases and roughly 3
 residual faults remain.]
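 The arithmetic behind the two figures can be approximated with a simple per-phase model:
 each phase adds newly generated faults to those passed in, and a review (when held)
 removes some fraction before hand-off. The sketch below omits the amplification factors
 (x1.5, x3) shown in the figures, and all rates and counts are illustrative, not the
 text's.

     # Simplified error-propagation model: faults accumulate phase by phase and
     # reviews (when held) remove a fraction before hand-off. Amplification is
     # omitted; all numbers are illustrative.
     PHASES = [
         # (phase name, newly generated faults, review detection rate)
         ("Preliminary design", 10, 0.70),
         ("Detailed design",    25, 0.50),
         ("Code and unit test", 25, 0.60),
     ]

     def faults_entering_test(with_reviews):
         latent = 0
         for name, generated, detect in PHASES:
             latent += generated
             if with_reviews:
                 latent -= round(latent * detect)
         return latent

     print("Faults entering test, no reviews:  ", faults_entering_test(False))  # 60
     print("Faults entering test, with reviews:", faults_entering_test(True))   # 16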

      3.4.2. Conduct of a Formal Review
             3.4.2.1. Formal review meeting should obey following constraints


        •   3-5 people in review
        •   Advance preparation – should occur (e.g., each person reviews
            materials off-line), but no more than about 2 hours
         • Review to be no more than two hours
          The FTR (formal technical review) focuses on a specific, small piece of SW.
3.4.2.2. Steps to FTR:
          • Producer (author) completes the workproduct to be reviewed and
             informs the manager.
         • Manager works with review leader (may be part of SQA group)
            to schedule review. Review leader may review workproduct
            for review readiness.
         • Review leader schedules review, sends copies of materials to
            reviewers
         • Reviewers review materials before meeting
          • Review meeting is held. Producer or designated presenter
             walks through the material. Recorder writes down important
             issues that come up.
         • At end of review, reviewers either:
            • Accept workproduct with no modifications
             • Provisionally accept the workproduct (errors have to be
                  corrected, but no further inspection required)
            • Reject the workproduct. Errors must be corrected and the
                 product re-reviewed.
3.4.2.3. Review Reporting
         • Recorder produces and distributes
            • Review issues list:
                 • Identifies problems that were found during review
                  • Serves as checklist guiding producer in correcting
                      errors and reviewers for follow-up review
            • Review summary report:
                 • Identifies what was reviewed
                 • Identifies reviewers
                 • Findings and conclusions
                 • Review issues list usually attached
3.4.2.4. Review Guidelines
         • Review the product, not the producer
         • Set an agenda and maintain it.
         • Limit debate and rebuttal
         • Identify problems, but don’t attempt to solve them
         • Take written notes
         • Limit number of participants, insist on advance preparation –
            increases defect-finding effectiveness
         • Develop checklists to guide review of product
         • Allocate time and resources for reviews in the project plan.
         • Train reviewers in review procedures.


                         • Review early reviews to increase review effectiveness
3.5.   Statistical SQA – Quantitative approach to assuring quality
       3.5.1. Implies following steps:
                • Collect and categorize software defect information
                • Trace each defect to underlying cause
                 • Identify most frequent defect causes – can use the error classification
                     table below. The Pareto principle frequently applies: 80% of defects
                     can be traced to 20% of the causes (see the sketch after this list).

             Error Type        Total          Serious        Moderate       Minor
                               Number   %     Number   %     Number   %     Number   %

               •  Identify corrective methods for the most frequently-occurring types of
                  problems that have caused the defects.
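
        A minimal sketch of this bookkeeping (counts invented; the cause labels
        anticipate the examples listed in 3.5.2 below): tally defects by underlying
        cause and report cumulative percentages, making the Pareto concentration
        visible.

            from collections import Counter

            # Tally defects by underlying cause and show cumulative percentages.
            defects = (["incomplete/missing specification"] * 34
                       + ["misinterpreted customer communication"] * 12
                       + ["data representation error"] * 5
                       + ["ambiguous user interface"] * 3)

            total, running = len(defects), 0
            for cause, n in Counter(defects).most_common():
                running += n
                print(f"{cause}: {n} defects ({100 * running / total:.0f}% cumulative)")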
       3.5.2. Types of underlying causes - examples
              • Incomplete/missing specification
              • Misinterpreting customer communications
              • Intentional deviation from specs
              • Data representation error
              • Inconsistent component interfaces
              • Incomplete/erroneous testing
              • Ambiguous user interface
              • Inaccurate/incomplete documentation
              • …
        3.5.3. Error index – can be used to indicate trends in software quality. Lower
               index value implies higher quality.
               Obtain counts of errors by severity class, then use this information to
               compute the error index:
               EI = Σ(i × PIi) / PS – the weighted sum of the phase indices PIi divided by
               product size PS; multiplying by the phase number i weights errors found in
               later phases more heavily.
               PIi = ws(Si/Ei) + wm(Mi/Ei) + wt(Ti/Ei), where
               Ei = total # errors found in phase i, Si = # serious errors, Mi = # moderate
               errors, Ti = # minor (trivial) errors, and ws, wm, wt = weights for serious,
               moderate, and minor errors. Text recommends that the ratio of weights
               ws:wm:wt be 10:3:1.
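
               A small sketch of the computation just defined, with invented per-phase
               error counts:

                   # Error-index computation as defined above; phase data are invented.
                   W_S, W_M, W_T = 10, 3, 1                   # severity weights, 10:3:1
                   phase_errors = [(4, 10, 20), (2, 8, 15), (1, 5, 9)]  # (Si, Mi, Ti)
                   PRODUCT_SIZE = 12.5                        # PS, e.g., in KLOC

                   ei = 0.0
                   for i, (s, m, t) in enumerate(phase_errors, start=1):
                       e = s + m + t                          # Ei: errors found in phase i
                       pi = (W_S * s + W_M * m + W_T * t) / e # PIi
                       ei += i * pi                           # later phases weigh more
                   print(f"EI = {ei / PRODUCT_SIZE:.2f}")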
        3.5.4. Statistical SQA helps identify which things really matter so we can
               effectively apply resources.
3.6.   Mistake-Proofing (poka-yoke)
       3.6.1. Mechanisms that lead to
              • Prevention of a potential quality problem
              • Rapid detection of quality problems if they occur
       3.6.2. Effective devices are
              • Simple and cheap
              • Part of the process


                    • Located near the process task where mistakes occur (example below)
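     One illustrative software poka-yoke (a sketch under assumed file contents; nothing
     here is prescribed by the text) is a small check run as part of the build that
     rejects a locale file with missing translations, catching the mistake where it is
     made rather than in system test:

         # A software poka-yoke sketch: a cheap check built into the process that
         # catches missing translations at the point where the mistake is made.
         # Keys and the file name are hypothetical.
         def check_locale(reference_keys, candidate_keys, filename):
             missing = set(reference_keys) - set(candidate_keys)
             if missing:
                 raise SystemExit(f"{filename}: missing translations for {sorted(missing)}")

         check_locale({"OK", "CANCEL", "RETRY"}, {"OK", "CANCEL", "RETRY"}, "fr.po")  # passes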
4.   Software Configuration Management – dealing with change
     4.1. SW changes for many reasons
           • Business needs change
           • Funding levels change
           • Customer needs change
     4.2. CM – set of activities designed to control change by:
           • Identifying workproducts likely to change
           • Establishing relationships between them
           • Defining mechanisms for managing different versions of workproducts
           • Controlling changes imposed on workproducts
           • Auditing and reporting on changes made
     4.3. Baselines
            4.3.1. Definition – “specification or product that has been formally reviewed
                    and agreed upon, that thereafter serves as the basis for further
                    development, and that can be changed only through formal change control
                    procedures”
           4.3.2. Informal changes to workproduct possible before establishing baseline
           4.3.3. Formal changes only after baseline established
     4.4. Software Configuration Items
           4.4.1. SCI is information created as part of SW engineering process – we want to
                   control changes to SCIs. Examples of SCIs are:
                   • Document (e.g., project plan, requirements document)
                   • Architectural design
                   • Set of test cases
                   • Named program component (e.g. source file)
                    • SCIs might also include SW tools (compilers, debuggers,
                        specification/design tools). Why? New versions of tools might
                        produce different results than the original versions. Have seen this
                        with compilers developed for specific spacecraft processors.
            4.4.2. SCIs are organized to form configuration objects – COs are catalogued in
                    the project database with a single name
                   4.4.2.1. Configuration object has:
                             • Name
                             • Attributes
                              • Relationships to other objects – allows SW engineers to
                                  identify objects and SCIs that may need to be changed if
                                  this object changes (a minimal sketch follows)
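
            A minimal sketch of a configuration object as just described; the field and
            object names are illustrative, not from the text.

                from dataclasses import dataclass, field

                # A configuration object: a name, attributes, and relationships to
                # other objects, so engineers can find what may change together.
                @dataclass
                class ConfigObject:
                    name: str
                    attributes: dict = field(default_factory=dict)
                    related: list = field(default_factory=list)  # names of related objects

                design = ConfigObject("data-model", {"version": "1.2"},
                                      ["schema.sql", "design-spec"])
                print(f"If {design.name} changes, also inspect: {design.related}")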
     4.5. SCM tasks
           • Identification
           • Version control
           • Change control
           • Configuration auditing
           • Reporting
           4.5.1. Identification



       4.5.1.1. Configuration objects must be separately named and organized
                with an object-oriented approach
       4.5.1.2. Two types of objects:
                 • Basic – “unit of code or text” – may be a section of a requirements
                     spec, a data model, an architectural design, a source listing for a
                     component, or a suite of test cases for a component
                 • Aggregate – collection of basic objects and other aggregate objects
       4.5.1.3. Objects identified uniquely by:
                • Name – character string
                • Description – list of data items identifying:
                    • SCI type (document, program, data, …)
                    • Project identifier
                    • Change/version information
                 • List of resources – items required by the object. Includes specific
                     functions, data types, variable names
                • Realization – pointer to “basic unit of text” for basic object,
                    null for aggregate object
       4.5.1.4. Identification must also include relationships among objects
                • Object_1 <part of> object_2
                • Object_2 <part of> object_3
                 • Object_3 <interrelated> object_4
       4.5.1.5. Identification must take into account object evolution
                • Draw evolution graph on board.
                • Evolution graph describes change history of object
       4.5.1.6. SCM tools such as CVS, RCS, SCCS aid in identification.
                • Tools store copies of all versions in some fashion:
                    • SCCS does “forward deltas” – full copy of first version
                        stored, subsequent versions created by applying “deltas” to
                        first version.
                     • RCS, CVS do “backward deltas” – full copy of the latest version
                         stored; earlier versions created by applying deltas to the latest
                         version (toy sketch below).
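                 The backward-delta scheme is easy to mimic in a few lines. The toy
                 sketch below invents its own data and delta format (a map of line
                 index to previous line text); real tools store edit scripts.

                     # Toy backward deltas: store the latest version in full plus,
                     # per step back, a delta reproducing the previous version.
                     latest = ["line one", "line two (revised)", "line three"]
                     backward_deltas = [
                         {1: "line two"},           # latest    -> version 2
                         {2: "line three (old)"},   # version 2 -> version 1
                     ]

                     def reconstruct(steps_back):
                         text = list(latest)
                         for delta in backward_deltas[:steps_back]:
                             for i, old in delta.items():
                                 text[i] = old
                         return text

                     print(reconstruct(2))  # ['line one', 'line two', 'line three (old)']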
4.5.2. Version control – combines procedures, tools to manage different versions
       of configuration objects.
       4.5.2.1. Evolution graph is one representation
       4.5.2.2. Object pool representation – see figure below




                [Figure – object pool representation: a grid of configuration entities
                arranged along a versions axis and a variants axis.]

                One or more attributes are assigned to each variant (e.g., color vs.
                B&W)
4.5.3. Change control – need to make changes to baseline in a controlled fashion.
       One change process is as follows – differs in details from text:
                Recognize need for change
                Submit change request to the change control board (CCB)
                CCB evaluates the change request
                CCB decides:
                  Denied – inform the user
                  Approved – queue the request for action, then:
                    Assign individuals to configuration objects
                    Check out objects
                    Make changes
                    Review changes (may be local)
                    Check in changed items
                    Establish testing baseline
                    Test changes
                    Promote changes into next release
                    Rebuild SW
                    Review changes to all configuration items
                    Include changes into new version
                    Distribute new version
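
        The flow above is essentially a small decision plus an ordered checklist; a
        sketch (step names paraphrase the list, the helper is illustrative):

            # The change-control flow as data: the CCB decision gates entry to the
            # approved path.
            APPROVED_PATH = [
                "assign individuals to configuration objects", "check out objects",
                "make changes", "review changes", "check in changed items",
                "establish testing baseline", "test changes",
                "promote changes into next release", "rebuild SW",
                "review changes to all configuration items",
                "include changes into new version", "distribute new version",
            ]

            def process_change(ccb_approved: bool):
                return APPROVED_PATH if ccb_approved else ["inform user of denial"]

            print(process_change(ccb_approved=True)[0])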

 4.5.4. Configuration auditing – helps ensure changes have been properly implemented.
        Formal technical reviews also ensure this by focusing on the technical
        correctness of the change. Change control mechanisms track a change only until
        the change has been approved by the CCB; the audit verifies what happens after
        approval.
       4.5.4.1. Complements formal technical review – assesses config object for
                characteristics not considered during review
       4.5.4.2. Audit topics:
                 • Has the change specified in the ECO (engineering change order) been
                     made? Have any additional mods been incorporated?
                • Formal technical review held?
                • SW process followed and standards applied?
                • Change identified in SCI? Change info includes author, date of
                    change, nature of change.


                             •   SCM procedures for noting, recording, and reporting change
                                 followed?
                             • All related SCIs updated?
                             Audit conducted by SQA group when SCM is formal activity.
          4.5.5. Reporting – communication mechanism to keep all parties informed of
                  change status
                  4.5.5.1. SCM reporting deals with:
                             • What changed?
                             • Who made changes?
                             • When did changes occur?
                             • What else is affected by change?
                  4.5.5.2. Configuration Status Reporting (CSR) entries made when:
                              • SCI is given new/updated identification (e.g., a new
                                  version number)
                             • Change approved by CCB (ECO is issued)
                             • Configuration audit conducted – results reported
                              CSR output may be placed online (e.g., on a project web
                              page). Reports are generated on a regular basis to keep
                              managers/engineers informed of change status.
5.   Summary
     5.1. Covered following topics
          5.1.1. Introduction to Project Management – covered 4P’s of project management,
                  team structures, roles, scoping and bounding project, selecting process,
                  warning signs, WWWWWHH.
           5.1.2. Project Planning – scoping and bounding, effort and cost estimation
                   techniques, resource estimation, make or buy
           5.1.3. Risk Management – risk identification, risk categories, impact analysis,
                   assessment, mitigation, monitoring, and management
           5.1.4. Software Quality Assurance – definition of quality, technical staff
                   quality activities, activities of the SQA group, error propagation,
                   holding a formal technical review, review guidelines, statistical SQA
          5.1.5. Software Configuration Management – identification, version control,
                  change control, configuration audit, configuration status reporting
     5.2. Next lecture – start of structured analysis – Ch 11 and 12 in text



