What is testing?
Why Software Testing?

What are we aiming for?
Error distribution
Critical choice: what, when and how to test,
Framework for Testing,
Test deliverables,
Current practices, trends, challenges,


Who has to do it? And why?
The Testing objectives:
The Testing Principles:
How is testing related to the Cost of Quality?
Testing Strategies:
Developing Testing Strategy:
Testing Techniques:
Testing Methods:
Hierarchy of Test Documents:
Test Plan:
Phases of Software System:
Why is it not recommended that a developer do the testing?
Test Procedure:
Test Policy:
Testing Tactics:
Distinguish Functional v/s Structural Testing.
Distinguish Static v/s Dynamic Testing.
Distinguish Manual v/s Automated Testing.
Test Case Design:
Test completion criteria:
Why the need for Software Testing?
       The IT industry has become more competitive,
       Stay in the business,
       Delight and satisfy the customer,
       Deliver a quality product,
       Quality is the absence of defects and meeting the customers' expectations,
       By a good Test Process,
       The Quality of Test Process determines the success of the Test Effort,
       Prevent defect migration by early life cycle testing techniques,
       Use proper Test Tools at appropriate time,
       Testing is a professional discipline requiring trained and skilled persons,
    

What is testing then?
       Testing is a process of demonstrating that errors are not present,
       The purpose of testing is to show a program performs its intended function
        correctly,
       Testing is a process of establishing confidence that a program does what it is
        supposed to do,
       Testing is a process of executing a program with the intent of finding the
        errors.


Error Distribution
       Most errors originate in the early stages of the development process,
       Undetected errors migrate downstream in the process,
       Errors detected late in the process are very costly,
       Hence the need for a continuous testing process throughout the life cycle,

[Pie chart – distribution of errors by origin: Requirements 56%, Design 27%, Code 7%, Others 10%]

Ref: Software Testing in the Real World, Improving the Process – Edward Kit.
Economics of Software Development Life Cycle (SDLC)
  Phase (errors introduced,              Traditional Testing           Continuous Testing
  defect cost per error)               Test Cost   Accum. Errors     Errors   Accum. Test Cost
  Requirements (20 errors, cost 1)          0            20             10            10
  Design       (20 errors, cost 1)          0            40             15            25
  Code         (20 errors, cost 1)          0            60             18            42
  Test         (80% detected, cost 10)    480            12              4           182
  Production   ("0" defects, cost 100)   1680             0              0           582

Ref: Effective Methods of Software Testing – William Perry.



How is testing related to the Cost of Quality?
   Cost of Quality - Total cost of Preventive, Appraisal and Failure,
   Increases mainly because of the rework,
   Can be reduced by Continuous Testing Approach,

[Figure: Cost of Quality – Fix (failure cost), Test (appraisal cost), Process (prevention cost), Build (production cost)]
Role of Testers in Software Life Cycle:
      Concept Phase,
          o Evaluate Concept Document,
          o Learn as much as possible about the product and project,
          o Analyze Hardware/software Requirements,
          o Strategic Planning,
      Requirement Phase,
          o Analyze the Requirements,
          o Verify the Requirements,
          o Prepare Test Plan,
          o Identify and develop requirement based test cases,
      Design Phase,
          o Analyze design specifications,
          o Verify design specifications,
          o Identify and develop Function based test cases,
          o Begin performing Usability Tests,
      Coding Phase,
          o Analyze the code,
          o Verify the code,
          o Code Coverage,
          o Unit test,
      Integration & Test Phase,
          o Integration Test,
          o Function Test,
          o System Test,
          o Performance Test,
          o Review user manuals,
      Operation/Maintenance Phase,
          o Monitor Acceptance Test,
          o Develop new validation tests for confirming problems,
   

The Testing objectives:
   As per Pressman,
    Testing is a process of executing a program with the intent of finding the errors.
    A good test case is one that has a high probability of detecting an as yet
       undiscovered error,
    A successful test case is one that detects an as yet undiscovered error,

   As per CSTE material,
    The main objective of the testing is to reduce the risks inherent in the computer
       system.
     Determine whether the system meets the specifications (developers' view),
       Determine whether the system meets the business and user needs (customers'
        view),
       Instilling confidence in the system,
      Providing insight into the software delivery process,
      Continuously improving the software test process,



Testing Techniques:
    Human Testing,
         o Inspection,
         o Walkthrough,
    White Box Testing,
          o  Statement coverage,
          o  Decision coverage,
          o  Condition coverage,
          o  Decision / condition coverage,
         o Multiple condition coverage,
    Black Box Testing,
         o Equivalence Partitioning,
         o Boundary Value Analysis,
         o Error guessing.
         o Comparison testing.
         o Functional testing,
    Integration (Incremental) Testing,
         o Top Down Integration,
         o Bottom Up Integration,
    System Testing,
         o Recovery Testing,
         o Security Testing,
         o Volume Testing,
         o Stress Testing,
         o Performance Testing,
         o Alpha and Beta Testing,





How to acquire skills and get recognized in the testing profession?
       The IT industry has become more competitive,
       It is time to distinguish the professional and skilled individuals,
       Certified Software Test Engineer (CSTE) certification is a formal
        recognition across the world,
       Started in 1996 at the Quality Assurance Institute (QAI),
       The examination process started in 1999,


QAI emphasizes the 3 C's
       Change: skill improvement,
       Complexity: IT is becoming more complex, and so must testing in order to
        achieve quality,
       Competition: in the present competitive atmosphere, CSTE is one form of
        recognition across the world,
   

What does the CSTE cover?
      General management skills,
      Quality Principles and Concepts,
      QA and QC Roles,
      Testing Skills, Approaches, Planning, Execution, Defect Tracking, Analysis,
       Reporting and then Improvements in Test Process,
      Evaluates the Principles and Practices of Software Testing,
       Emphasizes a Continuous Software Testing Process,
   

Best Practices
       Be one among the 2,500 certified professionals across the world,
   




What are we aiming for? And what is our ultimate goal?
      Delight and Satisfy the customer,
      Stay in the business,
      Aim for the quality,
       Quality is the absence of defects and meeting the customers' expectations,
      Defect prevention at the early phases of the development,
      QA and QC role is important,
   


What does testing mean to a Tester?
      Testers hunt errors,
      Testers are destructive,
      Testers pursue errors, not people,
      Testers add value,
   

How to Test?
      By examining the internal structure and design,
      By examining the Functional user interface,
      By examining the design objectives,
      By examining the user’s requirements,
      By executing the code,
      Many more…
   
Critical choice: what, when and how to test,
       Testing is a never-ending process,
       Exhaustive testing is impossible,
       So start the testing early,
   
The Testing Principles:
       All tests should be traceable to customer requirements,
       Tests should be planned long before the testing begins,
       The Pareto Principle applies to software testing (typically, about 80% of the
        errors can be traced to 20% of the modules),
       Testing should begin "in the small" and progress towards testing "in the large",
       Exhaustive testing is not possible,
       To be most effective, testing should be conducted by an independent third party,


How is testing related to the Cost of Quality?
    The cost of quality is the total cost of prevention, appraisal and failure
      associated with the product,
     The cost of a software product increases mainly because of rework,
     The cost of quality can be reduced by applying the concept of continuous testing
      to the software development process,

[Figure: Cost of Quality – Fix (failure cost), Test (appraisal cost), Process (prevention cost), Build (production cost)]

Testing Strategies:
     Since the objective of testing is to reduce the risks inherent in the computer
      system, the strategy must address those risks and present a process that can
      reduce them,
     Two components of the testing strategy are:
          o Test Factor: the risk or issue that needs to be addressed as part of the test
              strategy. The strategy selects those factors that need to be addressed in
              the testing of a specific application,
          o Test Phase: the phase of the system development life cycle in which the
              testing will occur,
     A strategy for software testing integrates the software test case design
      methods into a well-planned series of steps that result in the successful
      construction of the software,
     It provides a road map for –
          o Software developers,
          o The quality assurance organization,
          o The customer,
     The road map describes the steps to be undertaken while testing, and the effort,
      time and resources required for the testing,
     The test strategy should incorporate test planning, test case design, test execution,
      resultant data collection and data analysis,
     In designing a test strategy, the risk factors become the basis and the objective of
      the testing,
     A strategy must provide guidance for the tester and a set of milestones for the
      manager,
  
Developing Testing Strategy:
   Select and rank the test factors,
   Identify the system development phases,
   Identify the business risks associated with the system under development,
   Place the risks in the Test Factor / Test Phase matrix,



[Test Factor / Test Phase matrix: rows list the ranked test factors, columns list the test phases (Requirements, Design, Build, Dynamic Testing, Integration); each cell records the business risks to be addressed in that phase.]
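As a rough sketch of how such a matrix might be captured, assuming hypothetical factors, phases and risks (none of these names come from the source), a simple nested mapping is enough to place each ranked test factor against the phase in which its risk will be addressed:

    # Hypothetical Test Factor / Test Phase matrix: rows are ranked test factors,
    # columns are life-cycle phases, and each cell holds the risk to address there.
    test_matrix = {
        "Correctness": {
            "Requirements": "Ambiguous or untestable requirements",
            "Design": "Design does not satisfy the requirements",
            "Build": "Coding errors in critical calculations",
        },
        "Reliability": {
            "Dynamic testing": "Unhandled failure and recovery scenarios",
            "Integration": "Interface mismatches between components",
        },
    }

    # A planner can then list what must be covered in a given phase.
    for factor, phases in test_matrix.items():
        risk = phases.get("Requirements")
        if risk:
            print(f"{factor}: address '{risk}' during requirements testing")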




Testing Techniques:
    Human Testing,
    White Box Testing,
    Black Box Testing,
    Integration (Incremental) Testing,
    Validation Testing,
   
Human Testing:
      Inspections:
      Walkthroughs:
   


       Inspections and walkthroughs involve reading or visual inspection of a program
        (code) by a team of people. This type of testing is known as "Static Testing",
       The difference between an inspection and a walkthrough lies in the procedure
        that is followed and in the error-detection techniques used,
       A walkthrough is an informal meeting for evaluation or informational purposes,
        while an inspection is a somewhat more formal procedure,
       The objective is to find the errors, not the solutions,
       This is done by people other than the author,
       This type of "Static Test" should be conducted even for code modifications,

Inspections:
       The objective of an inspection is to find the errors in the program,
       An inspection usually consists of a team of four people,
            o Moderator,
                     Not the author,
                     Distributes the material,
                     Schedules and leads the inspection session,
                     Records all errors found,
                     Ensures that the errors are subsequently corrected,
                     Acts as a quality control engineer,
           o Programmer,
           o Designer,
           o Test Specialist,
       The moderator distributes the material well in advance,
       The programmer goes through the program statement by statement, checking for
        logical errors,
       The program is analyzed with respect to a checklist of historically common
        programming errors,
       The moderator is responsible for ensuring that the discussion proceeds along
        productive lines and that the participants focus their attention on identifying the
        errors, not correcting them,
       The programmer is given the list of errors found and fixes them,

Walkthrough:
       Like an inspection, this is also static testing,
       It consists of 3-5 members,
       One person plays the role of moderator,
       One person plays the role of secretary, who records all the errors,
       One is a tester,
       And the remaining members are programmers,
       The walkthrough is like "playing computer",
       The tester brings test cases with sets of inputs and expected outputs,
       The others mentally execute the module to be tested,
       In this way the logical paths can be checked,

White Box Testing: (Logic Driven)
       Also known as Glass Box Testing, structural testing, code-based testing or
        design-based testing,
       The tests are based on "how the system operates",
       This requires detailed knowledge of the system,
       Usually the test cases are designed by looking at the code (a small coverage
        sketch follows the list of white-box test types below),
   

Types of White box test:
      Statement coverage,
      Decision coverage,
      Condition coverage,
      Decision / condition coverage,
      Multiple condition coverage,
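To make these coverage levels concrete, here is a minimal Python sketch; the grant_discount function and its inputs are invented purely for illustration. One test gives statement coverage, a second adds decision coverage, and two more are needed before each individual condition has taken both truth values.

    def grant_discount(age, is_member):
        """Illustrative function: one decision made up of two conditions."""
        discount = 0
        if age >= 65 or is_member:
            discount = 10
        return discount

    # Statement coverage: every statement runs at least once.
    assert grant_discount(70, False) == 10   # decision evaluates to True

    # Decision coverage: the decision must also evaluate to False.
    assert grant_discount(30, False) == 0

    # Condition coverage: each condition takes both True and False values.
    assert grant_discount(30, True) == 10    # age condition False, membership True
    assert grant_discount(70, False) == 10   # age condition True, membership False

    # Multiple condition coverage would require all four True/False combinations.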

Black Box Testing: (Data Driven)
       Focuses on feature testing,
       Used to find bugs at the level of features, operational profiles and customer
        scenarios,
       The tests are based on "what the system does",
       The tester does not know the internal implementation,
       The tester knows only the input data set and the expected output data set for a
        program/module,
   

Types of Black box test:
      Equivalence Partitioning,
      Boundary Value Analysis,
      Error guessing.
      Comparison testing.
      Functional testing

Equivalence Partitioning:
     It is impossible to define test cases for the entire input test data (input
       domain).
       Hence the input domain is divided into subsets (equivalence classes) of all
        possible inputs.
       Testing is then done by selecting one value from each subset.
       Thus the technique involves two steps:
           o Identifying the equivalence classes,
           o Defining the test cases.

Boundary value analysis:
Boundary value analysis leads to a selection of test cases that exercise boundary values.
The test data includes values on, just above and just beneath the edges of the input and
output equivalence classes.
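A minimal sketch combining both techniques, assuming a hypothetical field that accepts integer percentages from 0 to 100 (the valid_percentage function is invented for illustration): equivalence partitioning yields one valid class (0-100) and two invalid classes (below 0 and above 100), and boundary value analysis then adds the values on and just beyond each edge.

    def valid_percentage(value):
        """Accept integer percentages in the range 0..100 (illustrative only)."""
        return 0 <= value <= 100

    # Equivalence partitioning: one representative value per class.
    assert valid_percentage(50) is True      # valid class: 0..100
    assert valid_percentage(-20) is False    # invalid class: below 0
    assert valid_percentage(140) is False    # invalid class: above 100

    # Boundary value analysis: on, just below and just above each edge.
    for value, expected in [(-1, False), (0, True), (1, True),
                            (99, True), (100, True), (101, False)]:
        assert valid_percentage(value) is expected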

Integration Testing:
       Sometimes known as Incremental Testing,
       This involves testing the interfaces between already-tested unit programs or
        system components, by adding them one by one and testing the resultant
        combination (a small stub/driver sketch follows the list below),
   
      Top Down Integration,
          o Begins from the top of the module hierarchy and works down,
          o Modules are added in the descending order,
          o Stubs are used as bottom modules,
      Bottom Up Integration,
          o Begins from the bottom of the module hierarchy and works up,
          o Modules are added in the ascending order,
          o Drivers are used in place of higher modules, which give input data for the
             module to be tested,
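A small sketch of the two approaches, with hypothetical module names invented for illustration: in a top-down step a stub stands in for a lower-level tax module that is not ready yet, while in a bottom-up step a driver feeds test data directly to a lower-level module.

    # Top-down: a stub replaces the not-yet-integrated lower module.
    def tax_stub(amount_cents):
        return 100                     # canned tax value, just enough to test the caller

    def invoice_total(amount_cents, tax_fn):
        # Higher-level module under integration; normally calls the real tax module.
        return amount_cents + tax_fn(amount_cents)

    assert invoice_total(1000, tax_stub) == 1100

    # Bottom-up: a driver supplies input data to the lower-level module directly.
    def real_tax(amount_cents):
        return amount_cents // 5       # 20% tax, integer arithmetic for the sketch

    def tax_driver():
        for amount, expected in [(1000, 200), (0, 0)]:
            assert real_tax(amount) == expected

    tax_driver()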

   

System Testing:
       Recovery Testing,
           o Tests how well a system recovers from crashes, hardware failures, or other
              catastrophic problems,
       Security Testing,
           o Tests how well the system protects against unauthorized internal or
              external access, willful damage, etc.; may require sophisticated testing
              techniques,
       Volume Testing,
       Stress Testing,
       Performance Testing,
       Alpha and Beta Testing,




Testing Methods:
Unit Testing:
      Unit testing is the verification of the smallest unit of the software, such as a
       module.
     White Box testing techniques are used for unit testing.
The unit test involves the following sequence of testing:
     Interfaces – ensures that information properly flows into and out of the
       program module under test,
     Local Data Structures – ensures that data stored temporarily maintains its
       integrity during all steps of the program's execution,
     Boundary Conditions – ensures that the module operates correctly at all boundary
       values,
     Independent Paths – ensures that all independent paths are executed at least once,
     Error Handling Paths – ensures that all the error-handling paths are tested,

       Unit testing is performed once the coding is completed.
       Usually the developers perform the unit testing.
       Since a module is not a standalone program, drivers and stubs are used for
        unit testing, as sketched below.
       Usually the driver is simply a main program that takes the test case data
        and passes it to the module.
       Stubs serve to replace the modules that are subordinate to the module being
        tested. The stubs are dummy programs, perhaps as simple as a print statement.
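A minimal sketch using Python's unittest module as the driver, with a hypothetical shipping_cost unit and a stub replacing the subordinate rate-lookup module it depends on; all names are invented for illustration.

    import unittest

    # Stub for the subordinate module that the unit would normally call.
    def lookup_rate_stub(region):
        return 5                          # canned flat rate per kg

    # Unit under test: depends on a rate-lookup routine from a lower-level module.
    def shipping_cost(weight_kg, lookup_rate=lookup_rate_stub):
        if weight_kg <= 0:
            raise ValueError("weight must be positive")   # error-handling path
        return weight_kg * lookup_rate("default")

    # Driver: the test case feeds test data to the unit and checks the results.
    class ShippingCostTest(unittest.TestCase):
        def test_normal_path(self):
            self.assertEqual(shipping_cost(3), 15)

        def test_error_handling_path(self):
            with self.assertRaises(ValueError):
                shipping_cost(0)          # boundary value: zero weight

    if __name__ == "__main__":
        unittest.main()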


Installation Testing:

Acceptance Testing:

Regression Testing:

Hierarchy of Test Documents:
      Test Plan: Defines overall direction for all testing activities,
      Test Design Specification: Gives the Test approaches and identifies the features to
       be covered by the design and its associated tests,
       Test Case Specification: Documents the input data set and the expected output
        data set (a small example follows this list),
      Test Procedure Specification: Identifies all the steps required to exercise the
       specified test cases,
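As a small illustration of what a test case specification records, here is a hypothetical pair of entries for a login feature; the field names and scenario are invented, not taken from the source.

    # Hypothetical test case specification entries for a login feature.
    test_cases = [
        {
            "id": "TC-001",
            "objective": "Valid user can log in",
            "preconditions": "User account 'alice' exists and is active",
            "input_data": {"username": "alice", "password": "correct-password"},
            "steps": ["Open login page", "Enter credentials", "Submit"],
            "expected_result": "User is redirected to the home page",
        },
        {
            "id": "TC-002",
            "objective": "Invalid password is rejected",
            "preconditions": "User account 'alice' exists and is active",
            "input_data": {"username": "alice", "password": "wrong-password"},
            "steps": ["Open login page", "Enter credentials", "Submit"],
            "expected_result": "Error message shown and no session is created",
        },
    ]

    for case in test_cases:
        print(case["id"], "-", case["objective"])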

Test Plan:
      What we are going to do,
      How we are going to do it,
      What testing methods we are going to use,
       What documents we are referring to,
      What resources required,
      How work is distributed,
      How long it will take,
      What is the test completion criteria,
      How we are measuring testing effectiveness,
   


Test Plan:
          a. Test Scope: Answers two questions - What is covered in the test? And
             what is not covered in the test?
           b. Test Objective: States the goal of the testing and what the tester is
              expected to accomplish or validate during the testing. This guides the
              development of test cases, procedures and test data, and enables the tester
              and manager to gauge the testing progress and success.
           c. Assumptions: States the prerequisites that must be met. These could be the
              entrance and exit criteria for each stage of the testing. If they are not
              documented, they may have an impact on risk.
          d. Reference: List the applicable References.
          e. Budgets: Funds allocated to the testing.
          f. Software Description: Provide a chart and briefly describe the inputs,
             outputs and functions of the software being tested as a frame of references
             for the test descriptions.
          g. Risk Analysis: This documents the risks associated with the testing and
             their possible impact on the test effort. The possible risks are system
             integration, regression testing, new tools used, new technology used, skill
             level of the tester, testing techniques used, etc.,
          h. Test Design: States what type of tests must be conducted, what sequence
             and how much time,
          i. Roles and Responsibilities: States who is responsible for each stage of
             testing,
          j. Test Schedule and Planned Resources: States the major test activities,
             sequence, dependence on other project activities, initial estimation on each
             activity. Resource planning include the people, tools, facilities etc.,
           k. Test Data Management: States the data set required for the testing and
              the infrastructure required to maintain the data. Includes the methods for
              preparing the test data,
           l. Test Environment: States the environment required for each stage of the
              testing,
           m. Tools: States the tools needed for the testing in its different phases,
           n. Expected Defect Rates: States the expected number of defects for this
              type of system.
          o. Specifications:
          p. Evaluation: Describes the evaluation criteria of the test results,

Phases of Software System:
          a. Requirements
                  i. Determine the verification approach
                 ii. Determine the adequacy of the requirements
                  iii. Generate the functional test data.
                  iv. Determine the consistency of the design with requirements.
          b.   Design
                     i. Determine the adequacy of the design.
                   ii. Generate the structural and functional test data.
                  iii. Determine the consistency with the design.
          c.   Coding
                    i. Determine the adequacy of implementation.
                   ii. Generate the structural and functional test data for programs.
          d.   Testing
                    i. Test application systems.
          e.   Installation, Operation and maintenance
                    i. Place tested system into production.
                   ii. Modify and retest.

Why is it not recommended that a developer do the testing?
          a. Misunderstandings will not be detected, because the checker will assume
             that what the other individual heard from him or her is correct.
          b. Improper use of the development process may not be detected because the
             individual may not understand the process.
          c. The individual may be “Blinded” into accepting erroneous system
             specifications and coding.
          d. Without a formal division between development and test, an individual
             may be tempted to improve the system structure and documentation, rather
             than allocate that time and effort to test.



Test Procedure:
      Recommended steps in the Test Process:
         a. Test Criteria: The questions to be answered by the test team.
          b. Assessment: The test team's evaluation of the test criteria,
         c. Recommended Tests: Recommended test to be conducted,
         d. Test Techniques: The recommended test Techniques to be used in
            evaluating the test criteria,
         e. Test Tools: The tools to be used to accomplish the test techniques.

Test Policy:
      The testing policy will contain:
           a. Definition of testing: A clear, brief and unambiguous definition of
               testing.
           b. Testing Systems: The method through which testing will be achieved and
               enforced.
           c. Evaluation: How information service management will measure and
               evaluate testing.
           d. Standards: The standards against which the testing is measured.
Testing Tactics:
      The eight steps to develop the Testing Tactics
          a. Acquire and study the test strategy.
          b. Determine the type of the development project.
          c. Determine the type of the software system.
          d. Determine the project scope.
          e. Identify the tactical risks.
          f. Determine when the testing should occur.
          g. Build the system test plan.
          h. Build the unit test plans.

Distinguish Functional v/s Structural Testing.
       Functional testing ensures that the requirements are properly satisfied by the
       application system. The functions are those tasks that the system is designed to
       accomplish. Functional testing is not concerned with how processing occurs, but
       rather, with the results of the processing.
        Structural testing is designed to verify that the developed system and programs
        work. The intent of structural testing is to assess the implementation by
        finding test data that will force sufficient coverage of the structures present in
        the implemented application. Structural testing evaluates both that all aspects of
        the structure have been tested and that the structure is sound.
Distinguish Static v/s Dynamic Testing.
        In static testing, the verification is performed without executing the system's
        code, such as syntax testing. Requirements phase and design phase testing are
        examples of static testing.
        In dynamic testing, verification and validation are performed by executing the
        system's code. This involves running the program with test cases and
        comparing the results with predefined expected data.
Distinguish Manual v/s Automated Testing.
        Tests performed manually, such as walkthroughs and code inspections, are
        known as manual testing.
        Tests performed by the computer are known as automated testing.

Test Case Design:
      While designing the Test Cases, keep the “Test Objective” in mind.
       Design test cases that have the highest likelihood of finding errors with
        minimum time and effort,
       Test cases can be designed by two approaches:
           o By knowing the functions / requirements (Black Box approach),
           o By knowing the internal implementation (White Box approach),
       Apply Black Box and White Box techniques while designing the test cases,
   
Test Data:
       The input data used for testing is known as the test data,
       The test data set contains both valid and invalid data for the test case,
       The test data are generated during the design and analysis phase for all test cases
        identified during the requirement analysis phase,
       The test cases along with the test data assure the test team that all the
        requirements are in a testable form; if not, the requirements are rewritten in a
        testable form,
       Exhaustive testing with all possible test data is impracticable, so techniques
        such as equivalence partitioning and boundary value analysis are used for
        selecting the test data,
       The test data should check for the form, format, value and unit types,

Testing Tools:
       Tools are needed to support the testing,
       The kind of tools needed depends upon the kind of testing to be performed and
        the environment in which the test will be performed,
      The tool selection depends upon the following criteria:
           o Test Phase,
           o Test Objective,
           o Test Targets or Deliverables,
           o Test Techniques,
           o Software Category,
           o Test History (Error / defect History),
   

Code Coverage:
       The purpose of code coverage tools is to find the statements and paths that are
        not covered at least once during execution of the program,
       This gives the simplest metric: the number of statements executed during
        the test compared to the total number of statements in the program,
       This helps in finding dead code and logical errors in the program,
       The tools also help in finding paths in the program that have been designed such
        that no input data will ever cause those paths to execute (a rough sketch of the
        idea follows),
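As a rough sketch of the idea only (a real project would use a dedicated coverage tool), Python's standard sys.settrace hook can record which lines of a function execute during a test run; the classify function below is invented for illustration.

    import sys

    def classify(n):
        if n < 0:
            return "negative"          # never executed by the test below
        return "non-negative"

    executed = set()

    def tracer(frame, event, arg):
        # Record the line numbers executed inside classify().
        if event == "line" and frame.f_code.co_name == "classify":
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    classify(5)                        # exercises only the non-negative branch
    sys.settrace(None)

    print("lines executed in classify():", sorted(executed))
    # Comparing this set against all executable lines of classify() shows that the
    # "negative" branch was never run, i.e. statement coverage is incomplete.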
   

Managing Risks – Elaine M. Hall
       A risk is a condition that can result in a loss,
       Risk is related to the probability that a loss may occur,
       A risk situation always exists, but the loss may not actually occur,
       Risks cannot be eliminated, but the occurrence and/or the impact of a loss can be
        reduced,
      Risk management is a general procedure for resolving the risks,
   Software Risk Management is the practice of assessing and controlling risk that
    affects the software project, process, or product,
   Risk management is said to resolve a risk if, when it is applied to any instance, the
    possible consequences are all acceptable,
   Acceptance Risk: Acceptance Risk means that we can live with the worst-case
    outcome,
   There are two major activities in any risk management process: Risk
    Assessment and Risk Control,
   Risk Assessment: Defines a Risk. Risk assessment is a discovery process of
    identifying sources of risk and evaluating their potential effects,
   Risk Control: Resolves the Risk. Risk control is a process of developing the risk
    resolution plans, monitoring risk status, implementing the risk resolution plans
    and correcting the deviations from the plan,
   You do not need to know what the risks are to begin risk management. It is
    normal to start the risk management process with the fuzzy issues, concerns,
    doubts and unknowns. The process of risk management transforms this
    uncertainty into acceptable risk,
   Software Risks:
      Software risk is a measure of the likelihood and loss of an unsatisfactory
      outcome affecting the software project, process, or product,
      I.     Software Project Risk:
          i.    Project Risk is primarily a management Responsibility,
          ii. This defines the operational, organizational and contractual software
                development parameters,
           iii. Project Risk includes: resource constraints, external interfaces,
                 supplier relationships, contract restrictions, unresponsive vendors and
                 lack of organizational support,
           iv. Funding is the most significant project risk reported in risk
                 management.
     II.     Software Process Risk:
           i.    Process Risk includes both management and technical work
                 procedures,
           ii. Process Risks associated with management are: planning, staffing,
                 tracking, quality assurance, and configuration management,
           iii. Process Risks associated with the technical work (engineering
                 activities) are: requirement analysis, design, code, and test,
           iv. Planning Risk on the management side and Development Risk on the
                 technical side are the most often reported risks,
    III.     Software Product Risk:
          i.    Product Risk is primarily a Technical Responsibility,
          ii. Product Risk contains: requirement stability, design performance, code
                complexity, test specifications,
          iii. Because the requirements are often perceived as flexible, it is difficult
                to manage the Product Risk,
          iv. Requirements are the most significant Risk in the Software Product
                Risk.
        Usually software risks are discussed in terms of potential cost, schedule, and
         technical consequences,

Software Risk Management:
     Software risks can be discovered by working backward,
     First define the goals and objectives,
     Then describe the uncertainty, loss and time clearly,
     This helps in sorting out priorities and provides the knowledge to make intelligent
       decisions,


    Risks are dynamic, meaning that they change over time,
    Edwards Deming is known as the father of Statistical Process Control,
    He proposed two models for managing product development:
     a. Continuous Process Improvement, based on the evolutionary key process areas
          of the SEI Capability Maturity Model for Software,
     b. Re-engineering, based on revolutionary innovation,
     Both approaches are based on Deming's quality work; Plan-Do-Check-Act is a
     closed-loop approach for process optimization and an evolutionary model for
     product improvement,




Test completion criteria:
        Wrong criteria:
            o Stop when the scheduled time for testing expires,
            o Stop when all the test cases execute without detecting errors,
        Correct criteria:
            o When the test manager is able to report, with some confidence, that the
                application will perform as expected in production,
            o This can be decided based on whether the quality goals, defined at the
                start of the project, have been met,
            o The test manager looks into the test metrics, such as Mean Time
                Between Failures or the percentage of coverage achieved by the tests,
            o Whether there are any open defects, and their severity levels,
            o The risks associated with moving the product to production are also
                considered,


Some important points:
       Testing is a destructive process,
       A programmer should not test his or her own program,
       Testing is an unnecessary and unproductive activity if its goal is to validate that
        the specifications are implemented as written,
   Test cases should be written for valid and expected, as well as invalid and
    unexpected input conditions,
   A good test case is one that has a high probability of detecting an as yet
    undiscovered error,
   A successful test case is one that detects an as yet undiscovered error,
   A test case is a document that describes an input, action, or event and an expected
    response to determine if a feature of an application is working correctly.
    Good testing does not just happen; it must be planned, and a testing policy should
     be the cornerstone of that plan.
    When should testing occur:
              Testing should occur throughout the project life cycle. The type of
     testing is determined after identifying the type of the software system, the project
     scope and the technical risks.
   A tool is a vehicle for performing a test process. A tool is a resource to the tester,
    but by itself is insufficient to conduct testing.
   First start developing the test cases using the black box methods and then develop
    supplementary test cases as necessary by using the white box methods,
   Functional testing is a process of attempting to find discrepancies between the
    program and its external specifications,
   System testing is not just testing all the functions of the complete system or the
    program; it tests for its initial objective, such as information, structural and quality
    requirements.
    Cyclomatic complexity is a software metric that provides a quantitative measure
     of the logical complexity of a program (a worked example follows this list).
    Verification refers to the set of activities that ensures that software correctly
     implements a specific function.
    Validation refers to a different set of activities that ensures that the software that
     has been built is traceable to the customer's requirements.
    Verification – "Are we building the product right?"
    Validation – "Are we building the right product?"
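A worked sketch of cyclomatic complexity, using an invented ship_order function: with three binary decisions, V(G) = decisions + 1 = 4, which is also the number of independent paths that basis-path testing would aim to exercise.

    def ship_order(weight, express, international):
        """Illustrative function with three binary decisions."""
        cost = 5
        if weight > 10:                # decision 1
            cost += 8
        if express:                    # decision 2
            cost *= 2
        if international:             # decision 3
            cost += 15
        return cost

    # Cyclomatic complexity V(G) = number of binary decisions + 1 = 3 + 1 = 4
    # (equivalently V(G) = E - N + 2 for the control-flow graph).
    # Basis-path testing therefore needs at least four test cases, for example:
    assert ship_order(5, False, False) == 5     # all decisions false
    assert ship_order(20, False, False) == 13   # only decision 1 true
    assert ship_order(5, True, False) == 10     # only decision 2 true
    assert ship_order(5, False, True) == 20     # only decision 3 true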


   Black box testing - not based on any knowledge of internal design or code. Tests
    are based on requirements and functionality.
   White box testing - based on knowledge of the internal logic of an application's
    code. Tests are based on coverage of code statements, branches, paths, conditions.
   Unit testing - the most 'micro' scale of testing; to test particular functions or code
    modules. Typically done by the programmer and not by testers, as it requires
    detailed knowledge of the internal program design and code. Not always easily
    done unless the application has a well-designed architecture with tight code; may
    require developing test driver modules or test harnesses.
   Incremental integration testing - continuous testing of an application as new
    functionality is added; requires that various aspects of an application's
    functionality be independent enough to work separately before all parts of the
    program are completed, or that test drivers be developed as needed; done by
    programmers or by testers.
   Integration testing - testing of combined parts of an application to determine if
    they function together correctly. The 'parts' can be code modules, individual
    applications, client and server applications on a network, etc. This type of testing
    is especially relevant to client/server and distributed systems.
   Functional testing - black-box type testing geared to functional requirements of
    an application; this type of testing should be done by testers. This doesn't mean
    that the programmers shouldn't check that their code works before releasing it
    (which of course applies to any stage of testing.)
   System testing - black-box type testing that is based on overall requirements
    specifications; covers all combined parts of a system.
   End-to-end testing - similar to system testing; the 'macro' end of the test scale;
    involves testing of a complete application environment in a situation that mimics
    real-world use, such as interacting with a database, using network
    communications, or interacting with other hardware, applications, or systems if
    appropriate.
   Sanity testing - typically an initial testing effort to determine if a new software
    version is performing well enough to accept it for a major testing effort. For
    example, if the new software is crashing systems every 5 minutes, bogging down
    systems to a crawl, or destroying databases, the software may not be in a 'sane'
    enough condition to warrant further testing in its current state.
   Regression testing - re-testing after fixes or modifications of the software or its
    environment. It can be difficult to determine how much re-testing is needed,
    especially near the end of the development cycle. Automated testing tools can be
    especially useful for this type of testing.
   Acceptance testing - final testing based on specifications of the end-user or
    customer, or based on use by end-users/customers over some limited period of
    time.
   Load testing - testing an application under heavy loads, such as testing of a web
    site under a range of loads to determine at what point the system's response time
    degrades or fails.
   Stress testing - term often used interchangeably with 'load' and 'performance'
    testing. Also used to describe such tests as system functional testing while under
    unusually heavy loads, heavy repetition of certain actions or inputs, input of large
    numerical values, large complex queries to a database system, etc.
   Performance testing - term often used interchangeably with 'stress' and 'load'
    testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in
    requirements documentation or QA or Test Plans.
   Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and
    will depend on the targeted end-user or customer. User interviews, surveys, video
    recording of user sessions, and other techniques can be used. Programmers and
    testers are usually not appropriate as usability testers.
   Install/uninstall testing - testing of full, partial, or upgrade install/uninstall
    processes.
   Recovery testing - testing how well a system recovers from crashes, hardware
    failures, or other catastrophic problems.
   Security testing - testing how well the system protects against unauthorized
    internal or external access, willful damage, etc; may require sophisticated testing
    techniques.
    Compatibility testing - testing how well software performs in a particular
     hardware/software/operating system/network/etc. environment.
   Exploratory testing - often taken to mean a creative, informal software test that
    is not based on formal test plans or test cases; testers may be learning the software
    as they test it.
   Ad-hoc testing - similar to exploratory testing, but often taken to mean that the
    testers have significant understanding of the software before testing it.
   User acceptance testing - determining if software is satisfactory to an end-user or
    customer.
   Comparison testing - comparing software weaknesses and strengths to
    competing products.
   Alpha testing - testing of an application when development is nearing
    completion; minor design changes may still be made as a result of such testing.
    Typically done by end-users or others, not by programmers or testers.
   Beta testing - testing when development and testing are essentially completed
    and final bugs and problems need to be found before final release. Typically done
    by end-users or others, not by programmers or testers.
   Mutation testing - a method for determining if a set of test data or test cases is
    useful, by deliberately introducing various code changes ('bugs') and retesting
    with the original test data/cases to determine if the 'bugs' are detected. Proper
    implementation requires large computational resources.
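A tiny sketch of the mutation idea, with an invented is_adult function: a deliberate one-character change produces a mutant, and the test data either kills it (detects the difference) or exposes a weak test set.

    def is_adult(age):
        return age >= 18               # original program

    def is_adult_mutant(age):
        return age > 18                # mutant: '>=' deliberately changed to '>'

    test_data = [17, 21]                                        # weak test set
    killed = any(is_adult(a) != is_adult_mutant(a) for a in test_data)
    print("mutant killed:", killed)                             # False: mutant survives

    test_data.append(18)                                        # boundary value added
    killed = any(is_adult(a) != is_adult_mutant(a) for a in test_data)
    print("mutant killed:", killed)                             # True: mutant detected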

				