
National Conference on Role of Cloud Computing Environment in Green Communication 2012




                  AUTOMATED SOFTWARE TESTING USING OPTIMIZED
                  FEATURE SUBSET SELECTION WITH GENETIC ALGORITHM

                                             Firmi Silvin L #1, Muthukrishnan S, M.E. *2
                                    #1 II PG Student, Rajalakshmi Engineering College, Thandalam,
                                                       Chennai, Tamilnadu, India.
                                                   E-mail: fir.sil33@gmail.com
                                    *2 Assistant Professor, Rajalakshmi Engineering College, Thandalam,
                                                       Chennai, Tamilnadu, India.
      Abstract

      — The goal of unit testing is to isolate each part of a program and show that the individual parts are correct. Unit
      testing is used to find problems early in the development cycle of the product. In this paper we use an evolutionary
      (genetic) algorithm to evolve a set of test inputs. We describe Nighthawk, a system which uses a genetic algorithm (GA)
      to find parameters for randomized unit testing that optimize test coverage. To assess the size and content of the
      representations within the GA, we use a feature subset selection (FSS) tool. Using this tool we can substantially reduce
      the size of the representation while still achieving most of the coverage found using the full representation. Our
      reduced GA achieves almost the same results as the full system, but in only 10 percent of the time.

      Keywords—Software testing, randomized unit testing, genetic algorithm, testing tools, knowledge-based pooling.

                         I.INTRODUCTION
          Software testing is the process of running a piece of software on given input data and checking the
      correctness of the outputs. The goal of software testing is to find failures of the software under test (SUT). The
      reliability of the SUT depends on the thoroughness of the testing, and the thoroughness depends on how
      effectively the testing forces failures out of the SUT.
          A unit test provides a strict, written contract that the piece of code must satisfy. In continuous unit testing
      environments, through the inherent practice of sustained maintenance, unit tests continue to accurately reflect
      the intended use of the code in the face of any change. Depending upon established development
      practices and unit test coverage, up-to-the-second accuracy can be maintained.
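As an illustration of a unit test acting as a written contract, here is a minimal sketch in Python; the `Stack` class and its tests are invented for this example and are not part of Nighthawk:

```python
import unittest

# Hypothetical unit under test: a simple stack, used only for illustration.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

# The unit test is the written contract: each assertion states a property
# that the code must continue to satisfy under any future change.
class StackContract(unittest.TestCase):
    def test_push_then_pop_returns_value(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

    def test_pop_on_empty_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()
```

If a later change breaks any clause of the contract, the corresponding test fails, which is how the "up-to-the-second accuracy" above is maintained in practice.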
          Randomized testing uses randomization to select modules from the whole Java program.

      Randomized testing of a software unit is effective at forcing failures in well-tested units. The thoroughness of
      randomized unit testing depends on when and how randomization is applied, e.g., the number of method calls to
      make and the relative frequencies with which different methods are called. It is often difficult to work out the
      optimal values of these parameters, and the optimal value reuse policy, by hand. This paper describes the
      Nighthawk unit test data generator, which tests the modules and corrects minor mistakes automatically. Nighthawk
      has two levels. The lower level is a randomized unit testing engine which tests a set of methods according to
      parameter values specified as genes in a chromosome. The upper level is a genetic algorithm (GA) which uses
      fitness evaluation, selection, mutation, and recombination of chromosomes to find good values for the genes.
      Goodness is evaluated on the basis of test coverage and the number of method calls performed. After testing the
      program, the test results are collected; using the knowledge pooling method, suggestions are displayed and incorrect
      operations can be corrected. In our experiments, the reduced randomized testing runs about 10 times faster than
      testing with the full representation.
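The two-level structure described above can be sketched as follows. This is a Python illustration, not Nighthawk's actual Java encoding; the gene names, bounds, and penalty weight are our own assumptions:

```python
import random

random.seed(0)

# Illustrative gene layout for the upper-level GA: each chromosome fixes
# parameters that steer the lower-level randomized testing engine.
def random_chromosome():
    return {
        "num_calls": random.randint(10, 200),        # method calls per test case
        "freq": [random.random(), random.random()],  # relative method frequencies
        "reuse_prob": random.random(),               # chance of reusing a pooled value
    }

# Goodness as described above: reward coverage, penalize long test cases.
# `measure_coverage` is a stand-in callable for real coverage instrumentation.
def fitness(chromosome, measure_coverage):
    return measure_coverage(chromosome) - 0.001 * chromosome["num_calls"]
```

The GA then evolves such chromosomes, and the best one is handed to the randomized tester as its configuration.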



 Department of CSE, Sun College of Engineering and Technology


                          II.EXISTING SYSTEM
          Unit testing is the process of testing the individual modules of a Java program; most of the time, unit testing
      is done during development itself. Studies [1], [2], [3], [4] have found that randomized unit testing finds
      errors in well-tested units. Unit testing is defined as testing a single method, a group of methods, or a class. The
      method to be tested with this tool is called the target method. Unit testing calls a sequence of
      target methods which contain the Java code. In [3], a GUI-based random unit testing engine called RUTE-J was
      developed. Using RUTE-J, users write their own customized test wrapper classes, hand-coding such
      parameters as the relative frequencies of method calls. To simplify that process, we describe experiments here with
      automatic feature subset selection (FSS) which lead us to propose that automatic FSS should be a
      routine part of the design of any large GA system.
          In genetic algorithms, candidate solutions are represented as chromosomes, with the components of a solution
      represented as genes in the chromosomes. The possible chromosomes form a search space and are
      associated with a fitness function representing the value of the solutions encoded in each chromosome. Search
      proceeds by evaluating the fitness of each member of a population of chromosomes. GAs can defeat purely random
      search in finding solutions to complex problems.
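A minimal GA of the kind just described can be sketched as follows. Chromosomes are bit lists and the fitness function simply counts ones, a toy stand-in for test coverage; the population size, selection scheme, and operators are illustrative choices:

```python
import random

random.seed(1)

# Minimal GA sketch: truncation selection keeps the fitter half of the
# population (so the best chromosome is never lost), and children are built
# by one-point crossover plus a single point mutation.
def evolve(pop_size=20, genes=16, generations=40):
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    fitness = lambda c: sum(c)                   # toy stand-in for coverage
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(genes)          # point mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the survivors are carried over unchanged, fitness is monotonically non-decreasing across generations, which is why even this crude scheme reliably beats uniform random sampling of the same search space.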
          The work in [4] addresses random generation of unit tests for object-oriented programs. Such a test typically
      consists of a sequence of method calls that create and mutate objects, plus an assertion about the result of a final
      method call. A test can be built up iteratively by randomly selecting a method or constructor to invoke, using
      previously computed values as inputs. It is only sensible to build upon a legal sequence of method calls, each of
      whose intermediate objects is sensible and none of whose methods throws an exception indicating a problem. This
      makes the technique highly scalable. Applied to industrial implementations of the JDK, the technique found
      previously unknown errors.
          The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for
      generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing
      tests can be used to ensure that code contracts are preserved across program changes; failing tests (those that violate
      one or more contracts) point to potential errors that should be corrected. In [5], several units were tested using
      randomized sequences of method calls.
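The feedback-directed loop described above can be sketched as follows. The operations, the contract bound, and the classification rules are invented stand-ins for real method calls and code contracts:

```python
import random

random.seed(2)

# Sketch of feedback-directed random generation: grow call sequences,
# execute each candidate, and classify the result before reusing it.
# Operations here are plain functions on an integer state, standing in
# for method calls on objects.
OPS = {"inc": lambda x: x + 1,
       "double": lambda x: x * 2,
       "half": lambda x: x // 2}

def classify(value, seen):
    if value < 0 or value > 10**6:
        return "illegal"          # violates a stand-in contract
    if value in seen:
        return "redundant"        # produces nothing new
    return "useful"

def generate(budget=200):
    seen = {0}
    sequences = [("seed", 0)]     # (call trace, resulting value)
    for _ in range(budget):
        trace, value = random.choice(sequences)    # extend a prior sequence
        op = random.choice(list(OPS))
        new_value = OPS[op](value)
        if classify(new_value, seen) == "useful":  # the feedback step
            seen.add(new_value)
            sequences.append((trace + "->" + op, new_value))
    return sequences

seqs = generate()
```

Only legal, novel sequences are kept for extension, which is what makes the technique avoid the redundant and meaningless inputs mentioned in the conclusion.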
          Paper [6] describes Nighthawk, a system for generating unit test data. The system can be viewed as
      consisting of two levels. The lower level is a randomized unit testing engine which tests a set of methods according
      to parameter values specified in a chromosome. The upper level is a genetic algorithm (GA) which uses fitness
      evaluation, selection, mutation, and recombination to find good values for the randomized unit testing parameters,
      including parameters that encode a value reuse policy. Goodness is evaluated based on test coverage and the number
      of method calls performed. Users can use Nighthawk to find good parameters and then perform randomized unit
      testing based on those parameters. The randomized testing can quickly generate many new test cases that achieve
      high coverage, and can continue to do so for as long as users wish to run it. Randomized testing has been shown to
      be an effective method for testing software units. However, the thoroughness of randomized unit testing varies
      widely according to the settings of certain parameters, such as the relative frequencies with which methods are
      called. The genetic algorithm in [6] finds parameters for randomized unit testing that optimize test coverage.
          Randomized unit testing is a promising technology that has been shown to be effective, but its thoroughness
      depends on the settings of the test algorithm's parameters. In Nighthawk [6], an upper-level genetic algorithm
      automatically derives good parameter values for a lower-level randomized unit test algorithm, and Nighthawk has
      been shown to achieve high coverage of complex Java units.
          Paper [8] discusses experiments with feature subset selection (FSS) and genetic algorithms (GAs), showing
      that the RELIEF feature subset selector [9] consistently rejects 60 percent of the operations.
          A key challenge is scaling up and speeding up model generation to meet the size and time constraints of modern
      software development projects. There will always be a trade-off between completeness and runtime speed. Here we
      explore that trade-off in the context of using genetic algorithms to learn coverage models, i.e., biases in the control
      structures for randomized test generators. After applying feature subset selection to logs of the GA output, we find
      we can generate the coverage model and run the resulting test suite ten times faster while losing only 6 percent of
      the test case coverage.
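A crude filter-style selector, much simpler than the RELIEF algorithm cited above but in the same spirit, can illustrate the idea: rank genes by how strongly their values separate high-coverage runs from low-coverage runs, and keep only the top fraction. All names and the separation measure here are our own simplifications:

```python
# Filter-style feature subset selection sketch. `runs` is a list of gene
# vectors from past GA evaluations; `scores` is the coverage each achieved.
def select_features(runs, scores, keep=0.4):
    n_genes = len(runs[0])
    median = sorted(scores)[len(scores) // 2]
    high = [r for r, s in zip(runs, scores) if s >= median]
    low = [r for r, s in zip(runs, scores) if s < median]

    def mean(rows, j):
        return sum(r[j] for r in rows) / len(rows)

    # Genes whose mean values differ most between the two classes are
    # the most informative; the rest can be dropped from the chromosome.
    weight = [abs(mean(high, j) - mean(low, j)) for j in range(n_genes)]
    ranked = sorted(range(n_genes), key=lambda j: weight[j], reverse=True)
    return sorted(ranked[:max(1, int(keep * n_genes))])
```

Dropping the uninformative genes shrinks the chromosome, which is what makes the reduced GA run in a fraction of the time while keeping most of the coverage.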

               III.PROPOSED SYSTEM
         This paper describes unit testing using the Nighthawk algorithm for randomized unit testing combined with a
      genetic algorithm. Users can use Nighthawk to find good parameters, and then perform randomized unit testing
      based on those parameters.





          The first step is to load a Java program into the tool; this program is the tool's main input. The program is
      divided into modules using the occurrence of open and close braces: when the number of open braces and close
      braces seen so far are equal, the enclosed text is considered a single module. In this way the whole Java program is
      divided into individual modules.
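The brace-balancing split can be sketched as follows. This is a simplification that ignores braces inside string literals and comments:

```python
# A module ends whenever the running count of '{' minus '}' returns to zero.
# Handles top-level Java-like blocks only; braces inside strings and
# comments are not accounted for in this sketch.
def split_modules(source):
    modules, current, depth = [], [], 0
    for line in source.splitlines():
        current.append(line)
        depth += line.count("{") - line.count("}")
        if depth == 0 and any("{" in l for l in current):
            modules.append("\n".join(current))
            current = []
    return modules
```

For example, a file containing two top-level classes is split into two modules, one per balanced brace group.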
          The second step is to identify the operations present in the modules. The operations are identified by the
      operators present in the modules. Here the lower-level Nighthawk algorithm is used: modules are selected at
      random, and from the modules the operations are selected. After selecting the operations, a chromosome is
      generated using the genetic algorithm.

         Input: a set M of target methods; a chromosome c.
         Output: a test case.
         Steps:
               1) For each element of each value pool of each primitive type in M, choose an initial value that is within the bounds
                     for that value pool.
               2) For each element of each value pool of each other type t in M:
                     a) If t has no initializers, then set the element to null.
                     b) Otherwise, choose an initializer method i of t, and call tryRunMethod(i, c). If the call returns a non-null
                           value, place the result in the destination element.
               3) Initialize test case k to the empty test case.
               4) Repeat n times, where n is the number of method calls to perform:
                     a) Choose a target method m ∈ M.
                     b) Run tryRunMethod(m, c). Add the returned call description to k.
                     c) If tryRunMethod returns a method call failure indication, return k with a failure indication.
               5) Return k with a success indication.

      Fig.1. Algorithm constructRunTestCase
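A Python paraphrase of Fig. 1, with the Java reflection machinery replaced by plain callables; the chromosome fields, pools, and target methods below are illustrative assumptions, not Nighthawk's actual data structures:

```python
import random

random.seed(3)

# Stand-in for tryRunMethod: run the call, report success and the value
# produced, or a failure indication if it raised.
def try_run_method(method, args):
    try:
        return True, method(*args)
    except Exception:
        return False, None

def construct_run_test_case(methods, chromosome):
    # Step 1: seed a primitive value pool within the chromosome's bounds.
    lo, hi = chromosome["bounds"]
    pool = [random.randint(lo, hi) for _ in range(chromosome["pool_size"])]
    test_case = []                                 # Step 3: the empty test case
    for _ in range(chromosome["num_calls"]):       # Step 4
        m = random.choice(methods)                 # 4a: choose a target method
        args = [random.choice(pool)]
        ok, result = try_run_method(m, args)       # 4b
        test_case.append((m.__name__, args, ok))
        if not ok:                                 # 4c: stop on failure
            return test_case, "failure"
        if isinstance(result, int):
            pool.append(result)                    # feed results back into pool
    return test_case, "success"                    # Step 5

# Illustrative target methods: one always succeeds, one fails on zero.
def safe_inc(x):
    return x + 1

def risky_div(x):
    return 100 // x    # raises ZeroDivisionError when x == 0

case, status = construct_run_test_case(
    [safe_inc, risky_div],
    {"bounds": (0, 9), "pool_size": 5, "num_calls": 20})
```

The failure indication returned at step 4c is exactly what the GA's fitness evaluation and the knowledge pool consume downstream.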




      Fig. 2 Overall architecture: Source Code → Modules → GA chromosomes → Training sets → Test wrapper →
      Customize the output → Testing Result.

          For each operation selected from a module, the tester supplies the number of arguments, the values for those
      arguments, and the expected result. If the actual output equals the expected output, the module meets the customer
      requirement; otherwise the value pool is consulted (Fig. 3).


      Fig. 3 Value pool



 Department of CSE, Sun College of Engineering and Technology
National Conference on Role of Cloud Computing Environment in Green Communication 2012                                                                 414


      Stage 1: Random values are seeded into the value pools for primitive types such as int, according to the bounds in
      the pools. Stage 2: Values are seeded into non-primitive-type classes that have initializer constructors by calling
      those constructors. Stage 3: The rest of the test case is constructed and run by repeatedly randomly choosing a
      method, receiver, and parameter values. Each method call may produce a return value, which is placed back into a
      value pool.
          From the value pool, suggestions for changes in the code are displayed. The value pool is a knowledge pool
      which contains major and minor changes which can be made to meet the customer requirements.
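The three seeding stages can be sketched as follows; the types, bounds, and "initializer" below are illustrative stand-ins for Nighthawk's actual pools and constructors:

```python
import random

random.seed(4)

def build_value_pools():
    pools = {}
    # Stage 1: seed primitive pools with random values within bounds.
    pools["int"] = [random.randint(-100, 100) for _ in range(5)]
    # Stage 2: seed non-primitive pools by calling initializer constructors;
    # here str(int) plays the role of an initializer.
    pools["str"] = [str(v) for v in pools["int"]]
    return pools

def run_stage_3(pools, num_calls=10):
    # Stage 3: repeatedly pick a method, receiver, and arguments, and put
    # each return value back into the matching pool. String concatenation
    # stands in for a method call here.
    for _ in range(num_calls):
        receiver = random.choice(pools["str"])
        result = receiver + random.choice(pools["str"])
        pools["str"].append(result)
    return pools

pools = run_stage_3(build_value_pools())
```

Feeding return values back into the pools (the last step) is what lets later calls build on the objects created by earlier ones.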




      Fig. 4 Generation of GA chromosome




      Fig. 5 Evolutionary algorithm for unit testing.

         In the comparison of pooling and knowledge-based pooling, we first take the testing result from randomized
      testing and then take the testing result from enhanced randomized testing.

                         IV.CONCLUSION
          In summary, our experiments indicate that feedback-directed random generation retains the benefits of random
      testing (scalability, simplicity of implementation), avoids random testing pitfalls (generation of redundant or
      meaningless inputs), and is competitive with systematic techniques. Nighthawk is able to achieve high coverage
      of complex, real-world Java units, while retaining the most desirable feature of randomized testing: the ability to
      generate many new high-coverage test cases quickly.
          Future work includes integration testing, which will reduce the time taken for testing the individual modules
      by unit testing. If one tenth of the coverage is surrendered, Nighthawk can be run 10 times faster.


                                                                 V. REFERENCES


      [1]   J.H. Andrews, T. Menzies, and F.C.H. Li, "Genetic Algorithms for Randomized Unit Testing," IEEE Transactions on Software
            Engineering, vol. 37, no. 1, Jan./Feb. 2011.
      [2]   B.P. Miller, L. Fredriksen, and B. So, "An Empirical Study of the Reliability of UNIX Utilities," Comm. ACM, vol. 33, no. 12, pp. 32-44,
            Dec. 1990.
      [3]   J.H. Andrews, S. Haldar, Y. Lei, and C.H.F. Li, "Tool Support for Randomized Unit Testing," Proc. First Int'l Workshop on Randomized
            Testing, pp. 36-45, July 2006.
      [4]   C. Pacheco, S.K. Lahiri, M.D. Ernst, and T. Ball, "Feedback-Directed Random Test Generation," Proc. 29th Int'l Conf. Software Eng.,
            pp. 75-84, May 2007.
      [5]   R.-K. Doong and P.G. Frankl, "The ASTOOT Approach to Testing Object-Oriented Programs," ACM Trans. Software Eng. and
            Methodology, vol. 3, no. 2, pp. 101-130, Apr. 1994.
      [6]   J. Andrews, F. Li, and T. Menzies, "Nighthawk: A Two-Level Genetic-Random Unit Test Data Generator," Proc. 22nd IEEE/ACM Int'l
            Conf. Automated Software Eng., http://menzies.us/pdf/07asenighthawk.pdf, 2007.
      [7]   J.C. King, "Symbolic Execution and Program Testing," Comm. ACM, vol. 19, no. 7, pp. 385-394, 1976.
      [8]   J. Andrews and T. Menzies, "On the Value of Combining Feature Subset Selection with Genetic Algorithms: Faster Learning of Coverage
            Models," Proc. Fifth Int'l Conf. Predictor Models in Software Eng., http://menzies.us/pdf/09fssga.pdf, 2009.




      [9]   I. Kononenko, E. Simec, and M. Robnik-Sikonja, "Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF," Applied
            Intelligence, vol. 7, pp. 39-55, 1997. Available from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.4740





								