Feedback Report on “Testability and placement”

 John Clark, Felix Lindlar, Kieran Lakhotia, Phil
  McMinn, Ignacio Romeu, Ina Schieferdecker
Overall Motivation
   To increase the effectiveness or efficiency (or both)
    of what we do to achieve our testing goals
Targets (with input from Mark Harman’s paper)
    Anything on which we seek to carry out evolutionary testing
         Structural
         Exceptions
         Reuse (breaking invocation pre-conditions)
         Safety condition breaking.
    Stressing timing, power, resource usage generally
    Higher level and specific system descriptions (e.g. FSMs, large
     system simulations or models)
    Not just test data generation, e.g. may seek to improve
     “observability” (e.g. limited probe placement).
    Also annotation is a form of transformation.
         Daikon is a great testability transformation tool!
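For the structural target above, evolutionary testing typically guides the search with a branch-distance style fitness. The sketch below is illustrative only: the names `branch_distance` and `fitness`, and the target predicate, are assumptions, not from the report.

```python
def branch_distance(a, b):
    """Distance to satisfying the predicate a == b (illustrative).

    0 when the branch would be taken; otherwise grows with |a - b|,
    giving the evolutionary search a gradient toward the branch.
    """
    return 0 if a == b else abs(a - b) + 1

def fitness(x):
    # Hypothetical coverage target: the true branch of `if x * x == 49:`
    return branch_distance(x * x, 49)

# Lower fitness is better; x = 7 reaches 0 and covers the target branch.
```

The search minimises this distance, so inputs that get "closer" to taking the branch score better even before the branch is actually taken.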

    See Mark Harman, “Open Problems in Testability Transformation”
What makes ET hard?
   Difficult data types (e.g. non-numerics) – leads to
    difficult landscapes
   High (internal) domain to (internal) range ratios
   We don’t understand how to best match problems to
    solution techniques.
   Concurrency (as a particular form of “complexity”)
   Execution time (i.e. if very long per invocation)
   Choosing a cost function (do we assume “one-size-
    fits-all”?)
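One classic source of difficult landscapes is the “flag” problem. The sketch below is illustrative (the function names and the constant 42 are hypothetical): a boolean flag flattens the fitness landscape, while substituting the flag’s definition into the branch restores a gradient.

```python
def covered_via_flag(x):
    """With a flag, the search sees only True/False: the landscape
    is a flat plateau with a single spike at x == 42, so there is
    no gradient to follow."""
    flag = (x == 42)
    return flag

def distance_after_transform(x):
    """After a testability transformation that substitutes the
    flag's definition into the predicate, a branch-distance
    measure can guide the search: 0 iff the branch is taken."""
    return abs(x - 42)
```

Before the transformation almost every input scores identically; after it, inputs nearer 42 score strictly better, which is exactly the smoothing the next section discusses.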
What can we do?
   Smoothing the landscape
   Topological stretching
   Making the system “smaller” – “scaling” the system
       Reducing input ranges and the like
   Choosing small representative systems, e.g. small number of processes
    in a system to be model checked (where potentially there may be
    unlimited numbers in practice)
       How low can you go?
   Adding more constraints! In a sense making the problem apparently
    harder in some cases.
   Try an easier problem (“How to Solve It” by Pólya)
       We want A and B and C
       So try to solve A and B and see what happens
   Replacing elements of functionality with convenient functionality (e.g.
    “mock” objects)
   Bounding (e.g. taking only a specified number of loop iterations)
   “Execute” the program backwards? Work from the
    solution back to the inputs.
   Use reduction strategies from other domains, e.g. formal
    verification techniques for reduction (symmetries, data
    independence, abstraction functions)
   Look at strategies from general problem solving:
       Divide and conquer (hierarchical and parallel)
       Waypoints (sequential, problem solving)
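The “mock objects” idea above can be sketched in a few lines with Python’s `unittest.mock`; the sensor, its `read` method, and the threshold are hypothetical names for illustration, not from the report.

```python
from unittest import mock

def system_under_test(sensor):
    # In the real system, sensor.read() might be a slow hardware or
    # network call that makes every fitness evaluation expensive.
    reading = sensor.read()
    return "alarm" if reading > 100 else "ok"

# Replace the inconvenient functionality with a convenient stand-in,
# so each evaluation is cheap and the value is directly controllable.
fake_sensor = mock.Mock()
fake_sensor.read.return_value = 150
```

Here the mock both removes the expensive call and gives the test (or the search) direct control over the internal value it needs to vary.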
Some questions
   Why is ET not used more?
   Do we really understand when ET has a good chance
    of working (or of working better than alternatives)?
   How does the success of ET vary across test goals?
   Can we use EC techniques to improve the testability of
    other solution techniques?
   How can we find transformations?
   (There is some work on evolving sequences of
    transformations or transformation types, e.g. in
    compiler optimisation.)
