A Model for the Measurement of the Runtime Testability of Component-based Systems

Alberto González        Éric Piel        Hans-Gerhard Gross
Delft University of Technology, Software Engineering Research Group
Mekelweg 4, 2628 CD Delft, The Netherlands
Email: {a.gonzalezsanchez,e.a.b.piel,h.g.gross}@tudelft.nl

Abstract

Runtime testing is emerging as the solution for the integration and validation of software systems where traditional development-time integration testing cannot be performed, such as Systems of Systems or Service Oriented Architectures. However, performing tests during deployment or in-service time introduces interference problems, such as undesired side-effects in the state of the system or the outside world.

This paper presents a qualitative model of runtime testability that complements Binder's classical testability model, and a generic measurement framework for quantitatively assessing the degree of runtime testability of a system based on the ratio of what can be tested at runtime vs. what would have been tested during development time. A measurement is devised for the concrete case of architecture-based test coverage, by using a graph model of the system's architecture. Concretely, two testability studies are performed for two component-based systems, showing how to measure the runtime testability of a system.

1. Introduction

Runtime testing is emerging as the solution for the validation and acceptance of software systems for which traditional development-time integration testing cannot be performed. Examples of such systems are systems with a very high availability requirement, which cannot be put off-line to perform maintenance operations (such as air traffic control systems, systems of emergency units, banking applications, etc.), or dynamic Systems of Systems [14] and Service Oriented Architectures [3], where in some cases the components that will form the system are not known beforehand. Integration and system testing for such systems is becoming increasingly difficult and costly to perform in a development-time testing environment, so that a viable option is to test their component interactions during runtime.

Although runtime testing solves the problem of the availability of unknown components, and eliminates the need to take the system off-line, it introduces new problems. First, it requires the test infrastructure, like test drivers, stubs, oracles, etc., to be integrated into the runtime environment. Second, the components themselves must assume some of the testing infrastructure in order to take advantage of runtime testing. Third, and most important, it requires knowledge of the likely impact of tests on the running system, in order to avoid interference of the runtime testing activities with the operational state of the system, and to take the appropriate measures to reduce these disturbances to the lowest possible degree.

Knowledge of a runtime test's impact on a system requires a measurement of what can be tested during runtime, without causing any disturbance in the running system, compared with what would have been tested if the system were checked off-line in a traditional development-time integration test. In this paper this measurement is defined as Runtime Testability. Questions regarding runtime testability include "What tests are safe to run?", "Which parts of the system will have to be left untested?" or "How can the runtime testability of a system be improved?".

This paper devises a qualitative model of the main facets of runtime testability, and a framework for the definition of measurements of runtime testability from which a number of metrics can be derived according to the test criteria applied. Furthermore, it defines one such measurement, based on architectural test coverage on a graph representation of the system. This measurement will be used to estimate the Runtime Testability of two component-based systems: one taken from a case study in the maritime safety and security domain, and an air-
port lounge Internet gateway.

The paper is structured as follows. In Section 2 runtime testing is presented, together with the cases in which it is necessary. In Section 3, background and related work on runtime testing and testability is discussed and related to our research. Section 4 defines runtime testability in the context of the IEEE's definition of Testability, describes the main factors that have an influence over it, and presents the generic framework for the measurement of runtime testability. Section 5 presents our particular model-based coverage measurement and describes two examples of the calculation of runtime testability and their results. Finally, Section 6 presents our conclusions and ideas for future research.

2. Runtime Testing

Runtime Testing is a testing method that has to be carried out on the final execution environment [4]. It can be divided into two phases: deployment testing (when the software is first installed), and in-service testing (once the system is in use).

Deployment-time testing is motivated by the fact that there are many aspects of a system that cannot be verified until the system is deployed in the real environment [4, 21]. Also, in Systems of Systems and Service Oriented Architectures, engineers will have to integrate components that are autonomous, i.e. the components or services that are integrated have a separate operational and managerial entity of their own [9, 19]. The service or component integrator does not have complete control over the components that he or she is integrating. Moreover, in many cases these components will be remote, third-party services over which he or she will have no control at all, let alone control for accessing a second instance of the system for testing purposes [3, 6].

In-service testing derives from the fact that component-based systems (such as Systems of Systems and Service Oriented Architectures) can have a changing structure. The components that will form the system may not be available, or even known, beforehand [3]. Every time a new component is added, removed or updated, there will be a number of tests whose results will no longer be valid and will have to be re-verified. The only possibility for testing, thus, is to verify and validate the system after it has been modified and the missing components become available. This is relevant for self-managing autonomic systems as well, which dynamically change the structure or behaviour of components that may already be operating in an unpredictable environment.

Figure 1 depicts this fundamental difference between traditional integration testing and runtime testing. On the left-hand side, a traditional off-line testing method is used, where a copy of the system is created, the reconfiguration is planned and tested separately, and once the testing has finished the changes are applied to the production system. On the right-hand side, a runtime testing process is shown, where the planning and testing phases are executed over the production system.

Figure 1. Non-runtime vs. runtime testing

3. Related Work

3.1. Related work on Runtime Testing

Architectural support (special sets of interfaces and/or components and the activities associated to them) for testing a system once it has been deployed has already been introduced for component-based systems [4, 13] and autonomic systems [18].

Many of the concepts inherent to runtime testing are introduced by Brenner et al. [4], without relating them explicitly to the concept of runtime testability. Examples are awareness of when tests are possible from the application logic and resource availability point of view, or the necessity for an infrastructure that isolates application logic from testing operations by applying the appropriate countermeasures. Related to this, Suliman et al. [21] discuss several runtime test execution and isolation scenarios, for which, depending on the characteristics of the components, different test isolation strategies are advised. The properties introduced will be applied in our testability model presented in Section 4.

In addition, Brenner et al. [3] also introduce runtime testing in the context of Web Services. The au-
thors argue that traditional component-based testing techniques can still be applied to Web Services under certain circumstances. Different testing strategies for runtime testing are proposed, depending on the service's type (stateless, per-client state, pan-client state). However, neither test sensitivity nor test isolation are mentioned in that work.

3.2. Related work on Testability

According to the IEEE's Standard Glossary, testability is defined as: (1) the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met; (2) the degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met.

A number of research efforts focus on modeling statistically which characteristics of the component, or of the test setup, are more prone to revealing faults [10, 22]. These probabilistic models can be used to amplify reliability information [1, 15].

Jungmayr [17] proposes a measurement of testability from the point of view of the architecture of the system, by measuring the static dependencies between the components. This measurement does not intend to extract reliability information, but to maximize the system's maintainability, by minimizing the number of components needed to test another one, and by minimizing the number of affected components after a component changes.

The model presented in this paper can be seen as a complement to Binder's model of Testability for object-oriented systems [2], which is based on six main factors that contribute to the overall testability of the system: (1) Representation; (2) Implementation; (3) Built-in Test; (4) Test Suite; (5) Test Tools; and (6) Test Process. A very similar model of Testability, adapted to component-based systems, is presented in [11].

Although the testability issues taken into account by these works are a concern when performing runtime testing, they do not explicitly refer to test sensitivity and isolation as first-class concerns of runtime testing. This paper provides a complementary model of testability that can be used to assess the ability of a system to be tested at runtime without interfering with the production state of the system: its runtime testability.

4. Runtime Testability

Runtime testing will interfere with the system state or resource availability in unexpected ways, as the production state and data of the system will mix with the testing. Even worse, test operations may trigger events outside the system's boundaries, possibly affecting the system's environment in critical ways that are difficult to control or impossible to recover from, e.g. firing a missile while testing part of a combat system.

The fact that there is interference through runtime testing requires an indicator of how resilient the system is with respect to runtime testing, or, in other words, of what adverse effects can be caused by tests on the running system. The standard definition of testability by the IEEE can be rephrased to reflect these requirements, as follows:

Definition 1 Runtime Testability is (1) the degree to which a system or a component facilitates runtime testing without being extensively affected; (2) the specification of which tests are allowed to be performed during runtime without extensively affecting the running system.

This definition considers both (1) the characteristics of the system and the extra infrastructure needed for runtime testing, and (2) the identification of which test cases are admissible out of all the possible ones.

Runtime testability rests on two main pillars: test sensitivity and test isolation. We will introduce the main factors that have an impact on both of them. Figure 2 depicts a fishbone diagram of these factors.

4.1. Test Sensitivity

Test sensitivity characterises which operations, performed as part of a test, interfere with the state of the running system or its environment in an unacceptable way. In this section we describe four of the main factors that influence the test sensitivity of a component: a component having internal state, a component's internal/external interactions, resource limitations, and system availability.

4.1.1. Component State. Knowing whether the component exhibits some kind of external state (i.e. the result of an input depends not only on the value of the input itself, but also on the values of past inputs) is an important factor of test sensitivity. In traditional "off-line" testing, this is important because the invocation order will have an effect on the expected result of the test. In the case of runtime testing, knowing whether a component has state is important for two additional reasons: firstly, because the results of runtime tests will be influenced by the state of the system if not handled correctly, and secondly, because the state of the system could be altered as a result of a test invocation.
                           Figure 2. Qualitative factors that affect runtime testability
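The state-sensitivity problem can be sketched with a minimal, hypothetical example (the `Account` component below is our own illustration, not part of the systems studied in this paper): a test invocation on a stateful component alters the state observed by production clients, unless a state-separation countermeasure, such as the cloning level proposed by Suliman et al. [21], is applied.

```python
import copy

class Account:
    """Toy stateful component: the result of an invocation
    depends on past inputs (external state)."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount
        return self.balance

# Production instance holding in-service state.
prod = Account()
prod.deposit(100)

# Naive runtime test invocation: the production state is polluted,
# every client now observes 101 instead of 100.
prod.deposit(1)

# State separation by cloning: the test runs against a deep copy,
# leaving the in-service state untouched.
service = Account()
service.deposit(100)
clone = copy.deepcopy(service)
assert clone.deposit(1) == 101   # test passes on the clone
assert service.balance == 100    # production state is preserved
```

Which isolation level is adequate depends on whether the component's state can be cloned cheaply; Section 4.2.1 discusses the alternatives.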

An interesting distinction, made in [3], is whether the state of the component under test can influence the states of components other than the tester component (i.e. each user sees the same common state) or not (i.e. each user sees a different state).

4.1.2. Component Interactions. On many occasions, components will use other components of the system, or interact with external actors outside the boundaries of the system. These interactions have the potential of initiating further interactions, and so forth. This means that the runtime testability of a component depends on the runtime testability of the components it interacts with during a test.

All these interactions will likely cause interference with the state of the running system by changing the state of any of the components in the collaboration. In some cases, some of these interactions will cross the boundaries of the system and affect the states of other systems, which may be difficult to prevent and fix. In the worst case, the interaction will reach "the outside world" by sending some output that triggers a physical effect that may be impossible to undo, for example, firing a missile.

The main implication of interactions, besides altering the real world, is that the runtime testability of a system cannot be completely calculated if some of the components are not known (something very common in a dynamic system like a Service Oriented System). Therefore, runtime testability has to be studied online, when the components that form the system are known.

4.1.3. Resource Limitations. The two previous sensitivity factors mainly affect the functional requirements of the system. However, runtime testing may also affect non-functional requirements. Because runtime tests will be executed on the running system, the load of these tests will be added to the load caused by the normal operation of the system. In some cases it will exceed the available resources of the system, such as processor or memory usage, timing constraints, or even power consumption restrictions.

Appropriate measures, such as the ones presented later in this paper, must be implemented to ensure that runtime tests do not affect the availability of resources for the components that need them, or that, if the availability is impaired, the affected components can recover.

4.1.4. Availability. The availability requirements of the system in which the testing is going to be performed are also a factor. There are two possibilities: either the component is going to be active only for testing purposes (exclusive usage), or for both testing and normal service (shared usage). In a shared configuration two further distinctions can be made: blocking and non-blocking. The first means that production operations will be blocked or rejected while the test is being performed, impairing the availability of the services provided by the components. If a component has a high availability requirement, runtime testing under these circumstances cannot be performed. In the second case, test invocations can be interleaved with production invocations, and the component is able to distinguish between testing and production requests.

4.2. Test Isolation

Test isolation techniques are the means test engineers have of preventing test operations from interfering with the state or environment of the system, and of preventing the production state and interactions of the system from influencing test results. Therefore, they represent the capability of a system to enable runtime testing by providing countermeasures for its test sensitivity.

A prerequisite for applying test isolation techniques is a prior study of test sensitivity, in order to find the runtime testability issues that will make runtime testing (partially) fail. In the following paragraphs we present some ideas for the development of test isolation
techniques that could be evaluated in further work, by using our measurement in terms of testability gain and implementation effort.

4.2.1. State Separation. State separation techniques are the counterpart of state sensitivity. They aim to separate the in-service state of a component from the testing state. If the component has an observable state, Suliman et al. [21] propose a solution based on three levels of sophistication. The first level consists of blocking the component operation while it is being tested. The second level proposes cloning the component with special support from the runtime environment. The last level relies on special testing sessions, supported by the tested components, to provide state isolation on components that cannot be cloned easily.

4.2.2. Interaction Separation. Interaction separation can be applied to component interactions that propagate through the system and affect other components and, in particular, the external environment of the system. When interactions cross the system boundary, two possible isolation solutions can be foreseen: omission of the output, or simulation. Omission of an output consists of suppressing the output, as if it had never occurred. This is possible only if the rest of the test does not depend at all on the effects that output might have. If a response is expected, in the form of an input or external event, then the external system will have to be replaced by a simulator, or a mock component.

4.2.3. Resource Monitoring. To prevent test cases from exhausting the resources of the system, resource monitoring techniques can be applied. A simple monitoring solution would be to deny or postpone the execution of a test case if it needs more resources than are currently available, for example if the system load grows over a certain threshold [4]. A more advanced possibility would be to allow components and tests to negotiate the resources needed for specific tests.

4.2.4. Scheduling. Tests can be scheduled to preserve the availability of the components, aiming to control how, in what number, and at what moment test cases are allowed to be executed in the system. For example, some test cases would only be executed when the component is less needed, akin to what was proposed for the reconfiguration phase in [20]. If a component is blocked by a test, but there is a certain service operation that cannot be put on hold anyway, the test case could be pre-empted from the system to satisfy the service call.

4.3. Runtime Testability Measurement

Ultimately, all the test sensitivity factors which impede runtime testing will prevent test engineers from assessing a certain feature or requirement that could otherwise be assessed under ideal conditions of unlimited resources and full control of the running system. This is the main idea used in this section to obtain a numerical measurement, the Runtime Testability Measurement (RTM) of a system.

Let M∗ be a measurement of all those features or requirements which we want to test, and Mr the same measurement reduced to the actual amount of features or requirements that can be tested at runtime, with Mr ≤ M∗. The Runtime Testability Measurement (RTM) of a system is defined as the quotient between Mr and M∗:

                    RTM = Mr / M∗                    (1)

Although generic, the simplicity of RTM allows engineers to tailor it to their specific needs, applying it to any abstraction of the system whose features they would like to assess by runtime testing. In this paper, we will further instantiate RTM in terms of test coverage, to estimate the maximum test coverage that a test engineer will be able to reach under runtime testing conditions. This measurement of runtime testability can be used to predict deficiencies in test coverage, even in the complete absence of test cases, and to correct this situation by showing the testability improvement when the issues causing the untestabilities are addressed.

Given C, the set of all the features that a given test adequacy criterion requires to be covered, and Cr, the set of features which can be covered at runtime, the value of RTM, based on Equation 1, is calculated as

                    RTM = |Cr| / |C|                 (2)

This definition is still generic enough that it can be used with any representation of the system for which a coverage criterion can be defined. For example, at a high granularity level, coverage of function points (as defined in the system's functional requirements) can be used. At a lower granularity level, coverage of the components' state machines can be used, for example for all-states or all-transitions coverage. In the following section, we will instantiate the above generic definition of RTM for component-based systems.

5. Case Study

As a concrete case, this paper presents the application of a graph dependency model of the system, anno-
tated with runtime testability information. This model is used to assess the cost of covering a specific feature with potential runtime test sequences, and to remove those whose cost is unacceptable.

A runtime testability study is performed on two component-based systems: a system-of-systems taken from a case study in the maritime safety and security domain, and an airport's wireless access-point system. The objective of our experiments is to show that our model can help identify systems not apt for runtime testing, and support decisions on how to address this situation, with the final goal of improving the quality and reliability of the integrated system.

5.1. Model of the System

Component-based systems are formed by components bound together by their service interfaces, which can be either provided (the component offers the service) or required (the component needs another component to provide the service). During a test, any service of a component can be invoked, although at a cost. In the case of a runtime testability analysis, the cost we are interested in is the impact cost of the test invocation on the running system or its environment, derived from the sensitivity factors of the component discussed in Section 4. These costs can present themselves in multiple magnitudes (computational cost, time or money, among others).

Operations whose impact cost is prohibitive have to be avoided, designating them as untestable. In this paper we will abstract from the process of identifying the cost sources and their magnitudes, and assume that all operations are either testable (no cost) or untestable (infinite cost).

For the purposes of this paper, the system will be modelled using a directed component dependency graph known as a Component Interaction Graph (CIG) [24]. On the one hand, it is detailed enough to trace key runtime testability issues to the individual operations of components that cause them. On the other hand, it is simple enough that its derivation is easy and its computation is a tractable problem.

A CIG is defined as a directed graph CIG = (V, E). The vertex set, V = VP ∪ VR, is formed by the union of the sets of provided and required vertices, where each vertex represents a method of an interface of a certain component. Edges in E are created from the vertices corresponding to the required interfaces to the vertices of provided interfaces for inter-component dependencies, and from the provided to the required interfaces for intra-component dependencies.

Each vertex vi ∈ V is annotated with testing penalty information τi, indicating whether it is possible to traverse that vertex when performing runtime testing or not, as follows:

        τi = 0 if the vertex can be traversed, ∞ otherwise        (3)

Edge information can be obtained either by static analysis of the component's source code, or by providing some kind of model, such as state or sequence diagrams [24]. In case no information is available for a certain vertex, a conservative approach should be taken, assigning an infinite weight to it.

5.2. Coverage Criteria

A number of architectural test coverage adequacy criteria have been defined based on the CIG or other similar representations [16, 24]. We will measure the runtime testability of the system based on two adequacy criteria proposed in [24].

The all-vertices adequacy criterion requires executing each method in all the provided and required interfaces of the components, which translates to traversing each vertex vi ∈ V of our model at least once.

On the other hand, the all-context-dependence criterion requires testing invocations of vertices in every possible context. A vertex vj is context-dependent on vi if there is an invocation sequence from vi that reaches vj. For each of these dependences, all the possible paths (vi, vi+1, ..., vj) are considered viable, and need to be tested.

5.3. Value of RTM

We will estimate the impact cost of covering each of the context dependences or vertices in the graph, flagging as untestable those whose cost is prohibitive, in the same way as for individual operations. We do not look at the penalty of actual test cases, but at the possible, worst-case penalty of any test case that tries to cover the elements of C.

We will assume that the interaction starts at the vertex (for all-vertices coverage) or at the first vertex of the path (for all-context-dependences coverage) that we want to cover. Because edges in the CIG represent interactions that might or might not happen (without any control-flow information), we cannot assume that when trying to cover a path only the vertices in the path will be traversed. In the worst case, the interaction could propagate through all vertices reachable from the vertex where the interaction starts. Therefore, to estimate the worst-case penalty of covering a vertex vi or a context dependence path starting at vertex vi, the calcula-
                                     AISPlot       WifiLounge
 Total components                      31              9
 Total vertices                        86             159
 Total edges                           108            141
 Context-dependent paths              1447            730

  Table 1. Characteristics of the experiments

tion has to take into account all the vertices reachable               Figure 3. AISPlot Component Architecture
from vi , which we will denote as Pvi .
     For each vertex vi or path (vi , v j , vk , . . .) that we
would like to cover, we calculate a penalty value T (vi )
similar to the one for individual vertices:

                       T (vi ) =     ∑        τj              (4)
                                   v j ∈Pvi
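The worst-case computation above can be sketched on a toy graph. The following Python sketch (the component and method names are hypothetical, not taken from the paper's case studies) builds a small CIG as an adjacency map, computes the reachable set Pvi, and evaluates the penalty T(vi) of Equation 4, modelling untestable operations as τ = ∞:

```python
import math

# Hypothetical toy CIG: each vertex is a method of a component interface,
# mapped to the set of vertices its invocations may reach directly.
CIG = {
    "Sensor.read": {"Merger.put"},
    "Merger.put":  {"Store.write", "View.draw"},
    "Store.write": set(),
    "View.draw":   set(),
}

# Testing penalty tau per vertex: 0 = testable, math.inf = untestable
# (e.g. View.draw paints on a real screen and cannot be isolated).
TAU = {"Sensor.read": 0, "Merger.put": 0,
       "Store.write": 0, "View.draw": math.inf}

def reachable(graph, start):
    """P_v: every vertex reachable from `start`, including itself."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph[v])
    return seen

def penalty(graph, tau, v):
    """Worst-case penalty T(v): sum of tau_j over P_v (Equation 4)."""
    return sum(tau[u] for u in reachable(graph, v))
```

With these assumed flags, penalty(CIG, TAU, "Sensor.read") is infinite because the interaction may propagate to the untestable View.draw vertex, while penalty(CIG, TAU, "Store.write") is 0.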

By considering as testable only those features whose T(vi) ≠ ∞, Equation 2 can be rewritten for all-vertices and all-context-dependence coverage, respectively, as

        RTMv = |{v ∈ V | T(v) ≠ ∞}| / |V|                            (5)

        RTMc-dep = |{(vi, vj, vk, . . .) ∈ CIG | T(vi) ≠ ∞}|
                   / |{(vi, vj, vk, . . .) ∈ CIG}|

      Figure 4. Wifi Lounge Component Architecture
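Both ratios can be computed directly on the same kind of toy model. The sketch below is self-contained, with an assumed acyclic graph and penalty flags; context-dependence paths are enumerated here as all invocation paths of length two or more, which is one plausible reading of the path set in Equation 5:

```python
import math

# Assumed toy CIG and penalties; tau = math.inf marks an untestable vertex.
CIG = {"a": {"b"}, "b": {"c", "d"}, "c": set(), "d": set()}
TAU = {"a": 0, "b": 0, "c": math.inf, "d": 0}

def reachable(start):
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(CIG[v])
    return seen

def T(v):
    # Worst-case penalty of Equation 4 over the reachable set P_v.
    return sum(TAU[u] for u in reachable(v))

def context_paths():
    # All invocation paths (v_i, ..., v_j) of length >= 2.
    # Assumes an acyclic CIG for brevity.
    paths = []
    def walk(path):
        for nxt in CIG[path[-1]]:
            paths.append(path + (nxt,))
            walk(path + (nxt,))
    for v in CIG:
        walk((v,))
    return paths

# Equation 5: fraction of vertices / paths with a finite worst-case penalty.
rtm_v = sum(T(v) != math.inf for v in CIG) / len(CIG)
paths = context_paths()
rtm_cdep = sum(T(p[0]) != math.inf for p in paths) / len(paths)
```

For this graph rtm_v is 0.25 (only d cannot reach the untestable vertex c) and rtm_cdep is 0.0, since every enumerated path starts at a vertex whose reachable set contains c.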
     A possibility for future research is to use finite-valued penalties, establishing finite upper limits for the traversal penalty.

5.4. Experimental Setup

     In both experiments, intra-component CIG edges were derived by static analysis of the primitive components' source code. The edges between components were derived by inspecting the dependencies at runtime using reflection. The runtime testability flag τi was added based on the test sensitivities found in the source code of each component. In order to keep the number of untestable vertices at a tractable size, we considered untestable only those operations in components whose state was too complex to duplicate (such as databases), or which caused external interactions (output components).
     Table 1 shows the characteristics of the system architecture and graph model of the two systems used in our experiments, including the number of components, vertices, edges, and context-dependent paths of each system.

5.4.1. AISPlot. For the first experiment we use a vessel tracking system taken from our industrial case study. It consists of a component-based system from the maritime safety and security domain, code-named AISPlot. The architecture of the AISPlot system can be seen in Figure 3.
     Position messages are broadcast through radio by ships (represented in our experiment by the World component), and received by a number of base stations (BS component) spread along the coast. Each message received by a base station is then relayed to the Merger component, which removes duplicates (some base stations cover overlapping areas). Components interested in receiving status updates of ships can subscribe to receive notifications. The Monitor component scans received messages in search of inconsistencies, to detect potentially dangerous situations, e.g. ships on a collision course. The Visual component draws the position of all ships on a screen in the control centre, together with the warnings generated by the Monitor component.

5.4.2. Airport Lounge. In a second experiment we diagnosed the runtime testability of a wireless hotspot at an airport lounge [7]. Clients authenticate themselves as either business class passengers, loyalty program members, or prepaid service clients. The component architecture of the system is depicted in Figure 4.
     When a computer connects to the network, the DhcpListener component generates an event informing of the assigned IP address. All communications are blocked by the firewall until the authentication is validated. Passengers of business class are authen-
ticated against a number of flight ticket databases. Passengers from a miles program are authenticated against the frequent flyer program database, and against the ticket databases, to check that they are actually entitled to free access. Passengers using the prepaid method must create an account in the system, linked to a credit card that is used for the payments. Once the authentication has succeeded, the port block in the firewall is disabled so that the client can use the connection. The session ends when the user disconnects, or when the authentication token becomes invalid. If the user is using a prepaid account, its remaining prepaid time is updated.

                     RTMv    RTMc-dep
       AISPlot       0.14      0.012
       WifiLounge    0.62      0.41

    Table 2. Runtime testabilities of both systems

5.5. Testability Diagnostic

     For AISPlot, five operations from the Visual component have testability issues, due to the fact that they will influence the outside world by printing ship positions and warnings on the real screen if not isolated properly. Figure 5 depicts the Interaction Graph of AISPlot, with the problematic vertices marked with a larger dot.
     As summarized in Table 2, the RTM of the system is initially 14% for vertex coverage, and 1.2% for context dependence coverage. This extremely poor value is due to the fact that the architecture of the system is organised as a pipeline, with the Visual component at the end. Almost all vertices are connected to the five problematic vertices of the visualiser component, so these will appear in the Pvi of almost every vertex.
     On the other hand, for the Airport Lounge system, 13 operations are runtime untestable: operations which modify the state of the AccountDatabase, TransientIpDb and PermanentIpDb components are considered runtime untestable because they act on databases behind the components. The withdraw operation in CardCenter is also not runtime testable because it operates on a banking system outside our control. Finally, the operations that control the Firewall component are also runtime untestable, because this component is a front-end to a hardware element that is impossible to duplicate. The interaction graph of the system can be seen in Figure 6.
     The runtime testability of the system is intermediate: 62% for vertex coverage, and 41% for context dependency coverage. This value is far from the extremely low testability of the AISPlot system because, even though there are more runtime-untestable vertices than in the previous case, they are not part of as many Pvi as was the case in the previous experiment.

5.6. Discussion of the Experiments

     Once the model of the system and the measurement are obtained, the potential applications of this measurement are multiple. It can be used to evaluate the gain in testability when the test sensitivities of a set of vertices are addressed. An algorithm can be devised to find the optimal solution in terms of cost and testability gain, for example by assigning a fix cost to each vertex or component, and evaluating all the possible fix combinations.
     Moreover, even though the relationship between test coverage and defect coverage is not clear [5], previous studies have shown a beneficial effect of test coverage on reliability [8, 12, 23]. Furthermore, coverage is widely used by industry as a quality indicator. For these reasons, and as our measurement is an indicator of the maximum test coverage attainable by a runtime-tested system, it can be used as an indicator of the quality of a system, for example by using a coverage-based reliability model to account for the fact that coverage will be imperfect when performing runtime testing.

6. Conclusions and Future Work

     The amount of runtime testing that can be performed on a system is limited by the characteristics of the system, its components, and the test cases themselves. A measurement of these limitations is what we have defined as runtime testability.
     In this paper, we have presented a qualitative model of the main factors that affect the runtime testability of a system. Furthermore, we have provided a framework for the definition of quantitative coverage-based runtime testability measurements, and proposed a concrete application of our measurement to a number of coverage criteria of the Component Interaction Graph of the system. This model is very suitable for many types of systems, such as data-flow or client-server architectures.
     Further work will include using this model and measurement to find the optimal fix at the lowest cost, and a possible refinement of the model, for example to include state information to give better estimations. The evaluation of the accuracy of the predicted values, and of the effect of runtime testability on the system's reliability, is also left for future work. More validation using industrial cases and synthetic systems is also planned.

     Acknowledgements: This work has been carried
out as part of the Poseidon project under the responsibility of the Embedded Systems Institute (ESI), Eindhoven, The Netherlands. This project is partially supported by the Dutch Ministry of Economic Affairs under the BSIK03021 program.

      Figure 5. AISPlot Interaction Graph

      Figure 6. Wifi Lounge Interaction Graph

References

 [1] A. Bertolino and L. Strigini. Using testability measures for dependability assessment. In ICSE '95: Proceedings of the 17th International Conference on Software Engineering, pages 61–70, New York, NY, USA, 1995. ACM.
 [2] R. V. Binder. Design for testability in object-oriented systems. Communications of the ACM, 37(9):87–101, 1994.
 [3] D. Brenner, C. Atkinson, O. Hummel, and D. Stoll. Strategies for the run-time testing of third party web services. In SOCA '07: Proceedings of the IEEE International Conference on Service-Oriented Computing and Applications, pages 114–121, Washington, DC, USA, 2007. IEEE Computer Society.
 [4] D. Brenner, C. Atkinson, R. Malaka, M. Merdes, B. Paech, and D. Suliman. Reducing verification effort in component-based software engineering through built-in testing. Information Systems Frontiers, 9(2-3):151–162, 2007.
 [5] L. Briand and D. Pfahl. Using simulation for assessing the real impact of test coverage on defect coverage. In Proceedings of the 10th International Symposium on Software Reliability Engineering, pages 148–157, 1999.
 [6] A. Bucchiarone, H. Melgratti, and F. Severoni. Testing service composition. In 8th Argentine Symposium on Software Engineering, Mar del Plata, Argentina, 2007.
 [7] T. Bures. Fractal BPC demo. http://kraken.cs.cas.cz/ft/doc/demo/ftdemo.html.
 [8] X. Cai and M. R. Lyu. Software reliability modeling with test coverage: Experimentation and measurement with a fault-tolerant software project. In ISSRE '07: Proceedings of the 18th IEEE International Symposium on Software Reliability, pages 17–26, Washington, DC, USA, 2007. IEEE Computer Society.
 [9] D. Fisher. An emergent perspective on interoperation in systems of systems. Technical Report CMU/SEI-TR-2006-003, Software Engineering Institute, 2006.
[10] R. S. Freedman. Testability of software components. IEEE Transactions on Software Engineering, 17(6):553–564, 1991.
[11] J. Gao and M.-C. Shih. A component testability model for verification and measurement. In COMPSAC '05: Proceedings of the 29th Annual International Computer Software and Applications Conference, volume 2, pages 211–218, Washington, DC, USA, 2005. IEEE Computer Society.
[12] S. S. Gokhale and K. S. Trivedi. A time/structure based software reliability model. Annals of Software Engineering, 8(1-4):85–121, 1999.
[13] A. González, É. Piel, and H.-G. Gross. Architecture support for runtime integration and verification of component-based systems of systems. In 1st International Workshop on Automated Engineering of Autonomous and run-time evolving Systems (ARAMIS 2008), pages 41–48, L'Aquila, Italy, Sept. 2008. IEEE Computer Society.
[14] A. González, É. Piel, H.-G. Gross, and M. Glandrup. Testing challenges of maritime safety and security systems-of-systems. In Testing: Academic and Industry Conference - Practice And Research Techniques (TAIC PART'08), pages 35–39, Windsor, United Kingdom, Aug. 2008. IEEE Computer Society.
[15] D. Hamlet and J. Voas. Faults on its sleeve: amplifying software reliability testing. SIGSOFT Software Engineering Notes, 18(3):89–98, 1993.
[16] N. L. Hashim, S. Ramakrishnan, and H. W. Schmidt. Architectural test coverage for component-based integration testing. In QSIC '07: Proceedings of the Seventh International Conference on Quality Software, pages 262–267, Washington, DC, USA, 2007. IEEE Computer Society.
[17] S. Jungmayr. Identifying test-critical dependencies. In ICSM '02: Proceedings of the International Conference on Software Maintenance (ICSM'02), pages 404–413, Washington, DC, USA, 2002. IEEE Computer Society.
[18] T. M. King, D. Babich, J. Alava, P. J. Clarke, and R. Stevens. Towards self-testing in autonomic computing systems. In Autonomous Decentralized Systems, 2007. ISADS '07. Eighth International Symposium on, pages 51–58, Mar. 2007.
[19] M. W. Maier. Architecting principles for systems-of-systems. Systems Engineering, 1(4):267–284, 1998.
[20] J. Matevska and W. Hasselbring. A scenario-based approach to increasing service availability at runtime reconfiguration of component-based systems. In EUROMICRO '07: Proceedings of the 33rd EUROMICRO Conference on Software Engineering and Advanced Applications, pages 137–148, Washington, DC, USA, 2007. IEEE Computer Society.
[21] D. Suliman, B. Paech, L. Borner, C. Atkinson, D. Brenner, M. Merdes, and R. Malaka. The MORABIT approach to runtime component testing. In 30th Annual International Computer Software and Applications Conference, volume 2, pages 171–176, Sept. 2006.
[22] J. Voas, L. Morrel, and K. Miller. Predicting where faults can hide from testing. IEEE Software, 8(2):41–48, 1991.
[23] M. A. Vouk. Using reliability models during testing with non-operational profiles. In Proceedings of the 2nd Bellcore/Purdue Workshop on Issues in Software Reliability Estimation, pages 103–111, 1992.
[24] Y. Wu, D. Pan, and M.-H. Chen. Techniques for testing component-based software. In Proceedings of the IEEE International Conference on Engineering of Complex Computer Systems, page 222, Los Alamitos, CA, USA, 2001. IEEE Computer Society.
