SOFTWARE TESTING:-
The formal definition of software testing can be given as: the process of
exercising or evaluating a system or system component by manual or automated
means to verify that it satisfies specified requirements or to identify
differences between expected and actual results.
Testing occurs at every stage of system construction. The larger a piece of code is when
defects are detected, the harder and more expensive it is to find and correct the defects.

The different levels of testing reflect that testing, in the general sense, is not a single
phase of the software lifecycle. It is a set of activities performed throughout the entire
software lifecycle.

In considering testing, most people think of the activities described in the figure below. The
activities after Implementation are normally the only ones associated with testing. However,
software testing must be considered before implementation, as is suggested by the input
arrows into the testing activities.
BLACK-BOX & WHITE-BOX:-
Black-box and white-box are test design methods.
Black-box test design treats the system as a "black-box", so it doesn't explicitly use
knowledge of the internal structure. Black-box test design is usually described as
focusing on testing functional requirements. Black box tests are performed to assess how
well a program meets its requirements, looking for missing or incorrect functionality.
Functional tests typically exercise code with valid or nearly valid input for which the
expected output is known. This includes concepts such as 'boundary values'. Synonyms
for black-box include: behavioral, functional, opaque-box, and closed-box.
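As a small illustration, the C sketch below designs black-box tests around boundary values. The function is_valid_percentage() and its 0..100 valid range are hypothetical, invented purely for this example; the point is that the test cases are chosen from the specification alone, probing just inside and just outside each limit.

    #include <assert.h>

    /* Hypothetical unit under test: returns 1 if score is a valid
     * percentage (0..100), 0 otherwise. Only the specification is
     * needed to design the tests below, not this implementation. */
    int is_valid_percentage(int score)
    {
        return score >= 0 && score <= 100;
    }

    int main(void)
    {
        /* Boundary values: just inside and just outside each limit. */
        assert(is_valid_percentage(0)   == 1);   /* lower boundary    */
        assert(is_valid_percentage(-1)  == 0);   /* just below lower  */
        assert(is_valid_percentage(100) == 1);   /* upper boundary    */
        assert(is_valid_percentage(101) == 0);   /* just above upper  */
        assert(is_valid_percentage(50)  == 1);   /* nominal value     */
        return 0;
    }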
White-box test design allows one to peek inside the "box", and it focuses specifically on
using internal knowledge of the software to guide the selection of test data. Synonyms
for white-box include: structural, glass-box and clear-box.
While black-box and white-box are terms that are still in popular use, many people prefer
the terms "behavioral" and "structural". Behavioral test design is slightly different from
black-box test design because the use of internal knowledge isn't strictly forbidden, but
it's still discouraged. In practice, it hasn't proven useful to use a single test design
method. One has to use a mixture of different methods so that they aren't hindered by the
limitations of a particular one. Some call this "gray-box" or "translucent-box" test
design, but others wish we'd stop talking about boxes altogether.

White-box testing is performed to reveal problems with the internal structure of a
program. This requires the tester to have detailed knowledge of the internal structure. A
common goal of white-box testing is to ensure that test cases exercise every path through
a program. A fundamental strength that all white-box testing strategies share is that the
entire software implementation is taken into account during testing, which facilitates
error detection even when the software specification is vague or incomplete. The
effectiveness or thoroughness of white-box testing is commonly expressed in terms of test
or code coverage metrics, which measure the fraction of code exercised by test cases.
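As a minimal sketch of path-oriented test design, consider the hypothetical C function below: its two decisions give four paths, and one test case is chosen per path. With a toolchain such as GCC, the coverage achieved by tests like these can be measured by compiling with --coverage and inspecting the gcov report.

    #include <assert.h>

    /* Hypothetical unit with two decisions and therefore four paths;
     * white-box tests are designed by inspecting this structure. */
    int adjust(int a, int b)
    {
        int r = 0;
        if (a > 0)   /* decision 1 */
            r += a;
        if (b > 0)   /* decision 2 */
            r += b;
        return r;
    }

    int main(void)
    {
        /* One test case per path through the two decisions. */
        assert(adjust( 1,  1) == 2);   /* true,  true  */
        assert(adjust( 1, -1) == 1);   /* true,  false */
        assert(adjust(-1,  1) == 1);   /* false, true  */
        assert(adjust(-1, -1) == 0);   /* false, false */
        return 0;
    }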
It is important to understand that these methods are used during the test design phase, and
their influence is hard to see in the tests once they're implemented. Note that any level of
testing (unit testing, system testing, etc.) can use any test design method. Unit testing is
usually associated with structural test design, but this is because testers usually don't
have well-defined requirements at the unit level to validate.

REGRESSION TESTING:-
Regression testing is an expensive but necessary activity performed on modified software
to provide confidence that changes are correct and do not adversely affect other system
components. Four things can happen when a developer attempts to fix a bug. Three of
these things are bad, and one is good:
                          New Bug    No New Bug
    Successful Change     Bad        Good
    Unsuccessful Change   Bad        Bad
Because of the high probability that one of the bad outcomes will result from a change to
the system, it is necessary to do regression testing. It can be difficult to determine how
much re-testing is needed, especially near the end of the development cycle. Most
industrial testing is done via test suites: automated sets of procedures designed to exercise
all parts of a program and to show defects. While the original suite could be used to test
the modified software, this might be very time-consuming. A regression test selection
technique chooses, from an existing test set, the tests that are deemed necessary to
validate modified software.

There are three main groups of test selection approaches in use:
    Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun.
    Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components (a selection sketched in the example below).
    Safe approaches instead attempt to select every test that will cause the modified program to produce different output than the original program.
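The C sketch below illustrates the coverage-based selection described above. The coverage matrix, the unit count and the set of changed units are hard-coded assumptions for illustration; in practice the matrix would come from running a coverage tool over the original test suite.

    #include <stdio.h>

    /* Minimal sketch of coverage-based regression test selection. */
    #define NTESTS 4
    #define NUNITS 5

    static const char *test_name[NTESTS] = { "T1", "T2", "T3", "T4" };

    /* cover[t][u] == 1 if test t exercises unit u (assumed data). */
    static const int cover[NTESTS][NUNITS] = {
        { 1, 1, 0, 0, 0 },   /* T1 */
        { 0, 1, 1, 0, 0 },   /* T2 */
        { 0, 0, 0, 1, 0 },   /* T3 */
        { 0, 0, 0, 1, 1 },   /* T4 */
    };

    /* changed[u] == 1 if unit u was modified in this release. */
    static const int changed[NUNITS] = { 0, 1, 0, 0, 0 };

    int main(void)
    {
        /* Select every test that exercises at least one changed unit. */
        for (int t = 0; t < NTESTS; t++)
            for (int u = 0; u < NUNITS; u++)
                if (cover[t][u] && changed[u]) {
                    printf("rerun %s\n", test_name[t]);
                    break;
                }
        return 0;   /* prints: rerun T1, rerun T2 */
    }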

An interesting approach to limiting test cases is based on whether we can confine testing
to the "vicinity" of the change. (For example, if I put a new radio in my car, do I have to
do a complete road test to make sure the change was successful?) A new breed of regression
test theory tries to identify, through program flow graphs or reverse engineering, where
boundaries can be placed around modules and subsystems. These graphs can determine
which tests from the existing suite may exhibit changed behavior on the new version.

Regression testing has been receiving more attention as corporations focus on fixing the
'Year 2000 Bug'. The goal of most Y2K efforts is to correct the date handling portions of a
system without changing any other behavior. A new 'Y2K' version of the system is
compared against a baseline original system. With the obvious exception of date formats,
the behavior of the two versions should be identical. This means not only do they do
the same things correctly, they also do the same things incorrectly. A non-Y2K bug in the
original software should not have been fixed by the Y2K work.

There are a number of different ways to determine when the test phase of the software life
cycle is complete. Some common examples are:

      All black-box test cases are run
      White-box test coverage targets are met
      Rate of fault discovery goes below a target value
      Target percentage of all faults in the system are found
      Measured reliability of the system achieves its target value (mean time to failure; see the sketch below this list)
      Test phase time or resources are exhausted
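The reliability criterion can be made concrete with a small calculation. In the C sketch below, mean time to failure is estimated as total operating time divided by the number of observed failures; the 600 hours of operating time, the five failures and the 100 hour target are all invented for illustration.

    #include <stdio.h>

    int main(void)
    {
        /* All figures are invented for illustration. */
        const double total_hours = 600.0;   /* operating time accumulated in test */
        const int    failures    = 5;       /* failures observed in that time     */
        const double target_mttf = 100.0;   /* reliability target (MTTF, hours)   */

        /* Estimate mean time to failure as operating time per failure. */
        double mttf = total_hours / failures;
        printf("estimated MTTF: %.1f h (target %.1f h)\n", mttf, target_mttf);
        puts(mttf >= target_mttf ? "target met" : "keep testing");
        return 0;
    }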
STRESS TESTING & LOAD TESTING:-

1. Stress testing is subjecting a system to an unreasonable load while denying it the
resources (e.g., RAM, disk, MIPS, interrupts, etc.) needed to process that load. The idea is
to stress a system to the breaking point in order to find bugs that will make that break
potentially harmful. The system is not expected to process the overload without adequate
resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing
data). Bugs and failure modes discovered under stress testing may or may not be
repaired depending on the application, the failure mode, consequences, etc. The load
(incoming transaction stream) in stress testing is often deliberately distorted so as to force
the system into resource depletion.

2. Load testing is subjecting a system to a statistically representative (usually) load.
The two main reasons for using such loads are in support of software reliability testing and
in performance testing. The term "load testing" by itself is too vague and imprecise to
warrant use. For example, do you mean "representative load," "overload," "high load,"
etc.? In performance testing, load is varied from a minimum (zero) to the maximum level
the system can sustain without running out of resources or having transactions suffer
(application-specific) excessive delay.

3. A third use of the term is as a test whose objective is to determine the maximum
sustainable load the system can handle. In this usage, "load testing" is merely testing at
the highest transaction arrival rate in performance testing.
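The C sketch below illustrates this third usage: stepping up the offered load and measuring mean latency until an application-specific threshold is exceeded. The transaction() function and the 1 ms threshold are stand-ins invented for illustration; against a real system the mean latency would climb as resources saturate, which this single-threaded stand-in cannot show.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical transaction under test; a real harness would submit
     * transactions to the actual system instead. */
    static void transaction(void)
    {
        volatile long x = 0;
        for (long i = 0; i < 10000; i++)
            x += i;   /* stand-in for real work */
    }

    int main(void)
    {
        const double max_latency_s = 0.001;   /* assumed application limit */

        /* Step the offered load upwards, measuring mean latency at each
         * level; the highest level that stays under the limit is the
         * maximum sustainable load. */
        for (long load = 100; load <= 100000; load *= 10) {
            clock_t start = clock();
            for (long i = 0; i < load; i++)
                transaction();
            double total = (double)(clock() - start) / CLOCKS_PER_SEC;
            printf("load %6ld: mean latency %.6f s\n", load, total / load);
            if (total / load > max_latency_s) {
                printf("limit exceeded at load %ld\n", load);
                break;
            }
        }
        return 0;
    }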

SOFTWARE DEVELOPMENT LIFECYCLES:-
The various activities which are undertaken when developing software are commonly
modelled as a software development lifecycle. The software development lifecycle
begins with the identification of a requirement for software and ends with the formal
verification of the developed software against that requirement.
The software development lifecycle does not exist by itself; it is in fact part of an overall
product lifecycle. Within the product lifecycle, software will undergo maintenance to
correct errors and to comply with changes to requirements. The simplest overall form is
where the product is just software, but it can become much more complicated, with
multiple software developments each forming part of an overall system to comprise a
product.
There are a number of different models for software development lifecycles. One thing
which all models have in common is that at some point in the lifecycle, software has to
be tested. This paper outlines some of the more commonly used software development
lifecycles, with particular emphasis on the testing activities in each model.
1. Sequential Lifecycle Models
Traditionally, the models used for the software development lifecycle have
been sequential, with the development progressing through a number of well defined
phases. The sequential phases are usually represented by a V or waterfall diagram. These
models are respectively called a V lifecycle model and a waterfall lifecycle model.
                                Figure 1 V Lifecycle Model
There are in fact many variations of V and waterfall lifecycle models, introducing
different phases to the lifecycle and creating different boundaries between phases. The
following set of lifecycle phases fits in with the practices of most professional software
developers.

    The Requirements phase, in which the requirements for the software are gathered and analyzed, to produce a complete and unambiguous specification of what the software is required to do.
    The Architectural Design phase, where a software architecture for the implementation of the requirements is designed and specified, identifying the components within the software and the relationships between the components.




                           Figure 2 Waterfall Lifecycle Model
    The Detailed Design phase, where the detailed implementation of each component is specified.
    The Code and Unit Test phase, in which each component of the software is coded and tested to verify that it faithfully implements the detailed design.
    The Software Integration phase, in which progressively larger groups of tested software components are integrated and tested until the software works as a whole.
    The System Integration phase, in which the software is integrated into the overall product and tested.
    The Acceptance Testing phase, where tests are applied and witnessed to validate that the software faithfully implements the specified requirements.
Software specifications will be products of the first three phases of this lifecycle model.
The remaining four phases all involve testing the software at various levels, requiring
test specifications against which the testing will be conducted as an input to each of these
phases.
2. Progressive Development Lifecycle Models
The sequential V and waterfall lifecycle models represent an idealised model of software
development. Other lifecycle models may be used for a number of reasons, such as
volatility of requirements, or a need for an interim system with reduced functionality
when long timescales are involved. As an example of other lifecycle models, let us look
at progressive development and iterative lifecycle models.
A common problem with software development is that software is needed quickly, but it
will take a long time to fully develop. The solution is to form a compromise between
timescales and functionality, providing "interim" deliveries of software with reduced
functionality, but serving as stepping stones towards the fully functional software. It is
also possible to use such a stepping stone approach as a means of reducing risk.
The usual names given to this approach to software development are progressive
development or phased implementation. The corresponding lifecycle model is referred
to as a progressive development lifecycle. Within a progressive development lifecycle,
each individual phase of development will follow its own software development
lifecycle, typically using a V or waterfall model. The actual number of phases will
depend upon the development.




                           Figure 3 Progressive Development Lifecycle
Each delivery of software will have to pass acceptance testing to verify the software
fulfils the relevant parts of the overall requirements. The testing and integration of each
phase will require time and effort, so there is a point at which an increase in the number
of development phases actually becomes counterproductive, giving an increased cost
and timescale which have to be weighed carefully against the need for an early
solution.
The software produced by an early phase of the model may never actually be used; it
may just serve as a prototype. A prototype will take short cuts in order to provide a
quick means of validating key requirements and verifying critical areas of design. These
short cuts may be in areas such as reduced documentation and testing. When such short
cuts are taken, it is essential to plan to discard the prototype and implement the next
phase from scratch, because the reduced quality of the prototype will not provide a good
foundation for continued development.
3. Iterative Lifecycle Models
An iterative lifecycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just part of
the software, which can then be reviewed in order to identify further requirements. This
process is then repeated, producing a new version of the software for each cycle of the
model.
Consider an iterative lifecycle model which consists of repeating the four phases in
sequence, as illustrated by figure 4.




                           Figure 4 Iterative Lifecycle Model
    A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase which produces a complete and final specification of requirements.
    A Design phase, in which a software solution to meet the requirements is designed. This may be a new design, or an extension of an earlier design.
    An Implementation and Test phase, when the software is coded, integrated and tested.
    A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to requirements are proposed.
For each cycle of the model, a decision has to be made as to whether the software
produced by the cycle will be discarded, or kept as a starting point for the next cycle
(sometimes referred to as incremental prototyping). Eventually a point will be reached
where the requirements are complete and the software can be delivered, or it becomes
impossible to enhance the software as required, and a fresh start has to be made.
The iterative lifecycle model can be likened to producing software by successive
approximation. Drawing an analogy with mathematical methods which use successive
approximation to arrive at a final solution, the benefit of such methods depends on how
rapidly they converge on a solution.
Continuing the analogy, successive approximation may never find a solution. The
iterations may oscillate around a feasible solution or even diverge. The number of
iterations required may become so large as to be unrealistic. We have all seen software
developments which have made this mistake!
The key to successful use of an iterative software development lifecycle is rigorous
validation of requirements, and verification (including testing) of each version of the
software against those requirements within each cycle of the model. The first three
phases of the example iterative model are in fact an abbreviated form of a sequential V or
waterfall lifecycle model. Each cycle of the model produces software which requires
testing at the unit level, for software integration, for system integration and for
acceptance. As the software evolves through successive cycles, tests have to be repeated
and extended to verify each version of the software.

UNIT TESTING:-
There’s no doubt that software is playing an expanding role in today’s commercial and
military products. As that software grows more complex, the pressure is on for
engineering teams to find ways to improve product reliability while reducing
development costs. Unit level testing addresses one side of those efforts: it continues
to be very effective in improving software reliability. Yet many teams are put off by the
perception that unit level testing is both tedious and expensive.

Unit level testing verifies the correct functionality of individual software units: a
procedure or function in a procedural language, or a class in an object-oriented
language. Any external functions called are replaced with stub (or wrapper) functions
to allow full control of the test environment. A “test driver” program is written (or
automatically generated) to initialize data, invoke functions under test and verify the
results.
A test driver is software which executes software in order to test it, providing a
framework for setting input parameters, executing the unit, and reading the output
parameters. A stub is an imitation of a unit, used in place of the real unit to facilitate
testing.
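The C sketch below puts these definitions together. The unit under test, sensor_is_overheated(), and the external function read_sensor() that it calls are hypothetical, invented for illustration; the stub replaces the real read_sensor() so the driver can force any sensor value it needs.

    #include <assert.h>

    /* --- Stub: replaces the real external function read_sensor() so
     *     the test fully controls the unit's environment. --- */
    static int stub_sensor_value;          /* value the stub will return */
    int read_sensor(void) { return stub_sensor_value; }

    /* --- Unit under test: calls the external function read_sensor(). --- */
    int sensor_is_overheated(int limit)
    {
        return read_sensor() > limit;
    }

    /* --- Test driver: sets inputs, invokes the unit, checks outputs. --- */
    int main(void)
    {
        stub_sensor_value = 99;
        assert(sensor_is_overheated(100) == 0);   /* below limit */

        stub_sensor_value = 101;
        assert(sensor_is_overheated(100) == 1);   /* above limit */

        return 0;
    }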
Figure 1 illustrates the unit test program structure. Many development projects get
bogged down at the integration phase when unit level coding and design problems
surface. Each iteration of a system build requires significant time and resources,
followed by the setup, execution and analysis of a system level test. Clearly, such
elaborate steps aren't an efficient method for detecting low level defects.
Meanwhile, unit level error conditions are nearly impossible to simulate in a system
level test, leaving many error handling scenarios completely untested. With that in mind,
low level defects should be detected at the unit development stage, before the integration
phase.
DIFFERENT APPROACHES FOR UNIT TESTING:-

1. Top Down Testing
1.1. Description
In top down unit testing, individual units are tested by using them from the units which
call them, but in isolation from the units called.
The unit at the top of a hierarchy is tested first, with all called units replaced by stubs.
Testing continues by replacing the stubs with the actual called units, with lower level
units being stubbed. This process is repeated until the lowest level units have been tested.
Top down testing requires test stubs, but not test drivers.
Figure 1.1 illustrates the test stubs and tested units needed to test unit D, assuming that
units A, B and C have already been tested in a top down approach.
A unit test plan for the program shown in figure 1.1, using a strategy based on the top
down organisational approach, could read as follows:
Step (1)
Test unit A, using stubs for units B, C and D.
Step (2)
Test unit B, by calling it from tested unit A, using stubs for units C and D.
Step (3)
Test unit C, by calling it from tested unit A, using tested units B and a stub for unit D.
Step (4)
Test unit D, by calling it from tested unit A, using tested unit B and C, and stubs for units
E, F and G. (Shown in figure 1.1).
Step (5)
Test unit E, by calling it from tested unit D, which is called from tested unit A, using
tested units B and C, and stubs for units F, G, H, I and J.
Step (6)
Test unit F, by calling it from tested unit D, which is called from tested unit A, using
tested units B, C and E, and stubs for units G, H, I and J.
Step (7)
Test unit G, by calling it from tested unit D, which is called from tested unit A, using
tested units B, C, E and F, and stubs for units H, I and J.
Step (8)
Test unit H, by calling it from tested unit E, which is called from tested unit D, which is
called from tested unit A, using tested units B, C, E, F and G, and stubs for units I and J.
Step (9)
Test unit I, by calling it from tested unit E, which is called from tested unit D, which is
called from tested unit A, using tested units B, C, E, F, G and H, and a stub for units J.
Step (10)
Test unit J, by calling it from tested unit E, which is called from tested unit D, which is
called from tested unit A, using tested units B, C, E, F, G, H and I.
                              Figure 1.1 - Top Down Testing
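As a concrete sketch of Step (1) above, the C fragment below tests unit A against stubs for units B, C and D. The unit names follow the example hierarchy in figure 1.1; their behaviour (simple integer returns) is invented for illustration. Note that main() merely invokes the top unit, so no purpose-built test driver is needed.

    #include <assert.h>

    /* Stubs standing in for the real units A calls. Each records that
     * it was called and returns a canned value. */
    static int b_called, c_called, d_called;
    int unit_B(void) { b_called = 1; return 1; }
    int unit_C(void) { c_called = 1; return 2; }
    int unit_D(void) { d_called = 1; return 3; }

    /* Unit under test: the top of the hierarchy (behaviour assumed). */
    int unit_A(void)
    {
        return unit_B() + unit_C() + unit_D();
    }

    int main(void)
    {
        assert(unit_A() == 6);                      /* output check      */
        assert(b_called && c_called && d_called);   /* interaction check */
        return 0;
    }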
1.2. Advantages
Top down unit testing provides an early integration of units before the software
integration phase. In fact, top down unit testing is really a combined unit test and
software integration strategy.
The detailed design of units is top down, and top down unit testing implements tests in
the sequence units are designed, so development time can be shortened by overlapping
unit testing with the detailed design and code phases of the software lifecycle.
In a conventionally structured design, where units at the top of the hierarchy provide
high level functions, with units at the bottom of the hierarchy implementing details, top
down unit testing will provide an early integration of 'visible' functionality. This gives a
very requirements oriented approach to unit testing.
Redundant functionality in lower level units will be identified by top down unit testing,
because there will be no route to test it. (However, there can be some difficulty in
distinguishing between redundant functionality and untested functionality).
1.3. Disadvantages
Top down unit testing is controlled by stubs, with test cases often spread across many
stubs. With each unit tested, testing becomes more complicated, and consequently more
expensive to develop and maintain.
As testing progresses down the unit hierarchy, it also becomes more difficult to achieve
the good structural coverage which is essential for high integrity and safety critical
applications, and which are required by many standards. Difficulty in achieving
structural coverage can also lead to a confusion between genuinely redundant
functionality and untested functionality. Testing some low level functionality, especially
error handling code, can be totally impractical.
Changes to a unit often impact the testing of sibling units and units below it in the
hierarchy. For example, consider a change to unit D. Obviously, the unit test for unit D
would have to change and be repeated. In addition, unit tests for units E, F, G, H, I and J,
which use the tested unit D, would also have to be repeated. These tests may also have to
change themselves, as a consequence of the change to unit D, even though units E, F, G,
H, I and J had not actually changed. This leads to a high cost associated with retesting
when changes are made, and a high maintenance and overall lifecycle cost.
The design of test cases for top down unit testing requires structural knowledge of which
other units the unit under test calls. The sequence in which units can be tested is
constrained by the hierarchy of units, with lower units having to wait for higher units to
be tested, forcing a 'long and thin' unit test phase. (However, this can overlap
substantially with the detailed design and code phases of the software lifecycle).
The relationships between units in the example program in figure 1.1 are much simpler
than would be encountered in a real program, where units could be referenced from more
than one other unit in the hierarchy. All of the disadvantages of a top down approach to
unit testing are compounded by a unit being referenced from more than one other unit.
1.4. Overall
A top down strategy will cost more than an isolation based strategy, due to complexity of
testing units below the top of the unit hierarchy, and the high impact of changes. The top
down organisational approach is not a good choice for unit testing. However, a top down
approach to the integration of units, where the units have already been tested in isolation,
can be viable.


2. Bottom up Testing
2.1. Description
In bottom up unit testing, units are tested in isolation from the units which call them, but
using the actual units called as part of the test.
The lowest level units are tested first, then used to facilitate the testing of higher level
units. Other units are then tested, using previously tested called units. The process is
repeated until the unit at the top of the hierarchy has been tested. Bottom up testing
requires test drivers, but does not require test stubs.
Figure 2.1 illustrates the test driver and tested units needed to test unit D, assuming that
units E, F, G, H, I and J have already been tested in a bottom up approach.




                              Figure 2.1 - Bottom Up Testing
A unit test plan for the program shown in figure 2.1, using a strategy based on the
bottom up organisational approach, could read as follows:
Step (1)
(Note that the sequence of tests within this step is unimportant, all tests within step 1
could be executed in parallel.)
Test unit H, using a driver to call it in place of unit E;
Test unit I, using a driver to call it in place of unit E;
Test unit J, using a driver to call it in place of unit E;
Test unit F, using a driver to call it in place of unit D;
Test unit G, using a driver to call it in place of unit D;
Test unit B, using a driver to call it in place of unit A;
Test unit C, using a driver to call it in place of unit A.
Step (2)
Test unit E, using a driver to call it in place of unit D and tested units H, I and J.
Step (3)
Test unit D, using a driver to call it in place of unit A and tested units E, F, G, H, I and J.
(Shown in figure 2.1).
Step (4)
Test unit A, using tested units B, C, D, E, F, G, H, I and J.
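As a concrete sketch of part of Step (1) above, the C fragment below tests leaf unit H using a driver that calls it in place of unit E. Unit H's doubling behaviour is invented for illustration; a leaf unit calls no other units, so no stubs are required.

    #include <assert.h>

    /* Leaf unit under test (behaviour assumed for illustration). */
    int unit_H(int x)
    {
        return x * 2;
    }

    /* Test driver standing in for unit E, the real caller of H. */
    int main(void)
    {
        assert(unit_H(0)  == 0);
        assert(unit_H(21) == 42);
        return 0;
    }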
2.2. Advantages
Like top down unit testing, bottom up unit testing provides an early integration of units
before the software integration phase. Bottom up unit testing is also really a combined
unit test and software integration strategy. All test cases are controlled solely by the test
driver, with no stubs required. This can make unit tests near the bottom of the unit
hierarchy relatively simple. (However, higher level unit tests can be very complicated).
Test cases for bottom up testing may be designed solely from functional design
information, requiring no structural design information (although structural design
information may be useful in achieving full coverage). This makes the bottom up
approach to unit testing useful when the detailed design documentation lacks structural
detail.
Bottom up unit testing provides an early integration of low level functionality, with
higher level functionality being added in layers as unit testing progresses up the unit
hierarchy. This makes bottom up unit testing readily compatible with the testing of
objects.
2.3. Disadvantages
As testing progresses up the unit hierarchy, bottom up unit testing becomes more
complicated, and consequently more expensive to develop and maintain. As testing
progresses up the unit hierarchy, it also becomes more difficult to achieve good
structural coverage.
Changes to a unit often impact the testing of units above it in the hierarchy. For example,
consider a change to unit H. Obviously, the unit test for unit H would have to change and
be repeated. In addition, unit tests for units A, D and E, which use the tested unit H,
would also have to be repeated. These tests may also have to change themselves, as a
consequence of the change to unit H, even though units A, D and E had not actually
changed. This leads to a high cost associated with retesting when changes are made, and
a high maintenance and overall lifecycle cost.
The sequence in which units can be tested is constrained by the hierarchy of units, with
higher units having to wait for lower units to be tested, forcing a 'long and thin' unit test
phase. The first units to be tested are the last units to be designed, so unit testing cannot
overlap with the detailed design phase of the software lifecycle.
The relationships between units in the example program in figure 2.1 are much simpler
than would be encountered in a real program, where units could be referenced from more
than one other unit in the hierarchy. As for top down unit testing, the disadvantages of a
bottom up approach to unit testing are compounded by a unit being referenced from
more than one other unit.
2.4. Overall
The bottom up organisational approach can be a reasonable choice for unit testing,
particularly when objects and reuse are considered. However, the bottom up approach is
biased towards functional testing, rather than structural testing. This can present
difficulties in achieving the high levels of structural coverage essential for high integrity
and safety critical applications, and which are required by many standards.
The bottom up approach to unit testing conflicts with the tight timescales required of
many software developments. Overall, a bottom up strategy will cost more than an
isolation based strategy, due to complexity of testing units above the bottom level in the
unit hierarchy and the high impact of changes.
3. Isolation Testing
3.1. Description
Isolation testing tests each unit in isolation from the units which call it and the units it
calls.
Units can be tested in any sequence, because no unit test requires any other unit to have
been tested. Each unit test requires a test driver and all called units are replaced by stubs.
Figure 4.1 illustrates the test driver and stubs needed to test unit D.




                               Figure 4.1 - Isolation Testing
A unit test plan for the program shown in figure 4.1, using a strategy based on the
isolation organisational approach, need contain only one step, as follows:
Step (1)
(Note that there is only one step to the test plan. The sequence of tests is unimportant, all
tests could be executed in parallel.)
Test unit A, using a driver to start the test and stubs in place of units B, C and D;
Test unit B, using a driver to call it in place of unit A;
Test unit C, using a driver to call it in place of unit A;
Test unit D, using a driver to call it in place of unit A and stubs in place of units E, F and
G, (Shown in figure 4.1);
Test unit E, using a driver to call it in place of unit D and stubs in place of units H, I and
J;
Test unit F, using a driver to call it in place of unit D;
Test unit G, using a driver to call it in place of unit D;
Test unit H, using a driver to call it in place of unit E;
Test unit I, using a driver to call it in place of unit E;
Test unit J, using a driver to call it in place of unit E.
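As a concrete sketch of one line of this plan, the C fragment below tests unit D in isolation: a driver calls it in place of unit A, and stubs stand in for the called units E, F and G. The units' behaviour is again invented for illustration; because nothing here depends on any other unit having been tested, this test could run in parallel with all the others.

    #include <assert.h>

    /* Stubs standing in for the units D calls (canned values assumed). */
    int unit_E(void) { return 1; }
    int unit_F(void) { return 2; }
    int unit_G(void) { return 3; }

    /* Unit under test (behaviour assumed for illustration). */
    int unit_D(void)
    {
        return unit_E() + unit_F() + unit_G();
    }

    /* Test driver standing in for unit A, the real caller of D. */
    int main(void)
    {
        assert(unit_D() == 6);
        return 0;
    }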
3.2. Advantages
It is easier to test an isolated unit thoroughly, where the unit test is removed from the
complexity of other units. Isolation testing is the easiest way to achieve good structural
coverage, and the difficulty of achieving good structural coverage does not vary with the
position of a unit in the unit hierarchy.
Because only one unit is being tested at a time, the test drivers tend to be simpler than for
bottom up testing, while the stubs tend to be simpler than for top down testing.
With an isolation approach to unit testing, there are no dependencies between the unit
tests, so the unit test phase can overlap the detailed design and code phases of the
software lifecycle. Any number of units can be tested in parallel, to give a 'short and fat'
unit test phase. This is a useful way of using an increase in team size to shorten the
overall time of a software development.
A further advantage of the removal of interdependency between unit tests is that
changes to a unit only require changes to the unit test for that unit, with no impact on
other unit tests. This results in a lower cost than the bottom up or top down
organisational approaches, especially when changes are made.
An isolation approach provides a distinct separation of unit testing from integration
testing, allowing developers to focus on unit testing during the unit test phase of the
software lifecycle, and on integration testing during the integration phase of the software
lifecycle. Isolation testing is the only pure approach to unit testing; both top down testing
and bottom up testing result in a hybrid of the unit test and integration phases.
Unlike the top down and bottom up approaches, the isolation approach to unit testing is
not affected by a unit being referenced from more than one other unit.
3.3. Disadvantages
The main disadvantage of an isolation approach to unit testing is that it does not provide
any early integration of units. Integration has to wait for the integration phase of the
software lifecycle. (Is this really a disadvantage?).
An isolation approach to unit testing requires structural design information and the use of
both stubs and drivers. This can lead to higher costs than bottom up testing for units near
the bottom of the unit hierarchy. However, this is compensated for by simplified testing
for units higher in the unit hierarchy, together with lower costs each time a unit is
changed.
3.4. Overall
An isolation approach to unit testing is the best overall choice. When supplemented with
an appropriate integration strategy, it enables shorter development timescales and
provides the lowest cost, both during development and for the overall lifecycle.
Following unit testing in isolation, tested units can be integrated in a top down or bottom
up sequence, or any convenient groupings and combinations of groupings. However, a
bottom up integration is the most compatible strategy with current trends in object
oriented and object biased designs.
An isolation approach to unit testing is the best way of achieving the high levels of
structural coverage essential for high integrity and safety critical applications, and which
are required by many standards. With the difficult work of achieving good structural
coverage done during unit testing, integration testing can concentrate on overall
functionality and the interactions between units.

HARDWARE REQUIREMENTS THAT AFFECT TESTABILITY:-
At first glance, software testability seems purely a software issue, unaffected by the
design choices made by the hardware engineers. However, in embedded systems, the
feasibility of software testing is very dependent on the availability of hardware resources.
The main factors affecting software testability are the selection of the CPU type, the
amount and types of memory and the availability of an I/O channel for a test results I/O
stream back to the user.
CPU Selection: In selecting the CPU (or microcontroller chip), ensure that the maximum
size of the address space is at least four times the size that the application requires; for
example, an application needing 256 Kbytes should run on a part that can address at least
1 Mbyte. This will leave enough extra space for any memory overhead required by a
testing tool.
Memory Capacity: Software testing tools will require extra program memory, roughly
2 to 3 times the nominal program space, to allow for instrumented code (Figure A). In
addition, plan for an extra 64 Kbytes of program memory for test related runtime
libraries. In terms of RAM space, expect a typical testing tool to require 8-32 Kbytes of
extra data storage over and above the requirements of the software under test. To measure
code coverage with minimal impact on the execution time, allow for an extra 200-300
Kbytes of RAM for runtime storage of coverage data. If an ICE (In Circuit Emulator) is
available, some of this extra memory could be mapped into the target CPU address space
by the emulator.
I/O Facilities: In an embedded environment, a communication channel must be available
to transfer test results from the target to a host machine. It may be possible to use a
debugger/download I/O port for this purpose if all test-result I/O happens after the
execution has completed. However, if test data will be continuously piped to a host, an
I/O channel should be dedicated for this purpose.

				