					                                              (IJCSIS) International Journal of Computer Science and Information Security,
                                              Vol. 9, No. 11, November 2011




             Comparative Study of the Effectiveness of Ad Hoc, Checklist- and
               Perspective-based Software Inspection Reading Techniques

        Olalekan S. Akinola                                      Ipeayeda Funmilola Wumi
        Solom202@yahoo.co.uk                                     funmipy12@yahoo.com
        Department of Computer Science,                          Department of Physical Sciences,
        University of Ibadan, Ibadan, Nigeria                    Ajayi Crowther University, Oyo Town, Nigeria

        ABSTRACT

        Software inspection is said to be indispensable for software quality assurance. Nevertheless, there have been controversies over which defect detection techniques should be applied in software document inspection. This work comparatively studies the effectiveness of three software inspection techniques: the Ad Hoc, Perspective-based and Checklist-based defect detection techniques. Paper-based inspections were carried out on an industrial code artifact seeded with forty bugs. An experimental 3 x 3 x 4 factorial design with three defect detection techniques (Checklist-based, Ad Hoc and Perspective-based) as independent variables, three dependent variables (inspection effectiveness, effort and false positives) and four teams for each defect detection method was used for the experiment. The data obtained were subjected to tests of hypotheses using one-way ANOVA, post hoc tests and mean coefficients. Results from the study indicate that there were significant differences in the defect detection effectiveness and in the effort (time taken in minutes) reported by reviewers using the Perspective-based, Ad Hoc and Checklist-based reading techniques in an industrial setting.

        Key words: Software inspection, Ad Hoc reading technique, Checklist reading technique, Perspective
        reading technique


1.      INTRODUCTION

The process of improving software quality has been a growing topic of discussion over the past few decades. Software quality can be defined as software that satisfies the needs of the users and of the programmers involved in it, or as the customer's perception of how the system works. Software inspection is a fundamental component of the software quality assurance process. It is a process whereby a group of competent software people critically checks a software milestone in order to detect defects [1]. Inspection improves quality attributes of software products such as understandability, portability, maintainability and testability. Its success has been demonstrated in many published articles.

Software inspections are a formalized, structured form of peer review. They are an extremely cost-effective quality assurance technique that can be applied to any type of software project deliverable, such as requirements documents, design documents, code, and other items such as test plans and user documents. For most software organizations, software inspections are the single most important process improvement. According to Capers Jones [4], "… formal design and code inspections rank as the most effective methods of defect removal yet discovered … (defect removal) can top 85%, about twice those of any form of testing."

Since Fagan developed the inspection process in the early 1970s at IBM, there have been many variations of the process put forth by others. The aim is to uncover faults in the products rather than to correct them. The goal of inspection meetings is to collect the faults discovered and to bring synergy (process gains) to the software inspection. It is believed that the combination of different viewpoints, skills and knowledge from many reviewers creates this synergy [10].

Ad Hoc, checklist-based and perspective-based reading techniques are the three commonly used inspection artifact reading techniques. To the best of our knowledge, experiments comparing the effectiveness of all three together are scarce.
This research was therefore conducted to find out whether there are any significant differences in the effectiveness of reviewers using the Perspective-based, Checklist-based and Ad hoc code reading techniques in an industrial code setting. Thirty volunteer reviewers from ten software houses in Nigeria carried out code inspection on a large-sized Visual Basic code artifact.

1.1     Research Hypotheses
Three hypotheses were stated for this experiment as follows.
Ho1: There is no significant difference among the effectiveness of reviewers using Perspective-based, Ad hoc and Checklist reading techniques in distributed code inspection.
Ho2: There is no significant difference among the effort taken by reviewers using Perspective-based, Ad hoc and Checklist techniques in distributed code inspection.
Ho3: There is no significant difference among the false positives reported by reviewers using Perspective-based, Ad hoc and Checklist techniques in distributed code inspection.

2.      SOFTWARE INSPECTION READING TECHNIQUES

Software inspection encompasses a set of methods whose purpose is to identify and locate faults in software. It is a peer review process led by software developers who are trained in inspection techniques [7]. Michael Fagan [10] originally developed the software inspection process ‘out of sheer frustration’ [13]. Since Fagan developed the inspection process in the early 1970s at IBM, there have been many variations of the process put forth by others. Overall, the aim in any review process is to apply inspection to the working product as early as possible so that major faults are caught before the product is released.

A reading technique can be defined as a series of steps or procedures whose purpose is to guide an inspector in acquiring a deep understanding of the inspected software product [8]. The comprehension of inspected software products is a prerequisite for detecting subtle and/or complex defects, those that often cause the most problems if detected in later life-cycle phases. In a sense, a reading technique can be regarded as a mechanism or strategy for an individual inspector to detect defects in the inspected product.

There are many reading techniques that focus on finding as many defects as possible, but three of them were used in carrying out this work: the ad hoc, checklist-based and perspective-based reading techniques. According to Porter and Votta [11], defect detection techniques range in prescription from intuitive, non-systematic procedures, such as ad hoc or checklist techniques, to explicit and highly systematic procedures such as the perspective-based technique.

Ad hoc reading, by nature, offers very little reading support, since a software product is simply given to inspectors without any direction or guidelines on how to proceed through it and what to look for. However, ad hoc does not mean that inspection participants do not scrutinize the inspected product systematically. The term only refers to the fact that no technical support is given to them for the problem of how to detect defects in a software artifact. In this case, defect detection fully depends on the skill, the knowledge and the experience of the inspector. Training sessions in program comprehension, as presented by Rifkin and Deimel [12], may help subjects develop some of these capabilities and so alleviate the lack of reading support.

The perspective-based reading technique gives reviewers a set of procedures for inspecting software products for defects. It instructs reviewers to perform an active review by assigning a different perspective to each reviewer. Common perspectives are user, tester and designer.

In the checklist-based reading technique, reviewers use a checklist which guides them regarding what kinds of faults to look for; the reviewers read the document using the checklist to focus their review. Checklist-based reading offers stronger, boilerplate support in the form of questions inspectors are to answer while reading the document.
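As a purely illustrative sketch (the questions below are our own and are not the checklist actually used in this study), a code checklist for a Visual Basic artifact can be thought of as a short list of questions the inspector answers while reading, for example:

    # Hypothetical checklist items for inspecting Visual Basic 6.0 code; these are
    # illustrative only, not the checklist designed by the authors of this study.
    VB_CODE_CHECKLIST = [
        "Are all variables declared before use (is Option Explicit set)?",
        "Do loop bounds and array indices avoid off-by-one errors?",
        "Do arithmetic expressions use the intended operators and precedence?",
        "Are all declared variables, constants and parameters actually used?",
        "Is every opened resource (file, database connection) closed on all paths?",
        "Do computed values (e.g. allowances, deductions) use the correct fields and signs?",
    ]

    # An inspector works through the artifact answering each question in turn and
    # records a suspected defect whenever the answer is 'no' for some part of the code.
    for number, question in enumerate(VB_CODE_CHECKLIST, start=1):
        print(f"{number}. {question}")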
                                                  

 
3.      RESEARCH METHODOLOGY

3.1     Subjects
The subjects used for this research were software professionals drawn from ten software houses in Nigeria. Software professionals were chosen as subjects because results obtained with professionals allow us to predict what is likely to happen at the industry level.

3.2     Experimental Artifact and Instrumentation
The artifact inspected was a large-sized Visual Basic 6.0 industrial code. It calculates staff allowances (domestic, responsibility, hazard, housing, leave, medical, transport, utility and so on) in order to compute monthly staff salary, annual salary and arrears, as well as some deductions to be made on staff salary (such as tax, cooperative contributions, official housing accommodation rent and so on). The artifact, which was 500 lines of code, was tested and confirmed to work correctly before it was seeded with 40 bugs that were syntactic, semantic and logical in nature.

The instruments designed for this experiment were the individual preparation forms, the experimental artifact (code) and the collection meeting forms. The experimental artifact and an individual preparation form were given to each reviewer, and the individual preparation form was filled in during preparation. During individual preparation, each reviewer recorded the start and end times of the review of the artifact; the line number and a description of each suspected defect were also recorded on the form.

The meeting form was filled in at the collection meeting. The start and end times of the team collection meeting were recorded on the collection meeting form, together with the line number and description of each defect. Most importantly, each team's identification number was entered on the collection meeting form in order to identify the team.
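In effect, each preparation form is a small defect log. A minimal sketch of how such a record could be represented, and how the effort measure (minutes between the recorded start and end times) follows from it, is given below; the field names are ours and are not taken from the actual paper forms.

    # Minimal sketch with hypothetical field names; the study recorded the same
    # information on paper forms rather than in software.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class SuspectedDefect:
        line_number: int      # line of the suspected defect in the code artifact
        description: str      # reviewer's description of the suspected defect

    @dataclass
    class PreparationForm:
        reviewer_id: str
        start_time: datetime  # recorded start of the individual preparation
        end_time: datetime    # recorded end of the individual preparation
        defects: List[SuspectedDefect] = field(default_factory=list)

        def effort_minutes(self) -> float:
            # Effort as reported in the study: elapsed review time in minutes.
            return (self.end_time - self.start_time).total_seconds() / 60.0

    # Example usage with made-up values.
    form = PreparationForm("R-07", datetime(2011, 6, 14, 10, 0), datetime(2011, 6, 14, 10, 58))
    form.defects.append(SuspectedDefect(142, "Tax deducted before allowances are summed"))
    print(form.effort_minutes(), "minutes,", len(form.defects), "suspected defect(s)")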
3.3     Conducting the Experiment
The experiment was monitored and conducted by the researchers. The software professionals were used for the experiment without requiring any particular number of years of experience. They were given no special training on Visual Basic programming, because they are software professionals who are conversant with the language used for the artifact. Nevertheless, the reviewers were given some initial code inspection training before the real experiment was carried out.

During individual preparation, reviewers examined the artifact in order to identify the bugs seeded in it. In the perspective-based technique, each reviewer was assigned a particular role to play (designer, tester, reader or user), and in the checklist-based technique, a Visual Basic checklist designed by the researchers was given to the reviewers to guide them in fishing out bugs. No particular time limit was given to the reviewers for their individual preparation. All suspected defects were recorded on the individual preparation forms.

Moreover, before the commencement of the team collection meeting, the individual preparation forms were collected by the researcher, so that the reviewers could not add to their preparation forms defects that were only found during the team defect collection meeting. During the defect collection meetings, no specific duration was imposed on the reviewers for the artifact inspections.

During the defect collection meetings, one of the reviewers in each team served as the reader, moderator and recorder. In the meetings, reviewers brought up new defects or discussed defects found during individual preparation. All defects found were recorded on the team defect collection meeting forms by the recorder.

Four different teams were created for each of the Perspective-based, Ad hoc and Checklist-based groups. In order to eliminate bias in the results, team sizes were duplicated for each of the groups.
For instance, teams 001A and 001B in Table 1 are of team size one, while teams 002A and 002B are of size two; the same was done for team sizes of three and four, in that order.

3.4     Threats to Validity of the Experiment
In this experiment, two important threats that may affect the validity of the research are considered. These threats limit our ability to generalize or guarantee the results and hence demand caution when interpreting them.

3.4.1   Threats to Internal Validity
Threats to internal validity are influences that can affect the dependent variables without the researcher's knowledge [15]. We considered three such influences: (1) selection effects, (2) maturation effects, and (3) instrumentation effects.

Selection effects are due to natural variation in human performance [14]. We limited this effect by randomly assigning team members for the inspection. This way, individual differences were spread across all treatments.

Maturation effects result from the participants' skills improving with experience. If the same set of participants were used in all three treatments, there may also be a maturation effect, as the participants' inspection ability may get better over time. Randomly assigning the reviewers and conducting the reviews within the same period of time checked these effects.

Instrumentation effects are caused by the artifacts to be inspected, by differences in the data collection forms, or by other experimental materials. In this study, this effect was negligible, since all the groups inspected the same artifact within the same period of time and one set of data collection forms was used for all the groups (the treatments).

3.4.2   Threats to External Validity
Threats to external validity are conditions that can limit our ability to generalize the results of experiments to industrial practice [14]. We considered one source of such threats: experimental scale.

Experimental scale is a threat when the experimental setting or the materials are not representative of industrial practice. This study made use of a Visual Basic industrial code that computes the wages and salaries of staff in a particular company. Moreover, industrially experienced software professionals were used for the experiment. Therefore, the experimental scale effect was reduced to a large extent in this study.

4.      RESULTS
We are particular about the effectiveness of the reviewers, the effort in terms of the number of minutes taken by them, and the false positives reported by the reviewers in a defect collection meeting. These three data were collected from the experiment. Initially, the reviewers were given the code artifact to study individually during preparation before the actual defect collection meeting took place. Table 1 gives the values obtained by the inspection teams at the team collection meetings.

The teams' effectiveness at detecting errors in the code artifact is depicted in Figure 1. Fp means false positives reported by the reviewers. False positives are those errors perceived by the reviewers to be true errors but which were in fact not valid.
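As a worked illustration of the effectiveness measure (our own example, not a figure taken from the paper): a team that reports 31 true defects out of the 40 seeded bugs detects 31/40, or roughly 78%, of them; in the tables that follow, however, effectiveness is reported as the raw number of seeded defects detected.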




                                                    

 
Table 1: Raw data of Team collection meetings

                    PERSPECTIVE                   CHECKLIST-BASED                 AD HOC
 Teams      Defects  Effort(Mins)  Fp      Defects  Effort(Mins)  Fp      Defects  Effort(Mins)  Fp
 001A         31         58         6        25         71         4        29         43         3
 001B         34         56         4        28         67         5        28         63         3
 002A         26         46         8        30         71         6        28         58         5
 002B         36         61         5        28         69         5        27         49         3
 003A         28         58         5        29         69         5        28         38         3
 003B         34         68         4        24         66         2        31         58         5
 004A         30         45         6        27         59         4        28         48         6
 004B         31         38         5        29         44         3        25         41         4

[Chart omitted: Defect Values (0-40) plotted against Teams (001A-004B) for the series Perspective Defects, Checklist Defects and Ad Hoc Defects.]
Figure 1: Teams’ effectiveness at detecting errors in the code artifact

Figure 1 shows that the Perspective-based teams have the highest defect detection effectiveness compared with the other two groups, the Checklist-based and Ad hoc groups.

Figure 2 shows that the efforts, in terms of minutes expended by the teams, were high and roughly the same for most of the Checklist-based teams, even though no restrictions were placed on the time the reviewers could spend on the code inspection.
[Chart omitted: Efforts (Minutes) (0-80) plotted against Teams (001A-004B) for the series Perspective Efforts, Checklist Efforts and Ad Hoc Efforts.]

Figure 2: Teams’ efforts (Minutes) expended in the code Inspection

[Chart omitted: False Positives (0-9) plotted against Teams (001A-004B) for the series Perspective Fp, Checklist Fp and Ad Hoc Fp.]

Figure 3: Teams' False positives reported in the code Inspection
Figure 3 shows that the perspective-based teams, which had the highest defect detection effectiveness in Figure 1, also reported more or less the highest numbers of false positives, especially teams 001A and 002A.

4.1     Further Statistical Analyses
The data obtained in the experiment were subjected to further statistical tests. One-way ANOVA was used to perform the statistical analyses, and thereafter Tukey HSD and Duncan post hoc tests were carried out to determine where the differences lie among the data pairs.
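As an illustration of how these tests can be reproduced from the raw data in Table 1, the sketch below runs a one-way ANOVA and a Tukey HSD post hoc test on the per-team defect counts using SciPy and statsmodels. It is our own sketch, not the authors' analysis script, and the same pattern applies to the effort and false-positive data.

    # Illustrative sketch only (not the authors' analysis script): one-way ANOVA and
    # a Tukey HSD post hoc test on the per-team defect counts taken from Table 1.
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    perspective = [31, 34, 26, 36, 28, 34, 30, 31]   # defects found by Perspective-based teams
    checklist   = [25, 28, 30, 28, 29, 24, 27, 29]   # defects found by Checklist-based teams
    adhoc       = [29, 28, 28, 27, 28, 31, 28, 25]   # defects found by Ad hoc teams

    # Group means; these should reproduce the mean effectiveness values reported in Table 3.
    for name, data in [("Perspective", perspective), ("Checklist", checklist), ("Ad hoc", adhoc)]:
        print(name, sum(data) / len(data))

    # One-way ANOVA across the three reading techniques.
    f_stat, p_value = f_oneway(perspective, checklist, adhoc)
    print("ANOVA: F =", round(float(f_stat), 3), "p =", round(float(p_value), 3))

    # Tukey HSD post hoc test to locate which pairs of techniques differ.
    scores = perspective + checklist + adhoc
    groups = ["Perspective"] * 8 + ["Checklist"] * 8 + ["Ad hoc"] * 8
    print(pairwise_tukeyhsd(scores, groups, alpha=0.05))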
Table 3 shows the results of the ANOVA tests performed on the data obtained in this experiment. There was a significant difference among the defect detection effectiveness of the reviewers using the three reading techniques (Perspective-based, Checklist-based and Ad hoc). This is also true of the effort (time) expended by the reviewers during the inspection exercise. However, the case is different for the false positives reported by the reviewers.

Further Tukey HSD post hoc analyses were carried out on the data to ascertain where the differences actually lie. The results of the analyses are shown in Table 4.

Table 3: Results of ANOVA Analysis

Data                        Hypothesis Tested                                        p-value   Mean values         Decision
Defect Detection            Ho: There is no significant difference among the         0.012    Perspeffe = 31.25   H1 accepted
Effectiveness (effe)        effectiveness of reviewers using Perspective-based,               CBReffe   = 27.50   (p < 0.05)
                            Ad hoc and Checklist-based techniques                             AHeffe    = 28.00

Effort (eff, in Minutes)    Ho: There is no significant difference among the         0.014    Perspeff = 53.75    H1 accepted
                            effort of reviewers using Perspective-based, Ad hoc              CBReff   = 64.50    (p < 0.05)
                            and Checklist-based techniques                                    AHeff    = 49.75

False Positives (fp)        Ho: There is no significant difference among the         0.090    Perspfp = 5.37      H0 accepted
                            false positives of reviewers using Perspective-based,            Cfp     = 4.25      (p > 0.05)
                            Ad hoc and Checklist-based techniques                             Afp     = 4.00



Table 4: Tukey HSD Post Hoc Multiple Comparison Tests Summaries

(I) Group      (J) Group       Significance levels (p) for the dependent variables
                               Effectiveness     Efforts     False Positives
Perspective    Ad hoc             0.039*          0.674          0.098
               Checklist          0.016*          0.078          0.199
Ad hoc         Perspective        0.039*          0.674          0.098
               Checklist          0.914           0.013*         0.917
Checklist      Perspective        0.016*          0.078          0.199
               Ad hoc             0.914           0.013*         0.917

* The mean difference is significant at the .05 level.
From Table 4, it can be inferred that there are truly significant differences between the effectiveness of the Perspective-based and Ad hoc (p = 0.039 < 0.05) and between the Perspective-based and Checklist (p = 0.016 < 0.05) inspection reading techniques. Between Ad hoc and the others, a significant difference exists only between Ad hoc and Perspective effectiveness (p = 0.039 < 0.05); in the case of Checklist, a significant difference is obtained only between it and Perspective (p = 0.016 < 0.05).

Duncan's post hoc test in Table 5 shows that the mean effectiveness of the Perspective-based teams (31.25) is greater than the effectiveness of the other groups, Checklist and Ad hoc. This leads us to conclude that the perspective-based reading technique performed better than both the checklist-based and ad hoc reading techniques.

Effort, in terms of the time in minutes spent by the reviewers in the inspection exercise, differs significantly only between the Ad hoc and Checklist groups. The Duncan post hoc test on effort (Table 6) shows that the checklist-based reading technique has the highest mean (64.5), meaning that the Checklist group actually expended more time than the Ad hoc group in the inspection exercise.

Results from Table 4 show that the false positives are not significantly different across the three groups, the Perspective-based, Ad hoc and Checklist-based inspection reading techniques.

5.      Discussion of Results
Software inspection is a successful method for detecting faults in documents and code produced in software development. Checklist-based and Ad hoc are the earlier reading techniques usually employed to detect errors in software artifacts. Perspective-based reading, proposed by Basili et al. [3], in which a software product is inspected from the perspective of different stakeholders (analysts, designers, programmers, and so on), was introduced later.



Table 5: Duncan Post Hoc Test for Effectiveness

    Group Var        N       Subset for alpha = .05
                                 1           2
    Checklist        8        27.5000
    Adhoc            8        28.0000
    Perspective      8                    31.2500
    Sig.                       0.689       1.000

Means for groups in homogeneous subsets are displayed.
a. Uses Harmonic Mean Sample Size = 8.000.


Table 6: Duncan Post Hoc Test for Effort

    Group Var        N       Subset for alpha = .05
                                 1           2
    Adhoc            8        49.7500
    Perspective      8        53.7500
    Checklist        8                    64.5000
    Sig.                       0.403       1.000

Means for groups in homogeneous subsets are displayed.
a. Uses Harmonic Mean Sample Size = 8.000.
The defect detection effectiveness of these three reading techniques was studied in this work. The fact that the perspective-based reading technique outperforms the other techniques is explained by the reviewers having been given specific tasks to perform on the code. For instance, an analyst has to inspect the code to ascertain that it conforms to the requirements specification and nothing more. Checklist-based reviewers were assisted with checklists which gave precise questions on what to look for in the code artifact; they were therefore expected to perform better than the Ad hoc group. However, this was not the case in this work.

Our results are in consonance with some related works in the literature. To mention a few, the results of Basili et al. [3] show that the perspective-based reading technique is more effective than ad hoc or checklist-based reading. Sabaliauskaite et al. [6], in their experiment comparing checklist-based reading and perspective-based reading for UML design document inspection, found that checklist-based reading (CBR) uncovers 70% of defects while perspective-based reading (PBR) uncovers 69%, and that the checklist consumes more time (effort) than PBR. Porter and Votta [11], in their experiment comparing defect detection methods for software requirements inspections, show that checklist reviewers were no more effective than ad hoc reviewers. Lanubile and Visaggio [5], in their work on evaluating defect detection techniques for software requirements inspections, also show that no difference was found between inspection teams applying ad hoc or checklist reading with respect to the percentage of discovered defects. Hatton [9], in his work on "Testing the value of checklists in code inspections", shows that there is no evidence that checklists significantly improve inspections. Akinola and Osofisan [2] show that there is no statistical relationship between the false positives of the Ad hoc and Checklist-based defect detection techniques.

6.      Conclusion
Software inspection is very important in software quality assurance. In this study, the statistically significant relationships among the effectiveness of the Perspective-based, Checklist-based and Ad hoc defect detection methods in software artifact inspections were examined. It is concluded from this study that the perspective-based reading technique is the best choice for code inspection exercises. However, we look forward to validating our results with automated tools in the near future.

References

1.  Abdusalam F. Ahmed Nwesri and Rodina Ahmad (2000). An Asynchronous Software Inspection Model, Malaysian Journal of Computer Science, Vol. 13, No. 1, June 2000, pp. 17-26.
2.  Akinola, S. O. and Osofisan, A. O. (2009). An Empirical Comparative Study of Ad hoc and Checklist Code Reading Techniques in a Distributed Groupware Environment, International Journal of Computer Science and Information Security (IJCSIS), Vol. 5, No. 1, 2009.
3.  Basili, V., Green, S., Laitenberger, O., Lanubile, F., Shull, F., Sorumgard, S. and Zelkowitz, M. (1996). The Empirical Investigation of Perspective-based Reading. Journal of Empirical Software Engineering, 2(1):133-164.
4.  Capers Jones (2008). Applied Software Measurement, 3rd Ed. McGraw Hill.
5.  Filippo Lanubile and Giuseppe Visaggio (2000). Evaluating Defect Detection Techniques for Software Requirements Inspections, http://citeseer.ist.psu.edu/Lanubile00evaluating.html, downloaded Feb. 2010.
6.  Giedre Sabaliauskaite, Fumikazu Matsukawa, Shinji Kusumoto and Katsuro Inoue (2002). "An Experimental Comparison of Checklist-Based Reading and Perspective-Based Reading for UML Design Document Inspection," ISESE, p. 148, 2002 International Symposium on Empirical Software Engineering (ISESE'02).
7.  IEEE Standard 1028-1997 (1998). Standard for Software Reviews. The Institute of Electrical and Electronics Engineers, Inc. ISBN 1-55937-987-1.
8.  Laitenberger, Oliver (2002). A Survey of Software Inspection Technologies, Handbook on Software Engineering and Knowledge Engineering, Vol. II, 2002.
9.  Les Hatton (2008). Testing the Value of Checklists in Code Inspections, IEEE Software, 25:4, July 2008, pp. 82-88.
10. Michael E. Fagan (1976). Design and code inspections to reduce errors in program development. IBM Systems Journal, 15(3):182-211.
11. Porter, A. A. and Votta, L. (1998). Comparing Detection Methods for Software Requirements Inspection: A Replication using Professional Subjects. Journal of Empirical Software Engineering, Vol. 3, No. 4, pp. 355-378.
12. Rifkin, S. and Deimel, L. (1994). Applying Program Comprehension Techniques to Improve Software Inspection. Proceedings of the 19th Annual NASA Software Engineering Laboratory Workshop, NASA.
13. Wheeler, D. A. and Brykczynski, B. (1996). Software Inspection: An Industry Best Practice, IEEE CS Press.
14. Porter, A. A., Votta, L. G. and Basili, V. R. (1995). Comparing detection methods for software requirements inspections: A replicated experiment. IEEE Transactions on Software Engineering, 21(6):563-575.
15. Porter, A. A., Siy, H. P., Toman, C. A. and Votta, L. G. (1997). An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development, IEEE Transactions on Software Engineering, Vol. 23, No. 6, pp. 329-346.


Olalekan S. Akinola is a lecturer of Computer Science at the University of Ibadan, Nigeria. He obtained his PhD degree in Software Engineering from the same university. He is currently working on software process improvement modelling for the software industry.

Ipeayeda Funmilola Wumi finished her Master's degree in Computer Science at the University of Ibadan, Nigeria, in 2010. This work was part of her Master's thesis. She is now a lecturer at Ajayi Crowther University, Oyo Town, Oyo State, Nigeria.