A Black-Box Test Case Generation Method
Nicha Kosindrdecha and Jirapun Daengdej
Autonomous System Research Laboratory
Faculty of Science and Technology, Assumption University
Bangkok, Thailand
P4919742@au.edu, email@example.com

Abstract—Test case generation techniques have been researched over a long period of time. Unfortunately, while many researchers have found methods of minimizing test cases, a number of important related issues still need to be researched. The primary outstanding research issue is that existing methods produce a single large test suite containing a huge number of test cases. Our study shows that this leads to two further problems: an inability to identify suitable test cases for execution, and test cases that lack the ability to cover domain-specific requirements. We therefore propose an additional requirement prioritization step within the test case generation process, together with an automated method that generates multiple test suites from UML 2.0 Use Case diagrams while minimizing the number of test cases. Our evaluation results show that the proposed method is the recommended method for minimizing the size of the test suite while maximizing the ability to cover critical domain-specific requirements.

Keywords: test generation; testing and quality; test case generation; test generation technique; generate tests

I. INTRODUCTION

Software testing is known as a key critical phase in the software development life cycle and accounts for a large part of the development effort. A way of reducing testing effort, while ensuring its effectiveness, is to generate a minimized number of test cases automatically from artifacts used in the early phases of software development. Many test case generation techniques have been proposed, mainly random, path-oriented, goal-oriented and model-based approaches.
Random techniques determine a set of test cases based on assumptions concerning fault distribution. Path-oriented techniques generally use a control flow graph to identify paths to be covered and generate the appropriate test cases for those paths. Goal-oriented techniques identify test cases covering a selected goal, such as a statement or branch, irrespective of the path taken. Many researchers and practitioners have been working on generating sets of test cases from specifications, using modeling languages to capture the specification and generate the test cases. Since the Unified Modeling Language (UML) 2.0 is the most widely used such language, many researchers use UML diagrams such as the UML Use Case diagram, UML Activity diagram and UML Statechart diagram to generate test cases, and this has led to model-based test case generation techniques. The study shows that model-based test generation methods (also known as black-box test generation) are widely used for generating test cases in the commercial industry.

Moreover, the study shows that the primary research issue is that existing black-box test case generation methods generate a huge single test suite with all possible tests. The number of possible black-box tests for any non-trivial software application is extremely large. Consequently, it becomes impossible to identify suitable test cases for execution.

Also, the study shows that the secondary research issue is that existing black-box test case generation methods ignore critical domain-specific requirements during the test case generation process. These requirements are among the most important requirements that should be addressed during test activities.

Therefore, we propose a new black-box test case generation method, with a requirement prioritization approach, from requirements captured as UML 2.0 use cases. A use case is the specification of interconnected sequences of actions that a system can perform while interacting with actors of the system. Use cases have become one of the favorite approaches for requirements capture. Our automated black-box approach aims to generate a minimized number of suitable test cases while preserving critical domain-specific requirements. Additionally, we introduce an automated test generation method derived from the UML 2.0 Use Case diagram. Our approach is developed to automatically generate many test suites based on notions announced in the latest version of UML.

The rest of the paper is organized as follows. Section 2 gives an overview of test case generation techniques. Section 3 describes the research issues that motivate this work. Section 4 introduces a new test generation process with a requirement prioritization step and proposes a new black-box test generation method. Section 5 describes the experiment, measurement metrics and results. Section 6 provides the conclusion and research directions in the test case generation field. The last section lists all source references used in this paper.

II. LITERATURE REVIEW

The literature review is structured into two sections. The first section gives an overview of previous studies. The second section provides the related works.

A. An Overview of Recent Researches

Model-based techniques are popular and many researchers have proposed such techniques. One of the reasons why model-based techniques are popular is that wrong interpretations of complex software from a non-formal specification can result in incorrect implementations, which makes testing them for conformance to the specification essential. A major advantage of model-based V&V is that it can be easily automated, saving time and resources. Other advantages are shifting the testing activities to an earlier part of the software development process and generating test cases that are independent of any particular implementation of the design.
Model-based techniques generate test cases from model diagrams such as the UML Use Case diagram, UML Sequence diagram and UML State diagram, and many researchers have investigated generating test cases from those diagrams. The following paragraphs give examples of model-based test generation techniques that have been proposed over the years.

Heumann presented how use cases, derived from the UML 1.0 Use Case diagram, can be used to generate test cases, helping to launch the testing process early in the development lifecycle and supporting the testing methodology. In a software development project, use cases define system software requirements. Use case development begins early on, so real use cases for key product functionality are available in early iterations. According to the Rational Unified Process (RUP), a use case is used to "describe fully a sequence of actions performed by a system to provide an observable result of value to a person or another system using the product under development." Use cases tell the customer what to expect, the developer what to code, the technical writer what to document, and the tester what to test. He proposed a three-step process to generate test cases from a fully detailed use case: (a) for each use case, generate a full set of use-case scenarios, (b) for each scenario, identify at least one test case and the conditions that will make it execute, and (c) for each test case, identify the data values with which to test.

Ryser raised the following practical problems in software testing: (1) lack of planning, and time and cost pressure, (2) lacking test documentation, (3) lacking tool support, (4) formal languages or specific testing languages required, (5) lacking measures, measurements and data to quantify testing and evaluate test quality, and (6) insufficient test quality. They proposed an approach to resolve these problems by deriving test cases from scenarios, the UML 1.0 Use Case diagram and the 1.0 state diagram. In this work, test case generation is done in three processes: (a) preliminary test case definition and test preparation during scenario creation, (b) test case generation from Statecharts and from dependency charts, and (c) test set refinement by application-dependent strategies.
B. Related Works

This section provides the related works used in this paper on requirement prioritization methods. Donald Firesmith addressed the purposes of requirement prioritization as follows: (a) determine the relative necessity of the requirements: whereas all requirements are mandatory, some are more critical than others; for example, failure to implement certain requirements may have grave business ramifications that would make the system a failure, while others, although contractually binding, would have far less serious business consequences if they were not implemented or not implemented correctly; (b) help programs, through negotiation and consensus building, to eliminate unnecessary potential "requirements" (i.e., goals, desires, and "nice-to-haves" that do not merit the mandatory nature of true requirements); and (c) schedule the implementation of requirements (i.e., help determine what capabilities are implemented in which increment).

Additionally, research from 1980 to 2008 reveals that there are many requirement prioritization methods, such as the Binary Search Tree (BST), the 100-point method and the Analytic Hierarchy Process (AHP).

III. RESEARCH PROBLEM

This section discusses the research issues related to test case generation techniques and the research problems that motivate this study. Every test case generation technique has weak and strong points, as addressed in the literature survey. In general, referring to the literature review, the following are the major outstanding research challenges.

The first research problem is that existing test case generation methods lack the ability to identify domain-specific requirements. The study shows that domain-specific requirements, such as constraint requirements and database-specific requirements, are among the most critical requirements to capture for implementation and testing. Existing approaches ignore the ability to address domain-specific requirements. Consequently, software testing engineers may overlook the critical functionality related to those requirements. Thus, this paper introduces an approach to prioritize those specific requirements and generate effective test cases.

The second problem is that existing black-box test case generation techniques aim to generate a large single test suite with all possible test cases that maximize coverage for each scenario. Basically, they generate a huge number of test cases that are impossible to execute given limited time and resources. As a result, those unexecuted test cases are useless and it is impossible to identify suitable test cases for execution.

IV. PROPOSED METHOD

A. Test Case Generation Process

This section presents a new high-level process for generating a set of test cases, developed from the above comprehensive literature review and our previous works.

Figure 1. A Proposed Process to Generate Test Cases

In the above figure there are two test case generation processes: the existing process and the proposed process. The left-hand side shows an existing process that generates test cases directly from diagrams. The right-hand side proposes to add a requirement prioritization process before generating test cases. The requirement prioritization process aims to handle a large number of requirements effectively. The objective of this process is to prioritize and organize requirements in an appropriate way in order to effectively design and prepare test cases.
There are two sub-processes: (a) classify requirements and (b) prioritize requirements. Our study shows that a marketing perspective concentrates on two factors: customer need and customer satisfaction. We apply that perspective to requirement prioritization and propose the following classification:

Figure 2. Classify Requirement on Marketing's Perspective

In the above figure, the horizontal axis represents customer need while the vertical axis represents customer satisfaction. There are four groups of requirements based on those two factors: delight, attractive, indifferent and basic. First, the delight requirement is known as a 'nice-to-have' requirement; if it is well fulfilled it increases customer satisfaction, and otherwise it does not decrease satisfaction. Second, the attractive requirement is called a 'surprise' or 'know your customer' requirement; it can directly increase customer satisfaction if it is fulfilled, and marketers and sales believe that delivering this kind of requirement impresses customers and significantly improves their satisfaction. Third, the indifferent requirement is a requirement that customers do not care about and that will not impress them at all; in a competitive industry this requirement may be fulfilled, but it has no impact on customer satisfaction. Last, the basic requirement is a mandatory requirement that customers simply expect, so even when it is well delivered it does not increase customer satisfaction.

Furthermore, our study reveals that requirements can be simply divided into two types: functional and non-functional. Our study also shows that functional requirements can be categorized into two groups: domain-specific requirements (also known as constraint requirements) and behavior requirements. The following shows the requirement classification used in this paper:

Figure 3. Classify Requirement on Software Engineer

In the above figure, a functional requirement is a requirement that customers are able to state directly, while a non-functional requirement is given indirectly. A domain-specific or constraint requirement relates to constraints and business rules in the software development, whereas a behavior requirement describes a behavior of the system. Once the requirements are classified based on the previous two perspectives, the next process is to prioritize them based on return on investment (ROI). From a business perspective, ROI is the most important factor in assessing the importance of each requirement. The following presents a ranking tree combining those two perspectives.

Figure 4. Requirement Prioritization Tree

From the above figure, we give the highest priority to all 'basic' requirements, because they must be implemented even though they do not increase customer satisfaction. We give the lowest priority to all 'indifferent' requirements, because customers do not care about them. Additionally, we prioritize the 'delight' and 'attractive' requirements based on ROI. In this paper, we propose to use a cost-value approach to weight and prioritize requirements, using the following formula:

P(Req) = Cost * CP    (1)

Where:
- P is the prioritization value.
- Req is the requirement to be prioritized.
- Cost is the total estimated cost of coding and testing the requirement.
- CP is a user-defined customer priority value in the range 1 to 10, where 10 is the highest priority and 1 is the lowest. This value allows customers to express how important each requirement is from their perspective.

To compute the cost of coding and testing, this paper proposes the following formula:

Cost = (EffCode * CostCode) + (EffTest * CostTest)    (2)

Where:
- Cost is the total estimated cost.
- EffCode is the estimated coding effort for the requirement, in man-hours.
- CostCode is the cost of coding charged to customers, in US dollars. This paper applies the cost-value approach to identify the cost of coding for each requirement group ("Must-Have", "Should-Have", "Could-Have" and "Wish").
- EffTest is the estimated testing effort for the requirement, in man-hours.
- CostTest is the cost of testing charged to customers, in US dollars, identified in the same way as CostCode.

In this paper, we assume the following in order to calculate CostCode and CostTest, together with a standard cost of $100 per man-hour for both activities:

- A ratio of 1.5 between "Must-Have" and "Should-Have": "Must-Have" requirements have one and a half times the cost value of "Should-Have" requirements.
- A ratio of 3 between "Must-Have" and "Could-Have": "Must-Have" requirements have three times the cost value of "Could-Have" requirements.
- A ratio of 2 between "Should-Have" and "Could-Have": "Should-Have" requirements have two times the cost value of "Could-Have" requirements.
- A ratio of approximately 3 between "Could-Have" and "Wish": "Could-Have" requirements have three times the cost value of "Wish" requirements.

Therefore, the requirement prioritization procedure can be briefly described as follows (a sketch of this procedure is given after the list):

1. Provide estimated coding and testing efforts for each requirement.
2. Assign a cost value to each requirement group based on the previous requirement classification ("Must-Have", "Should-Have", "Could-Have" and "Wish").
3. Calculate the total estimated cost of coding and testing using formula (2).
4. Define a customer priority for each requirement.
5. Compute a priority value for each requirement using formula (1).
6. Prioritize requirements in descending order of priority value.

Once the requirements are prioritized, the next proposed step is to generate test scenarios and prepare test cases.
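As an illustration only, the following Python sketch shows how the cost ratios and formulas (1) and (2) could be combined into the six-step procedure above; the names (Requirement, cost, priority, prioritize, COST_RATIO) are our own assumptions and are not part of the original paper.

```python
# Illustrative sketch of the proposed requirement prioritization (formulas 1 and 2).
# All identifiers are hypothetical; ratios follow the paper's assumptions:
# Must/Should = 1.5, Must/Could = 3, Should/Could = 2, Could/Wish ~= 3.
from dataclasses import dataclass

HOURLY_RATE = 100.0  # assumed standard cost: $100 per man-hour
COST_RATIO = {"Must-Have": 3.0, "Should-Have": 2.0, "Could-Have": 1.0, "Wish": 0.33}

@dataclass
class Requirement:
    name: str
    group: str              # "Must-Have", "Should-Have", "Could-Have" or "Wish"
    eff_code: float         # estimated coding effort (man-hours)
    eff_test: float         # estimated testing effort (man-hours)
    customer_priority: int  # CP, user-defined value between 1 and 10

def cost(req: Requirement) -> float:
    """Formula (2): Cost = EffCode*CostCode + EffTest*CostTest."""
    rate = HOURLY_RATE * COST_RATIO[req.group]
    return req.eff_code * rate + req.eff_test * rate

def priority(req: Requirement) -> float:
    """Formula (1): P(Req) = Cost * CP."""
    return cost(req) * req.customer_priority

def prioritize(reqs):
    """Step 6: order requirements by descending priority value."""
    return sorted(reqs, key=priority, reverse=True)

if __name__ == "__main__":
    reqs = [Requirement("Withdraw", "Must-Have", 40, 24, 9),
            Requirement("Transfer", "Should-Have", 32, 16, 7),
            Requirement("Buy TG ticket", "Could-Have", 20, 10, 4)]
    for r in prioritize(reqs):
        print(r.name, round(priority(r), 2))
```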
B. Test Case Generation Technique

This section presents an automated test scenario generation technique derived from the UML 2.0 Use Case diagram. The main difference between the UML 1.0 and 2.0 Use Case diagrams is the package notion, which can group use cases into packages. The following shows an example of a UML 2.0 Use Case diagram.

Figure 5. An Example of UML Use Case Diagram 2.0

In the above figure, the new notion in the UML 2.0 Use Case diagram is the package, which is used for grouping functions. There are three packages, or releases, each containing different functional requirements. The first release contains two functions: inquiry and withdraw. The second release is composed of transfer to own account and transfer to other banks. The last release has only one function, supporting Thai (TG) airline tickets.

Our approach aims to generate three test suites covering the above three packages, something existing test case generation techniques do not address. The first test suite covers the inquiry and withdraw functions, the second covers transfers to own and other banks, and the last covers TG airline ticket support.
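As a rough sketch of this idea, and only under the assumption that package membership can be read from the diagram, one test suite could be derived per UML 2.0 package; the identifiers below are ours, not the paper's.

```python
# Hypothetical sketch: one test suite per UML 2.0 package (release).
from collections import defaultdict

# (package name, use case name) pairs as they might be read from the diagram
use_cases = [
    ("Release 1", "Inquiry"), ("Release 1", "Withdraw"),
    ("Release 2", "Transfer own account"), ("Release 2", "Transfer to other banks"),
    ("Release 3", "Buy TG airline ticket"),
]

def suites_by_package(pairs):
    """Group use cases by their package so each package yields its own test suite."""
    suites = defaultdict(list)
    for package, use_case in pairs:
        suites[package].append(use_case)
    return dict(suites)

print(suites_by_package(use_cases))
# {'Release 1': ['Inquiry', 'Withdraw'], 'Release 2': [...], 'Release 3': [...]}
```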
The approach is built on Heumann's algorithm. The limitation of our approach is that all use cases must be fully dressed. A fully dressed use case is a use case with comprehensive information, as follows: use case name, use case number, purpose, summary, pre-condition, post-condition, actors, stakeholders, basic events, alternative events, business rules, notes, version, author and date.

The proposed method contains four steps: (a) extract the use case diagram, (b) generate test scenarios, (c) prepare test data and (d) prepare other test elements. These steps can be briefly described as follows:

1. The first step is to extract the following information from fully dressed use cases: (a) use case number, (b) purpose, (c) summary, (d) pre-condition, (e) post-condition, (f) basic events and (g) alternative events. This information is called a use case scenario in this paper. Example fully dressed use cases for the ATM withdraw and transfer functions are shown below:

TABLE I. EXAMPLE FULLY DRESSED USE CASES

UC-001 (Withdraw). Summary: To allow bank's customers to withdraw money from ATM machines anywhere in Thailand. Basic events: 1. Insert Card, 2. Input PIN, 3. Select Withdraw, 4. Select A/C Type, 5. Input Balance, 6. Get Money, 7. Get Card. Alternative events: 1. Select Inquiry, 2. Select A/C Type, 3. Check Balance. Business rules: (a) input amount <= outstanding balance, (b) a fee is charged if using a different ATM machine.

UC-002 (Transfer). Summary: To allow users to transfer money to other banks in Thailand from all ATM machines. Basic events: 1. Insert Card, 2. Input PIN, 3. Select Transfer, 4. Select bank, 5. Select "To" account, 6. Select A/C Type, 7. Input Amount, 8. Get Receipt, 9. Get Card. Alternative events: 1. Select Inquiry, 2. Select A/C Type, 3. Check Balance. Business rules: amount <= 50,000 baht.

The above use cases can be extracted into the following use case scenarios:

TABLE II. EXTRACTED USE CASE SCENARIOS

Scenario-001. Summary: To allow bank's customers to withdraw money from ATM machines anywhere in Thailand. Basic scenario: 1. Insert Card, 2. Input PIN, 3. Select Withdraw, 4. Select A/C Type, 5. Input Balance, 6. Get Money, 7. Get Card.

Scenario-002. Summary: To allow bank's customers to withdraw money from ATM machines anywhere in Thailand. Basic scenario: 1. Insert Card, 2. Input PIN, 3. Select Inquiry, 4. Select A/C Type, 5. Check Balance, 6. Select Withdraw, 7. Select A/C Type, 8. Input Balance, 9. Get Money, 10. Get Card.

Scenario-003. Summary: To allow users to transfer money to other banks in Thailand from all ATM machines. Basic scenario: 1. Insert Card, 2. Input PIN, 3. Select Transfer, 4. Select bank, 5. Select "To" account, 6. Select A/C Type, 7. Input Amount, 8. Get Receipt, 9. Get Card.

Scenario-004. Summary: To allow users to transfer money to other banks in Thailand from all ATM machines. Basic scenario: 1. Insert Card, 2. Input PIN, 3. Select Inquiry, 4. Select A/C Type, 5. Check Balance, 6. Select Transfer, 7. Select bank, 8. Select "To" account, 9. Select A/C Type, 10. Input Amount, 11. Get Receipt, 12. Get Card.

2. The second step is to automatically generate test scenarios from the previous use case scenarios (a sketch of this step follows after this list). From the above table, we automatically generate the following test scenarios:

TABLE III. GENERATED TEST SCENARIOS

TS-001. Summary: To allow bank's customers to withdraw money from ATM machines anywhere in Thailand. Basic scenario: 1. Insert Card, 2. Input PIN, 3. Select Withdraw, 4. Select A/C Type, 5. Input Balance, 6. Get Money, 7. Get Card.

TS-002. Summary: To allow bank's customers to withdraw money from ATM machines anywhere in Thailand. Basic scenario: 1. Insert Card, 2. Input PIN, 3. Select Inquiry, 4. Select A/C Type, 5. Check Balance, 6. Select Withdraw, 7. Select A/C Type, 8. Input Balance, 9. Get Money, 10. Get Card.

TS-003. Summary: To allow users to transfer money to other banks in Thailand from all ATM machines. Basic scenario: 1. Insert Card, 2. Input PIN, 3. Select Transfer, 4. Select bank, 5. Select "To" account, 6. Select A/C Type, 7. Input Amount, 8. Get Receipt, 9. Get Card.

TS-004. Summary: To allow users to transfer money to other banks in Thailand from all ATM machines. Basic scenario: 1. Insert Card, 2. Input PIN, 3. Select Inquiry, 4. Select A/C Type, 5. Check Balance, 6. Select Transfer, 7. Select bank, 8. Select "To" account, 9. Select A/C Type, 10. Input Amount, 11. Get Receipt, 12. Get Card.

3. The next step is to prepare test data. This step allows an input data set to be prepared manually for each scenario.

4. The last step is to prepare the other test elements, such as expected output, actual output and pass/fail status.
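The following Python sketch illustrates one way step 2 could combine a use case's basic and alternative events into scenarios in the spirit of Table III. The data structure, the splice rule and the two-step shared prefix are assumptions made for illustration; the paper does not specify its exact combination rule.

```python
# Illustrative sketch of step 2: derive test scenarios from a use case scenario.
# The generated scenarios in Table III pair each use case's basic flow with a
# variant that first walks through the alternative (inquiry) flow.

def generate_test_scenarios(use_case):
    """Return the basic scenario plus a scenario prefixed by the alternative events."""
    basic = use_case["basic_events"]
    alternative = use_case["alternative_events"]
    scenarios = [basic]                            # e.g. TS-001 / TS-003
    if alternative:
        # Re-use the shared prefix (Insert Card, Input PIN) and splice the
        # alternative steps in before the remaining basic steps, e.g. TS-002 / TS-004.
        prefix, rest = basic[:2], basic[2:]
        scenarios.append(prefix + alternative + rest)
    return scenarios

withdraw = {
    "basic_events": ["Insert Card", "Input PIN", "Select Withdraw", "Select A/C Type",
                     "Input Balance", "Get Money", "Get Card"],
    "alternative_events": ["Select Inquiry", "Select A/C Type", "Check Balance"],
}

for i, steps in enumerate(generate_test_scenarios(withdraw), start=1):
    print(f"TS-{i:03d}:", " -> ".join(steps))
```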
V. EVALUATION

This section describes the experiment design, measurement metrics and results.

A. Experiments Design

1. Prepare Experiment Data. Before evaluating the proposed method and the other methods, experiment data must be prepared. In this step, 50 requirements and 50 use case scenarios are randomly generated.

2. Generate Test Scenarios and Test Cases. A comparative evaluation is made among Heumann's technique, Ryser's method, Nilawar's algorithm and the proposed method presented in the previous section. The proposed method includes a requirement prioritization algorithm prior to generating the set of test scenarios and test cases.

3. Evaluate Results. In this step, the comparative generation methods are executed using the 50 requirements and 50 use case scenarios. Each method is executed 10 times in order to find the average percentage of critical domain requirement coverage, the size of the test suite and the total generation time. In total, 500 requirements and 500 use case scenarios are executed in this experiment.

The following tables present how the data for requirements and use case scenarios are randomly generated (a sketch of this generation is given after the tables).

TABLE IV. GENERATE RANDOM REQUIREMENTS

Requirement ID: randomly generated from the combination Req + sequence number, for example Req1, Req2, Req3, ..., ReqN.
Description: randomly generated from the combination Des + sequence number matching the Requirement ID, for example Des1, Des2, Des3, ..., DesN.
Type of Requirement: randomly selected from the values Functional and Non-Functional.
MoSCoW Criteria: randomly selected from the values Must Have (M), Should Have (S), Could Have (C) and Won't Have (W).
Is it a critical requirement (Y/N)?: randomly selected from the values True (Y) and False (N).

TABLE V. GENERATE RANDOM USE CASE SCENARIO

Use case ID: randomly generated from the combination uCase + sequence number, for example uCase1, uCase2, ..., uCaseN.
Purpose: randomly generated from the combination Pur + sequence number matching the Use case ID, for example Pur1, Pur2, ..., PurN.
Pre-condition: randomly generated from the combination pCon + sequence number matching the Use case ID, for example pCon1, pCon2, ..., pConN.
Basic Scenario: randomly generated from the combination basic + sequence number, for example basic1, basic2, ..., basicN.
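As an illustration of the data preparation step described in Table IV, the sketch below randomly generates requirement records; the field names mirror the table, while the function name, seed and record format are our own assumptions.

```python
# Hypothetical sketch of the experiment data preparation (Table IV).
import random

MOSCOW = ["Must Have (M)", "Should Have (S)", "Could Have (C)", "Won't Have (W)"]

def random_requirements(n=50, seed=0):
    """Generate n random requirement records as described in Table IV."""
    rng = random.Random(seed)
    return [{
        "Requirement ID": f"Req{i}",
        "Description": f"Des{i}",
        "Type of Requirement": rng.choice(["Functional", "Non-Functional"]),
        "MoSCoW Criteria": rng.choice(MOSCOW),
        "Critical (Y/N)": rng.choice(["Y", "N"]),
    } for i in range(1, n + 1)]

print(random_requirements(3))
```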
B. Measurement Metrics

This section lists the measurement metrics used in the experiment. This paper proposes three metrics: (a) size of the test suite, (b) total time and (c) percentage of critical domain requirement coverage. The following describes the measurements in detail.

1. Number of Test Cases. This is the total number of generated test cases, expressed as a percentage:

% Size = (# of Size / # of Total Size) * 100    (3)

Where:
- % Size is the percentage of the number of test cases.
- # of Size is the number of generated test cases.
- # of Total Size is the maximum number of test cases in the experiment, which is set to 1,000.

2. Domain Specific Requirement Coverage. This is an indicator of the number of requirements covered by the system, particularly critical domain requirements. Because one of the goals of software testing is to verify and validate the requirements covered by the system, this metric is essential, and a high percentage of critical requirement coverage is desirable. It is calculated using the following formula:

% CRC = (# of Critical / # of Total) * 100    (4)

Where:
- % CRC is the percentage of critical requirement coverage.
- # of Critical is the number of critical requirements covered.
- # of Total is the total number of requirements.

3. Total Time. This is the total time the generation methods take to run in the experiment. This metric relates to the time spent during the test development phase (e.g. designing test scenarios and producing test cases), so less time is desirable. It is calculated using the following formula:

Total = PTime + CTime + RTime    (5)

Where:
- Total is the total amount of time consumed by running the generation methods.
- PTime is the total amount of time consumed by preparation before generating test cases.
- CTime is the time to compile source code or binary code in order to execute the program.
- RTime is the total time to run the program in this experiment.
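A minimal sketch of how these three metrics could be computed from raw measurements follows; formulas (3) to (5) are taken from the paper, the function names and example values are ours.

```python
# Illustrative computation of the evaluation metrics (formulas 3, 4 and 5).

def size_percentage(num_test_cases, max_test_cases=1000):
    """Formula (3): %Size = (# of Size / # of Total Size) * 100."""
    return num_test_cases / max_test_cases * 100

def critical_coverage(num_critical_covered, num_total_requirements):
    """Formula (4): %CRC = (# of Critical / # of Total) * 100."""
    return num_critical_covered / num_total_requirements * 100

def total_time(p_time, c_time, r_time):
    """Formula (5): Total = PTime + CTime + RTime (e.g. in seconds)."""
    return p_time + c_time + r_time

# Example with made-up measurements:
print(size_percentage(808), critical_coverage(27, 50), total_time(12.0, 3.5, 40.2))
```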
C. Results and Discussion

This section discusses the evaluation results of the above experiment. It presents a graph comparing the proposed method with three existing test case generation techniques, based on the following measurements: (a) size of the test suite, (b) critical domain coverage and (c) total time. The three techniques are: (a) Heumann's method, (b) Ryser's work and (c) Nilawar's approach. The graph has two dimensions: the horizontal axis represents the three measurements and the vertical axis represents the percentage value.

Figure 6. An Evaluation Result of Test Generation Methods

The above graph shows that the proposed method generates the smallest set of test cases, at 80.80%, whereas the other techniques are computed at over 97%. Those techniques generated a bigger set of test cases than the set generated by the proposed method, and the literature review reveals that a smaller set of test cases is desirable. The graph also shows that the proposed method consumes the least total time during the generation process compared to the other techniques, using only 30.20%, slightly less than the others. Finally, the graph shows that the proposed method is the best technique for covering critical domains; its percentage is more than 30 percentage points higher than those of the other techniques.

From the above figure, this study determines and ranks the comparative methods on a five-point scale: 5-Excellent, 4-Very good, 3-Good, 2-Normal and 1-Poor. This study uses the maximum and minimum values to find an interval value for ranking the methods.

For the number of test cases, the maximum and minimum percentages are 98% and 80.80%. The difference between the maximum and minimum values is 17.2%. The interval value equals this difference divided by 5, approximately 3.4. Thus, the ranking is: 5-Excellent (80.80% to 84.2%), 4-Very good (84.2% to 87.6%), 3-Good (87.6% to 91%), 2-Normal (91% to 94.4%) and 1-Poor (94.4% to 97.8%).

For the ability to cover critical domain-specific requirements, the maximum and minimum percentages are 53.20% and 19%. The difference is 34.2% and the interval value is 6.84. Therefore, the ranking is: 5-Excellent (46.36% to 53.2%), 4-Very good (39.52% to 46.36%), 3-Good (32.68% to 39.52%), 2-Normal (25.84% to 32.68%) and 1-Poor (19% to 25.84%).

For total time, the maximum and minimum percentages are 31.82% and 30.20%. The difference between the maximum and minimum values is 1.62%, and the interval value is 0.324. Thus, the ranking is: 5-Excellent (30.2% to 30.524%), 4-Very good (30.524% to 30.848%), 3-Good (30.848% to 31.172%), 2-Normal (31.172% to 31.496%) and 1-Poor (31.496% to 31.82%).
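To make this interval-based ranking concrete, the sketch below maps a measured percentage onto the 1-5 scale by splitting the range between the minimum and maximum into five equal intervals, as done above; the clamping and direction handling are our own interpretation, not the authors' code.

```python
# Illustrative sketch of the five-level ranking used in the results discussion:
# split [minimum, maximum] into five equal intervals and map a value to 1 (Poor) .. 5 (Excellent).

def rank(value, minimum, maximum, lower_is_better=True):
    """Return a rank from 1 to 5 for `value` within [minimum, maximum]."""
    interval = (maximum - minimum) / 5
    bin_index = min(int((value - minimum) / interval), 4)  # 0..4, clamp at the top
    return 5 - bin_index if lower_is_better else bin_index + 1

# Number of test cases: min 80.80%, max 98%; smaller suites rank higher.
print(rank(80.80, 80.80, 98.0))   # 5 (Excellent)
print(rank(97.0, 80.80, 98.0))    # 1 (Poor)
```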
Therefore, the experiment results for the comparative methods can be summarized as follows:

TABLE VI. A COMPARISON OF TEST CASE REDUCTION METHODS (ranks from 1-Poor to 5-Excellent)

Heumann's Method: Number of Test Cases = 1, Critical Domain Coverage = 1, Total Time = 5
Ryser's Method: Number of Test Cases = 1, Critical Domain Coverage = 1, Total Time = 1
Nilawar's Method: Number of Test Cases = 1, Critical Domain Coverage = 1, Total Time = 1
Our Proposed Method: Number of Test Cases = 5, Critical Domain Coverage = 5, Total Time = 5

In conclusion, the proposed method is the best at generating the smallest set of test cases with the maximum critical domain coverage and the least time consumed in the generation process.

VI. CONCLUSION

In this paper, we introduced a new test case generation method and process with an additional requirement prioritization step. The approach inserts an additional process to ensure that all domain-specific requirements are captured during test case generation, and it is designed to minimize the number of test cases so that suitable test cases can be selected for execution. Additionally, we proposed an automated approach to generate test cases from fully dressed UML 2.0 use cases. Our method can generate multiple test suites derived from the UML 2.0 Use Case diagram, whereas existing test case generation methods generate only a single large test suite containing a great number of test cases.

Furthermore, we conducted an evaluation experiment with random requirements and fully dressed use cases. Our evaluation results reveal that the proposed method is the recommended automated test case generation method for maximizing critical domain requirement coverage. The results also show that the proposed method is one of the best methods for minimizing the number of test cases.

In future research, we plan to enhance the ability to prioritize requirements and to conduct a larger experiment on a large system development.

REFERENCES

Ahl, V. "An Experimental Comparison of Five Prioritization Methods." Master's Thesis, School of Engineering, Blekinge Institute of Technology, Ronneby, Sweden, 2005.
Alessandra Cavarra, Charles Crichton, Jim Davies, Alan Hartman, Thierry Jeron and Laurent Mounier. "Using UML for Automatic Test Generation." Oxford University Computing Laboratory, Tools and Algorithms for the Construction and Analysis of Systems (TACAS'2000), 2000.
Amaral, A.S.M.S. "Test case generation of systems specified in Statecharts." M.S. thesis, Laboratory of Computing and Applied Mathematics, INPE, Brazil, 2006.
Annelises A. Andrews, Jeff Offutt and Roger T. Alexander. "Testing Web Applications." Software and Systems Modeling, 2004.
Avik Sinha and Carol S. Smidts. "Domain Specific Test Case Generation Using Higher Ordered Typed Languages for Specification." Ph.D. Dissertation, 2005.
A. Bertolino. "Software Testing Research and Practice." 10th International Workshop on Abstract State Machines (ASM'2003), Taormina, Italy, 2003.
A.Z. Javed, P.A. Strooper and G.N. Watson. "Automated Generation of Test Cases Using Model-Driven Architecture." Second International Workshop on Automation of Software Test (AST'07), 2007.
Beck, K. and Andres, C. "Extreme Programming Explained: Embrace Change", 2nd ed. Boston, MA: Addison-Wesley, 2004.
Boehm, B. and Ross, R. "Theory-W Software Project Management: Principles and Examples." IEEE Transactions on Software Engineering 15(4): 902-916, 1989.
B.M. Subraya and S.V. Subrahmanya. "Object driven performance testing in Web applications." Proceedings of the First Asia-Pacific Conference on Quality Software (APAQS'00), pp. 17-26, Hong Kong, China, 2000.
Chien-Hung Liu, David C. Kung, Pei Hsia and Chih-Tung Hsu. "Object-Based Data Flow Testing of Web Applications." Proceedings of the First Asia-Pacific Conference on Quality Software (APAQS'00), pp. 7-16, Hong Kong, China, 2000.
C.H. Liu, D.C. Kung, P. Hsia and C.T. Hsu. "Structural testing of Web applications." Proceedings of the 11th International Symposium on Software Reliability Engineering (ISSRE 2000), pp. 84-96, 2000.
Davis, A. "The Art of Requirements Triage." IEEE Computer 36(3): 42-49, 2003.
Davis, A. "Just Enough Requirements Management: Where Software Development Meets Marketing." New York: Dorset House (ISBN 0-932633-64-1), 2005.
David C. Kung, Chien-Hung Liu and Pei Hsia. "An Object-Oriented Web Test Model for Testing Web Applications." Proceedings of the First Asia-Pacific Conference on Quality Software (APAQS'00), p. 111, Los Alamitos, CA, 2000.
Donald Firesmith. "Prioritizing Requirements." Journal of Object Technology, Vol. 3, No. 8, 2004.
D. Harel. "On visual formalisms." Communications of the ACM, vol. 31, no. 5, pp. 514-530, 1988.
D. Harel. "Statecharts: A Visual Formulation for Complex System." Science of Computer Programming 8(3): 232-274, 1987.
Flippo Ricca and Paolo Tonella. "Analysis and Testing of Web Applications." Proceedings of the 23rd International Conference on Software Engineering, Toronto, Ontario, Canada, pp. 25-34, 2001.
Harel, D. "Statecharts: a visual formalism for complex system." Science of Computer Programming, v. 8, pp. 231-274, 1987.
Hassan Reza, Kirk Ogaard and Amarnath Malge. "A Model Based Testing Technique to Test Web Applications Using Statecharts." Fifth International Conference on Information Technology, 2008.
Ibrahim K. El-Far and James A. Whittaker. "Model-based Software Testing", 2000.
Jim Heumann. "Generating Test Cases From Use Cases." Rational Software, 2001.
Johannes Ryser and Martin Glinz. "SCENT: A Method Employing Scenarios to Systematically Derive Test Cases for System Test", 2000.
Karl E. Wiegers. "First Things First: Prioritizing Requirements." Software Development, 1999.
Karlsson, J. "Software Requirements Prioritizing." Proceedings of the Second International Conference on Requirements Engineering (ICRE'96), Colorado Springs, CO, April 15-18, 1996. Los Alamitos, CA: IEEE Computer Society, pp. 110-116, 1996.
Karlsson, J. "Towards a Strategy for Software Requirements Selection." Licentiate Thesis 513, Linkoping University, 1995.
Karlsson, J. and Ryan, K. "A Cost-Value Approach for Prioritizing Requirements." IEEE Software, September/October, pp. 67-75, 1997.
Leffingwell, D. and Widrig, D. "Managing Software Requirements: A Use Case Approach", 2nd ed. Boston, MA: Addison-Wesley, 2003.
Leslie M. Tierstein. "Managing a Designer/2000 Project." NYOUG Fall '97 Conference, 1997.
L. Brim, I. Cerna, P. Varekova and B. Zimmerova. "Component-interaction automata as a verification oriented component-based system specification." Proceedings of SAVCBS'05, pp. 31-38, Lisbon, Portugal, 2005.
Mahnaz Shams, Diwakar Krishnamurthy and Behrouz Far. "A Model-Based Approach for Testing the Performance of Web Applications." Proceedings of the Third International Workshop on Software Quality Assurance (SOQUA'06), 2006.
Manish Nilawar and Sergiu Dascalu. "A UML-Based Approach for Testing Web Applications." M.S. thesis, Computer Science, University of Nevada, Reno, 2003.
Moisiadis, F. "Prioritising Scenario Evolution." International Conference on Requirements Engineering (ICRE 2000), 2000.
Moisiadis, F. "A Requirements Prioritisation Tool." 6th Australian Workshop on Requirements Engineering (AWRE 2001), Sydney, Australia, 2001.
M. Prasanna, S.N. Sivanandam, R. Venkatesan and R. Sundarrajan. "A Survey on Automatic Test Case Generation." Academic Open Internet Journal, 2005.
Nancy R. Mead. "Requirements Prioritization Introduction." Software Engineering Institute, Carnegie Mellon University, 2008.
Park, J., Port, D. and Boehm, B. "Supporting Distributed Collaborative Prioritization for Win-Win Requirements Capture and Negotiation." Proceedings of the Third World Multi-conference on Systemics, Cybernetics and Informatics (SCI'99), Vol. 2, pp. 578-584, Orlando, FL, July 31 - August 4, 1999. Orlando, FL: International Institute of Informatics and Systemics (IIIS), 1999.
Rajib. "Software Test Metric." QCON, 2006.
Robert Nilsson, Jeff Offutt and Jonas Mellin. "Test Case Generation for Mutation-based Testing of Timeliness", 2006.
Saaty, T. L. "The Analytic Hierarchy Process." New York, NY: McGraw-Hill, 1980.
Shengbo Chen, Huaikou Miao and Zhongsheng Qian. "Automatic Generating Test Cases for Testing Web Applications." International Conference on Computational Intelligence and Security Workshops, 2007.
Valdivino Santiago, Ana Silvia Martins do Amaral, N.L. Vijaykumar, Maria de Fatima Mattiello-Francisco, Eliane Martins and Odnei Cuesta Lopes. "A Practical Approach for Automated Test Case Generation using Statecharts", 2006.
Vijaykumar, N. L., Carvalho, S. V. and Abdurahiman, V. "On proposing Statecharts to specify performance models." International Transactions in Operational Research, 9, pp. 321-336, 2002.
Wiegers, K. E. "Software Requirements", 2nd ed. Redmond, WA: Microsoft Press, 2003.
Xiaoping Jia, Hongming Liu and Lizhang Qin. "Formal Structured Specification for Web Application Testing." Proceedings of the 2003 Midwest Software Engineering Conference (MSEC'03), Chicago, IL, USA, pp. 88-97, 2003.
Yang, J.T., Huang, J.L., Wang, F.J. and Chu, W.C. "Constructing an object-oriented architecture for Web application testing." Journal of Information Science and Engineering 18, pp. 59-84, 2002.
Ye Wu and Jeff Offutt. "Modeling and Testing Web-based Applications", 2002.
Ye Wu, Jeff Offutt and Xiaochen. "Modeling and Testing of Dynamic Aspects of Web Applications." Submitted for publication, Technical Report ISE-TR-04-01, www.ise.gmu.edu/techreps/, 2004.
Zhu, H., Hall, P. and May, J. "Software Unit Test Coverage and Adequacy." ACM Computing Surveys 29(4), pp. 366-427, 1997.
Kano Noriaki, Nobuhiku Seraku, Fumio Takahashi and Shinichi Tsuji. "Attractive Quality and Must-Be Quality." Journal of the Japanese Society for Quality Control, 14(2), pp. 39-48, 1984.
Cadotte, Ernest R. and Turgeon, Normand. "Dissatisfiers and Satisfiers: Suggestions from Consumer Complaints and Compliments." Journal of Consumer Satisfaction, Dissatisfaction and Complaining Behavior, 1, pp. 74-79, 1988.
Brandt, D. Randall. "How service marketers can identify value-enhancing service elements." Journal of Services Marketing, 2(3), pp. 35-41, 1988.
Herzberg, Frederick, Mausner, B. and Snyderman, B.B. "The Motivation to Work." New York: Wiley, 2nd edition, 1959.