Computing Reviews
Test oracle strategies for model-based testing
Li N., Offutt J. IEEE Transactions on Software Engineering 43(4): 372-395, 2017. Type: Article
Date Reviewed: Aug 10 2017

For software testing, it is important to know the expected results of test inputs. In its simplest form, an oracle determines whether a test passed or failed by providing the expected output for a given test input. An oracle strategy determines which parts of the program need to be evaluated to decide whether a test passes or fails; this is needed when testing “beyond the unit level.” The main goal of the paper is to study the effectiveness, in terms of defects detected, of several oracle strategies applied to software models. Oracle strategies are relevant to test automation in general.
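For readers unfamiliar with the idea, the following minimal Java sketch (my own illustration, not from the paper) shows the simplest form of oracle: a stored expected output compared against the actual output.

```java
// A minimal sketch of an output-comparison oracle. The abs() method is a
// hypothetical system under test, not one of the programs in the study.
public class OutputOracle {
    // Hypothetical system under test: returns the absolute value of x.
    static int abs(int x) {
        return x < 0 ? -x : x;
    }

    // The oracle: a test passes iff the actual output equals the expected one.
    static boolean passes(int input, int expected) {
        return abs(input) == expected;
    }

    public static void main(String[] args) {
        System.out.println(passes(-5, 5) ? "PASS" : "FAIL");  // PASS
        System.out.println(passes(-5, 4) ? "PASS" : "FAIL");  // FAIL
    }
}
```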

This technical paper defines a significant number of concepts (in the first two sections), something that I appreciated. The authors make several additional contributions: an improved definition of an oracle strategy that incorporates the concepts of precision and frequency; an improvement to the model on which their work is based, the RIP model; ten new oracle strategies with distinct precisions and frequencies; and an empirical study. Precision refers to which internal variables and outputs are checked, and frequency to how often those checks are performed.
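A small, hypothetical Java sketch may help make these two dimensions concrete; the Counter class and its checks are my own invention, not the paper's subjects or strategies.

```java
import java.util.List;

// A hedged sketch of how oracle strategies can differ in precision (what is
// checked: outputs only, or internal variables too) and frequency (how often
// checks run). The Counter class is hypothetical.
public class PrecisionFrequencyDemo {
    static class Counter {
        int value = 0;      // observable output
        int callCount = 0;  // internal variable
        void add(int x) { value += x; callCount++; }
    }

    static void check(int actual, int expected, String what) {
        System.out.println(what + ": " + (actual == expected ? "PASS" : "FAIL"));
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        int runningTotal = 0, steps = 0;
        for (int x : List.of(1, 2, 3)) {
            c.add(x);
            runningTotal += x;
            steps++;
            // High frequency: a check after every step, not just at the end.
            check(c.value, runningTotal, "output after step " + steps);
            // High precision: an internal variable is checked as well.
            check(c.callCount, steps, "internal state after step " + steps);
        }
        // The least precise/frequent strategy: one check of the final output.
        check(c.value, 6, "final output");
    }
}
```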

The study addresses four research questions. The first two concern the effectiveness of the new oracle strategies, the third the effectiveness of the coverage criteria combined with the oracle strategies, and the fourth the cost/benefit of each oracle strategy. The authors generated test inputs for 17 distinct Java programs using two coverage criteria and seeded the programs with faults using a mutation tool. The models were built by hand; the tests were initially produced from the models with a tool and then finished by hand.
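To illustrate what fault seeding means here, consider a typical mutation operator (this is a generic illustration, not an example from the study):

```java
// Illustration only: the kind of seeded fault a mutation tool inserts.
// The add() method is hypothetical, not one of the 17 study programs.
public class MutantExample {
    // Original.
    static int add(int a, int b) { return a + b; }

    // Mutant: arithmetic operator replaced (+ becomes -). A test "kills"
    // the mutant when its oracle observes a difference from the original.
    static int addMutant(int a, int b) { return a - b; }

    public static void main(String[] args) {
        int input1 = 2, input2 = 3, expected = 5;
        System.out.println(add(input1, input2) == expected ? "PASS" : "FAIL");       // PASS
        System.out.println(addMutant(input1, input2) == expected ? "PASS" : "FAIL"); // FAIL: mutant killed
    }
}
```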

The results show that an increase in precision or in frequency did not always correspond to a more effective strategy; also, a stronger coverage criterion is not necessarily more effective than a weaker one.

For the first two questions, the authors hypothesized a trend among the oracle strategies, but the statistical analysis was not designed to produce an accept/reject answer. Instead, the analysis relied on multiple pairwise comparisons of oracle strategies.

In principle, multiple comparisons increase the probability of a type I error. I could not find a justification in the paper for the multiple-comparison approach. In Table 8, the authors correctly use effect sizes with confidence intervals, but in other cases it appears that no adjustments were made.
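To see why this matters (a standard statistical fact, not a computation from the paper): with m independent comparisons, each at significance level α, the probability of at least one type I error (the family-wise error rate) is 1 - (1 - α)^m. For α = 0.05 and m = 10 comparisons, this is already about 0.40, which is why corrections such as Bonferroni's are commonly applied.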

To sum up, I found the paper to be very informative with some relevant contributions.

Reviewer: Alberto Sampaio
Review #: CR145474 (1710-0666)
Editor Recommended
Testing Tools (D.2.5 ... )
 
Other reviews under "Testing Tools":
Automatic generation of random self-checking test cases
Bird D., Munoz C. IBM Systems Journal 22(3): 229-245, 1983. Type: Article
Aug 1 1985
Program testing by specification mutation
Budd T., Gopal A. Information Systems 10(1): 63-73, 1985. Type: Article
Feb 1 1986
SEES--a software testing environment support system
Roussopoulos N., Yeh R. (ed) IEEE Transactions on Software Engineering SE-11(4): 355-366, 1985. Type: Article
Apr 1 1986
