Abstract
An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria was performed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets were randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than all-edges adequate test sets. All-uses was significantly more effective than all-edges for five of the subjects and appeared guaranteed to detect the error in four of them. Further analysis showed that in four of these subjects, all-uses adequate test sets were more effective than all-edges adequate test sets of similar size. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases as the percentage of definition-use associations or edges it covers increases. The evidence did not strongly support this conjecture: error-exposing ability was strongly positively correlated with the percentage of covered definition-use associations in only four of the nine subjects. Error-exposing ability was also positively correlated with the percentage of covered edges in four (different) subjects, but the relationship was weaker.
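The two criteria compared in the abstract can be illustrated with a small sketch. The control-flow graph, definition-use associations, and coverage functions below are hypothetical (they are not the paper's subject programs or measurement tooling); the sketch shows how edge coverage and def-use coverage are measured over executed paths, and why a test set that is all-edges adequate need not be all-uses adequate.

```python
# Hypothetical toy program, as a control-flow graph (nodes n1..n6):
#   n1: if p():          # branch 1
#   n2:     x = f()      #   def of x (then-branch)
#   n3: else: x = g()    #   def of x (else-branch)
#   n4: if q():          # branch 2
#   n5:     print(x)     #   use of x
#   n6: else: log(x)     #   use of x

NODES_DEFINING = {"x": {2, 3}}                      # nodes that (re)define x
EDGES = {(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (4, 6)}
DU_ASSOCS = {("x", 2, 5), ("x", 2, 6), ("x", 3, 5), ("x", 3, 6)}

def edge_coverage(paths):
    """Percentage of CFG edges exercised by a set of executed paths."""
    covered = set().union(*(set(zip(p, p[1:])) for p in paths))
    return 100 * len(covered & EDGES) / len(EDGES)

def is_def_clear(path, var, d, u):
    """True if path goes from def node d to use node u with no
    intervening redefinition of var (a def-clear subpath)."""
    for i, node in enumerate(path):
        if node == d:
            for later in path[i + 1:]:
                if later == u:
                    return True
                if later in NODES_DEFINING[var]:
                    break  # definition killed before reaching the use
    return False

def du_coverage(paths):
    """Percentage of definition-use associations covered by the paths."""
    covered = {a for a in DU_ASSOCS
               if any(is_def_clear(p, *a) for p in paths)}
    return 100 * len(covered) / len(DU_ASSOCS)

# Two test cases suffice to cover every edge, yet they exercise only
# half of the def-use associations: all-edges adequate, not all-uses.
test_set = [[1, 2, 4, 5], [1, 3, 4, 6]]
edge_pct = edge_coverage(test_set)  # 100.0
du_pct = du_coverage(test_set)      # 50.0
```

Covering the remaining associations ("x", 2, 6) and ("x", 3, 5) requires two more paths that pair each definition with the opposite use, which is the extra demand all-uses makes beyond all-edges.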
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 774-787 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Software Engineering |
| Volume | 19 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 1993 |
Keywords
- Branch testing
- data flow testing
- error detection
- experiment
- program testing
- software testing
ASJC Scopus subject areas
- Software