Abstract
This paper reports on an empirical evaluation of the fault-detecting ability of two white-box software testing techniques: decision coverage (branch testing) and the all-uses data flow testing criterion. Each subject program was tested using a very large number of randomly generated test sets. For each test set, the extent to which it satisfied the given testing criterion was measured, and it was determined whether the test set detected a program fault. These data were used to explore the relationship between the coverage achieved by test sets and the likelihood that they will detect a fault. Previous experiments of this nature have used relatively small subject programs and/or programs with seeded faults. In contrast, the subjects used here were eight versions of an antenna configuration program written for the European Space Agency, each consisting of over 10,000 lines of C code. For each of the subject programs studied, the likelihood of detecting a fault increased sharply as very high coverage levels were reached. Thus, these data support the belief that these testing techniques can be more effective than random testing. However, the magnitudes of the increases were rather inconsistent, and it was difficult to achieve high coverage levels.
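As a rough illustration of the measurement procedure described in the abstract, the sketch below runs many randomly generated test sets against a toy program with a planted fault, records the decision (branch) coverage each set achieves, and tallies how often sets at each coverage level detect the fault. Everything here is an assumption for illustration only: the toy program, branch labels, and parameters are hypothetical and are not the paper's actual subjects or instrumentation.

```python
import random

def reference(x):
    """Correct version: absolute value."""
    return x if x >= 0 else -x

def faulty(x, covered):
    """Faulty version with its decisions instrumented via the `covered` set."""
    if x >= 0:
        covered.add("x>=0:true")
        return x
    covered.add("x>=0:false")
    if x == -7:                      # planted fault: wrong result for one input
        covered.add("x==-7:true")
        return x
    covered.add("x==-7:false")
    return -x

ALL_BRANCHES = {"x>=0:true", "x>=0:false", "x==-7:true", "x==-7:false"}

def run_test_set(size):
    """Run one random test set; return (decision coverage, fault detected?)."""
    covered, detected = set(), False
    for _ in range(size):
        x = random.randint(-10, 10)
        if faulty(x, covered) != reference(x):
            detected = True
    return len(covered) / len(ALL_BRANCHES), detected

# Aggregate many test sets to relate coverage achieved to detection likelihood.
by_coverage = {}
for _ in range(5000):
    cov, det = run_test_set(size=4)
    by_coverage.setdefault(round(cov, 2), []).append(det)

for cov in sorted(by_coverage):
    dets = by_coverage[cov]
    print(f"coverage {cov:.2f}: P(detect) ~ {sum(dets)/len(dets):.3f} (n={len(dets)})")
```

In this toy setting the fault-revealing branch is rarely exercised by random inputs, so full decision coverage is hard to reach, but test sets that do reach it always detect the fault, mirroring the qualitative pattern the paper reports.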
Original language | English (US)
---|---
Pages | 153-162
Number of pages | 10
State | Published - 1998
Event | Proceedings of the 1998 ACM SIGSOFT 6th International Symposium on the Foundations of Software Engineering, FSE-6, SIGSOFT-98, Lake Buena Vista, FL, USA, Nov 3 1998 → Nov 5 1998
ASJC Scopus subject areas
- Software