Dr. Dobb's Journal March 1998
One aspect of test planning and reporting is measuring test effectiveness. In "Measuring the Effectiveness and Efficiency of Testing" (Proceedings of Software Testing '96, June 1996), Dorothy Graham suggests that test effectiveness can be measured by dividing the number of faults found in a given test phase by the total number of faults found (including those found after the phase). For example, suppose integration testing finds 56 faults, and the total testing process finds 70 faults. By Graham's measure, integration testing was 80 percent effective. However, suppose the system is delivered after the 70 faults are found, and 70 additional faults are discovered during the first six months of operation. Integration testing is then responsible for finding 56 of 140 faults, for a test effectiveness of only 40 percent.
This approach to evaluating the impact of a particular testing phase or technique can be adjusted in several ways. For example, failures can be assigned a severity level, and test effectiveness can be calculated per level. In this way, integration testing might be 50 percent effective at finding critical faults, but 80 percent effective at finding minor faults. Alternatively, test effectiveness may be combined with root cause analysis, to measure how well testing finds faults as early as possible in development. For example, integration testing may find 80 percent of faults, but half of those faults might have been discovered earlier, such as during design review, because they are design problems.
Test efficiency is computed by dividing the number of faults found in testing by the cost of testing, to yield a value in faults per staff hour. Efficiency measures help us to understand the cost of finding faults, as well as the relative costs of finding them in different phases of the testing process.
-- S.L.P.