
Measuring Test Effectiveness: Three Views August 20, 2012

Posted by Tim Rodgers in Product design, Quality.

I’ve managed testing organizations supporting both hardware and software development teams, and I often had to explain to folks that it’s not possible to “test-in quality.” Quality is determined by the design and the realization of that design, whether through software coding or hardware manufacturing. Testing can measure how closely that realization matches the design intent and customer requirements, but testing can’t improve the quality of a poorly-conceived design (and neither can a flawless execution of that design).

So, how can you measure the effectiveness of a test program? Here are three ways that make sense to me:

1. Testing should verify all design requirements and all possible failure modes. This means there should be at least one test case associated with every functional, cosmetic, reliability, regulatory, and other requirement. In addition, each failure mode predicted by an FMEA (failure mode and effects analysis) review should have a corresponding test to determine the design margin.
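The traceability idea above can be sketched in a few lines: map each test case to the requirement or failure-mode IDs it verifies, then flag anything with no coverage at all. The IDs and descriptions here are made up for illustration; a real program would pull them from a requirements database or FMEA worksheet.

```python
# Hypothetical traceability check: find requirements and FMEA failure
# modes that have no associated test case. All data is illustrative.

requirements = {
    "REQ-001": "boot time under 5 seconds",      # functional
    "REQ-002": "enclosure scratch limits",       # cosmetic
    "FM-010": "connector fatigue failure",       # from the FMEA review
}

# Each test case lists the requirement/failure-mode IDs it verifies.
test_cases = {
    "TC-100": ["REQ-001"],
    "TC-200": ["FM-010"],
}

covered = {req_id for ids in test_cases.values() for req_id in ids}
uncovered = sorted(set(requirements) - covered)
print(uncovered)  # prints ['REQ-002'] -- a requirement with no test
```

The output is the gap list: any ID it prints is a requirement the test program cannot claim to verify.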

2. Testing should be designed to eliminate escapes, or at least reduce their likelihood to a level the business can accept. An escape is any defect found by the end-user. It may not be economically feasible to achieve zero defects, but every reported escape is an opportunity to improve test coverage. Is there an existing test case corresponding to this defect? Was the test performed and reported correctly? Does this test need to be run on a larger sample size to improve the confidence level?
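That last question about sample size and confidence level has a standard answer worth sketching. Under the common zero-failure (success-run) assumption, the number of units that must pass a test to claim, at confidence C, that the defect rate is below p is n ≥ ln(1 − C) / ln(1 − p). The numbers below are illustrative, not from the post.

```python
import math

def sample_size(p: float, confidence: float) -> int:
    """Units that must pass with zero failures to claim, at the given
    confidence, that the defect rate is below p (success-run formula)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# Example: to be 95% confident the defect rate is under 1%,
# 299 consecutive units must pass the test.
print(sample_size(p=0.01, confidence=0.95))  # prints 299
```

The takeaway matches the post's point: driving escape probability down to a business-acceptable level is a sample-size decision with a real cost attached, not just a pass/fail judgment.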

3. Testing should report defects that get fixed. Testing is buying information, and if that information has no value to the organization, then it’s not a good use of resources. When I managed software quality I tracked the “signal-to-noise ratio”: the percentage of all reported defects that were fixed by the development team. Defects that are not fixed are either potential escapes that should be discussed as a business risk, or “noise” that wastes money and distracts the development team.
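The signal-to-noise ratio described above is simple to compute from a defect tracker export. A minimal sketch, with made-up defect records and status names (a real tracker would have its own schema):

```python
# Hypothetical defect records; "status" values are illustrative.
defects = [
    {"id": 1, "status": "fixed"},
    {"id": 2, "status": "fixed"},
    {"id": 3, "status": "wont_fix"},  # noise, or an accepted business risk
    {"id": 4, "status": "fixed"},
]

fixed = sum(1 for d in defects if d["status"] == "fixed")
snr = fixed / len(defects)
print(f"signal-to-noise ratio: {snr:.0%}")  # prints 75%
```

The unfixed remainder is where the follow-up conversation happens: each one is either a risk the business is knowingly accepting or a test that is burning money reporting things nobody acts on.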

It may not be possible to test-in quality, but poorly-designed testing will surely frustrate your efforts to achieve the required level of quality.
