
Designing a Software Quality System May 30, 2012

Posted by Tim Rodgers in Product design, Quality.

Last time I wrote about considerations in designing a quality system for a hardware product: a system that's based on engineering analysis of the design and doesn't rely solely on end-of-line testing or audits of finished goods. This post examines the differences and the elements unique to a software quality system.

Hardware development relies on early prototyping, an iterative process that uses measurements and test results to verify that a product based on the design meets the performance specifications, and that the manufacturing processes can repeatedly and consistently deliver many products that meet those specs. Software delivery doesn’t rely on a manufacturing process (especially now that almost no one uses CDs anymore), so software development focuses on a single, final release of the code that meets the specs.

Agile or scrum development is very much in vogue right now (more on that later), but the older waterfall development model is still widely used. In the waterfall model an integrated, fully-functional build is delivered to the quality group, which uses system-level "black box" testing to check performance and reliability. Defects are prioritized and fixed, and another build is released for another round of testing that includes verifying the defect fixes. The iterations continue, gradually converging on an acceptable level of quality, or running out of time, whichever comes first. Black box testing is the equivalent of end-of-line testing in the hardware manufacturing world; it's better than nothing, but it comes late in the process and makes it hard to isolate the cause of any test failures.
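To make the contrast concrete, here's a minimal sketch (in Python) of what black box system testing looks like: the test drives a hypothetical finished build, myapp, entirely through its external interface, with no visibility into the internals. The executable name and flags are placeholders, not anything from a real project.

```python
import subprocess
import unittest

class BlackBoxSystemTest(unittest.TestCase):
    """Drive the integrated build from the outside, as a user would.
    './myapp' is a placeholder for the delivered executable."""

    def run_app(self, *args):
        # Invoke the finished build through its public interface only.
        return subprocess.run(
            ["./myapp", *args],
            capture_output=True, text=True, timeout=30
        )

    def test_valid_input_succeeds(self):
        result = self.run_app("--input", "sample.dat")
        self.assertEqual(result.returncode, 0)

    def test_bad_input_reports_error(self):
        # A failure here tells us *that* something broke, but not *where*;
        # that's the weakness of testing only at the system boundary.
        result = self.run_app("--input", "missing.dat")
        self.assertNotEqual(result.returncode, 0)

if __name__ == "__main__":
    unittest.main()
```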

As with hardware development, a better quality system includes early testing and verification of the building blocks of the software, usually called modules or subsystems. Assuming the modules have been architected and designed with well-defined interfaces, it's possible to test and validate these intermediate deliverables with white-box unit tests or test harnesses before they're integrated with the rest of the software. Many software teams still rely on the integrated build to indirectly test subsystems instead of taking the time to create specialized unit tests, but that time has to be balanced against the time required to debug system test failures just to figure out where the defect is.
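For illustration, here's a minimal white-box unit test sketch in Python. The apply_discount function stands in for a module with a well-defined interface; the names and logic are hypothetical. The point is that the module's behavior, including its boundary conditions, can be pinned down in isolation, long before an integrated build exists.

```python
import unittest

# Hypothetical module under test; in a real project this would be one
# of the subsystems with a well-defined, documented interface.
def apply_discount(price, percent):
    """Return price reduced by the given percentage (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_case(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_rejects_out_of_range(self):
        # Boundary behavior gets verified here, at the module level,
        # instead of being discovered later during system test.
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main()
```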

I’m a big fan of the agile software development model because (when practiced correctly) it allows you to verify the quality in stages before new functionality is added to the build, instead of waiting until the software is finished before starting testing. This means that if there are N stages planned for an agile project, you could theoretically deliver the software at stage N-1 when you inevitably run out of time. The software at that point would be missing some lower priority functionality, but it would have been fully tested and qualified along the way, thereby avoiding the last-minute scramble that often characterizes the waterfall method.

Regardless of the development model, testing of functional modules or subsystems is preferable to relying solely on system testing near the end of the program. An alternative to subsystem testing is to build and system test “prototypes” of software with partial functionality, ultimately leading to a functionally-complete (FC) checkpoint. Further development after the FC checkpoint can focus on robustness, defect fixing, and generally extending the basic functionality to include other use cases.

It’s a little harder to do an engineering analysis of software failure modes using FMEA or other processes, mainly because it’s harder to predict the impact of other software and operating systems. Hardware is tangible, with clear interfaces, and behaves fairly predictably according to well-established engineering principles. Software problems often arise from unexpected interactions with the environment. Nevertheless, stress testing should be performed to determine how the software recovers from error conditions and to assess possible timing and resource conflicts.
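A small sketch of what that might look like in Python, under stated assumptions: fetch_record is a hypothetical operation that competes for a shared resource and occasionally fails with a transient error. The stress test hammers it from many threads and checks that the software degrades gracefully instead of collapsing.

```python
import concurrent.futures
import random

class TransientError(Exception):
    """Simulated recoverable error, e.g. a timing or resource conflict."""

def fetch_record(key):
    # Hypothetical operation competing for a shared resource;
    # roughly 10% of calls fail with a transient error.
    if random.random() < 0.1:
        raise TransientError("simulated resource conflict")
    return {"key": key}

def test_recovers_under_load():
    failures = 0
    # Hammer the operation from many threads at once to surface
    # timing and resource conflicts that single-threaded tests miss.
    with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
        futures = [pool.submit(fetch_record, k) for k in range(1000)]
        for f in concurrent.futures.as_completed(futures, timeout=60):
            try:
                f.result()
            except TransientError:
                failures += 1  # expected now and then under load...
    # ...but the system should recover gracefully, not collapse.
    assert failures < 200, "error rate too high under load: %d" % failures

if __name__ == "__main__":
    test_recovers_under_load()
    print("stress test passed")
```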

Whether software or hardware, the ideal quality system emphasizes: (1) design for quality, (2) testing activities early in the lifecycle, and (3) qualification of sub-units (parts, sub-assemblies, software modules) before integration.


Comments»

1. Fred Schenkelberg - May 30, 2012

Hi Tim, saw this posted on LinkedIn. Didn't know you were blogging; looks like good information.

I agree that a solid quality system is critical, yet having a meaningful way to measure reliability is also important for most products. I'll need to scan over your writing to see if you've addressed this aspect.

cheers,

Fred

Tim Rodgers - May 30, 2012

Thanks Fred. I appreciate the feedback and I would enjoy reading your comments about the blog posts.

