
Natural Variation, Outliers, and Quality

November 2, 2013

Posted by Tim Rodgers in Product design, Quality, Supply chain.

When you work in quality, people want to tell you when bad things happen. A product has failed in the field. A customer is unhappy. The factory has received a lot of bad parts. You’ve got to figure out the scope of the problem, what to do about it, and how this could have possibly happened in the first place. Is this the beginning of a string of catastrophes derived from the same cause, or is this a one-time event? And, by the way, isn’t it “your job” to prevent anything terrible from happening?

People who haven’t been trained in quality may have a hard time understanding the concept of natural variation. Sometimes bad things happen, even when the underlying processes are well-characterized and generally under control. A six-sigma process does not guarantee a zero percent probability of failure, and of course you probably have very few processes that truly perform at a six-sigma level, especially when humans have the opportunity to intervene and influence the outcome. Every process is subject to variation, even at the sub-atomic level.
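To put a number on that point, here is a minimal sketch (in Python with scipy, my choice of tooling rather than anything from this post) of the defect probability a six-sigma process actually implies. The 1.5-sigma drift allowance is the standard Six Sigma convention, not something specific to any one process.

```python
from scipy.stats import norm

# Two-sided defect probability for a perfectly centered process whose
# spec limits sit 6 standard deviations from the mean.
centered = 2 * norm.sf(6.0)   # ~2 defects per billion opportunities

# The conventional "3.4 defects per million" figure assumes the process
# mean can drift 1.5 sigma, leaving only 4.5 sigma to the nearer limit.
shifted = norm.sf(4.5)        # ~3.4e-6

print(f"centered:             {centered:.2e}")
print(f"with 1.5-sigma drift: {shifted:.2e}  (~3.4 per million)")
```

Small as those numbers are, neither of them is zero, and at real production volumes even a few defects per million means failures will eventually show up in the field.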

So, how can you tell whether this bad thing is just an outlier, or an indicator of something more serious? And, how will you convince your colleagues that this is not the first sign of disaster? Or, maybe it is. How would you know?

You may have process capability studies and control charts showing that every process contributing to this result is stable and delivers outcomes within specification. That would allow you to show that this incident is a low-probability event: unlikely, but not impossible. However, I’m not sure that any organization can honestly say it has that level of understanding of all the processes that influence the quality of its ultimate product or service.
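For what that kind of argument looks like in practice, here is a rough sketch (Python with numpy and scipy; the measurements and spec limits are hypothetical, invented purely for illustration) of estimating a capability index and the defect rate it implies:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical measurements of one critical dimension (mm),
# against an invented spec of 10.0 +/- 0.1.
data = np.array([10.02, 9.98, 10.05, 9.95, 10.01, 9.99,
                 10.03, 9.97, 10.00, 10.04, 9.96, 10.02])
lsl, usl = 9.9, 10.1

mean, sigma = data.mean(), data.std(ddof=1)

# Cpk: distance from the mean to the nearer spec limit, in units of 3 sigma.
cpk = min(usl - mean, mean - lsl) / (3 * sigma)

# Expected fraction nonconforming, assuming the process is normal,
# stays centered where it is, and remains in statistical control.
p_defect = norm.sf(usl, mean, sigma) + norm.cdf(lsl, mean, sigma)

print(f"Cpk = {cpk:.2f}, expected defect rate = {p_defect:.2e}")
```

With a Cpk near 1, as in this made-up data, the expected defect rate is on the order of a couple per thousand, so a single field failure could be entirely consistent with a stable, in-control process rather than a sign that something has changed.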

In the absence of hard data to establish process capability, you’re left with judgment. Could this happen again? Was it a fluke, an accident, an oversight, something that happened because controls were temporarily ignored or overridden? Or are there underlying influences that would make a recurrence not just possible, but likely? These are the causes that are hard to address, because they force organizations to decide how important quality really is. Is the organization really prepared to do what is necessary to prevent any quality failure? What level of “escapes” is acceptable, and how much is the organization willing to spend to achieve that level? While it’s easy to find someone to blame for the bad thing, it’s harder to understand how it happened, and how to prevent it from happening again.
