Getting People to Care About Quality
December 4, 2013. Posted by Tim Rodgers in Management & leadership, Quality, strategy.
Tags: performance measures, factory quality, quality engineering, change management, test & inspection
Quality sounds like something that everyone will support on principle, unless you have a saboteur working in your midst. It’s probably safe to assume that no one is deliberately acting to produce a defective product or service. The problem is that everyone makes daily decisions that balance quality against other considerations, whether to save money, meet a committed date, or keep the factory running. We tell ourselves that quality is free, but even in highly evolved organizations it doesn’t happen without deliberate effort. The challenge for quality professionals is to help people understand how good quality contributes to the business, and thereby to provide a more useful basis for decision making.
Here’s a little not-so-secret secret: all decisions in for-profit businesses eventually come down to how to bring in more revenue while controlling expenses. If you want people to pay attention to quality, talk about money.
For better or worse, this is a lot easier after the cost has already been incurred. If you have to spend more money or time because of scrap or rework, or you have to repair or replace product at the customer, or you’re liable for warranty or other contractual post-sale costs, everyone will be interested to know how it happened and how it can be prevented in the future. After some investigation you may identify the cause or causes, and you can recommend actions to eliminate them. Of course those corrective actions will have a cost of their own, and you will have to determine if there’s a net gain.
All of that is based on the assumption that there’s a 100% probability of that bad thing happening again if you do not implement the corrective action, and a 0% probability if you do. If you want to get more analytical you can estimate those probabilities based on engineering analysis, historical trends, or just good old-fashioned judgment, and then apply a de-rating factor to the cost. This is where an FMEA is useful, along with early prototyping and testing to check those assumptions about probability and impact.
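To make that concrete, here is a minimal sketch of the expected-cost comparison, assuming purely hypothetical numbers for the failure cost, the probabilities with and without the fix, and the cost of the corrective action (none of these figures come from the discussion above):

```python
# Hypothetical expected-cost comparison: de-rate the cost of a failure by its
# estimated probability, then compare against the cost of the corrective action.

def expected_loss(probability_of_failure: float, cost_of_failure: float) -> float:
    """De-rate the cost of a quality failure by its estimated probability."""
    return probability_of_failure * cost_of_failure

cost_of_failure = 250_000          # scrap, rework, warranty, etc. if the bad thing recurs
p_without_fix = 0.30               # estimated probability with no corrective action
p_with_fix = 0.05                  # residual probability after the corrective action
cost_of_corrective_action = 40_000

risk_without = expected_loss(p_without_fix, cost_of_failure)   # 75,000
risk_with = expected_loss(p_with_fix, cost_of_failure)         # 12,500
net_gain = (risk_without - risk_with) - cost_of_corrective_action

print(f"Expected loss without the fix: {risk_without:,.0f}")
print(f"Expected loss with the fix:    {risk_with:,.0f}")
print(f"Net gain from the fix:         {net_gain:,.0f}")   # positive means the fix pays for itself
```

If the net gain is negative, the corrective action costs more than the risk it removes, which is exactly the trade-off the rest of this post is about.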
Here it’s important to note that there are indirect costs of poor quality that are harder to factor into this calculation. For example, even a single incident at a key customer could cause a significant decline in future revenue if it affects brand reputation. Low-probability yet high-severity events are also problematic.
Of course it’s generally harder to look ahead and assess the unknown probability and impact of a quality risk that has not yet been encountered. As long as the bad thing hasn’t happened yet, it’s easy to underestimate it. This is what causes organizations to reduce cost by using cheaper parts or removing design safeguards or eliminating quality checks. They’re saving real money today and implicitly accepting the uncertain risk (and cost) of a poor quality event in the future. Again, if you can say with 100% certainty that this bad thing will happen without specific actions being taken, then your choice is clear. Unfortunately, many choices are not that clear, or are never even recognized as choices.
Are you really willing to spend whatever it takes to prevent any quality problem? Of course not. Managing quality is managing risk, and looking for ways to assess and minimize that risk while under pressure to reduce cost now. It’s not very satisfying to say “I told you so.”
Quick Note: Theory of Constraints and Six Sigma
November 24, 2013. Posted by Tim Rodgers in Process engineering, Quality.
Tags: factory quality, performance measures, process, six-sigma
Last week I attended the monthly meeting of the Northern Colorado chapter of the American Society for Quality. The featured speaker was Dr. Russ Johnson, President of Improvement Quest, a local management consulting firm. Dr. Johnson’s talk “Creating a Culture of Harmony by Using the Theory of Constraints Concepts to Focus and Integrate Lean and Six Sigma” included several interesting insights about how to effectively integrate these strategies in a production environment.
Of course the key to successful implementation of the Theory of Constraints is identifying the bottleneck, or constraint, in the production process and then optimizing the rest of the system around the constraint (“exploit, subordinate, elevate”) in order to maximize overall throughput while controlling inventory (including work-in-progress, WIP) and operating expense. At the risk of oversimplifying, Six Sigma can be described as “reduce variability,” and the lean philosophy is essentially “eliminate waste.”
These strategies are not different ways of solving the same problem. They can and should be implemented as elements in an integrated improvement effort. The trick is understanding that not all processes are equally good targets for a six sigma or lean improvement plan. It depends where the process is in relation to the constraint.
Any yield improvement or waste elimination that occurs upstream from the bottleneck doesn’t improve throughput, because it only increases the input to a bottleneck whose capacity already limits the system. In fact, it can be detrimental to the operation as a whole if it increases WIP and the associated costs for material and operating expenses. The focus should be on downstream processes, where yield improvement or waste elimination effectively increases the capacity of the constraint. Scrap or rework that occurs after the constraint is especially damaging because it essentially requires another pass through the constraint.
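As a rough illustration (my own toy model, not from the talk), here is a three-step line with made-up capacities and yields; the only point is to show where a yield improvement moves the numbers that matter:

```python
# Toy model of a three-step line (upstream -> constraint -> downstream).
# All capacities and yields are hypothetical.

def weekly_results(upstream_yield, downstream_yield,
                   upstream_capacity=120, constraint_capacity=100,
                   downstream_capacity=150):
    good_from_upstream = upstream_capacity * upstream_yield
    into_constraint = min(good_from_upstream, constraint_capacity)
    wip_piling_up = max(good_from_upstream - constraint_capacity, 0)
    shipped = min(into_constraint, downstream_capacity) * downstream_yield
    return shipped, wip_piling_up

base = weekly_results(upstream_yield=0.90, downstream_yield=0.90)
better_upstream = weekly_results(upstream_yield=0.99, downstream_yield=0.90)
better_downstream = weekly_results(upstream_yield=0.90, downstream_yield=0.99)

print("baseline:            shipped=%.0f  extra WIP=%.0f" % base)
print("upstream improved:   shipped=%.0f  extra WIP=%.0f" % better_upstream)
print("downstream improved: shipped=%.0f  extra WIP=%.0f" % better_downstream)
```

In this sketch, raising upstream yield leaves shipments unchanged and only piles up WIP in front of the constraint, while the same improvement downstream of the constraint shows up directly in shipped units.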
The point is that you can’t assume that all improvements at the micro level are equally beneficial at the macro level. Yes, generally there’s value in reducing variability and eliminating waste, but when your resources are limited and you have to focus, consider the constraint and whether your improvements are really improving the metrics that matter.
Does Your Company Need a Quality Department?
November 13, 2013. Posted by Tim Rodgers in Management & leadership, Process engineering, Product design, Project management, Quality, Supply chain.
Tags: early stage companies, factory quality, organizational models, outsourcing, process, product development, quality engineering, six-sigma, supply chain
You already have a quality department; you just don’t realize it. Do you have suppliers or service providers? Then you have people managing supplier quality when you receive parts or services that don’t meet your specifications. Is your product manufactured? Whether you build it yourself or outsource to a contract manufacturer, you’ve got quality issues. Do your customers have problems with your product or service? Somebody on your team is managing your response. Poor quality is costing you money, whether through internal rework or post-sale costs. The question is whether you want to pull all this activity together into a separate, centralized organization.
Some organizations, particularly early stage companies, may feel they can’t afford a dedicated quality team. After all, quality is fundamentally a non-value-added function. It doesn’t contribute directly to the delivery of a product or service. However, we live in a world of variability, where every step in the delivery process can cause defects. You may be passionate about eliminating defects and saving money, but do you really know how? Quality professionals understand how to determine root cause, and they can investigate from an impartial perspective. They have expertise in sampling and statistics, and that enables them to distinguish between a one-time occurrence and a downward trend that requires focused resources.
Do you care about ISO 9001 certification? If you do, you need someone to develop and maintain a quality management system, monitor process conformance, and host the auditors. If you’re in a regulated industry, you need someone to understand and communicate process and documentation requirements throughout your organization. Other responsibilities that could be assigned to the quality team include environmental, health and safety (EHS), new employee training, equipment calibration, and new supplier qualification.
All of these tasks can theoretically be handled by people in other functional groups, but you have to ask yourself whether you’re getting the results your business requires. Organizational design derives from a logical division of labor. The sales team is separate from product (or service) fulfillment so that one group can focus on the customer and another can focus on meeting customer needs. Fulfillment may require separate teams for development (design) and delivery. As the business grows, other functions are typically created to handle tasks that require specialized skills, such as accounting and human resources.
Quality is another example of a specialized function, one that can help identify and eliminate waste and other costs that reduce profit and productivity. Maybe those costs are tolerable during periods of rapid growth, but at some point your market will mature, growth will slow, and you won’t be able to afford to waste money anywhere in your value stream. That’s when you need quality professionals, and a function that can coordinate all the little quality management activities that are already underway in your organization.
Are Your Suppliers Really Committed to Quality?
November 6, 2013. Posted by Tim Rodgers in Management & leadership, Process engineering, Quality, Supply chain.
Tags: factory quality, leadership, management, outsourcing, performance measures, process, quality engineering, six-sigma, supply chain, test & inspection, training
Suppliers always declare their commitment to the highest standards of quality as a core value, but many have trouble living up to that promise. I can’t tell you how many times I’ve visited suppliers who proudly display their framed ISO certificates in the lobby yet suffer from persistent quality problems that lead to higher cost and schedule delays. Here’s how you can tell if they’re really serious:
1. Do they have an ongoing program of quality improvement, or do they wait until you complain? Do they have an understanding of the sources of variability in their value stream, and can they explain what they’re doing to reduce variability without being asked to do so? Look for any testing and measurements that occur before outgoing inspection. Award extra credit if the supplier can show process capability studies and control charts (a minimal capability calculation is sketched after this list). Ask what they’re doing to analyze and reduce the internal cost of quality (scrap and rework).
2. Do they accept responsibility for misunderstandings regarding specifications and requirements? Or, do they make a guess at what you want, and later insist they just did what they were told? Quality means meeting or exceeding customer expectations, and a supplier who is truly committed to quality will ensure those expectations are clear before they start production.
3. Do you find defects when you inspect their first articles, or samples from their first shipment? If the supplier can’t get these right when there’s no schedule pressure, you should have serious concerns about their ability to ramp up to your production levels. By the way, if you’re not inspecting a small sample of first articles, you’ll have to accept at least half of the blame for any subsequent quality problems.
4. Has the supplier ever warned you of a potential quality problem discovered on their side, or do they just hope that you won’t notice? I realize this is a sign of a more mature relationship between supplier and customer, but a true commitment to quality means that the supplier understands their role in your value stream, and upholds your quality standards without being asked.
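For item 1 above, here is a minimal sketch of the kind of process capability calculation a serious supplier should be able to show. The measurements and specification limits are hypothetical, and a real study would first verify that the process is stable and approximately normal:

```python
# Minimal process capability sketch (see item 1 above). The measurements
# and specification limits are hypothetical placeholders.
import statistics

measurements = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.04, 10.00, 9.96]
lsl, usl = 9.90, 10.10   # lower / upper specification limits

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)   # sample standard deviation

cp = (usl - lsl) / (6 * sigma)                      # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * sigma)     # capability including centering

print(f"mean={mean:.4f}  sigma={sigma:.4f}  Cp={cp:.2f}  Cpk={cpk:.2f}")
```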
Ultimately, you will get the level of quality you deserve, depending on what suppliers you select and the messages you give them. You may be willing to trade quality for lower unit cost, shorter lead time, or assurance of supply. The real question is: What level of quality do you need? What level of poor quality can you tolerate?
Natural Variation, Outliers, and Quality
November 2, 2013. Posted by Tim Rodgers in Product design, Quality, Supply chain.
Tags: factory quality, performance measures, quality engineering, six-sigma, test & inspection
When you work in quality, people want to tell you when bad things happen. A product has failed in the field. A customer is unhappy. The factory has received a lot of bad parts. You’ve got to figure out the scope of the problem, what to do about it, and how this could have possibly happened in the first place. Is this the beginning of a string of catastrophes derived from the same cause, or is this a one-time event? And, by the way, isn’t it “your job” to prevent anything terrible from happening?
People who haven’t been trained in quality may have a hard time understanding the concept of natural variation. Sometimes bad things happen, even when the underlying processes are well-characterized and generally under control. A six-sigma process does not guarantee a zero percent probability of failure, and of course you probably have very few processes that truly perform at a six-sigma level, especially when humans have the opportunity to intervene and influence the outcome. Every process is subject to variation, even at the sub-atomic level.
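To put numbers on that, here is a quick calculation of defect rates at different sigma levels, using the conventional (and debatable) assumption of a 1.5-sigma long-term shift:

```python
# Defect rates at different sigma levels, using the normal distribution and
# the conventional 1.5-sigma long-term shift assumption.
from statistics import NormalDist

def dpmo(sigma_level: float, long_term_shift: float = 1.5) -> float:
    """Defects per million opportunities for a one-sided specification limit."""
    z = sigma_level - long_term_shift
    return (1 - NormalDist().cdf(z)) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level}-sigma process: ~{dpmo(level):,.1f} DPMO")
# The 6-sigma line works out to roughly 3.4 DPMO -- small, but not zero.
```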
So, how can you tell whether this bad thing is just an outlier, or an indicator of something more serious? And, how will you convince your colleagues that this is not the first sign of disaster? Or, maybe it is. How would you know?
You may have process capability studies and control charts showing that all of the processes that contribute to this result are stable and deliver outcomes that are within specifications. That would allow you to show that this incident is a low-probability event: unlikely, but not impossible. However, I’m not sure that any organization can honestly say it has that level of understanding of all the processes that influence the quality of its ultimate product or service.
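If you do have that kind of data, a couple of simple control-chart style checks can help separate an isolated outlier from a shift. This is only a sketch with hypothetical measurements and limits; real rule sets (such as the Western Electric rules) are more elaborate:

```python
# Two simple checks on recent data: any point beyond 3 sigma from the
# historical center, and a long run of points on one side of the center line.
# Both the data and the baseline values here are hypothetical.

recent = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.3, 10.2, 10.4, 10.3, 10.5, 10.4]
center, sigma = 10.0, 0.15   # from the historical, in-control baseline

beyond_3_sigma = [x for x in recent if abs(x - center) > 3 * sigma]

run_length = longest_run = 0
for x in recent:
    run_length = run_length + 1 if x > center else 0
    longest_run = max(longest_run, run_length)

print("points beyond 3 sigma:", beyond_3_sigma)
print("longest run above the center line:", longest_run)
print("possible shift?", bool(beyond_3_sigma) or longest_run >= 8)
```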
In the absence of hard data to establish process capability, you’re left with judgment. Could this happen again? Was it a fluke occurrence, an accident, an oversight, something that happened because controls were temporarily ignored or overridden? Or, are there underlying influences that would make a reoccurrence not just possible, but likely? These are the causes that are hard to address, because they force organizations to decide how important quality really is. Is the organization really prepared to do what is necessary to prevent any quality failure? What level of “escapes” is acceptable, and how much is the organization willing to spend to achieve that level? While it’s easy to find someone to blame for the bad thing, it’s harder to understand how it happened, and how to prevent it from happening again.
The Weakest Link In Any Quality System
June 29, 2013. Posted by Tim Rodgers in Management & leadership, Quality.
Tags: China, factory quality, management, outsourcing, quality engineering, test & inspection, training
It’s time to start writing again. I officially re-joined the workforce in mid-March and I’ve been very busy with starting a new job and relocating to Colorado. While I’ve had a lot of time for reflection, there’s been little time for composition. Now I want to get back into a blogging rhythm, for my own benefit if for no other reason.
I’m managing a quality department again, and it’s another opportunity to establish a quality system of processes and metrics that can enable the business to “meet or exceed customer expectations” at a reasonable cost. In that role I’ve been spending a lot of time understanding how the company measures quality, both externally (field failures, service calls), and internally (factory yield, defective parts received). These measures must provide an accurate picture of the current state of quality because any set of improvement plans will be based on the perceived status and trends over time. If the measures are wrong we will dedicate ourselves to fixing the wrong things, which means either lower priority targets (missed opportunity), or trying to fix something that isn’t broken (process tampering).
Unfortunately almost all of the current quality measures are compromised because of a fundamental weakness: the human element. We’re counting on individual service reps, factory assemblers, inspectors, and others to log their findings correctly, or even log their findings at all. I’m not sure which is more damaging to our quality planning: no data or invalid data. Either way we’re in danger of running off in the wrong direction and possibly wasting a lot of time and energy on the wrong quality improvement projects.
So, how can we get our people to provide better input? Sure, we can impose harsh directives from above to compel people to follow the process for logging defects (not our management style). Or, we could offer incentives to reward those who find the most defects (a disaster; I’ve seen this fail spectacularly). I think the answer is to educate our teams about the cost of quality, and how all these external and internal failures add up to real money spent, and potentially saved by focusing our improvement efforts on the right targets. Some percentage of that money saved could be directed back to the teams that helped identify the improvement opportunities.
My plan is to hit the road, going out to our service reps and our design centers and our factories and our suppliers to help them understand the importance of complete and accurate reporting of quality. I need everyone’s commitment, or else we will continue to wander around in the dark.
The Power of Three (Defect Categories)
December 5, 2012. Posted by Tim Rodgers in Project management, Quality.
Tags: factory quality, project management, quality engineering, software quality, test & inspection
During the last few weeks of software projects at HP we would have cross-functional discussions about open defects, essentially to decide whether or not to fix them. We considered the probability or frequency of the defect’s occurrence and the severity or impact of the defect to the end-user, then assigned the defect to a category that was supposed to ensure that the remaining time on the project was spent addressing the most important outstanding issues.
I don’t remember exactly how many different categories we had in those days (at least five), but for some reason we spent hours struggling with the “correct” classification for each defect. I do recall a lot of hair-splitting over the distinction between “critical,” “very high,” and “high” which seemed very important at the time. Regardless, everyone wanted their favorites in a high-priority category to make it more likely that they would get fixed.
I think we could have saved a lot of time if we had used just three categories: (1) must fix, (2) might fix, and (3) won’t fix. That covers it. Nothing else is necessary. The first group are those defects you must fix before release. The second group are the ones that you’ll address if you have time after you run out of the must-fix defects. The third group are the ones you aren’t going to fix regardless of how much time you have. To be fair, the might-fix defects should be ranked in some order of priority, but at that point you’ve already addressed the must-fix defects and it won’t matter much which defects you choose.
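As a sketch of how the three buckets might be codified, here is a hypothetical triage function; the severity and probability scales and the thresholds are placeholders, not the scheme we actually used at HP:

```python
# Hypothetical three-bucket defect triage. Scales and thresholds are placeholders.
from enum import Enum

class Disposition(Enum):
    MUST_FIX = "must fix before release"
    MIGHT_FIX = "fix if time remains, ranked by priority"
    WONT_FIX = "will not fix in this release"

def triage(severity: int, probability: int) -> Disposition:
    """severity and probability are rated 1 (low) to 5 (high)."""
    risk = severity * probability
    if severity == 5 or risk >= 15:
        return Disposition.MUST_FIX
    if risk >= 6:
        return Disposition.MIGHT_FIX
    return Disposition.WONT_FIX

print(triage(severity=5, probability=2))   # Disposition.MUST_FIX
print(triage(severity=3, probability=3))   # Disposition.MIGHT_FIX
print(triage(severity=2, probability=2))   # Disposition.WONT_FIX
```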
I’m not a psychologist, but I think there’s a big difference between trying to classify things into three categories vs. trying to classify things into more than three categories. I think the human brain gets a little overwhelmed by too many choices. Two might seem better than three because it forces a binary selection, but I think it’s a good idea to compromise and allow for a “maybe” category rather than endure endless indecision.
Calling Attention to Internal Quality Costs
December 3, 2012. Posted by Tim Rodgers in Process engineering, Quality.
Tags: factory quality, quality engineering, six-sigma, test & inspection
Electronics manufacturing services and contract manufacturers typically operate with very small profit margins, and Foxconn was no exception when I managed a factory quality team there in the late 00s. I assumed this would be a receptive environment for an initiative targeting cost-of-quality (COQ): measuring the costs of prevention processes, appraisal processes (test and inspection), and internal failures (rework and repair); and then working to reduce those costs. External failures that lead to product returns and other warranty costs get a lot of attention because they’re visible to customers and end-users and can ultimately impact loyalty and future business.
Initially it didn’t seem that anyone noticed or cared much about the cost of the internal processes required to minimize those external failures. Some people understood that testing and inspection are non-value-added activities and targets for elimination in any lean manufacturing environment; nevertheless, they were assumed to be necessary regardless of how much attention was given to defect prevention. Of course any variability in a manufacturing process leads to higher cost (see Taguchi loss functions), but it was even harder to get attention for that in this environment. It was challenging enough to get the support to collect the basic cost data and assemble a Pareto diagram.
What I finally realized was that cost savings weren’t appreciated unless they were presented in discrete, quantized bundles. Dollar (or RMB) savings claimed on a spreadsheet don’t have the same visible or practical impact as headcount reduced or processes eliminated. For example, I started working with the industrial engineers to figure out what improvement in our internal end-of-line yield would enable us to eliminate exactly one repair station for each production line. When audit inspections and reliability testing of production units consistently exceeded required quality levels, I argued for reduced sampling rates that allowed us to re-allocate the headcount to value-added activities.
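A back-of-the-envelope version of that repair-station calculation might look like the following; every number here is a hypothetical placeholder rather than actual factory data:

```python
# Hypothetical repair-station calculation: how much end-of-line yield must
# improve before one repair station can be eliminated.
import math

line_output_per_shift = 2_000          # units produced per shift
current_yield = 0.96                   # end-of-line first-pass yield
repairs_per_station_per_shift = 40     # units one repair station can rework per shift

defects = line_output_per_shift * (1 - current_yield)                  # 80 units/shift
stations_needed = math.ceil(defects / repairs_per_station_per_shift)   # 2 stations

# Yield required so that one fewer station can absorb the remaining defects:
target_defects = (stations_needed - 1) * repairs_per_station_per_shift
target_yield = 1 - target_defects / line_output_per_shift

print(f"current: {defects:.0f} defects/shift -> {stations_needed} repair stations")
print(f"to eliminate one station, yield must reach {target_yield:.1%}")
```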
The performance measures for the factory included units per headcount and units per production hour. I finally got traction with my COQ program when I targeted changes to directly impact those measures, reducing headcount and cycle time without compromising quality.
Managing Quality Without Data: Don’t Try This At Home
November 16, 2012. Posted by Tim Rodgers in Management & leadership, Quality.
Tags: factory quality, quality engineering, six-sigma, software quality
I know from experience how hard it is to measure, analyze, improve, and control quality in a high-stakes or high-volume production environment. There’s never enough data, and there’s constant pressure to draw conclusions and make decisions. What can we do to fix this defect? Did the process change work? When can we get the line running again? Sample sizes are usually too small to determine whether differences are statistically significant. One data point on a run chart will be taken as evidence that things are trending in the right direction. One defective product chosen at random “proves” that the whole batch is bad.
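To illustrate the sample-size problem, here is a sketch of a two-proportion z-test on hypothetical before-and-after yields. A jump from 27/30 to 29/30 good units looks like progress, but the test can’t distinguish it from noise:

```python
# Two-proportion z-test on hypothetical before/after samples.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two observed proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical samples: 27 of 30 good before the change, 29 of 30 good after.
print(f"p-value: {two_proportion_p_value(27, 30, 29, 30):.2f}")   # roughly 0.30, not significant
```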
It’s worse when there’s no data at all, by which I mean the lack of a reliable source of objective, unfiltered, and unbiased data. You can’t run an effective quality system without links to the factory’s internal information systems: ERP, MRP, and other shop floor control and measurement systems. Data that’s automatically collected and reported in the course of normal production is less likely to be manipulated to make the situation look better (or worse) than it is. Data that’s manually collected is acceptable but less trustworthy, unless repeatability and reproducibility have already been established for the operators.
There are too many people and constituencies with a vested interest when it comes to quality, people who want to believe that quality is always good, or at least good enough. It’s not fun for anyone when the production line is down or field failures are up. It’s easy to discount or ignore data as outliers when they don’t fit the desired story. There are also real situations where data may be suppressed or even fabricated. I think you’re better off with no data than with data that’s been compromised, but of course the better solution is to improve data integrity before making any changes to improve quality.
Common Fallacies That Cause People to Doubt Statistics
October 23, 2012. Posted by Tim Rodgers in baseball, Process engineering, Quality.
Tags: baseball, factory quality, performance measures, quality engineering, six-sigma, test & inspection
Lately I’ve been reviewing some old textbooks and work files as part of my preparation for the ASQ Six Sigma Black Belt certification exam in March. It’s interesting, and I think often amusing, to contrast the principles of inferential statistics and probability theory with the ways they’re used in the real world. I think people tend to underestimate how easy it is to misuse statistical methods, or at least apply them incorrectly, and this can lead them to undervalue all statistical analysis, regardless of whether or not the methods were applied correctly.
I see this in baseball and political commentary all the time, particularly in the way people selectively or incorrectly use numbers to defend their point of view, while at the same time mocking those people who use numbers (correctly or not) to defend a different point of view.
Here are a few of the more common mistakes that I’ve seen in the workplace:
1. Conclusions based on small sample sizes or selective sampling. Yes, we often have to make do with less data than we’d like, but that makes it especially important to put confidence intervals around our conclusions and stay open-minded about the possibility of a completely different version of reality (the sketch after this list puts numbers on how wide those intervals can be). Also, a sample is supposed to represent the larger population, and we have to beware of sampling bias that excludes relevant members of the population and skews any findings based on that sample. Otherwise the findings are meaningful only for a subset of the population.
2. Unknown or uncontrolled measurement variability. We often assume that our measurement processes are completely trustworthy without considering the possible effects of variability due to equipment or people. If the variance of the measurement process exceeds the variance of the underlying processes that we’re trying to measure, we can’t possibly know what’s really going on.
3. Confusing independent vs. dependent events. There is no such thing as “the law of averages.” If you flip a coin 10 times and it comes up heads every time, the probability of heads coming up on the 11th flip is still 50%. The results of those previous coin flips do not exert any influence whatsoever on future outcomes, assuming each coin flip is considered a single event. That being said, the event “eleven consecutive heads” is extremely unlikely (0.5^11, or about 1 in 2,048). If you take a large enough sample size, the sample statistics will approximate the population statistics (50% heads and 50% tails for an honest coin), which is sometimes simplistically referred to as “regression to the mean.”
4. Seeing a trend where none exists. This is usually the result of confirmation bias, where we start with a conclusion and look for data to support it, which sometimes leads to selection bias, where we exclude data that doesn’t fit the expected behavior. Often we’re so eager for signs of improvement that we accept as proof a single data point that’s in the right direction. This is why it’s important to apply hypothesis tests to determine whether the before and after samples represent statistically significant differences. It’s also why we should never fiddle with a process that varies randomly but operates within control limits.
5. Correlation does not imply causation. You may be able to draw a regression line through a scatter plot, but that doesn’t necessarily mean there’s a cause-and-effect relationship between the two variables. This is where we have to use engineering judgment or even common sense. Earlier this year the Atlanta Braves baseball team lost 16 consecutive games that were played on a Monday. No one has been able to explain how winning or losing a baseball game could possibly be caused by the day of the week. A related logical fallacy is post hoc, ergo propter hoc (after it, therefore because of it). Chronological sequence does not imply causation, either.
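As promised under item 1 above, here is a sketch of how wide a confidence interval around a small-sample defect rate really is. The counts are hypothetical, and the simple normal (Wald) approximation is used, which is itself rough at small sample sizes:

```python
# Confidence interval for a defect rate estimated from samples of various sizes.
# Counts are hypothetical; the Wald (normal approximation) interval is used.
from math import sqrt
from statistics import NormalDist

def wald_interval(defects: int, sample_size: int, confidence: float = 0.95):
    p = defects / sample_size
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    margin = z * sqrt(p * (1 - p) / sample_size)
    return max(p - margin, 0.0), min(p + margin, 1.0)

for n in (30, 300, 3000):
    low, high = wald_interval(defects=round(0.10 * n), sample_size=n)
    print(f"n={n:>5}: observed 10% defective, 95% CI roughly {low:.1%} to {high:.1%}")
```

The same observed 10% defect rate supports very different conclusions at n=30 than at n=3,000, which is why a conclusion without its interval invites exactly the doubt this post describes.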