
Check Out “Document Center” December 11, 2014

Posted by Tim Rodgers in Quality, Supply chain.

I don’t typically use this forum for recommendations, but here’s something I can support enthusiastically. My friends at Document Center manage and sell a comprehensive collection of industry and government standards from around the world. Customers who want to clearly express their requirements and quality expectations should be referencing standards in their communications with suppliers. Standards are developed through the cooperative efforts of experienced teams with deep understanding of their respective industries. While your specific product may have unique requirements, it’s important to use standards as a starting point rather than creating something from scratch. Your suppliers should already be familiar with them, and you should be as well.

If you’re looking for standards that are appropriate for your industry, or the most recent version of a standard that you’re currently using, go to Document Center. While you’re there, take a look at the guest blog I contributed to the site last month: Does Anyone In China Pay Attention to Standards?


Improving Quality in China March 27, 2014

Posted by Tim Rodgers in International management, Process engineering, Product design, Quality, Supply chain.

Many years ago people would complain about “cheap Japanese” products, but today few people would associate Japanese brands with poor quality. The turnaround is widely attributed to Deming, Taguchi, Juran, and other evangelists who taught not only the tools and processes, but also the long-term benefits that can be realized when a company adopts good practices and a culture of quality.

Today I hear people complaining about poor quality in Chinese-made parts and products, and there have been several widely-publicized incidents (see Aston-Martin and counterfeit parts). Many customers have decided to move their production and seek part suppliers in other locations, including “re-shoring” to North America, in part because they’ve concluded that any cost savings from cheaper labor are outweighed by the costs of poor quality. It’s hard to say whether this will have a negative impact on the worldwide consumer perception of Chinese brands such as Lenovo, Haier, and others.

Some people have tried to find cultural explanations, suggesting that individuals in the US, or Europe, or Japan are generally more likely to take pride in their workmanship than their Chinese counterparts, and therefore deliver better quality even if no one is watching. Others look for differences in education and training, and specifically point to the traditional Chinese emphasis on rote learning that discourages creativity and adaptation.

I worked in a factory in China for almost two years (see my other blog “Managing in China”), and I’ve used Chinese suppliers for over ten years. It’s dangerous and unwise to generalize in a country of over a billion people, but I think the problem has less to do with individual skill and more to do with priorities and expectations. Margins are typically very small at suppliers and contract manufacturers, and unless there are clear incentives or penalties for quality performance these suppliers will cut corners, substitute materials, and, yes, occasionally ship defective parts because it costs money to scrap or repair. The performance of an individual machinist or assembler is determined by the priorities set by their line supervisor, and the highest priority is usually meeting the production quota, not high quality.

That being said, there is a growing movement in China to improve quality as more companies realize the internal and external benefits. Internal: lower cost production, specifically when scrap and rework can be prevented. External: a differentiator when competing for business. Customers can help move this along by making it clear that quality is a requirement for any future business awards. Competition will lead to improved quality if customers insist on it.

I don’t believe this is a uniquely Chinese issue. Unless we start demanding better quality from our suppliers, we will surely be complaining about poor quality from Indonesia, or Vietnam, or any other alternative. Japanese brands improved their quality in the last century in part to compete more effectively with US and European brands. If we insist on better quality, Chinese firms will surely do the same.

New Slide Deck: Subcontracting Quality March 20, 2014

Posted by Tim Rodgers in Quality, Supply chain.

In March 2014 I presented a talk called “Subcontracting Quality” at my local chapter meeting of the American Society for Quality. Here’s a link to the file on SlideShare:

Subcontracting Quality: Extending Your Quality System to the Supply Chain from timwrodgers

The Weakest Link In Any Quality System June 29, 2013

Posted by Tim Rodgers in Management & leadership, Quality.

It’s time to start writing again. I officially re-joined the workforce in mid-March and I’ve been very busy with starting a new job and relocating to Colorado. While I’ve had a lot of time for reflection, there’s been little time for composition. Now I want to get back into a blogging rhythm, for my own benefit if for no other reason.

I’m managing a quality department again, and it’s another opportunity to establish a quality system of processes and metrics that can enable the business to “meet or exceed customer expectations” at a reasonable cost. In that role I’ve been spending a lot of time understanding how the company measures quality, both externally (field failures, service calls), and internally (factory yield, defective parts received). These measures must provide an accurate picture of the current state of quality because any set of improvement plans will be based on the perceived status and trends over time. If the measures are wrong we will dedicate ourselves to fixing the wrong things, which means either lower priority targets (missed opportunity), or trying to fix something that isn’t broken (process tampering).

Unfortunately almost all of the current quality measures are compromised because of a fundamental weakness: the human element. We’re counting on individual service reps, factory assemblers, inspectors, and others to log their findings correctly, or even log their findings at all. I’m not sure which is more damaging to our quality planning: no data or invalid data. Either way we’re in danger of running off in the wrong direction and possibly wasting a lot of time and energy on the wrong quality improvement projects.

So, how can we get our people to provide better input? Sure, we can impose harsh directives from above to compel people to follow the process for logging defects (not our management style). Or, we could offer incentives to reward those who find the most defects (a disaster, I’ve seen this fail spectacularly). I think the answer is to educate our teams about the cost of quality, and how all these external and internal failures add up to real money spent, and potentially saved by focusing our improvement efforts on the right targets. Some percentage of that money saved could be directed back to the teams that helped identify the improvement opportunities.
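To make the cost-of-quality argument concrete, here’s a minimal sketch of the kind of tally I have in mind. All of the categories and dollar figures below are invented for illustration:

```python
# Hypothetical cost-of-poor-quality tally; every figure is made up.
external_failures = {"field returns": 120_000, "service calls": 45_000}
internal_failures = {"scrap": 60_000, "rework": 30_000, "re-test": 8_000}

# The total is the money actually spent on failures -- and the money
# potentially saved by targeting the right improvement projects.
copq = sum(external_failures.values()) + sum(internal_failures.values())
print(f"Cost of poor quality: ${copq:,}")  # -> Cost of poor quality: $263,000
```

A tally like this only works if the underlying failure logs are complete and accurate, which is exactly the point about the human element above.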

My plan is to hit the road, going out to our service reps and our design centers and our factories and our suppliers to help them understand the importance of complete and accurate reporting of quality. I need everyone’s commitment, or else we will continue to wander around in the dark.

Why Keep Testing If You Never Find Defects? September 26, 2012

Posted by Tim Rodgers in Process engineering, Quality.

When I started working at the factory in China I inherited a quality system that included long checklists of visual inspections, almost all of which had been specified by our customer. I’ve never been a big fan of inspections, especially those that rely on subjective human judgment, but I’ll admit that in some cases they’re a quick (but costly) way of detecting defects.

Anyway, one set of inspections seemed especially puzzling. At the end of the production line the customer required an audit of a sample of the finished goods, including the accessories, localized printed materials, and the final packaging before everything was loaded on pallets for the shipping container. Boxed units were taken off the line and opened, and everything inside that had a barcode was removed from the box and hand-scanned to verify that the box contained everything it was supposed to contain.

What made this a head-scratcher was that the end of the production line was ten feet away, where the finished goods, the accessories, and the localized printed materials were each individually barcode-scanned and then put in the box. If the operator tried to put the wrong thing in the box, something that wasn’t on the approved bill-of-materials, it would be detected by the scanner. I wasn’t really surprised to discover that the end-of-line audit never found any missing or wrong parts, and we had a discussion with our customer about the value of the audit.

This brings me to my question: if you never find any defects, is the test or inspection still effective or useful? Can’t we just get rid of it?

Let me pause here for a moment and emphasize (again) that you don’t achieve quality by testing or inspections, especially at the end of the production line. Nevertheless, I think we can agree that an audit program, strategically placed in the process flow, can be a useful tool for verifying that customer requirements are being met.

When you’re considering eliminating or changing a test, I think you have to start by asking: What customer requirement does this test correspond to? What defect is this test supposed to find? If the test never fails, that means either there are no defects (or, at least, no defects of that category), or the test isn’t capable of finding them, possibly because of bad test design or bad test execution. That’s why the capability of the test should be investigated and verified. By the way, that’s how we discovered the problem with the barcode-scan audit described above: bad design and bad execution.

Assuming it’s a good test, correctly implemented and capable of finding defects that are linked to customer requirements, there are several options if you’re still not finding defects:

1. You can raise the quality standards and tighten the spec limits that define failures. That may give you more failures and an opportunity to eliminate a root cause or reduce variability somewhere in the process. Reducing variability is always a good thing, but you have to consider the cost to do so, and whether this is really a high-value opportunity.

2. You can reduce the audit frequency. Maybe you really do have a design and a process that don’t generate defects, but it would still be a good idea to check on them from time to time, wouldn’t it?

3. You can eliminate the test altogether. This is a risky move because you’re voluntarily giving up an opportunity to verify that some customer requirements are being met. Before eliminating the test I’d make sure there’s some other way to verify those requirements.

Reducing the cost-of-poor-quality by shifting from appraisal activities to prevention activities is certainly a worthy goal, but we shouldn’t be too quick to stop testing just because we aren’t finding any defects.

One View of a Quality Transformation May 8, 2012

Posted by Tim Rodgers in Quality.

When there’s a quality crisis many people confuse a failure to execute a quality system with a failure due to the quality system itself. Establishing an effective quality system is a lot more than designing and implementing processes, training people in the proper use of these processes, and then putting monitoring and audits in place. The answer isn’t more testing and inspection and oversight; it requires a commitment at all levels of the organization and often a cultural transformation.

Here’s one example, from the transformation we undertook at the factory in China. I can’t say that we fully completed the journey while I was leading the team there, but this is where we were headed.

Minimum level of quality management → A quality system with a better chance of succeeding

Passive reporting of quality issues → Leadership to close quality issues
Waiting to react to customer escalations → Proactive quality improvements based on understanding of the customer
Corrective action to fix the problem → Understand and eliminate the root cause to prevent the problem from recurring
A quality issue is closed when a corrective action plan is implemented → The issue is closed only when improvements are measured as a result of the corrective action plan
End-of-line quality measures, testing and inspection after the product is finished → In-process measures as early indicators
Incoming quality control, sorting, testing, audits, inspection → Drive quality upstream (through design and supplier management)
Quality metrics required by the customer → Cost of quality (CoQ) managed as an internal business metric
Test plans developed and provided by the customer → Quality plans developed with the customer in mind, reducing the need for testing and inspection
Quality is the responsibility of the Quality department → Quality culture in the entire organization (it’s everyone’s job)

The Right Way to Resolve a Problem May 4, 2012

Posted by Tim Rodgers in Management & leadership, Process engineering, Quality.

Problem solving can be fun, in the way that solving crossword puzzles can be fun (or, at least interesting and challenging), but at work there’s a tendency to rush the process and make mistakes, and that’s not fun. This is especially true when there’s a highly visible cost associated with the unsolved problem. As time passes and the cost increases, educated and experienced people from all corners of the organization will offer solutions, and those responsible for fixing the problem will feel pressure to try something (and keep trying) until things improve, however temporarily.

I understand the value of experimentation and trial-and-error (fail early and often), and the danger of “analysis paralysis,” but problem resolution really should follow a rigorous process to minimize the number of false starts and false hopes. It’s great to get inputs from a variety of sources about possible root causes and solutions, but it’s worth spending just a little time to understand the situation. Was there a time or were there circumstances when this problem did not exist? Why is this happening now and not at some other time? Why is this happening here and not at some other place?

At the factory in China we called this the “is / is-not list.” If a production line was shut down because of a quality problem, the first thing we did was describe and thereby isolate the circumstances. For example, it was important to know whether the problem occurred on this product, but not that one; with this part, but not that one; with parts from this supplier, but not that supplier; in this building or assembly line or shift or lot of material, but not others. These and other clues helped us quickly identify probable root causes and assess any proposed solutions, all before testing the solutions on a small scale to verify the result.
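As a minimal sketch, an is / is-not list can be captured as a simple data structure; the dimensions and observations below are hypothetical examples, not from any real investigation:

```python
# Hypothetical is / is-not table for a line-down quality problem.
# "is_not" = None means the dimension doesn't discriminate (no contrast found).
is_is_not = {
    "product":  {"is": "Model A",       "is_not": "Model B"},
    "supplier": {"is": "Supplier X",    "is_not": "Supplier Y"},
    "shift":    {"is": "night shift",   "is_not": "day shift"},
    "building": {"is": "all buildings", "is_not": None},
}

def discriminating_dimensions(table):
    """Dimensions where the problem is isolated to a subset -- these are
    the clues that point toward probable root causes."""
    return [dim for dim, obs in table.items() if obs["is_not"] is not None]

print(discriminating_dimensions(is_is_not))  # -> ['product', 'supplier', 'shift']
```

Any proposed root cause should be able to explain every contrast in the table: why the night shift but not the day shift, why Supplier X but not Supplier Y.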

It’s also important to understand what improvement looks like and how that will be measured. At the factory it was easy to quickly determine whether there was an improvement from physical measurements or test results, but in other settings the feedback loop can be long and delayed, making it harder to verify that proposed solutions have the intended positive result. Regardless, it’s a lot easier to get agreement from stakeholders about the desired end state before the problem resolution process begins.

Trying to resolve a problem by jumping to conclusions without structured analysis and planning up-front invariably leads to ineffective solutions and wasted time.

Prototype Build Success and Ramp Readiness April 24, 2012

Posted by Tim Rodgers in Process engineering, Project management, Quality, Supply chain.

It’s generally a bad idea to jump directly from product concept to market introduction. Any new hardware product benefits from early prototype builds that enable the design team to evaluate whether cost, quality, reliability, and performance objectives can be achieved. However, there’s a big difference between being able to build one product that meets these specifications and being able to build hundreds, thousands, or millions of products over the expected lifespan of the design. Prototype units are necessary for design evaluation, but prototype builds must also be used to evaluate supply chain readiness to support the expected production ramp. Successful builds do not automatically lead to a successful transition to full production without explicit planning.

Each prototype build should have specific and measurable objectives, both for the design team and the production and supply chain management teams. When looking at the performance of the factory during the prototype build, there’s a tendency to focus on things like part availability and quality, tool capacity, operator training, and line readiness. Was everything in place to build the planned number of units at the scheduled time? Did the factory and the supply chain quickly and appropriately respond to unexpected events during the scheduled build?

This is all good, but somebody has to keep an eye on ramp readiness. It’s possible to have a series of “successful” builds without any effective preparation for the expected production volumes. Each build should bring the factory closer to a stable, capable, robust and repeatable process for manufacturing and delivery, and you can’t afford to wait until the ramp itself to discover what needs to be improved.

One of the overall measures I used at the factory in China was time-to-quality (TTQ). This is admittedly a lagging indicator that looks at how long it takes after the start-of-ramp to meet the required factory quality targets. The goal is TTQ = 0, meaning that the target is met at start-of-ramp, not several weeks later.
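As a sketch, TTQ could be computed from a series of per-week quality results after start-of-ramp; the target and yield figures below are invented for illustration:

```python
def time_to_quality(weekly_yields, target):
    """Weeks elapsed after start-of-ramp before the factory quality target
    is first met; 0 means the target was met at start-of-ramp.
    Returns None if the target was never met in the data."""
    for week, y in enumerate(weekly_yields):
        if y >= target:
            return week
    return None

# Hypothetical first-pass yields for the first six weeks of ramp:
yields = [0.91, 0.93, 0.95, 0.97, 0.98, 0.98]
print(time_to_quality(yields, target=0.97))        # -> 3 (a lagging indicator)
print(time_to_quality([0.98, 0.99], target=0.97))  # -> 0, the goal: TTQ = 0
```

The choice of quality measure (first-pass yield here) is an assumption; the same idea works for any factory quality target tracked weekly.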

Ramp readiness requires attention to the functionally critical parts and production processes, as determined from an analysis of the design (DFM, FMEA), and an understanding of the risks to assurance of supply for the entire value chain. Standard statistical techniques for process capability and process control provide objective assessments, and contingency planning can minimize the impact of supply interruptions. Attention needs to be given to design stability during the weeks leading up to ramp; a late design change that isn’t fully evaluated in the factory introduces risk. The details will vary, but ramp readiness requires deliberate actions and doesn’t happen by accident.

Always Be Unhappy (About Quality) April 4, 2012

Posted by Tim Rodgers in Process engineering, Quality, Supply chain.

A couple of years ago I presented a factory quality department review to the senior managers in our business group in Shenzhen. At the end of the presentation I was asked to give a summary analysis of the current status and near-term outlook for our quality metrics. I said I was dissatisfied and unhappy with the quality of our finished products and our quality systems generally, and that I would probably never be satisfied and I would always be unhappy. I saw a lot of confused expressions from the audience. I wasn’t sure anyone understood what I was trying to say, so I added: “Being unhappy about quality is part of my job description.”

It’s not enough to meet some assigned quality goal, celebrate, and then sit back and wait until the next fire breaks out. You shouldn’t be satisfied or comfortable if the percent of products conforming to specifications exceeds any target that’s less than 100%. Any part or product that fails to meet specifications requires some kind of extra handling and disposition (customer returns, scrap, or rework/repair and re-test). Regardless, it reduces overall productivity and adds avoidable costs.

Even if 100% of the parts or products meet specifications, there’s value in reducing variability. Genichi Taguchi’s loss functions and cost-of-use model illustrate the benefit to downstream customers of that part or product. In the factory in China we often encountered tolerance stacks that exceeded the top-level assembly design margin. Each of the parts was in spec, but the cumulative effect of part variability led to a significant number of product failures.
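The tolerance-stack problem can be illustrated with a quick calculation. The part tolerances and assembly margin below are hypothetical, and the loss coefficient k in the Taguchi sketch is an assumed cost constant:

```python
import math

# Hypothetical part tolerances (± mm) stacking into one assembly dimension.
part_tolerances = [0.10, 0.15, 0.08, 0.12]
assembly_margin = 0.35  # hypothetical design margin at the assembly (± mm)

worst_case = sum(part_tolerances)                    # every part at its limit
rss = math.sqrt(sum(t**2 for t in part_tolerances))  # statistical (RSS) stack

print(f"worst case: ±{worst_case:.2f} mm")  # ±0.45 mm -- exceeds the margin,
print(f"RSS:        ±{rss:.2f} mm")         # ±0.23 mm -- so in-spec parts can
                                            # still produce failing assemblies

# Taguchi's quadratic loss: cost grows with any deviation from the target m,
# even inside the spec limits, which is the argument for reducing variability.
def taguchi_loss(y, m, k):
    return k * (y - m) ** 2
```

The RSS figure assumes independent, centered part variation; the worst-case figure shows why reducing variability pays off even when every part passes inspection.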

Even if 100% of the parts or products meet specifications, you have to be concerned about whether the factory processes are operating within statistical control limits; otherwise you cannot predict the future performance of the processes, and you cannot be confident that you can continue building good parts or products. You still can’t be satisfied when processes are in control. A process in control is in an unnatural state and will surely drift out of control at some point due to special causes (or entropy), requiring vigilant monitoring.
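Here’s a simplified sketch of the control-limit idea, using the sample standard deviation rather than the moving-range estimate a proper individuals chart would use; the measurements are invented:

```python
import statistics

# Hypothetical individual measurements from a factory process (mm):
samples = [10.02, 9.98, 10.01, 10.03, 9.97, 10.00, 10.04, 9.99, 10.01, 9.96]

mean = statistics.mean(samples)
sigma = statistics.stdev(samples)  # simplification: sample stdev, not R-bar/d2
ucl = mean + 3 * sigma             # upper control limit
lcl = mean - 3 * sigma             # lower control limit

# Points outside the limits signal special causes that need investigation.
out_of_control = [x for x in samples if not lcl <= x <= ucl]
print(f"UCL={ucl:.3f}, LCL={lcl:.3f}, out-of-control points: {out_of_control}")
```

Being inside the limits is only the baseline: it makes the process predictable, and the monitoring has to continue precisely because of the drift described above.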

Continuous improvement means there will always be something else to work on. At some point there may be a discussion about diminishing returns and opportunity costs for further improvement, but that discussion should lead to a re-assignment of resources to the next item on the Pareto chart.

I admit I was being a bit dramatic about “always being unhappy” to emphasize my point, and you should certainly celebrate progress and intermediate goals, but you can’t be complacent about quality.

Confusing Action for Progress March 8, 2012

Posted by Tim Rodgers in Management & leadership, Quality.

This used to happen at least once a week at the factory in China: an unusual number of test failures or a negative trend in a quality metric would trigger a line shutdown. Ten to twenty engineers and managers would typically converge on the scene and work around the clock, desperately searching for a quick fix or adjustment. There would be enormous pressure to get the line up and running again in order to avoid a production shortfall, so the team tended to latch onto the first reasonable-sounding explanation. After sorting parts or doing a small process tweak there would be a short test run of units to verify that there was some improvement in quality, then full production would be turned back on again. Often the crisis required multiple “guess-fix-test” cycles before the team stumbled upon a real solution, or resigned themselves to a sub-optimal level of quality due to exhaustion. Engineering judgment, gut feel, and sometimes irrational management directives replaced any rigorous investigation and problem resolution processes that would have determined the actual root cause to a high confidence level.

I understand that “paralysis by analysis” is a bad thing. I’ve worked with people who can’t make a decision because they seem to never have enough data. There’s value in trial-and-error and rapid prototyping to test new ideas. But, isn’t it worth a little extra time to understand what’s changed, and then to brainstorm alternatives instead of racing off to implement the first proposed fix? Isn’t it worth the time to generalize the root cause and the solution to prevent the same problem from reoccurring elsewhere? Managers and leaders can help ensure better long-term solutions by clearly communicating the need to balance urgency and engineering judgment with critical thinking and thoroughness, and understanding the difference between random, Brownian motion and true achievement.
