The Weakest Link In Any Quality System
June 29, 2013. Posted by Tim Rodgers in Management & leadership, Quality.
Tags: China, factory quality, management, outsourcing, quality engineering, test & inspection, training
It’s time to start writing again. I officially re-joined the workforce in mid-March and I’ve been very busy with starting a new job and relocating to Colorado. While I’ve had a lot of time for reflection, there’s been little time for composition. Now I want to get back into a blogging rhythm, for my own benefit if for no other reason.
I’m managing a quality department again, and it’s another opportunity to establish a quality system of processes and metrics that can enable the business to “meet or exceed customer expectations” at a reasonable cost. In that role I’ve been spending a lot of time understanding how the company measures quality, both externally (field failures, service calls), and internally (factory yield, defective parts received). These measures must provide an accurate picture of the current state of quality because any set of improvement plans will be based on the perceived status and trends over time. If the measures are wrong we will dedicate ourselves to fixing the wrong things, which means either lower priority targets (missed opportunity), or trying to fix something that isn’t broken (process tampering).
Unfortunately almost all of the current quality measures are compromised because of a fundamental weakness: the human element. We’re counting on individual service reps, factory assemblers, inspectors, and others to log their findings correctly, or even log their findings at all. I’m not sure which is more damaging to our quality planning: no data or invalid data. Either way we’re in danger of running off in the wrong direction and possibly wasting a lot of time and energy on the wrong quality improvement projects.
So, how can we get our people to provide better input? Sure, we could impose harsh directives from above to compel people to follow the process for logging defects (not our management style). Or, we could offer incentives to reward those who find the most defects (a disaster; I've seen this fail spectacularly). I think the answer is to educate our teams about the cost of quality: how all these external and internal failures add up to real money spent, and potentially saved by focusing our improvement efforts on the right targets. Some percentage of that money saved could be directed back to the teams that helped identify the improvement opportunities.
My plan is to hit the road, going out to our service reps and our design centers and our factories and our suppliers to help them understand the importance of complete and accurate reporting of quality. I need everyone’s commitment, or else we will continue to wander around in the dark.
Why Keep Testing If You Never Find Defects?
September 26, 2012. Posted by Tim Rodgers in Process engineering, Quality.
Tags: China, factory quality, six-sigma, test & inspection
When I started working at the factory in China I inherited a quality system that included long checklists of visual inspections, almost all of which had been specified by our customer. I’ve never been a big fan of inspections, especially those that rely on subjective human judgment, but I’ll admit that in some cases they’re a quick (but costly) way of detecting defects.
Anyway, one set of inspections seemed especially puzzling. At the end of the production line the customer required an audit of a sample of the finished goods, including the accessories, localized printed materials, and the final packaging before everything was loaded on pallets for the shipping container. Boxed units were taken off the line and opened, and everything inside that had a barcode was removed from the box and hand-scanned to verify that the box contained everything it was supposed to contain.
What made this a head-scratcher was that the end of the production line was ten feet away, where the finished goods, the accessories, and the localized printed materials were each individually barcode-scanned and then put in the box. If the operator tried to put the wrong thing in the box, something that wasn’t on the approved bill-of-materials, it would be detected by the scanner. I wasn’t really surprised to discover that the end-of-line audit never found any missing or wrong parts, and we had a discussion with our customer about the value of the audit.
This brings me to my question: if a test or inspection routine never finds any defects, is it still effective or useful? Can't we just get rid of it?
Let me pause here for a moment and emphasize (again) that you don’t achieve quality by testing or inspections, especially at the end of the production line. Nevertheless, I think we can agree that an audit program, strategically placed in the process flow, can be a useful tool for verifying that customer requirements are being met.
When you're considering eliminating or changing a test, I think you have to start by asking: What customer requirement does this test correspond to? What defect is this test supposed to find? If the test never fails, either there are no defects (or at least no defects of that category), or the test isn't capable of finding them, possibly because of bad test design or bad test execution. That's why the capability of the test should be investigated and verified. By the way, that's how we discovered the problem with the barcode-scan audit described above: bad design and bad execution.
Assuming it’s a good test, correctly implemented and capable of finding defects that are linked to customer requirements, there are several options if you’re still not finding defects:
1. You can raise the quality standards and tighten the spec limits that define failures. That may give you more failures and an opportunity to eliminate a root cause or reduce variability somewhere in the process. Reducing variability is always a good thing, but you have to consider the cost to do so, and whether this is really a high-value opportunity.
2. You can reduce the audit frequency. Maybe you really do have a design and a process that don't generate defects, but it would still be a good idea to check on them from time to time.
3. You can eliminate the test altogether. This is a risky move because you're voluntarily giving up an opportunity to verify that a customer requirement is being met. Before eliminating the test I'd make sure there's some other way to verify that requirement.
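For options 2 and 3, there's a standard statistical back-of-the-envelope that quantifies what a string of clean audits actually tells you. This sketch uses the textbook "rule of three" (my illustration, not from the original post): zero defects in n independent samples still leaves an upper confidence bound of roughly 3/n on the true defect rate.

```python
# "Rule of three" sketch (standard statistics; illustrative, not from the
# post). If n independently sampled units pass with zero defects, solve
# (1 - p)^n = 1 - confidence for the upper bound on the defect rate p.
def zero_defect_upper_bound(n: int, confidence: float = 0.95) -> float:
    """Exact binomial upper confidence bound on p given 0 failures in n trials."""
    return 1 - (1 - confidence) ** (1 / n)

print(f"{zero_defect_upper_bound(100):.3%}")   # ~3%: a defect rate this high could still look "clean"
print(f"{zero_defect_upper_bound(3000):.4%}")  # ~0.1%: more samples buy a tighter bound
```

In other words, a test that "never fails" may simply be under-sampled; the bound shrinks only as fast as 1/n, which is worth knowing before reducing audit frequency or eliminating the test.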
Reducing the cost-of-poor-quality by shifting from appraisal activities to prevention activities is certainly a worthy goal, but we shouldn’t be too quick to stop testing just because we aren’t finding any defects.
One View of a Quality Transformation
May 8, 2012. Posted by Tim Rodgers in Quality.
Tags: China, factory quality, leadership, quality engineering, test & inspection
When there’s a quality crisis many people confuse a failure to execute a quality system with a failure due to the quality system itself. Establishing an effective quality system is a lot more than designing and implementing processes, training people in the proper use of these processes, and then putting monitoring and audits in place. The answer isn’t more testing and inspection and oversight; it requires a commitment at all levels of the organization and often a cultural transformation.
Here’s one example, from the transformation we undertook at the factory in China. I can’t say that we fully completed the journey while I was leading the team there, but this is where we were headed.
| Minimum level of quality management | A quality system with a better chance of succeeding |
| --- | --- |
| Passive reporting of quality issues | Leadership to close quality issues |
| Waiting to react to customer escalations | Proactive quality improvements based on understanding of the customer |
| Corrective action to fix the problem | Understand and eliminate root cause to prevent the problem from re-occurring |
| A quality issue is closed when a corrective action plan is implemented | The issue is closed only when improvements are measured as a result of the corrective action plan |
| End-of-line quality measures, testing and inspection after the product is finished | In-process measures as early indicators |
| Incoming quality control, sorting, testing, audits, inspection | Drive quality upstream (through design and supplier management) |
| Quality metrics required by the customer | Cost of quality (CoQ) managed as an internal business metric |
| Test plans developed and provided by the customer | Quality plans developed with the customer in mind, reducing the need for testing and inspection |
| Quality is the responsibility of the Quality department | Quality culture in the entire organization (it's everyone's job) |
The Right Way to Resolve a Problem
May 4, 2012. Posted by Tim Rodgers in Management & leadership, Process engineering, Quality.
Tags: China, factory quality, leadership, problem resolution, quality engineering
Problem solving can be fun, in the way that solving crossword puzzles can be fun (or, at least interesting and challenging), but at work there’s a tendency to rush the process and make mistakes, and that’s not fun. This is especially true when there’s a highly visible cost associated with the unsolved problem. As time passes and the cost increases, educated and experienced people from all corners of the organization will offer solutions, and those responsible for fixing the problem will feel pressure to try something (and keep trying) until things improve, however temporarily.
I understand the value of experimentation and trial-and-error (fail early and often), and the danger of “analysis paralysis,” but problem resolution really should follow a rigorous process to minimize the number of false starts and false hopes. It’s great to get inputs from a variety of sources about possible root causes and solutions, but it’s worth spending just a little time to understand the situation. Was there a time or were there circumstances when this problem did not exist? Why is this happening now and not at some other time? Why is this happening here and not at some other place?
At the factory in China we called this the "is / is-not list." If a production line was shut down because of a quality problem, the first thing we did was describe and thereby isolate the circumstances. For example, it was important to know whether the problem occurred on this product, but not that one; with this part, but not that one; with parts from this supplier, but not that supplier; in this building or assembly line or shift or lot of material, but not others. These and other clues helped us quickly identify probable root causes and assess any proposed solutions, all before testing the solutions on a small scale to verify the result.
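The is/is-not screen can be sketched in code. This is a hypothetical illustration (the dimension names and the `consistent()` helper are mine, not the factory's actual tooling): a candidate root cause survives only if it covers every circumstance where the problem IS seen and none where it IS NOT.

```python
# Hypothetical sketch of an "is / is-not" screen (illustrative names; not
# actual factory tooling). Each dimension records where the problem IS
# observed and where it IS NOT.
observations = {
    "product":  {"is": {"model-A"},    "is_not": {"model-B"}},
    "supplier": {"is": {"supplier-X"}, "is_not": {"supplier-Y"}},
    "shift":    {"is": {"night"},      "is_not": {"day"}},
}

def consistent(candidate: dict) -> bool:
    """A candidate cause is scoped per dimension; it must cover every 'is'
    case and must not predict failures in any 'is-not' case."""
    for dim, obs in observations.items():
        # An unscoped dimension means the cause applies everywhere.
        scope = candidate.get(dim, obs["is"] | obs["is_not"])
        if not obs["is"] <= scope:
            return False   # fails to explain an observed failure
        if scope & obs["is_not"]:
            return False   # predicts failures where none occurred
    return True

# A cause scoped to supplier-X parts in model-A on the night shift fits:
print(consistent({"product": {"model-A"}, "supplier": {"supplier-X"}, "shift": {"night"}}))  # True
# "Bad supplier-X parts" alone would also predict model-B failures:
print(consistent({"supplier": {"supplier-X"}}))  # False
```

The value of the exercise is exactly this filtering: each is/is-not observation eliminates candidate causes before anyone spends time testing fixes.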
It’s also important to understand what improvement looks like and how that will be measured. At the factory it was easy to quickly determine whether there was an improvement from physical measurements or test results, but in other settings the feedback loop can be long and delayed, making it harder to verify that proposed solutions have the intended positive result. Regardless, it’s a lot easier to get agreement from stakeholders about the desired end state before the problem resolution process begins.
Trying to resolve a problem by jumping to conclusions without structured analysis and planning up-front invariably leads to ineffective solutions and wasted time.
Prototype Build Success and Ramp Readiness
April 24, 2012. Posted by Tim Rodgers in Process engineering, Project management, Quality, Supply chain.
Tags: China, factory quality, leadership, product development, project management, quality engineering, test & inspection
It's generally a bad idea to jump directly from product concept to market introduction. Any new hardware product benefits from early prototype builds that enable the design team to evaluate whether cost, quality, reliability, and performance objectives can be achieved. However, there's a big difference between being able to build one product that meets these specifications and being able to build hundreds of thousands or millions of products over the expected lifespan of the design. Prototype units are necessary for design evaluation, but prototype builds must also be used to evaluate supply chain readiness to support the expected production ramp. Successful builds do not automatically lead to a successful transition to full production without explicit planning.
Each prototype build should have specific and measurable objectives, both for the design team and the production and supply chain management teams. When looking at the performance of the factory during the prototype build, there’s a tendency to focus on things like part availability and quality, tool capacity, operator training, and line readiness. Was everything in place to build the planned number of units at the scheduled time? Did the factory and the supply chain quickly and appropriately respond to unexpected events during the scheduled build?
This is all good, but somebody has to keep an eye on ramp readiness. It’s possible to have a series of “successful” builds without any effective preparation for the expected production volumes. Each build should bring the factory closer to a stable, capable, robust and repeatable process for manufacturing and delivery, and you can’t afford to wait until the ramp itself to discover what needs to be improved.
One of the overall measures I used at the factory in China was time-to-quality (TTQ). This is admittedly a lagging indicator that looks at how long it takes after the start-of-ramp to meet the required factory quality targets. The goal is TTQ = 0, meaning that the target is met at start-of-ramp, not several weeks later.
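As a sketch with made-up numbers (the target and weekly yields here are illustrative, not real factory data), TTQ is just the number of periods after start-of-ramp before the quality target is first met:

```python
# Illustrative time-to-quality (TTQ) calculation on synthetic data.
target = 0.98                                    # required factory quality target
weekly_yield = [0.91, 0.95, 0.97, 0.985, 0.99]   # index 0 = start-of-ramp

# TTQ = first week at which the target is met; the goal is TTQ = 0.
ttq = next(week for week, y in enumerate(weekly_yield) if y >= target)
print(ttq)  # 3 -> target first met three weeks after start-of-ramp
```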
Ramp readiness requires attention to the functionally critical parts and production processes, as determined from an analysis of the design (DFM, FMEA), and an understanding of the risks to assurance of supply for the entire value chain. Standard statistical techniques for process capability and process control provide objective assessments, and contingency planning can minimize the impact of supply interruptions. Attention needs to be given to design stability during the weeks leading up to ramp; a late design change that isn’t fully evaluated in the factory introduces risk. The details will vary, but ramp readiness requires deliberate actions and doesn’t happen by accident.
Always Be Unhappy (About Quality)
April 4, 2012. Posted by Tim Rodgers in Process engineering, Quality, Supply chain.
Tags: China, factory quality, quality engineering, supply chain, test & inspection
A couple of years ago I presented a factory quality department review to the senior managers in our business group in Shenzhen. At the end of the presentation I was asked to give a summary analysis of the current status and near-term outlook for our quality metrics. I said I was dissatisfied and unhappy with the quality of our finished products and our quality systems generally, and that I would probably never be satisfied and I would always be unhappy. I saw a lot of confused expressions from the audience. I wasn’t sure anyone understood what I was trying to say, so I added: “Being unhappy about quality is part of my job description.”
It's not enough to meet some assigned quality goal, celebrate, and then sit back and wait until the next fire breaks out. You shouldn't be satisfied or comfortable just because the percentage of products conforming to specifications exceeds a target that's less than 100%. Any part or product that fails to meet specifications requires some kind of extra handling and disposition (customer returns, scrap, or rework/repair and re-test). Regardless, it reduces overall productivity and adds avoidable costs.
Even if 100% of the parts or products meet specifications, there's value in reducing variability. Genichi Taguchi's loss function and cost-of-use model illustrate the benefit that reduced variability delivers to downstream users of a part or product. In the factory in China we often encountered tolerance stacks that exceeded the top-level assembly design margin. Each of the parts was in-spec, but the cumulative effect of part variability led to a significant number of product failures.
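The tolerance-stack failure mode is easy to show with numbers. This sketch uses made-up values (four parts at ±0.10 mm each against an assumed ±0.25 mm assembly margin): every part is in-spec on its own, yet the worst-case stack exceeds the assembly margin.

```python
import math

# Made-up tolerance stack: four parts, each within its own +/-0.10 mm spec,
# against an assumed +/-0.25 mm top-level assembly margin.
part_tolerances = [0.10, 0.10, 0.10, 0.10]    # per-part tolerance (mm)
assembly_margin = 0.25                        # assembly-level design margin (mm)

worst_case = sum(part_tolerances)                    # every part at a spec limit
rss = math.sqrt(sum(t**2 for t in part_tolerances))  # statistical (RSS) stack

print(f"worst case: {worst_case:.2f} mm")  # 0.40 mm: exceeds the 0.25 mm margin
print(f"RSS:        {rss:.2f} mm")         # 0.20 mm: within margin, but only if
                                           # part variation is random and centered
```

The RSS figure is only trustworthy when each part's variation is random and centered, which is exactly Taguchi's point: parts that merely squeak inside their spec limits impose real costs downstream.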
Even if 100% of the parts or products meet specifications, you have to be concerned about whether the factory processes are operating within statistical control limits. Otherwise you cannot predict the future performance of the processes, and you cannot be confident that you can continue building good parts or products. Even when processes are in control you still can't be satisfied: a process in control is in an unnatural state and will surely drift out of control at some point due to special causes (or entropy), so it requires vigilant monitoring.
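As a concrete sketch (synthetic measurements and limits, my illustration): a Shewhart individuals chart flags a special cause at 3 sigma from the process mean, even when the flagged unit would still pass the spec.

```python
import statistics

# Synthetic baseline from a stable process (illustrative numbers).
baseline = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02, 9.98, 10.00]
mean = statistics.mean(baseline)                 # 10.00
sigma = statistics.stdev(baseline)               # 0.02
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma    # 3-sigma control limits

def out_of_control(x: float) -> bool:
    """Simplest Western Electric rule: a single point beyond 3 sigma."""
    return x > ucl or x < lcl

print(out_of_control(10.01))  # False: common-cause variation
print(out_of_control(10.15))  # True: special cause, even though the unit
                              # would pass a spec of, say, 10.00 +/- 0.20
```

This is the distinction in the paragraph above: spec limits say whether a unit is good; control limits say whether the process that made it is still predictable.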
Continuous improvement means there will always be something else to work on. At some point there may be a discussion about diminishing returns and opportunity costs for further improvement, but that discussion should lead to a re-assignment of resources to the next item on the Pareto chart.
I admit I was being a bit dramatic about “always being unhappy” to emphasize my point, and you should certainly celebrate progress and intermediate goals, but you can’t be complacent about quality.
Confusing Action for Progress
March 8, 2012. Posted by Tim Rodgers in Management & leadership, Quality.
Tags: China, factory quality, leadership, management, problem resolution, quality engineering
This used to happen at least once a week at the factory in China: an unusual number of test failures or a negative trend in a quality metric would trigger a line shutdown. Ten-to-twenty engineers and managers would typically converge on the scene and work around the clock, desperately searching for a quick fix or adjustment. There would be enormous pressure to get the line up and running again in order to avoid a production shortfall, so the team tended to latch onto the first reasonable-sounding explanation. After sorting parts or doing a small process tweak there would be a short test run of units to verify that there was some improvement in quality, then full production would be turned back on again. Often the crisis required multiple “guess-fix-test” cycles before the team stumbled upon a real solution, or resigned themselves to a sub-optimal level of quality due to exhaustion. Engineering judgment, gut feel, and sometimes irrational management directives replaced any rigorous investigation and problem resolution processes that would have determined the actual root cause to a high confidence level.
I understand that “paralysis by analysis” is a bad thing. I’ve worked with people who can’t make a decision because they seem to never have enough data. There’s value in trial-and-error and rapid prototyping to test new ideas. But, isn’t it worth a little extra time to understand what’s changed, and then to brainstorm alternatives instead of racing off to implement the first proposed fix? Isn’t it worth the time to generalize the root cause and the solution to prevent the same problem from reoccurring elsewhere? Managers and leaders can help ensure better long-term solutions by clearly communicating the need to balance urgency and engineering judgment with critical thinking and thoroughness, and understanding the difference between random, Brownian motion and true achievement.
Subcontracting Quality
March 1, 2012. Posted by Tim Rodgers in Quality, strategy, Supply chain.
Tags: China, factory quality, outsourcing, quality engineering, supply chain
It’s hard enough to get people to focus on quality when their company’s name is on the product. It’s even more challenging when the design and/or manufacturing of the product is outsourced. How do you effectively manage product quality indirectly through suppliers and subcontractors?
In one of my previous jobs my responsibilities included managing a factory quality organization at one of the world’s leading contract manufacturers. This was a very high volume production environment, and our customer required weekly and sometimes daily monitoring of several key performance metrics for all their CM’s. These were also the focus at our quarterly business reviews attended by high-level management on both sides, so managing these metrics got a lot of attention.
Cost, inventory, and throughput metrics are fairly easy to understand and manage, but quality is not as straightforward. Ultimately, quality as the user experiences it, measured by field failures, return rates, and other warranty costs, is a trailing indicator that isn't detectable until weeks or months after the product leaves the factory. It's obviously not practical to do full functional and life testing on every finished product, so manufacturers (and sometimes their customers) sample the finished products and perform measurements and abbreviated tests that are designed to give everyone a high level of confidence about the quality of the larger population. The yields and defect rates from these end-of-line tests and audits are typically used as a proxy measure for product quality.
This is necessary and useful, however these measures alone can’t be used to isolate the contribution of the supplier’s performance to product quality, particularly where the customer provides the product design, part specifications, and sometimes even the manufacturing process design. What is the specific contribution of the supplier and how can that be measured?
The supplier is clearly accountable for those elements of the value delivery system that they directly control, and in the case of quality this includes all sources of production variability. Suppliers should be measured according to their understanding and management of these sources of variability.
Where the customer provides the original design, someone (depending on the contractual relationship) must perform an analysis of the design to identify the critical part and performance specifications — ideally, variable data, not attribute data — that must be controlled. It’s the unique responsibility of the supplier to implement a process that can deliver products with dimensions and other characteristics that vary randomly according to common causes and are in conformance to the specs. That means the supplier must implement measurement and tracking of the critical part characteristics to build control charts, assess the variability of the production process, identify and eliminate special causes to establish a stable process, and then assess the capability of the process to meet the specs.
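The stability-then-capability sequence can be made concrete with the usual Cp/Cpk indices. This is a generic sketch with synthetic measurements and assumed spec limits, not any supplier's actual data, and it applies only after the process has been shown to be stable:

```python
import statistics

# Assumed spec limits and synthetic in-control measurements (illustrative).
lsl, usl = 9.90, 10.10
samples = [10.04, 10.00, 10.03, 9.99, 10.05, 10.02, 10.01, 10.04, 10.00, 10.02]

mu = statistics.mean(samples)      # 10.02: process is slightly off-center
sigma = statistics.stdev(samples)  # 0.02

cp = (usl - lsl) / (6 * sigma)               # potential capability (spread only)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # actual capability (spread + centering)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")     # Cp = 1.67, Cpk = 1.33
```

Cpk less than Cp quantifies the cost of the off-center mean; a common (assumed, not from the post) acceptance threshold is Cpk of at least 1.33 before start-of-production.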
This is simpler when the supplier’s output is a single part, but it’s harder to manage when the output is a fully-assembled product. End-of-line tests and measurements are too late and don’t provide enough detail to understand special causes of variability. The challenge to suppliers is to use the initial design analysis and information from failed units to identify intermediate or in-process measures that can provide an early indicator of quality.
Unfortunately, this kind of failure analysis is not typically valued in a manufacturing environment that’s focused on meeting the production schedule, where failed units are reworked and retested until they can be added to the outgoing shipment. It’s ironic because one of the biggest opportunities to improve throughput and productivity is to spend a little time understanding and eliminating the causes of test failures.
The supplier must demonstrate their ability to understand and manage the variability of the individual process steps that are critical to final product quality. Sources of variability include incoming parts, materials, consumables and sub-assemblies; the performance of operators, tools, jigs, and fixtures; and production environmental conditions such as temperature, humidity and cleanliness.
So, what does this mean for people who want to assess the quality performance of their suppliers? Assuming the critical parameters and performance characteristics have been already defined, the supplier should be able to present evidence that all processes that contribute to those parameters and characteristics are stable and in control. By the way, that should be a necessary criterion for start-of-production. If the processes are not stable, the supplier should be able to explain what they're doing to eliminate special causes, supported by sound statistical and engineering analysis. If the processes are stable but not capable of meeting specifications, that requires a collaborative investigation to determine how to modify the process, again something that should be done before start-of-production. Under no circumstances should a supplier tamper with a process that's unstable (they should be removing special causes instead), or otherwise take unauthorized action in response to a quality problem.
If you’re not asking for this information, then you don’t understand your supplier’s responsibility for quality.
The Unpopular Promotion
January 18, 2012. Posted by Tim Rodgers in International management, Management & leadership, Organizational dynamics.
Tags: career growth, change management, China, hiring, leadership, manager, retention, training
I think just about everyone likes the idea of "promoting from within," filling an open management or leadership position by promoting a person who is already part of the team. It sends an important message to all employees: this is an organization that values its internal resources, and there is a possibility of upward mobility for those who have demonstrated the necessary talents and capabilities. Another plus: a person promoted internally should already have a good understanding of the issues facing the team and the larger business context, so they don't need a lengthy transitional period in order to become effective in the new role.
Unfortunately this rarely seems to go well. Unless the decision is universally recognized as the obvious choice, other members of the team may become quietly jealous, passive-aggressive, or openly hostile toward the recently-promoted person, creating a lot of turmoil in the workplace.
In my last job in China I managed two examples of unpopular promotions. In the first case, a manager reporting to me wanted to promote a hard-working and detail-oriented junior engineer to a newly-created lead position with responsibility for managing schedules, resource planning, and customer communication. The junior engineer definitely had the skills to do the assigned job, and I gave my full support to the manager’s recommendation. The trouble started almost immediately when the other leads in the team resisted the junior engineer’s efforts to introduce new processes, and never gave that person the support they needed to be effective. Over the next several months the manager worked with the junior engineer and the other leads to try to make it work, but ultimately he had to surrender and reverse his promotion decision, which was obviously a disappointing outcome for the junior engineer, but necessary to get the team back on-track.
The manager and I struggled to understand why this didn’t succeed. Unfortunately neither of us spoke Mandarin, so we probably never got to the real reasons why the other leads didn’t support the internal promotion. We suspected that the junior engineer was a victim of sexism and cronyism. She possessed the capabilities to do the job, but she didn’t have enough self-confidence and social skills to break through the resistance from the other leads. Everybody involved took hard-line positions, and the situation deteriorated from there. The manager felt he had made a justifiable, merit-based decision, the junior engineer was working according to a mandate from the manager, and the other leads figured they could make enough trouble to eventually undermine the whole thing and get back to the way things were.
The other example was similar, but turned out a little better. I promoted a lead engineer to a recently-vacated management position, and before finalizing the decision I discussed it with my peers in other departments to get their assessment and buy-in. I was already convinced that this person had the right skills and temperament, but I wanted to estimate their chance for success. This new manager would be taking over an under-performing team, and I needed their help to drive some necessary-but-potentially-unpopular changes. I wasn't entirely surprised when I heard there was some resistance to the new manager and his assertive style, and eventually I was invited to join an urgent meeting with several members of the team who presented a signed document (in English) threatening to resign. I have to admit I was stubborn about the decision to promote the lead engineer, and I decided I could risk the possibility of losing some people as part of a needed organizational transformation. At the same time, I worked with the new manager and coached him to reach out to the resisters to understand their concerns, instead of using his new authority to silence them. He was able to convince some of them to stay in the group, and we didn't miss the ones who left.
What I learned is that when considering an internal candidate for promotion you have to assess their social skills and emotional intelligence to determine how well-equipped they are to overcome resistance from their former peers. Oddly, an external candidate typically doesn’t face this; maybe familiarity really does breed contempt. You also have to understand and anticipate the kind of resistance they are likely to encounter, for example if you’re asking them to take a lead role in change management. And, you have to line up allies and supporters from among the rest of the organization; people who support your decision and, more importantly, support the new person (or at least not actively undermine them).
I can imagine few things more discouraging to a person’s career than to be promoted to a position of leadership and to fail completely because of a mutiny from the team. I’ve learned that it’s not enough for the manager to make the “right” hiring decision and announce it to the team; the manager must also provide ongoing coaching and support to ensure the decision is successfully implemented.
Consistency and Predictability
January 9, 2012. Posted by Tim Rodgers in Management & leadership, strategy.
Tags: change management, China, communication, management, performance measures, strategy
During my time in China managing a team that had a limited understanding of English I was reminded every day of the value of keeping it simple. This was harder than I thought, partly because I’m used to “thinking out loud” and bouncing ideas off others, inviting opposing views, weighing pros and cons, and brainstorming possible solutions before coming to a decision. I’ve always believed that being comfortable with uncertainty is a valuable state-of-mind in the workplace, but the language barrier that I faced in China (and the typical urgency that came with working in a high-volume factory) made it necessary for me to ask simple questions and give simple directions. I tried to reduce ambiguity and possible confusion by being as consistent as possible, with the hope that this predictability would help guide the actions and decisions of the team when I was unavailable.
It’s hard to work with a manager who is inconsistent and unpredictable. Every team relies on their manager to provide a framework that allows them to work independently with a high confidence that they’re doing the right things and making the right choices, aligned with the objectives of the business. Individual performance objectives are a good foundation, but the team also needs implicit guidelines from their management in order to exercise the judgment necessary to deal with unplanned events. An unpredictable manager can deprive the organization of that independent judgment with confusing and even conflicting direction.
Being consistent doesn’t mean that a manager has to behave exactly the same way under any and all circumstances. A manager isn’t a machine that generates the same output every time from the same input, but ultimately the team deserves to know what actions and behaviors are necessary for business success, and their manager has a critical role in providing that context. It starts with an aligned set of department performance measures and individual objectives, but those are empty words unless the manager uses every meeting and informal conversation as an opportunity to reinforce objectives, recognize achievement consistent with those objectives, and challenge actions that are inconsistent with those objectives. It also means quickly communicating strategic changes to help the team re-align around new objectives. Without this framework and active reinforcement people are less productive, and more likely to become disoriented and discouraged.