Getting People to Care About Quality (December 4, 2013). Posted by Tim Rodgers in Management & leadership, Quality, strategy.
Tags: performance measures, factory quality, quality engineering, change management, test & inspection
Quality sounds like something that everyone will support on principle, unless you have a saboteur working in your midst. It’s probably safe to assume that no one is deliberately acting to produce a defective product or service. The problem is that everyone makes daily decisions that balance quality against other considerations, whether to save money, or meet a committed date, or keep the factory running. We tell ourselves that quality is free, but even in highly-evolved organizations it doesn’t happen without deliberate effort. The challenge for quality professionals is helping people understand how good quality contributes to the business, thereby providing a more useful basis for decision making.
Here’s a little not-so-secret secret: all decisions in for-profit businesses eventually come down to how to bring in more revenue while controlling expenses. If you want people to pay attention to quality, talk about money.
For better or worse, this is a lot easier after the cost has already been incurred. If you have to spend more money or time because of scrap or rework, or you have to repair or replace product at the customer, or you’re liable for warranty or other contractual post-sale costs, everyone will be interested to know how it happened and how it can be prevented in the future. After some investigation you may identify the cause or causes, and you can recommend actions to eliminate them. Of course those corrective actions will have a cost of their own, and you will have to determine if there’s a net gain.
All of that is based on the assumption that there’s a 100% probability of that bad thing happening again if you do not implement the corrective action, and a 0% probability if you do. If you want to get more analytical you can estimate those probabilities based on engineering analysis, historical trends, or just good old-fashioned judgment, and then apply a de-rating factor to the cost. This is where an FMEA is useful, along with early prototyping and testing to check those assumptions about probability and impact.
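To make this concrete, here’s a minimal sketch of that de-rated cost comparison in Python. The probabilities and dollar figures are hypothetical; in practice they would come from an FMEA, historical data, or engineering judgment.

```python
# A minimal sketch of the de-rated (expected-cost) comparison described above.
# All probabilities and costs are hypothetical.

def expected_cost(p_failure: float, cost_of_failure: float,
                  cost_of_action: float = 0.0) -> float:
    """Expected total cost: what we spend up front plus the
    probability-weighted cost of the quality event."""
    return cost_of_action + p_failure * cost_of_failure

# Scenario: a field failure would cost $500k; the corrective action costs $40k
# and is estimated to cut the probability of failure from 15% to 2%.
do_nothing = expected_cost(p_failure=0.15, cost_of_failure=500_000)
act = expected_cost(p_failure=0.02, cost_of_failure=500_000,
                    cost_of_action=40_000)
print(f"Do nothing: ${do_nothing:,.0f}   With action: ${act:,.0f}")
# The corrective action is justified when its expected cost is lower.
```

Run with these numbers, doing nothing carries an expected cost of $75,000 against $50,000 with the action, so the action wins even though it costs real money today.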
Here it’s important to note that there are indirect costs of poor quality that are harder to factor into this calculation. For example, even a single incident at a key customer could cause a significant decline in future revenue if it affects brand reputation. Low-probability yet high-severity events are also problematic.
Of course it’s generally harder to look ahead and assess the unknown probability and impact of a quality risk that has not yet been encountered. As long as the bad thing hasn’t happened yet, it’s easy to underestimate it. This is what causes organizations to reduce cost by using cheaper parts or removing design safeguards or eliminating quality checks. They’re saving real money today and implicitly accepting the uncertain risk (and cost) of a poor quality event in the future. Again, if you can say with 100% certainty that this bad thing will happen without specific actions being taken, then your choice is clear. Unfortunately there are many choices that are not clear, or not even recognized as choices.
Are you really willing to spend whatever it takes to prevent any quality problem? Of course not. Managing quality is managing risk, and looking for ways to assess and minimize that risk while under pressure to reduce cost now. It’s not very satisfying to say “I told you so.”
Quick Note: Theory of Constraints and Six Sigma (November 24, 2013). Posted by Tim Rodgers in Process engineering, Quality.
Tags: factory quality, performance measures, process, six-sigma
Last week I attended the monthly meeting of the Northern Colorado chapter of the American Society for Quality. The featured speaker was Dr. Russ Johnson, President of Improvement Quest, a local management consulting firm. Dr. Johnson’s talk “Creating a Culture of Harmony by Using the Theory of Constraints Concepts to Focus and Integrate Lean and Six Sigma” included several interesting insights about how to effectively integrate these strategies in a production environment.
Of course the key to successful implementation of the Theory of Constraints is identifying the bottleneck, or constraint, in the production process and then optimizing the rest of the system around the constraint (“exploit, subordinate, elevate”) in order to maximize overall throughput while controlling inventory (including work-in-progress, WIP) and operating expense. At the risk of oversimplifying, Six Sigma can be described as “reduce variability,” and the lean philosophy is essentially “eliminate waste.”
These strategies are not different ways of solving the same problem. They can and should be implemented as elements in an integrated improvement effort. The trick is understanding that not all processes are equally good targets for a six sigma or lean improvement plan. It depends on where the process is in relation to the constraint.
Any yield improvement or waste elimination that occurs upstream from the bottleneck doesn’t improve throughput because it effectively increases the input to the bottleneck that is already limited. In fact, it can be detrimental to the operation as a whole if it increases WIP and associated costs for material and operating expenses. The focus should be on downstream processes where yield improvement or waste elimination effectively increases the capacity of the constraint. Scrap or rework that occurs after the constraint is especially damaging because it essentially requires another pass through the constraint.
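Here’s a toy model of that point, with hypothetical station capacities and yields. It’s a deliberate oversimplification (steady state, no buffer dynamics), but it shows why a yield improvement upstream of the constraint leaves throughput unchanged while an improvement downstream raises it.

```python
# A toy model: throughput is set by the constraint, so yield improvements
# matter differently depending on where they occur. All numbers are hypothetical.

def line_throughput(capacities, yields):
    """Good units out per hour for a serial line. Each station passes
    min(its capacity, what arrives) and loses (1 - yield) of it."""
    flow = float("inf")
    for cap, y in zip(capacities, yields):
        flow = min(flow, cap) * y
    return flow

capacities = [120, 60, 100]       # units/hour; station 2 is the constraint
baseline   = [0.95, 0.98, 0.90]   # first-pass yields per station

# Perfect yield UPSTREAM of the constraint: throughput doesn't move,
# because station 2 still caps the flow (the extra input becomes WIP).
upstream_fix   = [1.00, 0.98, 0.90]

# Perfect yield DOWNSTREAM of the constraint: every unit recovered is a
# unit that doesn't need another pass through station 2.
downstream_fix = [0.95, 0.98, 1.00]

for name, y in [("baseline", baseline), ("upstream fix", upstream_fix),
                ("downstream fix", downstream_fix)]:
    print(f"{name:15s} {line_throughput(capacities, y):6.1f} good units/hr")
```

In this sketch the upstream fix leaves output at about 52.9 good units per hour while the downstream fix lifts it to 58.8, which is exactly the asymmetry described above.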
The point is that you can’t assume that all improvements at the micro level are equally beneficial at the macro level. Yes, generally there’s value in reducing variability and eliminating waste, but when your resources are limited and you have to focus, consider the constraint and whether your improvements are really improving the metrics that matter.
What Does Almost Done Really Mean? (November 17, 2013). Posted by Tim Rodgers in Communication, Project management.
Tags: communication, management, performance measures, project management
About a year ago I earned the Project Management Professional (PMP) certification after learning the methodologies and structured processes formalized by the Project Management Institute. Almost all of my experience in project management has been in product development, and the PMP training provided a broader perspective on other types of projects. I was particularly intrigued and somewhat amused by the use of quantitative measures of project status based on Earned Value Management (EVM).
I can see why EVM would appeal to a lot of project managers and their sponsors and stakeholders. Everybody wants to know how the project is going and whether it’s on-track, both in terms of schedule and budget. They want a simple, unambiguous answer, without having to look at all the details. The EVM metrics provide project status and a projection of the future, in terms of the value and expenses of the project’s tasks that are already completed and still remaining.
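For readers who haven’t seen them, the core EVM calculations are simple. Here’s a minimal sketch with hypothetical numbers; the SPI, CPI, and EAC formulas are the standard PMI ones.

```python
# A minimal sketch of the standard EVM status metrics.
# The dollar amounts below are hypothetical.

def evm_metrics(pv: float, ev: float, ac: float, bac: float) -> dict:
    """Compute EVM indices from planned value (PV), earned value (EV),
    actual cost (AC), and budget at completion (BAC)."""
    sv = ev - pv       # schedule variance (negative = behind schedule)
    cv = ev - ac       # cost variance (negative = over budget)
    spi = ev / pv      # schedule performance index (<1 means behind)
    cpi = ev / ac      # cost performance index (<1 means over budget)
    eac = bac / cpi    # estimate at completion, if current efficiency holds
    return {"SV": sv, "CV": cv, "SPI": spi, "CPI": cpi, "EAC": eac}

# Hypothetical status: $50k of work planned to date, $45k of value earned,
# $60k actually spent, against a $200k total budget.
print(evm_metrics(pv=50_000, ev=45_000, ac=60_000, bac=200_000))
```

With these numbers CPI is 0.75, so the projected cost at completion balloons to about $267k, which is the kind of simple, unambiguous answer sponsors love.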
The problem for many projects is that it requires a lot of planning and discipline to use EVM. Not only do you have to generate a full Gantt chart showing all tasks and dependencies, but you also have to estimate the cost and incremental value-added for each of those tasks. That’s going to be just a guess for projects with little historical reference or leverage. Quantitative metrics are generally less valuable when they’re based on a lot of qualitative assumptions, despite the appearance of analytical precision.
Whether or not you use EVM, everybody wants to express project status in terms of a percentage. “We’re about 90% done, just a little bit more to go, and we’re looking good to meet the deadline.” This kind of oversimplification often fails to recognize that the pace of progress in the past is not necessarily the pace of the future, especially when sub-projects and their deliverables are integrated together and tested against the requirements. There’s an old saying in software development that the last 10% of any software project takes 90% of the time, which is one of the reasons why agile development techniques have become popular.
While I applaud the attempts to quantify project status, I would rather assess a project in terms of tasks and deliverables that are either fully completed or not, instead of “90% complete.” For large projects it’s useful to report deliverable completion status at checkpoint reviews where stakeholders can confirm that previously-agreed-upon milestone criteria have been met. This binary approach (done or not-done) may seem less quantitative, but it’s also less squishy. The overall status of the project is defined by the phase you’re currently in and the most recent milestone completed, which means that all of the tasks leading up to that milestone have been completed.
That still leaves the problem of assessing the likelihood of future success: will the project finish on-time and on-budget? At some point you’re going to have to use your best judgment as a project manager, but instead of trying to distill your status to a single number isn’t it more useful to talk about the remaining tasks, risks, and alternatives? Sometimes more information really is better.
Are Your Suppliers Really Committed to Quality? (November 6, 2013). Posted by Tim Rodgers in Management & leadership, Process engineering, Quality, Supply chain.
Tags: factory quality, leadership, management, outsourcing, performance measures, process, quality engineering, six-sigma, supply chain, test & inspection, training
Suppliers always declare their commitment to the highest standards of quality as a core value, but many have trouble living up to that promise. I can’t tell you how many times I’ve visited suppliers who proudly display their framed ISO certificates in the lobby yet suffer from persistent quality problems that lead to higher cost and schedule delays. Here’s how you can tell if they’re really serious:
1. Do they have an on-going program of quality improvement, or do they wait until you complain? Do they have an understanding of the sources of variability in their value stream, and can they explain what they’re doing to reduce variability without being asked to do so? Look for any testing and measurements that occur before outgoing inspection. Award extra credit if the supplier can show process capability studies and control charts (a minimal capability calculation is sketched after this list). Ask what they’re doing to analyze and reduce the internal cost of quality (scrap and rework).
2. Do they accept responsibility for misunderstandings regarding specifications and requirements? Or, do they make a guess at what you want, and later insist they just did what they were told? Quality means meeting or exceeding customer expectations, and a supplier who is truly committed to quality will ensure those expectations are clear before they start production.
3. Do you find defects when you inspect their first articles, or samples from their first shipment? If the supplier can’t get these right when there’s no schedule pressure, you should have serious concerns about their ability to ramp up to your production levels. By the way, if you’re not inspecting a small sample of first articles, you’ll have to accept at least half of the blame for any subsequent quality problems.
4. Has the supplier ever warned you of a potential quality problem discovered on their side, or do they just hope that you won’t notice? I realize this is a sign of a more mature relationship between supplier and customer, but a true commitment to quality means that the supplier understands their role in your value stream, and upholds your quality standards without being asked.
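As an aside on item 1, the basic capability indices are easy to compute. This is a minimal sketch with hypothetical spec limits and measurements; a real study would first verify that the process is stable and approximately normal.

```python
# A minimal sketch of the process capability indices mentioned in item 1.
# Spec limits and measurements are hypothetical.
import statistics

usl, lsl = 10.5, 9.5                       # upper/lower spec limits
data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0,
        10.1, 9.9, 10.3, 9.7, 10.0, 10.1]  # sample measurements

mean = statistics.mean(data)
s = statistics.stdev(data)

cp = (usl - lsl) / (6 * s)                     # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * s)    # actual, accounts for centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# Cpk >= 1.33 is a common minimum expectation. A supplier who tracks
# this without being asked is taking quality seriously.
```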
Ultimately, you will get the level of quality you deserve, depending on which suppliers you select and the messages you send them. You may be willing to trade quality for lower unit cost, shorter lead time, or assurance of supply. The real question is: What level of quality do you need? What level of poor quality can you tolerate?
Natural Variation, Outliers, and Quality (November 2, 2013). Posted by Tim Rodgers in Product design, Quality, Supply chain.
Tags: factory quality, performance measures, quality engineering, six-sigma, test & inspection
When you work in quality, people want to tell you when bad things happen. A product has failed in the field. A customer is unhappy. The factory has received a lot of bad parts. You’ve got to figure out the scope of the problem, what to do about it, and how this could have possibly happened in the first place. Is this the beginning of a string of catastrophes derived from the same cause, or is this a one-time event? And, by the way, isn’t it “your job” to prevent anything terrible from happening?
People who haven’t been trained in quality may have a hard time understanding the concept of natural variation. Sometimes bad things happen, even when the underlying processes are well-characterized and generally under control. A six-sigma process does not guarantee a zero percent probability of failure, and of course you probably have very few processes that truly perform at a six-sigma level, especially when humans have the opportunity to intervene and influence the outcome. Every process is subject to variation, even at the sub-atomic level.
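For a sense of scale, here’s a minimal sketch of those failure probabilities, assuming scipy is available. It uses the conventional 1.5-sigma long-term shift behind the familiar “3.4 defects per million” figure.

```python
# A minimal sketch: even a "six sigma" process has a nonzero defect
# probability. Uses the conventional 1.5-sigma long-term shift.
from scipy.stats import norm

for sigma_level in (3, 4, 5, 6):
    # Long-term defects per million opportunities: the probability of
    # exceeding the nearer spec limit after the 1.5-sigma shift.
    dpmo = norm.sf(sigma_level - 1.5) * 1e6
    print(f"{sigma_level}-sigma process: ~{dpmo:,.1f} DPMO")
# A 6-sigma process still produces about 3.4 defects per million
# opportunities -- low probability, but never zero.
```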
So, how can you tell whether this bad thing is just an outlier, or an indicator of something more serious? And, how will you convince your colleagues that this is not the first sign of disaster? Or, maybe it is. How would you know?
You may possess process capability studies and control charts that show that all processes that contribute to this result are stable and deliver outcomes that are within specifications. That would allow you to show that this incident is a low probability event, unlikely, but not impossible. However, I’m not sure that any organization can honestly say they have that level of understanding of all the processes that influence the quality of their ultimate product or service.
In the absence of hard data to establish process capability, you’re left with judgment. Could this happen again? Was it a fluke occurrence, an accident, an oversight, something that happened because controls were temporarily ignored or overridden? Or, are there underlying influences that would make a recurrence not just possible, but likely? These are the causes that are hard to address, because they force organizations to decide how important quality really is. Is the organization really prepared to do what is necessary to prevent any quality failure? What level of “escapes” is acceptable, and how much is the organization willing to spend to achieve that level? While it’s easy to find someone to blame for the bad thing, it’s harder to understand how it happened, and how to prevent it from happening again.
Can Managers Make Innovation Happen? (February 12, 2013). Posted by Tim Rodgers in International management, Management & leadership, strategy.
Tags: innovation, job satisfaction, leadership, management, performance measures, strategy
I’m hearing a lot about innovation these days. It seems that everyone is looking for new breakthrough ideas in products and services in order to grow revenue, differentiate from competition, and establish sustainable profitability. However, waiting for a flash of inspiration or “Eureka” moment is too random and unpredictable for most businesses. They would like to actively innovate, or at least provide an environment where productive innovation is more likely to happen.
What role do managers play in an organization that’s looking for innovation? What can managers do to inspire or foster innovation? I’ve always operated under the assumption that innovation is a creative, “out-of-the-box,” right-brain activity that can’t be managed with performance objectives and a schedule. I’m not convinced that you can innovate on-demand. I can’t recall ever attending a scheduled group brainstorming session that led to breakthrough ideas.
Some years ago I visited a peer manager at a different HP site to do some internal benchmarking and look for some best practices that I could bring back to my team. On a monthly dashboard of department metrics this manager included a bar chart showing the number of patent applications proposed by the team. I was astonished that this group of about 30 engineers and managers was averaging 30-40 applications every month. I was especially curious because this was a software quality team, and it wasn’t clear to me what part of our work could be patentable.
It turned out that the patent applications up to that time had nothing to do with software quality, or software testing, or anything remotely related to the products we were working on. Most of them seemed to be new applications of existing HP products. There may have been some occasional good ideas for new products in there somewhere, but I can almost guarantee that none of those patent applications were new, or unique, or valuable enough to be actually filed by the HP legal staff.
At the time I wasn’t eager to challenge the HP manager who was hosting my visit, but I still wonder what they were trying to do. The energy put into patent proposals didn’t seem to provide any direct contribution to the department’s objectives. I suppose it’s possible that the team brought more creativity and innovation to their work in software quality as a result of their patent efforts, but I couldn’t tell how that positively affected their other performance measures. I don’t think this was a good example of inspiring innovation.
I’m still not sure what managers can do to make innovation happen, but I think managers have a lot of influence over the work environment, and that can create conditions where innovation is more likely to happen:
1. Managers can communicate the business’s strategic interest in innovation, and help channel the team’s creativity to address specific needs (e.g., new products, new processes to reduce cost or improve quality).
2. Managers can identify those people in the team who are inherently creative and encourage them. Good ideas can certainly come from anywhere, but the fact is that some people are better able to think outside the box and make unexpected connections.
3. Managers can keep an open mind about new ideas and provide sufficient time and resources to evaluate them. This can be hard when resources are limited and the innovation is unfamiliar and risky. On the other hand, you shouldn’t expect the team to be innovative when there’s no chance their ideas will be given an opportunity to prove themselves.
I don’t think of myself as an innovative person who can generate creative ideas. I do think of myself as someone who understands the value of innovation to the business, and I want to do what I can to enable others to innovate effectively.
Decisions Based on Pseudo-Quantitative Processes (December 28, 2012). Posted by Tim Rodgers in Management & leadership, Organizational dynamics, Process engineering, Quality, strategy.
Tags: leadership, management, performance measures, problem resolution, project management, six-sigma, strategy
I’ve spent a lot of time working with engineers and managers who used to be engineers, people who generally apply objective analysis and logical reasoning. When faced with a decision these folks will look for ways to simplify the problem by applying a process that quantitatively compares the possible options. The “right” answer is the one that yields the highest (or lowest) value for the appropriate performance measure.
That makes sense in many situations, assuming that improvement of the performance measure is consistent with business strategy. You can’t argue with the numbers, right? Well, maybe we should. In our rush to reduce a decision to a quantitative comparison we may overlook the process used to create those numbers. Is it really as objective as it seems?
There’s a common process for decision making that goes by several different names. Some people call it a Pugh diagram or a prioritization matrix. A more sophisticated version called a Kepner-Tregoe decision model includes an analysis of possible adverse effects.
These all follow a similar sequence of steps. The options are listed as rows in a table. The assessment criteria are listed as columns, and each criterion is given a weighting factor based on its relative importance. Each row option is evaluated on how well it meets each column criterion (for example, using a scale from 1 to 5), and this assigned value is multiplied by the weighting factor for the column criterion. Finally, the “weighted fitness” values are summed for each row option, and the option with the highest overall score is the winner.
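Here’s a minimal sketch of that calculation with hypothetical options, criteria, weights, and scores. Notice how thin the layer of math is over the subjective inputs.

```python
# A minimal sketch of the weighted decision matrix described above.
# Options, criteria, weights, and 1-5 scores are all hypothetical.

criteria = {"cost": 5, "quality": 4, "lead time": 2}   # weighting factors

# scores[option][criterion]: how well the option meets the criterion (1-5)
scores = {
    "Supplier A": {"cost": 4, "quality": 3, "lead time": 5},
    "Supplier B": {"cost": 2, "quality": 5, "lead time": 3},
}

for option, row in scores.items():
    total = sum(criteria[c] * row[c] for c in criteria)
    print(f"{option}: {total}")
# Supplier A: 5*4 + 4*3 + 2*5 = 42;  Supplier B: 5*2 + 4*5 + 2*3 = 36.
# Nudge one weight or one score and the "winner" can flip; the output
# is only as objective as these subjective inputs.
```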
At the end there’s a numerical ranking of the options, and one will appear to be the best choice, but the process is inherently subjective because of the evaluation criteria, the weighting factors, and the “how well it meets the criteria” assessment. It’s really not that hard to game the system and skew the output to provide any desired ranking of the options.
I’m not saying this is a bad process or that the result is automatically invalid. What I am saying is that this isn’t like weighing two bags of apples. The value of a decision analysis process isn’t just the final ranking; it’s the discussion and disagreements among the evaluators, which are obviously subjective. We shouldn’t consider the process to be an infallible oracle that delivers an indisputable answer just because there’s math involved.
I’m sure there are other examples of pseudo-quantitative processes that shouldn’t be accepted at face value. Leaders should question assumptions, listen to dissenting opinions, and check for biases. It’s rarely as cut-and-dried as it seems.
Expecting More from Performance Appraisals (October 29, 2012). Posted by Tim Rodgers in Management & leadership.
Tags: career growth, job satisfaction, leadership, management, manager, performance measures, power
I’ve gotten to the point where I can’t trust a performance appraisal written by another manager. Appraisals are inherently subjective and I have no idea how “well-calibrated” that manager is. In the world of metrology we would say that this process for measuring performance has serious issues with bias, stability, repeatability, and reproducibility. Yet the impact — positive or negative — on an employee’s salary, job security, and career growth within a company is huge. This is a lousy system that needs improvement. What can be done?
I think we have to start by asking: what are we trying to accomplish with a performance appraisal? On the surface this is a formal, written record of feedback given to the employee that ends up in their HR file. At minimum it includes the manager’s judgment about whether or not the employee achieved their assigned objectives, and usually some commentary about how those objectives were achieved. Note that despite encouragement to managers to provide informal performance feedback throughout the review period, the formal review may be the only time they’ve done so.
It’s not enough to report that the employee completed this task or failed to complete that one. Here are the questions I’d like to ask when I read about what an employee did during the review period:
- How challenging were those assigned objectives?
- Did the employee only meet expectations when they had an opportunity to exceed expectations?
- Did the employee have enough authority and positional power to personally influence the outcome?
- Were the performance measures granular enough to enable the manager to judge individual accountability?
- Were there mitigating factors beyond the employee’s control that prevented them from achieving the objectives?
However, I think we’re trying to do something more than just provide a summary of the employee’s accomplishments. Organizations value certain behaviors and want to encourage employees to exhibit those behaviors. Unfortunately these are often not stated explicitly, and it may be left to individual managers to apply their own values and preferences. In my teams I value creativity, teamwork, persistence, adaptability, proactive “fire prevention,” eagerness-to-learn, and independent judgment. When I write performance reviews I look for examples of these behaviors that I can cite. When that information is missing from a review someone else has written, it’s a huge gap in my understanding of that employee.
Writing annual performance reviews is one of the most dreaded responsibilities for any manager, and we owe it to the employee to provide an assessment based on consistent and job-appropriate expectations. The process may be subjective and imprecise, but the long-term implications are significant.
Should Stretch Goals be Achievable? (October 25, 2012). Posted by Tim Rodgers in Communication, International management, Management & leadership, Organizational dynamics.
Tags: change management, leadership, management, performance measures, strategy
In the early 90s our division went through a week-long training session to learn some basic process improvement techniques. As part of the program we were divided into teams of 6-8 people and given a simulation exercise based on a bare-bones “order fulfillment” process. We were instructed to run the process as a team several times as quickly as possible and record our average time. When all the teams reconvened we reported back to the instructor and discovered that everyone’s average time was between 3 and 4 minutes, and no one had achieved better than 2:50.
The instructor wrote the times on the whiteboard, then turned to the class and told us our performance was pathetic. He revealed that our imaginary competition had achieved an average process time of 20 seconds. If we couldn’t at least meet that benchmark level of performance, we would lose our imaginary customers and go out of business.
Everyone was stunned. No one could imagine how the process could be completed more quickly. I don’t remember if it was the instructor or one of the students, but someone suggested throwing out the script and changing the process completely, removing unnecessary steps and focusing on speed. Of course that was the point of the whole exercise. We achieved the 20 second goal, and I think we even managed to get down to 10 seconds.
I remembered that story the other day when I read a caution about setting aggressive goals for a team. The question is whether “crazy-high” goals are demoralizing because they’re impossible to reach, or whether they’re useful in getting people to think outside the box. Besides, who can say whether or not a goal is actually achievable?
Someone once told me that the metaphor is a rubber band. If you stretch the rubber band you can generate a lot of potential energy that can drive the organization to achieve unexpected things. On the other hand, if you stretch the rubber band too far it will break and there will be no gain.
I think it’s good for an organization to have aggressive goals that may seem just out of reach. It keeps people motivated and focused, and it can prevent them from becoming complacent. At the same time there should also be some kind of market advantage or competitive advantage or financial advantage associated with achieving those goals, otherwise this is just manipulation or sadism. I’m not sure it even matters whether or not the goal is ever achieved. Progress toward the goal can still be recognized and celebrated because it represents a measurable improvement that supports business objectives. However, the real breakthroughs won’t come from incremental improvements to meet an easy goal.
Common Fallacies That Cause People to Doubt Statistics (October 23, 2012). Posted by Tim Rodgers in baseball, Process engineering, Quality.
Tags: baseball, factory quality, performance measures, quality engineering, six-sigma, test & inspection
Lately I’ve been reviewing some old textbooks and work files as part of my preparation for the ASQ Six Sigma Black Belt certification exam in March. It’s interesting, and I think often amusing, to contrast the principles of inferential statistics and probability theory with the ways they’re used in the real world. I think people tend to underestimate how easy it is to misuse statistical methods, or at least apply them incorrectly, and this can lead them to undervalue all statistical analysis, regardless of whether or not the methods were applied correctly.
I see this in baseball and political commentary all the time, particularly in the way people selectively or incorrectly use numbers to defend their point of view, while at the same time mocking those people who use numbers (correctly or not) to defend a different point of view.
Here are a few of the more-common mistakes that I’ve seen in the workplace:
1. Conclusions based on small sample sizes or selective sampling. Yes, we often have to make do with less data than we’d like, but that makes it especially important to put confidence intervals around our conclusions and stay open-minded about the possibility of a completely different version of reality. Also, a sample is supposed to represent the larger population, and we have to beware of sampling bias that excludes relevant members of the population and skews any findings based on that sample. Otherwise the findings are meaningful only for a subset of the population.
2. Unknown or uncontrolled measurement variability. We often assume that our measurement processes are completely trustworthy without considering the possible effects of variability due to equipment or people. If the variance of the measurement process exceeds the variance of the underlying processes that we’re trying to measure, we can’t possibly know what’s really going on.
3. Confusing independent vs. dependent events. There is no such thing as “the law of averages.” If you flip a coin 10 times and it comes up heads every time, the probability of heads coming up on the 11th flip is still 50%. The results of those previous coin flips do not exert any influence whatsoever on future outcomes, assuming each coin flip is considered a single event. That being said, eleven consecutive heads is an extremely unlikely event. If you take a large enough sample size, the sample statistics will approximate the population statistics (50% heads and 50% tails for an honest coin); this is the law of large numbers, sometimes simplistically referred to as “regression to the mean” (see the simulation sketched after this list).
4. Seeing a trend where none exists. This is usually the result of confirmation bias, where we start with a conclusion and look for data to support it, sometimes compounded by selection bias, where we exclude data that doesn’t fit the expected behavior. Often we’re so eager for signs of improvement that we accept as proof a single data point that’s in the right direction. This is why it’s important to apply hypothesis tests to determine whether the before and after samples represent statistically significant differences. It’s also why we should never fiddle with a process that varies randomly but operates within control limits.
5. Correlation does not imply causation. You may be able to draw a regression line through a scatter plot, but that doesn’t necessarily mean there’s a cause-and-effect relationship between the two variables. This is where we have to use engineering judgment or even common sense. Earlier this year the Atlanta Braves baseball team lost 16 consecutive games that were played on a Monday. No one has been able to explain how winning or losing a baseball game could possibly be caused by the day of the week. A related logical fallacy is post hoc, ergo propter hoc (after it, therefore because of it). Chronological sequence does not imply causation, either.
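Here’s the simulation promised under item 3, a minimal sketch with simulated (not real) coin flips: past flips don’t change the odds of the next one, yet long runs are rare, and large samples converge to the population proportion.

```python
# A minimal sketch of item 3: independence vs. long-run convergence.
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Among flips immediately preceded by 10 straight heads, the next
# flip is still heads about half the time -- no "law of averages."
next_after_10_heads = [flips[i] for i in range(10, len(flips))
                       if all(flips[i - 10:i])]
print(f"runs of 10 straight heads seen: {len(next_after_10_heads)}")
print(f"P(heads | 10 straight heads) ~= "
      f"{sum(next_after_10_heads) / len(next_after_10_heads):.3f}")

# Meanwhile the overall sample proportion approaches 50% as the
# sample grows (the law of large numbers).
print(f"overall heads rate: {sum(flips) / len(flips):.4f}")
```

Runs of ten heads show up hundreds of times in a million flips, so the event is rare but hardly impossible, and the conditional probability of the next flip stays right at 0.5.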