
Measuring Cost of Quality August 29, 2016

Posted by Tim Rodgers in Operations, Process engineering, Quality, Supply chain.

I’ve always thought “cost of quality” was a great idea in principle. If you could take the costs associated with defects, field failures, returns, and warranty claims, and add the costs of inspection, testing, scrap, and rework, then you could get everyone’s attention.

Quality would no longer be some abstract “nice to have” thing, but a real expense category that could be monitored and managed. With an objective, quantitative model to view how much money is actually being spent because of poor quality and associated practices, you would be able to evaluate proposed improvement programs and measure their performance. You would have something concrete to discuss with design and production teams to compare with estimates of future sales and operating expenses, apples to apples. All of this would lead to informed, balanced, and better decisions.
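To make the idea concrete, here is a minimal sketch of a cost-of-quality rollup in Python. Every category name and dollar figure here is hypothetical; in a real business each line item would have to be pulled from finance and operations systems, which is exactly where the difficulty comes in.

```python
# Minimal cost-of-quality rollup. All categories and figures are
# hypothetical placeholders for data that would come from finance
# and operations systems.

failure_costs = {
    "scrap": 42_000,           # internal failures: found before shipment
    "rework": 18_500,
    "returns": 27_000,         # external failures: found by customers
    "warranty_claims": 61_000,
}
appraisal_costs = {
    "inspection": 15_000,      # costs of finding defects
    "testing": 22_000,
}

cost_of_quality = sum(failure_costs.values()) + sum(appraisal_costs.values())
revenue = 2_500_000            # hypothetical, for scale
print(f"Cost of quality: ${cost_of_quality:,} "
      f"({cost_of_quality / revenue:.1%} of revenue)")
```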

That’s the promise. In practice it’s a lot harder than it sounds: you may already be measuring yields and defects and returns, but now you’ve got to measure costs.



Measuring Service Quality August 1, 2016

Posted by Tim Rodgers in Operations, Quality.

Product quality seems easy to measure. We just have to sit down with the people who will be using the product or the part or the subassembly and ask them what physical characteristics are important: dimensions and tolerances, chemical composition, electrical performance measurements, strength, weight, and the like. These are things we can then measure, either directly or through test results, on a representative sample from the production process. If we’ve defined the “fitness for use” characteristics correctly, based on what the customer tells us, then we can determine whether or not our processes can reliably produce products that meet those requirements.
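As a concrete illustration of that last step, here is a minimal sketch in Python that checks whether a process can reliably produce an in-spec dimension, using the standard Cpk capability index. The measurements and spec limits are hypothetical, and the calculation assumes roughly normal, in-control data.

```python
import statistics

# Hypothetical sample of a critical dimension (mm), with assumed
# customer spec limits of 10.0 +/- 0.1 mm.
measurements = [9.98, 10.02, 9.95, 10.07, 10.01, 9.93, 10.04, 9.99, 10.05, 9.96]
lsl, usl = 9.9, 10.1

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)   # sample standard deviation

# Cpk: distance from the mean to the nearer spec limit, in 3-sigma units.
cpk = min(usl - mean, mean - lsl) / (3 * sigma)

# A common rule of thumb treats Cpk >= 1.33 as reliably capable.
verdict = "capable" if cpk >= 1.33 else "not reliably capable"
print(f"mean={mean:.3f} sigma={sigma:.4f} Cpk={cpk:.2f} -> {verdict}")
```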

Service quality is harder to measure. The trouble starts with defining the requirements. Who are the customers and what do they want? There may be a lot of them, possibly millions of them, with new ones every day. They each have their own set of unique expectations that might change from day to day. They may not be able to articulate their requirements, or at least not in a way that can be acted upon. Services are typically customized for individual customers, and there’s no standard level of performance. What’s acceptable to one customer may not be acceptable to another.


Is This The Right Problem To Work On? July 11, 2016

Posted by Tim Rodgers in Management & leadership, Process engineering, Project management, strategy.

The ability to prioritize and focus is widely praised as a characteristic of successful business leaders. There are too many things to do, and not enough time or resources to do them all, much less do them all well. Leaders have to make choices, not just to determine how they spend their own time, but how their teams should be spending theirs. This is the definition of opportunity cost: when we consciously choose to do this instead of that, we forgo or at least postpone any benefits or gains that might have been achieved otherwise.

One of the most common choices we face in business is between short-term operational goals and longer-term strategic change. Some people talk about the challenges of “building the plane while flying the plane,” or “changing the tires while driving the bus.” Both metaphors emphasize the difficulty of keeping the business running and generating revenue under the current model while developing and implementing a new business model or strategic direction.


What Is the Quality Team Responsible For? (Part 1) January 2, 2016

Posted by Tim Rodgers in Process engineering, Quality.

A few weeks ago I had coffee with a quality manager who’s now the president of our local chapter of the American Society for Quality. I’ve made a couple of presentations at the chapter meetings, and I’m helping to manage their web site, so I suppose it was only a matter of time before they asked me to take a leadership position. I declined. My short answer is that I’m “just too busy,” but of course that really means I’m not willing to make time for it. To quote Bob Dylan: “I used to care, but things have changed.” More on that later.

The chapter president and I shared war stories about our experiences in quality management. His stories are a little more recent, but the underlying themes are the same, and I suspect quality managers one hundred years from now will be experiencing similar frustrations and telling similar stories. Everybody has stories, but I think the unique issues that quality managers face come down to a few fundamental questions:


“Dare to Know” Reliability Engineering Podcasts January 12, 2015

Posted by Tim Rodgers in Process engineering, Product design, Quality.

Over the last several months I’ve been working on a project with my friend and former colleague Fred Schenkelberg on a series of podcasts with thought leaders in the world of reliability engineering. Reliability and quality professionals have a tough job, but they’re not alone. There’s a large and growing community of experienced engineers, managers, authors, and other experts who are available to share their practical expertise and insights. Our Dare to Know interviews provide the opportunity to hear from these leaders and learn about the latest developments in analysis techniques, reliability standards, and business processes.

You can access the interviews at Fred’s Accendo Reliability web site: http://www.fmsreliability.com/accendo/dare-to-know/

Let me know what you think, or if you’re interested in joining us for a future interview.

Teaching Students and Managing Subordinates August 29, 2014

Posted by Tim Rodgers in Management & leadership.

Last week I finished teaching a class in project management at a local university. I’ve always planned to spend my time teaching as an adjunct professor after “retirement,” and this extended period of involuntary unemployment has given me the chance to pursue that plan a little earlier. It was a small class, only ten students. I enjoyed it thoroughly, and I think I did a pretty good job passing on some knowledge and maybe inspiring some of the students. I’m looking forward to teaching this class again sometime soon.

While reviewing the homework assignments and papers to prepare the final grades I’ve been thinking about the different kinds of students in the class and how they remind me of some of the different employees I used to manage.

1. The “literalists” are the students who really internalize the grading rubric and do exactly what was assigned, no more and no less. For example, this is a class that requires participation in on-line discussion threads between the classroom sessions, and the students are graded according to how often they post, including posting early in the week (instead of posting several times in one day). This is to encourage actual discussion among the students. The literalists do the minimum required number of posts, but miss the point about engaging with other students. They’re like the employees who want to know how their job performance will be measured, including what it looks like when they “exceed expectations,” but fail to understand how their performance fits into the larger picture. They don’t have the inner drive to learn more or contribute more, and they’re unlikely to exercise leadership, unless it’s something they’re going to be “graded on.” In class, they’ll get a good grade, but I wonder how valuable they’re going to be to their future employer.

2. On the other hand, the “generalists” seem to have a deeper understanding of the material, ask questions in class, and engage in the discussions, but don’t always turn in the homework, or turn it in late. They’re not going to get the best grade if I follow the strict guidelines of the grading rubric. They remind me of the rare employees who aren’t thinking only about how they’re going to do on their next performance review. Maybe they don’t care about external rewards like salary increases. They may be motivated by a desire to learn and grow and master new skills. These are the folks I used to really enjoy working with because they tended to take a longer view that was less self-centered. They could be inspired to take on more challenging assignments, including leadership positions.

3. The “strugglers” are the students who are really trying, but just can’t seem to get it. They turn in the work, participate in the classroom, make mistakes, sometimes get frustrated, and appreciate any help they can get. I like them because they try, and I look for ways to give them extra points for the effort. They’re not going to become project managers. This isn’t the right class for them, but they’re going to get through it, and I hope they find a different class or focus area that will be a better use of their talents and skills. I’ve managed people like that who are in the wrong job. Sometimes the organization can accommodate a change in responsibilities that helps these people shine, and when it can, it gets a lot more out of them.

4. Finally, there are the “slackers” who don’t always show up for class, don’t always turn in the homework, don’t participate in class discussions, and generally aren’t engaged at all. They may or may not be aware of the fact that they’re not going to pass the class, and they may or may not care. When they get negative feedback and written warnings that they’re headed for a failing grade, there’s no response. I can’t “fire” these students, and there’s a limit to how much effort I’m willing to put into improving their performance. It doesn’t work if I care more about how a student is doing than they do. As with the strugglers, they will eventually move on to other things, but will they find something that inspires them? Do they have skills and inner drive, and what will it take to draw them out?

Anyway, as I said, these are some of my observations and comparisons. The classroom is a different environment than the workplace, and it’s unlikely that I’ll see any of these students again. The grade they get from this class or any other is not necessarily a reflection of how well they’ll do after they graduate or otherwise leave the university.

How Do You Know It Needs Fixing? August 22, 2014

Posted by Tim Rodgers in Management & leadership, Process engineering, Quality.

We’ve always been told that “if it isn’t broken, don’t fix it.” In the world of statistical process control, making changes to a production process that’s already stable and in-control is considered tampering. There’s a very real chance that you’re going to make it worse, and at the least it’s a bad use of resources that should be committed elsewhere to solve real problems. That’s great advice if you have the historical data and a control chart, but how do you know if a business process needs fixing if you don’t have the data to perform a statistical analysis? How can you tell the difference between a bad business process that can’t consistently deliver good results, and an outlier from a good business process? How do you know if it’s broken?
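For readers who haven’t worked with control charts, here is a simplified sketch of that logic in Python: estimate limits from stable historical data, then treat points outside the limits as candidate special causes rather than proof that the whole process is broken. The data are hypothetical, and for brevity this uses a plain sample standard deviation where a real individuals chart would typically estimate sigma from the moving range.

```python
import statistics

# Stable historical observations from a process (hypothetical data).
history = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 49.7, 50.1, 50.0, 49.9]
center = statistics.mean(history)
sigma = statistics.stdev(history)          # simplified sigma estimate
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Classify new observations against the 3-sigma control limits.
for x in [50.2, 49.6, 51.4]:
    label = "common variation" if lcl <= x <= ucl else "candidate special cause"
    print(f"{x}: {label} (limits {lcl:.2f} to {ucl:.2f})")
```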

We often judge a process by its results, an “ends justify the means” effectiveness standard. If we get what we want from the process, we’re happy. However, we’re also often concerned with how the ends were achieved, which means looking at the cost of the process. This can be literal, in terms of the time required and the labor cost, but it also includes the organizational overhead and the non-value-added work, or waste, that a process can generate. A process that isn’t giving the desired results is obviously a problem that needs fixing, but so is a process that’s inefficient. You have to spend time with the people who routinely use the process to understand the costs.

Sometimes we’re not clear about what results we expect from a process. Simply getting outputs from inputs is not enough; we usually also care about how much time is required, or whether rework is needed (the “quality” of the outputs). We have to look at the process as part of a larger operational system, and at how it helps or hinders the business in achieving its larger objectives. This is often our starting point for identifying processes that need fixing, because these processes create bigger problems that are highly visible.

Sometimes we jump to the conclusion that a process is broken because we get one bad result. This is where a little root cause analysis is needed to determine if there are any differences or extenuating circumstances that may explain the undesirable outcome. In statistical process control we refer to these as special causes. Finding and eliminating special causes is the recommended approach here, not demolishing the process and starting all over again.

If the process does appear to be broken, it’s important to distinguish between a problem with the design of the process and a problem with how the process is executed. A process can look great on paper, but if the assigned people are poorly trained, lack motivation, or don’t understand why the process is even necessary, then they’re unlikely to follow it. People are usually the biggest source of variability in any process. They also tend to blame the process itself, even if they’ve never actually used it as intended. You might think this can be solved by simply applying positional power to compel compliance, but it’s often possible instead to make small changes to the process design that make compliance easier, essentially reducing the influence of “operator variation.” Once again, you have to experience the process through the eyes of the people who are using it, what we in quality call a Gemba walk.

It’s easy to blame “the process” as some kind of inanimate enemy, and there are surely processes in every business that can be improved. However, it’s worth spending a little time to determine exactly what the process is supposed to accomplish, how it fits into the larger business context, and whether there are special causes before launching a major overhaul. Sometimes it’s not really broken at all.

Quality and Production, Partners or Enemies? June 18, 2014

Posted by Tim Rodgers in Management & leadership, Organizational dynamics, Quality.

Lately I’ve been finding interesting topics on LinkedIn group discussions. The other day a contributor described his difficulty in moving from a quality job to a production job, and then back again. He found it hard to maintain the mindset and attitude required to advance quality goals while working in production, and vice versa. In his workplace it seems that quality is often viewed as the enemy of production, and he wondered if this could be reconciled.

I understand what this person is talking about. I’ve worked in high-volume manufacturing environments where any quality hiccup, whether it’s found internally or reported by the customer, is viewed as a threat to the production plan. There can be tremendous pressure to ignore defects and their causes, underestimate their frequency, and downplay their severity in order to keep production going. That is, until the customer complains, in which case everyone looks for someone else to blame.

Ideally, the quality team should be focused on defect prevention: helping to ensure that the design of the product and the production processes are robust and less likely to result in defects, and monitoring processes to identify and eliminate special causes. The production team is clearly a vital and necessary partner in this effort, specifically because they are the ones actually executing these processes, and any adjustment or change to address the causes of defects will be their responsibility.

A little bit of shared insight may help here. The production team needs to understand that building units that do not meet quality requirements will lead to higher internal costs due to scrap and rework. This consumes time and people, and reduces throughput and net production, even if you just throw the bad units into the corner. Shipping bad product to customers jeopardizes the business relationship and could ultimately lead to zero production. Deliberately shipping bad product will get you there faster. The quality team isn’t trying to prevent production, and defects should never be hidden from them.

The quality team needs to provide a clear definition of defects, based on customer requirements, and establish testing, inspection, and audit procedures to find them, based on expected defect frequency, severity, and acceptable quality levels. This should include procedures for stopping production after a defect is found, segregating suspect material to prevent inadvertent shipment, identifying and eliminating the root cause, and restarting production. The production team needs to know how quality issues will be handled, and that these procedures will be run quickly, in a way that minimizes the impact on the production plan.

Finally, senior management and customers need to make it clear that meeting the production plan and on-time delivery targets means nothing without quality. Every organization allocates responsibility across different functions. Quality and production have specialized skills and separate responsibilities, but they must support the same overall goals for the business. Management cannot allow these functions to become enemies.

I Need a New Plan, This One Isn’t Working (Or, Is It?) June 4, 2014

Posted by Tim Rodgers in job search, Process engineering.

At what point should you abandon your plan and make a new one? How do you know when the old plan isn’t going to work? I’ve been thinking about this a lot lately during my current job search. Everyone tells me that I’m doing the right things, that it’s a “numbers game,” and if I just keep doing those things then eventually I’ll land a job. On the other hand I keep thinking of that quote attributed to Einstein, that the definition of insanity is doing the same thing over and over again and expecting a different result.

This seems like a general problem with any plan, including a change management process. When you implement the plan you have some confidence that the plan will yield the results or the improvement that you’re looking for. When those positive results don’t happen as expected, does that mean the plan was the wrong one? Is it the right plan, but you’ve overlooked some hidden influence that prevents the plan from working? Does the system have some kind of latency that delays the impact of the plan, and you just have to be a little patient? If you quit now and change the plan or revert to the old process, are you giving up just before the miracle happens? Conversely, if you do abandon the plan and things suddenly improve, does that prove the plan was the wrong one?

One way to determine if the plan is on-track would be to schedule intermediate checkpoints to measure effectiveness and progress. That works for situations where progress can be measured as a continuous variable. If you’re moving the needle in the right direction, you only have to decide if the magnitude of the improvement is acceptable. This doesn’t work if success/failure is strictly binary. Until I actually get a job, I can’t say whether my current plan (or any other) is working. A job search plan that leads to more job interviews could be said to be more effective in one sense because it improves my chances, but my ultimate goal isn’t just an interview.

So, how do you judge a plan if you can’t measure its progress? This is where the thinking that went into creating the plan or process improvement becomes important. Any plan assumes some kind of cause-and-effect relationship between inputs and outputs. If I do this, or stop doing this, or do this differently, then I will get the desired outcome. If I don’t get the desired outcome, then it may be time to revisit the assumed relationship between inputs and outputs. The plan may be based on a weak causal model that does not account for input variables with a stronger influence on the outcome. The desired outcome itself may be poorly defined: Am I looking for any job, or a specific kind of job? My plan may be the right one for an outcome that’s different than the one I really want.

Patience has never been my strong suit. It can be hard to stick with a plan that doesn’t seem to be delivering the right results. However, before abandoning a plan for something else that has no better chance of succeeding, it’s worth spending a little time examining the assumed model to check for flaws.


Wasting Time On Meaningless Data May 28, 2014

Posted by Tim Rodgers in baseball, Management & leadership, Process engineering.

One of the things I love about baseball is that it’s amenable to assessing and studying performance using numbers, and that appeals to my analytical side. Since I started following the game as a boy there’s been a lot of questioning of the traditional measures of player performance, particularly with the emergence of sabermetrics as a serious area of study. Older metrics such as pitcher wins and batter RBIs have been challenged as inadequate and sometimes misleading because they don’t reflect the isolated contribution of a single player. Just because they’re easy to measure doesn’t mean that they contribute to a better understanding.

One of the breakthroughs came when people started looking for ways to associate the performance of a player with the objective of the team, which is to win games. The team wins games when it scores more runs than the other team, so a player who significantly improves the likelihood of scoring runs or preventing runs is valuable. That may seem obvious, but the older, traditional metrics don’t reflect it. It’s a complicated empirical model, and the sabermetric community has been experimenting with new metrics based on data that have never been systematically collected before. As a devoted fan with an analytical disposition I’ve been watching this with great interest.

All of this reminds me of organizations that collect and report data that are disconnected from, or unrelated to, the real drivers of success. I’ve written about this before, arguing that if the data aren’t being used to manage the business, then the business should step back and revisit its metrics. More data and more metrics don’t always lead to better insight and decision-making.

Here are some key questions that should be asked when evaluating business metrics:

1. Can you control it? Can you “move the needle?” If you can’t control it, you can’t manage it, and that’s worth knowing. It may still be worth tracking, because it provides background information that can inform future decisions.

2. Is there a correlation between this metric and business success? If your deliberate action moves the needle in the right direction, is the business significantly better off? Conversely, if inaction leads to no change, or a change in the wrong direction, is the business worse off? I realize we’re talking about complex interactions and imperfect models, but if there isn’t at least a demonstrable correlation, and ideally a causal link, then the metric has no value. A quick check along these lines is sketched below.
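Here is a rough sketch of that second question in Python, using hypothetical monthly figures. Correlation alone doesn’t establish causation, but if the metric doesn’t even track the outcome, that’s a strong hint it isn’t earning its place on the dashboard.

```python
import statistics

# Hypothetical monthly figures: a quality metric and a business outcome.
first_pass_yield = [71, 74, 78, 75, 80, 83, 82, 86]      # percent
gross_margin = [1.8, 1.9, 2.2, 2.0, 2.4, 2.6, 2.5, 2.9]  # $M

# Pearson's r (statistics.correlation requires Python 3.10+).
r = statistics.correlation(first_pass_yield, gross_margin)
print(f"Pearson r = {r:.2f}")
```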

Finally, a word about balanced scorecards. I’m all for simplicity and focus in our communications, and we should seek the minimum set of metrics that describes status and progress. A scorecard that tries to provide a complete picture runs the risk of ambiguity and confusion.

Metrics should be regularly reviewed for their effectiveness. If they’re not helping to manage the business, we should find new ones.
