
Quality Decisions in Hindsight July 25, 2016

Posted by Tim Rodgers in Management & leadership, Operations, Organizational dynamics, Process engineering, Product design.

For the last several years there’s been at least one high-profile case of quality failure that captures the attention of the business press for months at a time. Since late 2015 and early 2016 we’ve been watching to see if air-bag supplier Takata, iconic auto maker Volkswagen, and fast food chain Chipotle will survive their highly-publicized quality missteps. There’s always a lot of apologizing to the public, and a commitment to conduct internal investigations to identify and eliminate the causes of field failures. Senior management and boards of directors scramble to regain the trust of their customers.

I’m not at all surprised by these events; if anything, what surprises me is that they don’t happen more often. We should expect to continue hearing about similar catastrophic quality problems from otherwise reputable companies, despite all the talk about six sigma and customer satisfaction, and despite all the investments in quality improvement programs. It’s the nature of business.



Is This The Right Problem To Work On? July 11, 2016

Posted by Tim Rodgers in Management & leadership, Process engineering, Project management, strategy.

The ability to prioritize and focus is widely praised as a characteristic of successful business leaders. There are too many things to do, and not enough time or resources to do them all, much less do them all well. Leaders have to make choices, not just to determine how they spend their own time, but how their teams should be spending theirs. This is the definition of opportunity cost: when we consciously choose to do this instead of that, we forgo or at least postpone any benefits or gains that might have been achieved otherwise.

One of the most common choices that we consider in business is between short-term operational goals vs. longer-term strategic change management. Some people talk about the challenges of “building the plane while flying the plane,” or “changing the tires while driving the bus.” Both of these metaphors emphasize the difficulties of keeping the business running and generating revenue under the current model while developing and implementing a new business model or strategic direction.


Formulas Without Understanding February 22, 2016

Posted by Tim Rodgers in Education, Management & leadership.

I’ve just started my second year teaching courses in supply chain management and operations management at two local universities. It’s been a long time since I was a teaching assistant as a graduate student, and my time outside the academic world has taught me a few things about educational objectives and what students really should be learning. One of the things I’ve noticed in my business classes is a tendency of some teachers and textbook authors to focus on formulas that give a “right” answer. I think that’s a mistake, and when we do that we’re not helping business students or their future employers.


The Danger of Quick Fixes September 17, 2014

Posted by Tim Rodgers in Management & leadership, Process engineering, Quality.

I think it’s fair to say that most people make better decisions when they have more time. With more time we can collect more data, consult with people who have more experience, and weigh the alternatives before choosing a course of action. In the specific case of problem solving, we can propose alternate root causes and perform experiments to verify the cause before implementing a solution. This kind of disciplined approach helps ensure that the problem doesn’t reoccur.

In business, though, we rarely have enough time, or at least not all the time we wish we had. All of us make small daily decisions about how to spend our time and resources based on external priorities and internal heuristics. Some of us have jobs in rapidly-changing or unstable environments, with periodic crises that need management attention. Unresolved situations create ambiguity in the organization and ultimately cost money, and that cost creates pressure to do something quickly. There’s an emotional and perceptual component as well: it “looks better” when we’re doing “something” instead of sitting and thinking about it. After all, “you can always fix it later.”

Of course “fixing it later” comes at its own cost, but that cost is often underestimated and underappreciated. It’s tempting to implement a quick fix while continuing to investigate the problem. It takes the pressure off by addressing the organization’s need for action, which is both good and bad. The danger is that the quick fix becomes the de facto solution once the urgency is removed and we become distracted by another problem. The quick fix can also bias subsequent root cause analysis, especially if it appears to be effective in the short term.

Please note that I’m not suggesting that every decision or problem solving effort requires more time and more inputs. I’m not advocating “analysis paralysis.” We’re often faced with situations where we have to work with incomplete and sometimes even inaccurate data that may not even accurately represent the true problem. Sometimes a quick fix is exactly what’s needed: a tourniquet to stop the bleeding. However, corrective action is not the same as preventive action. If we want better decisions and better long-term outcomes, let’s not forget that a quick fix is a temporary measure.

Teaching Students and Managing Subordinates August 29, 2014

Posted by Tim Rodgers in Management & leadership.

Last week I finished teaching a class in project management at a local university. I’ve always planned to spend my time teaching as an adjunct professor after “retirement,” and this extended period of involuntary unemployment has given me the chance to pursue that plan a little earlier. It was a small class, only ten students. I enjoyed it thoroughly, and I think I did a pretty good job passing on some knowledge and maybe inspiring some of the students. I’m looking forward to teaching this class again sometime soon.

While reviewing the homework assignments and papers to prepare the final grades I’ve been thinking about the different kinds of students in the class and how they remind me of some of the different employees I used to manage.

1. The “literalists” are the students who really internalize the grading rubric and do exactly what was assigned, no more and no less. For example, this is a class that requires participation in on-line discussion threads between the classroom sessions, and the students are graded according to how often they post, including posting early in the week (instead of posting several times in one day). This is to encourage actual discussion among the students. The literalists do the minimum required number of posts, but miss the point about engaging with other students. They’re like the employees who want to know how their job performance will be measured, including what it looks like when they “exceed expectations,” but fail to understand how their performance fits into the larger picture. They don’t have the inner drive to learn more or contribute more, and they’re unlikely to exercise leadership, unless it’s something they’re going to be “graded on.” In class, they’ll get a good grade, but I wonder how valuable they’re going to be to their future employer.

2. On the other hand, the “generalists” seem to have a deeper understanding of the material, ask questions in class, and engage in the discussions, but don’t always turn in the homework, or turn it in late. They’re not going to get the best grade if I follow the strict guidelines of the grading rubric. They remind me of the rare employees who aren’t thinking only about how they’re going to do on their next performance review. Maybe they don’t care about external rewards like salary increases. They may be motivated by a desire to learn and grow and master new skills. These are the folks I used to really enjoy working with because they tended to take a longer view that was less self-centered. They could be inspired to take on more challenging assignments, including leadership positions.

3. The “strugglers” are the students who are really trying, but just can’t seem to get it. They turn in the work, participate in the classroom, make mistakes, sometimes get frustrated, and appreciate any help they can get. I like them because they try, and I look for ways to give them extra points for the effort. They’re not going to become project managers. This isn’t the right class for them, but they’re going to get through it, and I hope they find a different class or focus area that will be a better use of their talents and skills. I’ve managed people like that who are in the wrong job. Sometimes the organization can accommodate a change in responsibilities that will help these people shine, and when it can, it gets a lot more out of these folks.

4. Finally, there are the “slackers” who don’t always show up for class, don’t always turn in the homework, don’t participate in class discussions, and generally aren’t engaged at all. They may or may not be aware of the fact that they’re not going to pass the class, and they may or may not care. When they get negative feedback and written warnings that they’re headed for a failing grade, there’s no response. I can’t “fire” these students, and there’s a limit to how much effort I’m willing to put into improving their performance. It doesn’t work if I care more about how a student is doing than they do. As with the strugglers, they will eventually move on to other things, but will they find something that inspires them? Do they have skills and inner drive, and what will it take to draw them out?

Anyway, as I said, these are some of my observations and comparisons. The classroom is a different environment than the workplace, and it’s unlikely that I’ll see any of these students again. The grade they get from this class or any other is not necessarily a reflection of how well they’ll do after they graduate or otherwise leave the university.

How Do You Know It Needs Fixing? August 22, 2014

Posted by Tim Rodgers in Management & leadership, Process engineering, Quality.

We’ve always been told that “if it isn’t broken, don’t fix it.” In the world of statistical process control, making changes to a production process that’s already stable and in-control is considered tampering. There’s a very real chance that you’re going to make it worse, and at the least it’s a bad use of resources that should be committed elsewhere to solve real problems. That’s great advice if you have the historical data and a control chart, but how do you know if a business process needs fixing if you don’t have the data to perform a statistical analysis? How can you tell the difference between a bad business process that can’t consistently deliver good results, and an outlier from a good business process? How do you know if it’s broken?

We often judge a process by its results, an “ends justify the means” effectiveness standard. If we get what we want from the process, we’re happy. However, we’re also often concerned with how those ends were achieved, which means looking at the cost of the process. This can be measured literally, in terms of the time required and the labor cost, and also in terms of the organizational overhead required and the non-value-added activity, or waste, that a process can generate. A process that isn’t giving the desired results is obviously a problem that needs fixing, but so is a process that’s inefficient. You have to spend time with the people who routinely use the process to understand the costs.

Sometimes we’re not clear about what results we expect from a process. Simply getting outputs from inputs is not enough; we also usually care about how much time is required, or whether rework is needed (the “quality” of the outputs). We have to look at the process as part of a larger operational system, and at how the process helps or hinders the business in achieving its larger objectives. This is often our starting point for identifying processes that need fixing, because these processes create bigger, highly-visible problems.

Sometimes we jump to the conclusion that a process is broken because we get one bad result. This is where a little root cause analysis is needed to determine if there are any differences or extenuating circumstances that may explain the undesirable outcome. In statistical process control we refer to these as special causes. Finding and eliminating special causes is the recommended approach here, not demolishing the process and starting all over again.
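In statistical process control, the distinction between routine variation and special causes is usually made with a control chart. As a rough illustration (the data, functions, and limits here are hypothetical, not taken from any real process), here’s a minimal sketch of an individuals chart that flags points falling outside the 3-sigma control limits:

```python
# Minimal sketch of a Shewhart individuals chart. The data below are
# made up for illustration; real limits come from historical process data.

def control_limits(samples):
    """Estimate the center line and 3-sigma control limits."""
    n = len(samples)
    mean = sum(samples) / n
    # Moving-range estimate of short-term variation (d2 = 1.128 for n=2)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean, mean + 3 * sigma

def special_causes(samples):
    """Return (index, value) pairs for points outside the control limits."""
    lcl, _, ucl = control_limits(samples)
    return [(i, x) for i, x in enumerate(samples) if x < lcl or x > ucl]

# A mostly-stable process with one out-of-control reading
history = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 13.5, 10.0, 9.9, 10.1]
print(special_causes(history))  # flags the 13.5 reading
```

In this sketch the 13.5 reading falls outside the upper control limit and would be investigated as a possible special cause; the rest of the variation would be treated as noise inherent to the process, where “fixing” it would amount to tampering.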

If the process does appear to be broken, it’s important to distinguish between a problem with the design of the process and a problem with how the process is executed. A process can look great on paper, but if the assigned people are poorly trained or lack motivation or don’t understand why the process is even necessary, then they’re unlikely to follow the process. People are usually the biggest source of variability in any process. They also tend to blame the process as the problem, even if they’ve never actually used it as intended. You might think this can be solved by simply applying positional power to compel compliance, but it’s often possible to make small changes to the process design to make compliance easier, essentially reducing the influence of “operator variation.” Once again, you have to experience the process through the eyes of the people who are using it, what we in quality call a Gemba Walk.

It’s easy to blame “the process” as some kind of inanimate enemy, and there are surely processes in every business that can be improved. However it’s worth spending a little time to determine exactly what the process is supposed to accomplish, how it fits into the larger business context, and whether there are special causes before launching a major overhaul. Sometimes it’s not really broken at all.

Quality Under Constraints: Making the Best of It July 9, 2014

Posted by Tim Rodgers in Management & leadership, Product design, Quality.

Lately I’ve been seeing news reports that illustrate the difficult environment that most quality professionals operate in. Here’s one example: executives from the US Chemical Safety Board (CSB) were recently called to testify before the US House of Representatives Committee on Oversight and Government Reform to address recent, highly-publicized delays and whistle-blower complaints. Former board members and employees have described a dysfunctional culture where criticism of management is considered “disloyal.” Independent investigators have reported a large and growing backlog of unfinished investigations, a situation made worse by employee attrition. The former employees report a failure to prioritize the pending investigations, “nor is there any discussion of the priorities.” The current CSB Chairman cited a lack of resources in his testimony: “We are a very small agency charged with a huge mission of investigating far more accidents than we have the resources to tackle.”

Obviously the report of a dysfunctional culture at the CSB is something that should be seriously investigated and addressed. However, my interest in this story is the struggle to prioritize investigations, do a thorough job, and close them out while operating within a constrained budget and increasing workload. I think everyone has to deal with this kind of problem in their work: too much to do and not enough time or resources to do it all with the level of completeness and quality that we would like. The old joke is that you can’t have cost and schedule and quality; you can only choose two.

However, people who work in quality feel this problem more acutely than most. After all, you can directly measure cost and schedule, but it’s a lot harder to measure quality objectively. Quality professionals deal with statistical probabilities and risks, rarely with 100% certainties. In most cases, all you can do is minimize the risk of failure within the given constraints, and make sure everyone understands the inherent assumptions.

A good example is the hardware product development environment. The release schedule and ship dates are often constrained by contractual commitments to channels or customers. If the design work runs longer than planned, as it almost always does when you’re doing something new, the time to fully test and qualify the design before going into production gets squeezed. This is the same problem that happens with teams that use the old waterfall model for software development.

Yes, you shouldn’t wait until the end of the project to start thinking about quality, and there are certainly things you can do to enhance quality while you’re doing the work, and sometimes quality itself can be a constraint (as in highly-regulated environments). However I contend that managing quality will always be about prioritizing; applying good judgment based on experience, and data, and statistical models; and generally doing the best you can within constraints. Ultimately our success in managing quality should be judged by the soundness of our processes and methods, and our commitment to continuous improvement.

For the CSB, these are the questions I would ask if I were on the House Committee looking into their effectiveness: What is the quality requirement for their investigation reports, and how well do they understand it? How much faster could they release reports if those requirements were changed, while still operating under a limited staff and budget? What is their process for prioritizing investigations? CSB management should certainly be changed if the work environment has become dysfunctional, but they should also be changed if they can’t articulate a clear process for managing quality within the constraints they’ve been given.




Are Face-to-Face Meetings Obsolete? April 24, 2014

Posted by Tim Rodgers in Communication, International management, Management & leadership.

I’ll get right to it: no, they’re not. Yes, we have the technology to communicate instantly with people all over the world, whether by voice or text or even video. We can share files and review the same presentation in real-time. If a project has been divided into reasonable chunks, we can multiply our productivity by “following the sun.”

And yet, the technology does not guarantee effective communication. Faster worldwide access to co-workers or customers or suppliers will not necessarily overcome distrust, misalignment, ambiguity, and confusion. Communication requires not just a channel, but also a sender and a receiver where the signal is processed into usable information. It’s that signal processing step that we overlook when we focus only on speed and accessibility.

I believe it’s important to establish a professional relationship with distant partners before relying on electronic communication. This is going to sound bad, but it’s easier for me to ignore someone’s e-mail or voice mail if I’ve never met them face-to-face and spent time with them, ideally over a meal. This is especially important if I live in a different country than the other person. Differences in language and culture can be very hard to overcome without a foundation of trust.

I understand that business travel costs money, and this is yet another situation where we try to balance real costs in the present against hoped-for benefits in the future. Travel budgets are always an easy target during times of expense reductions. I don’t have the numbers to build a financial justification, but I still believe it’s worth it, at least at the start of a new relationship with remote co-workers or a supplier. Periodic travel after that helps to maintain the relationship and head off any sources of confusion or deviation.

Expanded wireless access and faster speeds will enable better video conferencing, but I doubt it will ever provide a substitute for the informal and spontaneous communication that happens when people are in the same place at the same time. I love the new technology, but in the end it’s people who do the work.



Managing By The Numbers, Continued April 10, 2014

Posted by Tim Rodgers in Communication, Management & leadership.

Picture this: a group of senior managers are sitting around a conference table listening to a monthly or quarterly business review. It’s a procession of slides, mostly the usual bullet-text items but occasionally punctuated by a table or a graph with some numbers. The numbers are important because they suggest some kind of analytical rigor and objectivity, but the fact is that most of the people in the room have no idea where the numbers come from, whether they have any relationship to the business’s current or future success, what can or should be done to make the numbers look different next time, and whether they should care.

I’ve written about this before, and I’m still surprised at how many organizations continue to routinely report numbers and KPIs without making any sense of them. Here’s where this often breaks down:

1. A single measurement means nothing without a point of comparison. You have to either measure the same thing repeatedly over a period of time (to determine stability or detect a trend), or measure the same thing under different conditions (to determine the effect of the environment). I’ve been in meetings where people argue over whether an ROI for a project is high enough without comparing it with other possible investments or a standard investment hurdle rate.

2. Any performance measurement must be assessed within the larger business context. Not all results or trends require action. In fact, it’s best to define targets or control limits that trigger action rather than overreacting and committing time and money to address imagined performance “problems.”

3. If you’re not actually going to act on that performance metric, then it may not be worth measuring and reporting in the first place. I recommend a bottom-up review of all KPIs, at least annually, to verify that they’re still valuable indicators of the overall health of the business and/or indicators that progress is being made on the strategic priorities.

4. The two questions that should be asked during any KPI review: (a) If the metric is not hitting the target, what are we doing about it? (b) If the metric is hitting or exceeding the target, what did we do to achieve that (or, is it just a random occurrence), and is that performance sustainable?
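The first point above (a single measurement means nothing without a point of comparison) can be made concrete with a small sketch. The project names, ROI figures, and hurdle rate below are all hypothetical, chosen only to illustrate ranking candidate investments against a standard hurdle rate rather than debating one ROI number in isolation:

```python
# Hypothetical figures for illustration: a project's ROI only means
# something relative to a hurdle rate and to the alternatives.
HURDLE_RATE = 0.12  # assumed minimum acceptable return

projects = {
    "Upgrade line A": 0.18,
    "New CRM rollout": 0.09,
    "Supplier audit program": 0.15,
}

# Keep only the projects that clear the hurdle, ranked best-first
viable = sorted(
    ((name, roi) for name, roi in projects.items() if roi >= HURDLE_RATE),
    key=lambda item: item[1],
    reverse=True,
)
for name, roi in viable:
    print(f"{name}: {roi:.0%}")
```

Here the CRM rollout’s 9% ROI isn’t “bad” in the abstract; it’s rejected because two alternatives clear the 12% hurdle and it doesn’t, which is exactly the comparison that’s missing when a single number is argued over in a review meeting.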

Measuring performance is the beginning of understanding, not the end. We’re not doing our job as managers if we’re measuring performance but ignoring the message, relying only on our gut.

What Are Individual Accomplishments Within a Team Environment? April 7, 2014

Posted by Tim Rodgers in Management & leadership, Process engineering, Project management.

The other day I responded to a question on LinkedIn about whether performance reviews were basically worthless because we all work in teams and individual accomplishments are hard to isolate. It’s true that very few jobs require us to work entirely independently, and our success does depend in large part on the performance of others. But, does that really mean that individual performance can’t be evaluated at all?

If I assign a specific task or improvement project to someone, I should be able to determine whether the project was completed, although there may be qualifiers about schedule (completed on-time?), cost (within budget?), and quality (all elements completed according to requirements?). However, regardless of whether the task was completed or not, or if the results weren’t entirely satisfactory, how much of that outcome can be attributed to the actions of a single person? If they weren’t successful, how much of that failure was due to circumstances that were within or beyond their control? If they were successful, how much of the credit can they rightfully claim?

I believe we can evaluate individual performance, but we have to consider more than just whether tasks were completed or if improvement occurred, and that requires a closer look. We have to assess what got done, how it got done, and the influence of each person who was involved. Here are some of the considerations that should guide individual performance reviews:

1. Degree of difficulty. Some assignments are obviously more challenging with a higher likelihood of failure. Olympic athletes get higher scores when they attempt more-difficult routines, and we should credit those who have more difficult assignments, especially when they volunteer for those challenges.

2. Overcoming obstacles and mitigating risks. That being said, simply accepting a challenging assignment is not enough. We should look for evidence of assessing risks, taking proactive steps to minimize those risks, and making progress despite obstacles. I want to know what each person did to avoid trouble, and what they did when it happened anyway.

3. Original thinking and creative problem solving. Innovation isn’t just something we look for in product design. We should encourage and reward people who apply reasoning skills based on their training and experience.

4. Leadership and influence. Again, this gets to the “how.” Because the work requires teams and other functions and external partners and possibly customers, I want to know how each person interacted with others, and how they obtained their cooperation. Generally, how did they use the resources available to them?

5. Adaptability. Things change, and they can change quickly. Did this person adapt and adjust their plans, or perhaps even anticipate the change?

This is harder for managers writing performance reviews, but not impossible. It requires that we monitor the work as it’s being done instead of evaluating it only after it’s completed, and that we recognize the behaviors we value in the organization.
