
Is This The Right Problem To Work On? July 11, 2016

Posted by Tim Rodgers in Management & leadership, Process engineering, Project management, strategy.

The ability to prioritize and focus is widely praised as a characteristic of successful business leaders. There are too many things to do, and not enough time or resources to do them all, much less do them all well. Leaders have to make choices, not just to determine how they spend their own time, but how their teams should be spending theirs. This is the definition of opportunity cost: when we consciously choose to do this instead of that, we forgo or at least postpone any benefits or gains that might have been achieved otherwise.

One of the most common choices that we consider in business is between short-term operational goals vs. longer-term strategic change management. Some people talk about the challenges of “building the plane while flying the plane,” or “changing the tires while driving the bus.” Both of these metaphors emphasize the difficulties of keeping the business running and generating revenue under the current model while developing and implementing a new business model or strategic direction.

How Do You Know It Needs Fixing? August 22, 2014

Posted by Tim Rodgers in Management & leadership, Process engineering, Quality.

We’ve always been told that “if it isn’t broken, don’t fix it.” In the world of statistical process control, making changes to a production process that’s already stable and in-control is considered tampering. There’s a very real chance that you’re going to make it worse, and at the least it’s a bad use of resources that should be committed elsewhere to solve real problems. That’s great advice if you have the historical data and a control chart, but how do you know if a business process needs fixing if you don’t have the data to perform a statistical analysis? How can you tell the difference between a bad business process that can’t consistently deliver good results, and an outlier from a good business process? How do you know if it’s broken?

We often judge a process by its results, an "ends justify the means" effectiveness standard. If we get what we want from the process, we're happy. However, we're also often concerned with how the ends were achieved, which means looking at the cost of the process. This cost can be literal, in terms of the time and labor required, but it also includes the organizational overhead and the non-value-added work, or waste, that a process can generate. A process that isn't giving the desired results is obviously a problem that needs fixing, but so is a process that's inefficient. You have to spend time with the people who routinely use the process to understand the costs.

Sometimes we’re not clear about what results we expect from a process. Simply getting outputs from inputs is not enough; we also usually care about how much time is required, or if rework is required (the “quality” of the outputs). We have to look at the process as part of a larger operational system, and how the process helps or hinders the business achieve greater objectives. This is often our starting point for identifying processes that need fixing because these processes create bigger problems that are highly-visible.

Sometimes we jump to the conclusion that a process is broken because we get one bad result. This is where a little root cause analysis is needed to determine if there are any differences or extenuating circumstances that may explain the undesirable outcome. In statistical process control we refer to these as special causes. Finding and eliminating special causes is the recommended approach here, not demolishing the process and starting all over again.

If the process does appear to be broken, it’s important to distinguish between a problem with the design of the process and a problem with how the process is executed. A process can look great on paper, but if the assigned people are poorly trained or lack motivation or don’t understand why the process is even necessary, then they’re unlikely to follow the process. People are usually the biggest source of variability in any process. They also tend to blame the process as the problem, even if they’ve never actually used it as intended. You might think this can be solved by simply applying positional power to compel people, but it’s often possible to make small changes to the process design to make compliance easier, essentially reducing the influence of “operator variation.” Once again, you have to experience the process through the eyes of the people who are using it, what we in quality call a Gemba Walk.

It’s easy to blame “the process” as some kind of inanimate enemy, and there are surely processes in every business that can be improved. However it’s worth spending a little time to determine exactly what the process is supposed to accomplish, how it fits into the larger business context, and whether there are special causes before launching a major overhaul. Sometimes it’s not really broken at all.

I Need a New Plan, This One Isn’t Working (Or, Is It?) June 4, 2014

Posted by Tim Rodgers in job search, Process engineering.

At what point should you abandon your plan and make a new one? How do you know when the old plan isn't going to work? I've been thinking about this a lot lately during my current job search. Everyone tells me that I'm doing the right things, that it's a "numbers game," and that if I just keep doing those things then eventually I'll land a job. On the other hand I keep thinking of that quote attributed to Einstein, that the definition of insanity is doing the same thing over and over again and expecting a different result.

This seems like a general problem with any plan, including a change management process. When you implement the plan you have some confidence that the plan will yield the results or the improvement that you’re looking for. When those positive results don’t happen as expected, does that mean the plan was the wrong one? Is it the right plan, but you’ve overlooked some hidden influence that prevents the plan from working? Does the system have some kind of latency that delays the impact of the plan, and you just have to be a little patient? If you quit now and change the plan or revert to the old process, are you giving up just before the miracle happens? Conversely, if you do abandon the plan and things suddenly improve, does that prove the plan was the wrong one?

One way to determine if the plan is on-track would be to schedule intermediate checkpoints to measure effectiveness and progress. That works for situations where progress can be measured as a continuous variable. If you’re moving the needle in the right direction, you only have to decide if the magnitude of the improvement is acceptable. This doesn’t work if success/failure is strictly binary. Until I actually get a job, I can’t say whether my current plan (or any other) is working. A job search plan that leads to more job interviews could be said to be more effective in one sense because it improves my chances, but my ultimate goal isn’t just an interview.

So, how do you judge a plan if you can’t measure its progress? This is where the thinking that went into creating the plan or process improvement becomes important. Any plan assumes some kind of cause-and-effect relationship between inputs and outputs. If I do this, or stop doing this, or do this differently, then I will get the desired outcome. If I don’t get the desired outcome, then it may be time to revisit the assumed relationship between inputs and outputs. The plan may be based on a weak causal model that does not account for input variables with a stronger influence on the outcome. The desired outcome itself may be poorly defined: Am I looking for any job, or a specific kind of job? My plan may be the right one for an outcome that’s different than the one I really want.

Patience has never been my strong suit. It can be hard to stick with a plan that doesn’t seem to be delivering the right results. However, before abandoning a plan for something else that has no better chance of succeeding, it’s worth spending a little time examining the assumed model to check for flaws.

Sustaining Improvements: Self-Governance vs. Policing May 21, 2014

Posted by Tim Rodgers in Management & leadership, Organizational dynamics, Process engineering.

Last month I listened to a presentation about an implementation of 5S methodology at a local company. The work group spent a lot of time cleaning up, installing shadow boxes to keep their tools in order, and getting rid of unused equipment that was stored in the workplace. The before-and-after photos showed a lot of improvement, and it certainly looked like a successful initiative.

The last slide in the presentation described this company’s efforts to “Sustain,” the 5th S. The speaker reported that there was some initial enthusiasm for the changes, but unfortunately it seems that within a short period of time people started to fall back to their old ways of casual disorganization. Sustaining the improvement apparently required periodic audits and reminders, and I’m guessing that this company’s management is wondering why the team couldn’t stick with it on their own.

This is a common complaint in many process improvement efforts, and one of the most frequent reasons why these efforts fail. Unless the target group acknowledges and appreciates the benefits of the improvement, and is therefore committed to maintaining it, people are likely to revert to the old ways of doing things. Unfortunately, change management leaders sometimes neglect to gain the support of the target group, or impose a solution to a problem that hasn't been recognized. This is especially true for changes that are significantly different or "unnatural," where the team's work habits and skepticism provide resistance.

It’s far more effective for teams to monitor their own implementation and maintenance of the change instead of relying on audits or management oversight to make sure everyone is now “doing it right.” Self-governance requires buy-in, but it also means that the change is really an improvement with clear benefits.

How Important Is Industry Familiarity? April 17, 2014

Posted by Tim Rodgers in job search, Organizational dynamics, strategy.

I’m once again “between jobs” and “in-transition,” and I’ve been spending a lot of time looking at job postings. Every position seems to emphasize the preference or requirement for applicants with industry experience. It’s easy to imagine that many applicants are immediately eliminated from consideration without it.

I understand why familiarity with an industry is valued in a candidate. Different industries are characterized by different combinations of suppliers, internal value delivery systems, channels, competitors, and customers. People who work in the industry understand the relationships between these elements, and that understanding is an important consideration when setting priorities and making decisions. It takes time to learn that in a new job, and people who already have the experience don't need to go through a learning curve and can theoretically make a more immediate impact.

Industry familiarity doesn’t seem to be something you can acquire through independent study and observation; you have to actually work in the industry. This means that your preferred candidates are likely going to be people who have worked at your competitors, or possibly your suppliers, channels, or customers, depending on how broadly you define your industry.

This leads to a question I’ve been puzzling over: what are the unique characteristics of an industry that are true differentiators? What really distinguishes one industry from another, and what is the significance of those differences when considering job applicants?

In my career I've worked at a defense contractor, several OEMs in the consumer electronics industry, a supplier to the semiconductor manufacturing industry, and most recently a supplier to the power generation and utilities industries. Different customers, different sales channels, different production volumes, and different quality expectations and regulatory environments. Some of the suppliers were the same, but most were different. Some produced internally, and some outsourced. Some of these companies competed on cost, some on technology. My modest assessment is that I've been successful in all of these industries.

Industry experience provides familiarity, but is industry experience an accurate predictor of success in a new job? What skills are really needed to succeed, and how transferable are a person’s skills from one industry to another? Could a unique perspective derived from a diversity of experiences be more valuable than industry familiarity? These are the questions that should be considered when writing a job posting and evaluating applicants.

Fake It Until You Make It: Is Structural Change a Prerequisite? April 14, 2014

Posted by Tim Rodgers in Management & leadership, Process engineering.

Change management is frustrating. It can take a maddeningly long time to convince the required supporters that change is necessary, and then more time to get everyone moving in the new direction. Then once the tipping point is reached it becomes a runaway train. When the benefits of the change are significant and well-understood by all, there's a lot of pressure to get it done already without regard to process, documentation, training, or other "details." It's hard to manage expectations during this transitional phase. Can't we just hold on for a minute while we get our act together?

Well, you could, but would you risk losing momentum and tempting people to revert to the old ways of doing things? Can change management succeed when it’s implemented on a limited basis without structural changes? Can you “make it up as you go along,” and firm it up later?

I don’t see why not. In fact, there’s a lot of value in this kind of launch-and-learn approach. It’s unlikely that you’ll be able to anticipate and plan for all the consequences of the change, and early experiences after the change is implemented will identify these improvements. For example, the change may impact processes and organizations that are infrequently engaged, and it might take time to discover that the change is more or less effective than you expected.

Consolidating the change and fully integrating it into the system is still important and shouldn't be neglected. I've seen many quality management systems and process documents that are completely out-of-date because they were never updated after the change. At minimum that's a non-compliance on an external audit, but it's also a potential source of confusion when new people try to figure out what the right process is.

Nevertheless, if the change is worth doing, then it’s worth looking into ways of implementing it quickly in order to realize some benefits, demonstrate some early successes, and build momentum.

Managing By The Numbers, Continued April 10, 2014

Posted by Tim Rodgers in Communication, Management & leadership.

Picture this: a group of senior managers are sitting around a conference table listening to a monthly or quarterly business review. It’s a procession of slides, mostly the usual bullet-text items but occasionally punctuated by a table or a graph with some numbers. The numbers are important because they suggest some kind of analytical rigor and objectivity, but the fact is that most of the people in the room have no idea where the numbers come from, whether they have any relationship to the business’s current or future success, what can or should be done to make the numbers look different next time, and whether they should care.

I’ve written about this before, and I’m still surprised at how many organizations continue to routinely report numbers and KPIs without making any sense of them. Here’s where this often breaks down:

1. A single measurement means nothing without a point of comparison. You have to either measure the same thing repeatedly over a period of time (to determine stability or detect a trend), or measure the same thing under different conditions (to determine the effect of the environment). I’ve been in meetings where people argue over whether an ROI for a project is high enough without comparing it with other possible investments or a standard investment hurdle rate.

2. Any performance measurement must be assessed within the larger business context. Not all results or trends require action. In fact, it’s best to define targets or control limits that trigger action rather than overreacting and committing time and money to address imagined performance “problems.”

3. If you’re not actually going to act on that performance metric, then it may not be worth measuring and reporting in the first place. I recommend a bottoms-up review of all KPIs, at least annually, to verify that they’re still valuable indicators of the overall health of the business and/or indicators that progress is being made on the strategic priorities.

4. The two questions that should be asked during any KPI review: (a) If the metric is not hitting the target, what are we doing about it? (b) If the metric is hitting or exceeding the target, what did we do to achieve that (or, is it just a random occurrence), and is that performance sustainable?
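
Point 1's ROI argument can be sketched in a few lines of Python (hypothetical figures, and `worth_pursuing` is a name I've made up for illustration): a project's ROI only means something when it's compared against the hurdle rate and the other investments competing for the same money.

```python
def worth_pursuing(roi, alternatives, hurdle_rate):
    """A single ROI means little on its own; compare it against the
    investment hurdle rate and the other available investments."""
    clears_hurdle = roi >= hurdle_rate
    beats_alternatives = all(roi >= alt for alt in alternatives)
    return clears_hurdle and beats_alternatives

# Hypothetical: a 12% project vs. a 10% hurdle rate
print(worth_pursuing(0.12, alternatives=[0.08, 0.15], hurdle_rate=0.10))  # False: a 15% option exists
print(worth_pursuing(0.12, alternatives=[0.08, 0.09], hurdle_rate=0.10))  # True
```

The same 12% number leads to opposite conclusions depending on the comparison set, which is exactly why arguing over the number in isolation goes nowhere.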

Measuring performance is the beginning of understanding, not the end. We’re not doing our job as managers if we’re measuring performance but ignoring the message, relying only on our gut.

What Are Individual Accomplishments Within a Team Environment? April 7, 2014

Posted by Tim Rodgers in Management & leadership, Process engineering, Project management.

The other day I responded to a question on LinkedIn about whether performance reviews were basically worthless because we all work in teams and individual accomplishments are hard to isolate. It’s true that very few jobs require us to work entirely independently, and our success does depend in large part on the performance of others. But, does that really mean that individual performance can’t be evaluated at all?

If I assign a specific task or improvement project to someone, I should be able to determine whether the project was completed, although there may be qualifiers about schedule (completed on-time?), cost (within budget?), and quality (all elements completed according to requirements?). However, regardless of whether the task was completed or not, or if the results weren’t entirely satisfactory, how much of that outcome can be attributed to the actions of a single person? If they weren’t successful, how much of that failure was due to circumstances that were within or beyond their control? If they were successful, how much of the credit can they rightfully claim?

I believe we can evaluate individual performance, but we have to consider more than just whether tasks were completed or if improvement occurred, and that requires a closer look. We have to assess what got done, how it got done, and the influence of each person who was involved. Here are some of the considerations that should guide individual performance reviews:

1. Degree of difficulty. Some assignments are obviously more challenging, with a higher likelihood of failure. Olympic athletes get higher scores when they attempt more difficult routines, and we should credit those who have more difficult assignments, especially when they volunteer for those challenges.

2. Overcoming obstacles and mitigating risks. That being said, simply accepting a challenging assignment isn't enough. We should look for evidence of assessing risks, taking proactive steps to minimize those risks, and making progress despite obstacles. I want to know what each person did to avoid trouble, and what they did when it happened anyway.

3. Original thinking and creative problem solving. Innovation isn’t just something we look for in product design. We should encourage and reward people who apply reasoning skills based on their training and experience.

4. Leadership and influence. Again, this gets to the “how.” Because the work requires teams and other functions and external partners and possibly customers, I want to know how each person interacted with others, and how they obtained their cooperation. Generally, how did they use the resources available to them?

5. Adaptability. Things change, and they can change quickly. Did this person adapt and adjust their plans, or perhaps even anticipate the change?

This is harder for managers when writing performance reviews, but not impossible. It requires that we monitor the work as it's being done instead of evaluating it only after it's completed, and that we recognize the behaviors we value in the organization.

Trends and Early Indicators March 24, 2014

Posted by Tim Rodgers in Management & leadership, Process engineering, Quality.

Everybody understands that it’s better to be proactive and avoid problems rather than be reactive and respond after the problem has surfaced. In quality we try to shift from fixing defects to defect prevention. In strategic planning and project management we identify risks, assess their impact, and develop mitigation plans. If we could know in advance that something bad is about to happen, we could surely avoid it.

And of course that’s the problem: it’s hard to accurately predict the future. We may have identified a serious risk, but we underestimated its likelihood. We knew there was a good chance it would happen, but we couldn’t predict when it would happen. We put a plan in place to reduce the risk, but we had no way of knowing if the plan worked until it was too late.

Control charts are a great tool for monitoring a process. Once you’ve established process stability and eliminated special causes, the process will operate within a range of variability defined by common causes. Rules based on probability and statistical significance help determine when the process is starting to drift from stability, which gives the process owners time to investigate and eliminate the cause.
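
One of those probability-based rules can be sketched in a few lines of Python (a simplified version of the classic Western Electric run rules; the function name and data are my own illustration): a long run of consecutive points on the same side of the center line is unlikely to happen by chance, so it signals drift even when no single point falls outside the control limits.

```python
def drifting(points, center, run_length=8):
    """Flag a drift when `run_length` consecutive points fall on the
    same side of the center line (a common run-rule choice)."""
    run = 0
    last_side = 0
    for p in points:
        side = 1 if p > center else -1 if p < center else 0
        if side != 0 and side == last_side:
            run += 1
        else:
            run = 1 if side != 0 else 0
        last_side = side
        if run >= run_length:
            return True
    return False

# Illustrative data: in the second series the mean creeps above the center line
stable   = [10.1, 9.8, 10.2, 9.9, 10.0, 10.1, 9.7, 10.2, 9.9, 10.1]
creeping = [10.1, 10.2, 10.1, 10.3, 10.2, 10.4, 10.1, 10.2, 10.3]
print(drifting(stable, center=10.0))    # False: points alternate around the center
print(drifting(creeping, center=10.0))  # True: a sustained run above the center
```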

That’s great for measurable processes that are repeated frequently, but there are lot of business processes that are neither. We can’t identify, much less eliminate, the causes of variability. You can wait until the process is complete, measure its effectiveness, and make improvements before the next iteration, but that’s still reactive, and it can be expensive when we miss the target. We need tools to predict the outcome before the process is complete so we can perform course corrections as necessary.

We need leading indicators to determine if we’re on-track or heading off the cliff. In project management you can look at schedule, task completion, earned value, and budget trends. Risk planning can include triggers that provide early warning (i.e., if this happens, then we know we’re OK / we’re in trouble). Products and software can be designed to enable early testing of high-risk subsystems and interfaces, and manufacturing process parameters that impact critical performance requirements (determined from FMEA or PFMEA) can be monitored. We will always rely on judgment and experience to minimize risk, but if we don’t implement warning systems we might as well use a crystal ball.
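
The earned value trends mentioned above can be sketched as two simple ratios (hypothetical project figures): SPI = EV/PV warns about schedule, and CPI = EV/AC warns about cost, both while the project is still in flight.

```python
def earned_value_indicators(ev, pv, ac):
    """Leading indicators from earned value data:
    SPI = EV / PV (schedule performance index)
    CPI = EV / AC (cost performance index)
    Values below 1.0 are early warnings of trouble."""
    return {"SPI": ev / pv, "CPI": ev / ac}

# Hypothetical status: $40k of value earned against a $50k plan,
# at an actual cost of $45k
status = earned_value_indicators(ev=40_000, pv=50_000, ac=45_000)
print(status)  # SPI 0.8 (behind schedule), CPI ~0.89 (over budget)
```

Neither index says what went wrong, but both flash well before the end date or the final invoice, which is the whole point of a leading indicator.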

Managing a New Team, Replacing a Manager March 10, 2014

Posted by Tim Rodgers in Communication, Management & leadership, Organizational dynamics.

I’ve gone through several job transitions where I was the new manager, replacing another manager and assuming responsibility for an established team. Obviously this can be a very stressful time. The previous manager may have been involuntarily removed from their position, and may even still be working at the company. The specific reasons for their departure may be kept a secret in order to protect the company from liability, in which case the recent history is known to everyone but you. During the interviews and transitional period it’s likely that someone else, often the hiring manager, has been managing this team in addition to their other responsibilities, so the team may have been operating with limited supervision.

In addition to all that, many people have developed expectations for your performance, based on your interview and whatever the hiring manager has told them about you. You will be compared to the previous manager, and you will be expected to be at least as effective while addressing that person’s shortcomings (whatever those may be). There will be some patience during your first few weeks as you learn the new environment, but that may end abruptly before you think it will.

It may seem like a good idea to spend your time doing some investigative research to learn more about the previous manager's failings so you can avoid those mistakes, but I don't recommend it. I think it's better to be yourself than the opposite of someone else. Your way is comfortable to you, and you've had some success with it in the past, so stick with it.

Here’s what I recommend instead:

1. Meet the team, spend time with them, assess their skills and how well those skills fit their current job. You may need to re-arrange their responsibilities based on your assessment.

2. Communicate frequently and establish yourself as a manager with a specific style, consistent expectations, integrity and ethics. This team has had some turmoil, and they need some stability so they can focus.

3. Check in frequently with your manager to make sure you've got a good understanding of their expectations, which are surely the most important of all. Find out if there's an urgent imperative that must be addressed immediately. Do things need to change dramatically and quickly? Your team needs to know that.

There’s a lot to consider in any new position, but if you’re a new manager you must demonstrate your ability to mobilize the team you’ve been assigned, establish credibility, and enable them to achieve higher performance. Getting off to a good start is the key.
