Can Managers Make Innovation Happen? February 12, 2013. Posted by Tim Rodgers in International management, Management & leadership, strategy.
Tags: innovation, job satisfaction, leadership, management, performance measures, strategy
I’m hearing a lot about innovation these days. It seems that everyone is looking for new breakthrough ideas in products and services in order to grow revenue, differentiate from competition, and establish sustainable profitability. However, waiting for a flash of inspiration or “Eureka” moment is too random and unpredictable for most businesses. They would like to actively innovate, or at least provide an environment where productive innovation is more likely to happen.
What role do managers play in an organization that’s looking for innovation? What can managers do to inspire or foster innovation? I’ve always operated under the assumption that innovation is a creative, “out-of-the-box,” right-brain activity that can’t be managed with performance objectives and a schedule. I’m not convinced that you can innovate on demand. I can’t recall ever attending a scheduled group brainstorming session that led to breakthrough ideas.
Some years ago I visited a peer manager at a different HP site to do some internal benchmarking and look for best practices that I could bring back to my team. On a monthly dashboard of department metrics this manager included a bar chart showing the number of patent applications proposed by the team. I was astonished that this group of about 30 engineers and managers was averaging 30-40 applications every month. I was especially curious because this was a software quality team, and it wasn’t clear to me what part of our work could be patentable.
It turned out that the patent applications up to that time had nothing to do with software quality, or software testing, or anything remotely related to the products we were working on. Most of them seemed to be new applications of existing HP products. There may have been some occasional good ideas for new products in there somewhere, but I can almost guarantee that none of those patent applications were new, or unique, or valuable enough to be actually filed by the HP legal staff.
At the time I wasn’t eager to challenge the HP manager who was hosting my visit, but I still wonder what they were trying to do. The energy put into patent proposals didn’t seem to provide any direct contribution to the department’s objectives. I suppose it’s possible that the team brought more creativity and innovation to their work in software quality as a result of their patent efforts, but I couldn’t tell how that positively affected their other performance measures. I don’t think this was a good example of inspiring innovation.
I’m still not sure what managers can do to make innovation happen, but I think managers have a lot of influence over the work environment, and that can create conditions where innovation is more likely to happen:
1. Managers can communicate the business’s strategic interest in innovation, and help channel the team’s creativity to address specific needs (e.g., new products, new processes to reduce cost or improve quality).
2. Managers can identify those people in the team who are inherently creative and encourage them. Good ideas can certainly come from anywhere, but the fact is that some people are better able to think outside the box and make unexpected connections.
3. Managers can keep an open mind about new ideas and provide sufficient time and resources to evaluate them. This can be hard when resources are limited and the innovation is unfamiliar and risky. On the other hand, you shouldn’t expect the team to be innovative when there’s no chance their ideas will be given an opportunity to prove themselves.
I don’t think of myself as an innovative person who can generate creative ideas. I do think of myself as someone who understands the value of innovation to the business, and I want to do what I can to enable others to innovate effectively.
Decisions Based on Pseudo-Quantitative Processes December 28, 2012. Posted by Tim Rodgers in Management & leadership, Organizational dynamics, Process engineering, Quality, strategy.
Tags: leadership, management, performance measures, problem resolution, project management, six-sigma, strategy
I’ve spent a lot of time working with engineers and managers who used to be engineers, people who generally apply objective analysis and logical reasoning. When faced with a decision these folks will look for ways to simplify the problem by applying a process that quantitatively compares the possible options. The “right” answer is the one that yields the highest (or lowest) value for the appropriate performance measure.
That makes sense in many situations, assuming that improvement of the performance measure is consistent with business strategy. You can’t argue with the numbers, right? Well, maybe we should. In our rush to reduce a decision to a quantitative comparison we may overlook the process used to create those numbers. Is it really as objective as it seems?
There’s a common process for decision making that goes by several different names. Some people call it a Pugh diagram or a prioritization matrix. A more sophisticated version, the Kepner-Tregoe decision model, includes an analysis of possible adverse effects.
These all follow a similar sequence of steps. The options are listed as rows in a table. The assessment criteria are listed as columns, and each criterion is given a weighting factor based on its relative importance. Each row option is evaluated on how well it meets each column criterion (for example, using a scale from 1 to 5), and this assigned value is multiplied by the weighting factor for the column criterion. Finally, the “weighted fitness” values are summed for each row option, and the option with the highest overall score is the winner.
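The scoring mechanics are simple enough to sketch in a few lines of Python. Everything here (the options, criteria, weights, and 1-to-5 ratings) is invented for illustration; in practice those values come from the evaluating team:

```python
# Weighted decision matrix: score = sum of (rating x criterion weight).
# Options are rows, criteria are columns; highest total "wins."

criteria = {"cost": 0.5, "quality": 0.3, "schedule": 0.2}  # weighting factors

# Each option's 1-5 rating against each criterion.
scores = {
    "Option A": {"cost": 4, "quality": 3, "schedule": 5},
    "Option B": {"cost": 2, "quality": 5, "schedule": 3},
}

def weighted_total(ratings, weights):
    """Sum of (rating x weight) across all criteria for one option."""
    return sum(ratings[c] * w for c, w in weights.items())

totals = {name: weighted_total(r, criteria) for name, r in scores.items()}
winner = max(totals, key=totals.get)

for name, total in totals.items():
    print(f"{name}: {total:.2f}")
print("Highest score:", winner)
```

Notice how sensitive the outcome is to the inputs: shift enough weight from “cost” to “quality” and Option B comes out on top instead, with no change to the underlying facts.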
At the end there’s a numerical ranking of the options, and one will appear to be the best choice, but the process is inherently subjective because of the evaluation criteria, the weighting factors, and the “how well it meets the criteria” assessment. It’s really not that hard to game the system and skew the output to provide any desired ranking of the options.
I’m not saying this is a bad process or that the result is automatically invalid. What I am saying is that this isn’t like weighing two bags of apples. The value of a decision analysis process isn’t just the final ranking, it’s the discussion and disagreements between the evaluators, which are obviously subjective. We shouldn’t consider the process to be an infallible oracle that delivers an indisputable answer just because there’s math involved.
I’m sure there are other examples of pseudo-quantitative processes that shouldn’t be accepted at face value. Leaders should question assumptions, listen to dissenting opinions, and check for biases. It’s rarely as cut-and-dried as it seems.
Expecting More from Performance Appraisals October 29, 2012. Posted by Tim Rodgers in Management & leadership.
Tags: career growth, job satisfaction, leadership, management, manager, performance measures, power
I’ve gotten to the point where I can’t trust a performance appraisal written by another manager. Appraisals are inherently subjective, and I have no idea how “well-calibrated” that manager is. In the world of metrology we would say that this process for measuring performance has serious issues with bias, stability, repeatability, and reproducibility. Yet the impact — positive or negative — on an employee’s salary, job security, and career growth within a company is huge. This is a lousy system that needs improvement. What can be done?
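The metrology analogy can be made concrete with a small simulation. This is only a sketch with invented distributions: each manager observes the same underlying performance but adds a personal bias and some random noise, so the recorded score says as much about the rater as about the employee:

```python
# Toy illustration of rater bias and noise in appraisal scores.
# All distributions are invented for illustration.
import random
import statistics

random.seed(1)

# "True" performance of 200 employees on a 1-5 scale, centered at 3.0.
true_scores = [random.gauss(3.0, 0.5) for _ in range(200)]

def rate(true, bias, noise):
    """One manager's appraisal: true performance + personal bias + noise."""
    return true + bias + random.gauss(0, noise)

# Two managers rating the same employees, with different calibrations.
lenient = [rate(t, bias=+0.6, noise=0.4) for t in true_scores]
harsh = [rate(t, bias=-0.5, noise=0.4) for t in true_scores]

print("Mean true score:", round(statistics.mean(true_scores), 2))
print("Lenient manager:", round(statistics.mean(lenient), 2))
print("Harsh manager:  ", round(statistics.mean(harsh), 2))
```

The same employee can land a full point apart depending only on who writes the review, which is exactly the bias and reproducibility problem a metrologist would flag.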
I think we have to start by asking: what are we trying to accomplish with a performance appraisal? On the surface this is a formal, written record of feedback given to the employee that ends up in their HR file. At minimum it includes the manager’s judgment about whether or not the employee achieved their assigned objectives, and usually some commentary about how those objectives were achieved. Note that despite encouragement to managers to provide informal performance feedback throughout the review period, the formal review may be the only time they’ve done so.
It’s not enough to report that the employee completed this task or failed to complete that one. Here are the questions I’d like to ask when I read about what an employee did during the review period:
- How challenging were those assigned objectives?
- Did the employee merely meet expectations when they had an opportunity to exceed them?
- Did the employee have enough authority and positional power to personally influence the outcome?
- Were the performance measures granular enough to enable the manager to judge individual accountability?
- Were there mitigating factors beyond the employee’s control that prevented them from achieving the objectives?
However, I think we’re trying to do something more than just provide a summary of the employee’s accomplishments. Organizations value certain behaviors and want to encourage employees to exhibit those behaviors. Unfortunately these are often not stated explicitly, and it may be left to individual managers to apply their own values and preferences. In my teams I value creativity, teamwork, persistence, adaptability, proactive “fire prevention,” eagerness-to-learn, and independent judgment. When I write performance reviews I look for examples of these behaviors that I can cite. When that information is missing from a review someone else has written, it’s a huge gap in my understanding of that employee.
Writing annual performance reviews is one of the most dreaded responsibilities for any manager, and we owe it to the employee to provide an assessment based on consistent and job-appropriate expectations. The process may be subjective and imprecise, but the long-term implications are significant.
Should Stretch Goals be Achievable? October 25, 2012. Posted by Tim Rodgers in Communication, International management, Management & leadership, Organizational dynamics.
Tags: change management, leadership, management, performance measures, strategy
In the early 90s our division went through a week-long training session to learn some basic process improvement techniques. As part of the program we were divided into teams of 6-8 people and given a simulation exercise based on a bare-bones “order fulfillment” process. We were instructed to run the process as a team several times as quickly as possible and record our average time. When all the teams reconvened we reported back to the instructor and discovered that everyone’s average time was between 3 and 4 minutes, and no one had achieved better than 2:50.
The instructor wrote the times on the whiteboard, then turned to the class and told us our performance was pathetic. He revealed that our imaginary competition had achieved an average process time of 20 seconds. If we couldn’t at least meet that benchmark level of performance, we would lose our imaginary customers and go out of business.
Everyone was stunned. No one could imagine how the process could be completed more quickly. I don’t remember if it was the instructor or one of the students, but someone suggested throwing out the script and changing the process completely, removing unnecessary steps and focusing on speed. Of course that was the point of the whole exercise. We achieved the 20 second goal, and I think we even managed to get down to 10 seconds.
I remembered that story the other day when I read a caution about setting aggressive goals for a team. The question is whether “crazy-high” goals are demoralizing because they’re impossible to reach, or whether they’re useful in getting people to think outside the box. Besides, who can say whether or not a goal is actually achievable?
Someone once told me that the metaphor is a rubber band. If you stretch the rubber band you can generate a lot of potential energy that can drive the organization to achieve unexpected things. On the other hand, if you stretch the rubber band too far it will break and there will be no gain.
I think it’s good for an organization to have aggressive goals that may seem just out of reach. It keeps people motivated and focused, and it can prevent them from becoming complacent. At the same time there should also be some kind of market advantage or competitive advantage or financial advantage associated with achieving those goals, otherwise this is just manipulation or sadism. I’m not sure it even matters whether or not the goal is ever achieved. Progress toward the goal can still be recognized and celebrated because it represents a finite improvement that supports business objectives. However, the real breakthroughs won’t come from incremental improvements to meet an easy goal.
Common Fallacies That Cause People to Doubt Statistics October 23, 2012. Posted by Tim Rodgers in baseball, Process engineering, Quality.
Tags: baseball, factory quality, performance measures, quality engineering, six-sigma, test & inspection
Lately I’ve been reviewing some old textbooks and work files as part of my preparation for the ASQ Six Sigma Black Belt certification exam in March. It’s interesting, and often amusing, to contrast the principles of inferential statistics and probability theory with the ways they’re used in the real world. I think people tend to underestimate how easy it is to misuse statistical methods, or at least apply them incorrectly, and this can lead them to undervalue all statistical analysis, regardless of whether or not the methods were applied correctly.
I see this in baseball and political commentary all the time, particularly in the way people selectively or incorrectly use numbers to defend their point of view, while at the same time mocking those people who use numbers (correctly or not) to defend a different point of view.
Here are a few of the more-common mistakes that I’ve seen in the workplace:
1. Conclusions based on small sample sizes or selective sampling. Yes, we often have to make do with less data than we’d like, but that makes it especially important to put confidence intervals around our conclusions and stay open-minded about the possibility of a completely different version of reality. Also, a sample is supposed to represent the larger population, and we have to beware of sampling bias that excludes relevant members of the population and skews any findings based on that sample. Otherwise the findings are meaningful only for a subset of the population.
2. Unknown or uncontrolled measurement variability. We often assume that our measurement processes are completely trustworthy without considering the possible effects of variability due to equipment or people. If the variance of the measurement process exceeds the variance of the underlying processes that we’re trying to measure, we can’t possibly know what’s really going on.
3. Confusing independent vs. dependent events. There is no such thing as “the law of averages.” If you flip a coin 10 times and it comes up heads every time, the probability of heads on the 11th flip is still 50%. The results of those previous coin flips do not exert any influence whatsoever on future outcomes, assuming each coin flip is an independent event. That said, the compound event “eleven consecutive heads” is extremely unlikely. If you take a large enough sample, the sample statistics will approximate the population statistics (50% heads and 50% tails for an honest coin); this is the law of large numbers, sometimes simplistically (and incorrectly) called “regression to the mean.”
4. Seeing a trend where none exists. This is usually the result of confirmation bias, where we start with a conclusion and look for data to support it, sometimes compounded by selection bias, where we exclude data that doesn’t fit the expected behavior. Often we’re so eager for signs of improvement that we accept a single data point in the right direction as proof. This is why it’s important to apply hypothesis tests to determine whether the before and after samples are statistically significantly different. It’s also why we should never fiddle with a process that varies randomly but operates within control limits.
5. Correlation does not imply causation. You may be able to draw a regression line through a scatter plot, but that doesn’t necessarily mean there’s a cause-and-effect relationship between the two variables. This is where we have to use engineering judgment or even common sense. Earlier this year the Atlanta Braves baseball team lost 16 consecutive games that were played on a Monday. No one has been able to explain how winning or losing a baseball game could possibly be caused by the day of the week. A related logical fallacy is post hoc, ergo propter hoc (after it, therefore because of it). Chronological sequence does not imply causation, either.
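The coin-flip point in item 3 is easy to check with a short simulation. This is purely illustrative Python; the numbers of flips and the seed are arbitrary:

```python
# Independence vs. long-run behavior of a fair coin.
import random

random.seed(42)  # fixed seed so the run is repeatable

# Probability of heads on the flip after ten heads in a row:
# still 0.5, because each flip is independent of the ones before it.
next_flip_p = 0.5

# Long-run behavior: flip many times and watch the sample fraction
# of heads settle near 0.5 (the law of large numbers).
flips = [random.random() < 0.5 for _ in range(100_000)]
fraction_heads = sum(flips) / len(flips)
print(f"Fraction of heads over {len(flips)} flips: {fraction_heads:.3f}")

# The compound event "eleven heads in a row" is a different question:
p_eleven = 0.5 ** 11
print(f"P(11 consecutive heads) = {p_eleven:.6f}")  # about 0.000488
```

The next flip is always 50/50, yet an eleven-head streak has a probability of roughly 1 in 2,048; keeping those two statements separate is the whole trick.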
Baseball and Measuring Individual Performance October 4, 2012. Posted by Tim Rodgers in baseball, Management & leadership.
Tags: baseball, job satisfaction, management, manager, performance measures
In early October 2012 the biggest controversy in baseball is the question of who should be the American League’s Most Valuable Player: third baseman Miguel Cabrera of the Detroit Tigers or center fielder Mike Trout of the Los Angeles Angels. Both have had outstanding seasons by any measure, and Cabrera has received worthy praise for being the first player since Carl Yastrzemski in 1967 to win the hitter’s Triple Crown: leading the league in batting average, home runs, and runs batted in (RBIs).
For many baseball writers, commentators, and fans, this Triple Crown achievement is the strongest argument for Cabrera as league MVP. On the other side is the sabermetrics community, a growing movement that for over 20 years has challenged the conventional wisdom about what constitutes a good season for an individual player, and how we compare the performance of different players. One of their issues with Cabrera and the Triple Crown is the importance given to RBIs. If a batter gets a hit (or in some cases even an out) that enables a baserunner to score, they get an RBI. If a batter gets the same hit in a different situation where no baserunner scores, there’s no RBI. The point is that what the hitter did is the same in each case. RBIs are not a measure of the hitter’s isolated performance because they depend on what other people have accomplished (getting on base), or will accomplish (scoring a run after the hitter does his thing).
This suggests that any good hitter would get roughly the same number of RBIs if they had the same opportunities to bat with runners on base. Or, conversely, Cabrera would have significantly fewer RBIs if he were on a different team that did not put as many people on base.
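A toy Monte Carlo model makes this concrete. The hit and on-base rates below are invented, and the model is deliberately crude (every hit with a runner on base counts as exactly one RBI), but it shows how the same hitter’s RBI total swings with opportunity:

```python
# Same hitter, different lineups: RBI totals depend on teammates.
import random

random.seed(7)

def season_rbis(hit_prob, runner_on_base_prob, plate_appearances=600):
    """Count RBIs for a hitter over one simulated season. An RBI is
    credited only when a hit happens to coincide with a runner on base."""
    rbis = 0
    for _ in range(plate_appearances):
        got_hit = random.random() < hit_prob
        runner_on = random.random() < runner_on_base_prob
        if got_hit and runner_on:
            rbis += 1
    return rbis

# The same .300 hitter with two different supporting casts.
strong_lineup = season_rbis(hit_prob=0.300, runner_on_base_prob=0.45)
weak_lineup = season_rbis(hit_prob=0.300, runner_on_base_prob=0.20)
print("RBIs with a strong lineup:", strong_lineup)
print("RBIs with a weak lineup:  ", weak_lineup)
```

Identical skill, identical number of plate appearances, and the strong-lineup version of the hitter roughly doubles the RBI total of the weak-lineup version, which is the sabermetric objection in a nutshell.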
BTW, some people have argued that any high RBI total is evidence that the batter is a “clutch hitter” who somehow performs better in high-impact situations. Unfortunately for those folks, there’s absolutely no evidence to support the idea that the “clutch hitter” exists. When you examine any player’s performance over an extended period (large sample size), there’s no statistical difference between how they hit with runners in scoring position vs. how they hit with no one on base.
Another example: pitcher won-lost record. Certainly a pitcher who doesn’t give up runs is valuable, but whether his team wins or loses the game depends on how many runs the team scores. As with RBIs, the won-lost record of a pitcher is not a good measure of his isolated performance, although certainly the team will ultimately be measured by their wins over the course of the season.
This is interesting to me, not just as a baseball fan. As managers we’re often responsible for measuring the performance of individuals and teams. I wrote about this in an earlier post in 2009 (see Individual Performance Measures). Team performance can be judged by examining their accomplishments and contributions to strategic business goals. Individual performance is harder because it’s harder to isolate and measure the unique contribution of one person without considering the context and environment, yet at most companies the compensation and bonus plans are tied to individual performance.
We have to determine what this person did to enable team success or avoid team failure. We need to take into account the interrelated nature of work and the limited power and influence of any one person, since there are few jobs where one person has full control over the outcomes. On an individual level, we need to match performance measures to the person’s assignments, which tends to make them binary: did this person accomplish this goal, or not? The challenge for the manager is to understand context, and to differentiate between isolated good performance and average performance under favorable circumstances.
Six Sigma Without Management Support September 24, 2012. Posted by Tim Rodgers in Management & leadership, Organizational dynamics, Process engineering, Quality.
Tags: change management, leadership, management, performance measures, process, six-sigma
Early in my career I routinely signed up for on-site classes and workshops designed to teach some new methodology to improve our management of people or projects. I remember returning to my work group afterwards, always eager to put my lessons into practice, but often struggling against the real world that turned out to be indifferent or even resistant. I was usually able to integrate some element from the class into my evolving personal philosophy of management, so it wasn’t a complete loss.
However, it never seemed that the company was getting a very good return on the training cost. My manager typically didn’t attend the same class, and neither did most of my peers, so I was usually left on my own to figure out how to implement the new methodology. I don’t recall ever getting any follow-up or support after the class, or any verification that my performance had somehow improved as a result.
I was thinking about all this the other day when I read an article emphasizing the importance of management support when working on a six-sigma project. Obviously any change management or process improvement initiative can be undermined by lack of executive sponsorship, especially if there’s a cost associated with the change. What’s surprising (to me) is why some organizations would create an army of change agents by investing in training and certifying green belts and black belts, and then be surprised when these people want to actually change things.
Certainly implementing change requires management support, but that support should already be secured when the improvement project is first proposed and approved, if not earlier. Black belts and green belts shouldn’t be left alone to figure out where and when to apply their training. Their proposals should be guided by the organization’s business objectives and strategic differentiators. Their performance should be judged by the measurable improvements realized. Their success requires overcoming obstacles, but their managers shouldn’t set them up for failure.
Benchmarking: It’s the Process, Not the Data That Matters September 20, 2012. Posted by Tim Rodgers in Management & leadership, Process engineering, strategy.
Tags: change management, leadership, management, performance measures, process, strategy
It seems not that long ago that benchmarking was another “next big thing,” touted as an invaluable tool for strategic planning and operational improvement. I remember attending several training seminars that cited examples from clever firms that used publicly-available information or found benchmarking partners who were willing to share details about some industry-leading or “best in class” process. Whatever the source, these firms were able to make dramatic improvements in their process by leveraging these best practices, typically after some customizing for their own local ecosystem.
Or at least that’s how it’s supposed to work.
I think one big reason why we don’t hear much about benchmarking any more is that many organizations either misunderstood the concept, or discovered that it’s harder than they expected. I’ve seen a lot of presentations that included “benchmarking data” that showed the performance of our competitors in some key area. Obviously it’s good to know how your competition is doing, but benchmarking isn’t about compiling a table of numbers comparing your business to theirs.
What’s often lost is the reason to do this in the first place. It’s supposed to start with a prioritized need to improve some key process or function, learning how to do it better, and then committing to a change management program to implement best practices. Because it’s pretty unlikely that your competitors are going to help you, benchmarking requires identifying companies (or possibly even other organizations within your own business) who perform that process or function well, regardless of their industry. In addition, the breakthrough ideas are more likely to come from outside your industry, and those folks are much more likely to cooperate with your investigation.
It’s easy to say “that can’t work here,” particularly if there are significant changes required to implement the lessons. This is when it’s important to go back to the start to re-visit the business need and why it was a good idea to try benchmarking in the first place. That’s no different from any other change management initiative that requires high-level support and perseverance.
Helping Engineers Become Leaders September 14, 2012. Posted by Tim Rodgers in Management & leadership, Organizational dynamics.
Tags: career growth, job satisfaction, leadership, management, manager, performance measures
This week I attended a panel discussion with a group of high-level managers from a local company who shared training and coaching strategies for developing the leadership skills of their engineering staff. Generally, leadership training assumes that leaders can be made (and aren’t just born), and there are some specific challenges associated with leadership training targeted for engineers.
Many engineers are not interested in leadership and consider management to be an unwelcome career path. I have a lot of respect for people who have the self-awareness and emotional intelligence to know that they aren’t “management material,” and who can resist the usual financial or status rewards that go with the management track (see Happy Not to Be a Manager). However, engineers may find themselves attracted to leadership positions in order to have more control and influence over organizational outcomes that have nothing to do with their technical knowledge or skills.
Leaders add value at all levels in a business, but even engineers who resist leadership or management can benefit from learning leadership skills and other soft skills that are outside their immediate technical domain. Engineers tend to be analytical, logical, and they rely on fact-based arguments. Effective collaboration typically requires a range of communication, alignment, and influencing strategies.
It’s hard to directly measure the success of leadership development in any organization, but two indicators were suggested during the panel discussion. One is the availability of qualified internal candidates for promotion to open positions in the business. At the company represented at this week’s meeting, managers are encouraged and supported to develop leaders within their teams, even though that could result in the transfer of talented people to other teams. They understand that it’s a win for the business, not a loss for their department.
Another indicator of a successful leadership development program is the performance of new leaders in their first efforts. If more-experienced managers or leaders are forced to intervene and take over from a struggling new leader, that’s a sign that the development program (which should include post-training coaching, mentoring and monitoring) has failed.
Collecting Data Not a Substitute For Strategy September 5, 2012. Posted by Tim Rodgers in Management & leadership, Process engineering, strategy.
Tags: leadership, management, performance measures, strategy
Some years ago I managed a team that was required to present a monthly report to a VP who drove us crazy. We spent at least a week preparing for every meeting, scrambling to pull together data that we were criticized for not having available at the previous meeting. It was never enough, and we were never able to understand or anticipate what this guy was looking for. The VP wouldn’t explain the logic or underlying business need, and after a while we were all just too intimidated to do anything but grimly re-group and try again the following month. I was very glad to leave that position.
It’s possible that this VP was just that kind of manager who thought that intimidating his subordinates was part of his job description, or maybe our monthly failures gave him a useful scapegoat for more significant shortcomings elsewhere in the business. Regardless, I did notice that all of the other departments seemed to be just as occupied with repeated cycles of data collection and reporting. We all spent a lot of time searching for numbers and plotting them on graphs, I suppose in the hope that something useful would come out of it all, some truths would be revealed, and the path forward would become clear.
Over the years I’ve met a lot of managers who get this wrong. Data is a means to an end, but you don’t start with data as a substitute for strategy. The proper sequence is this: determine what it is you want to improve, or what business goal you want to achieve, then collect data to help you assess whether you’re approaching that goal. If the team understands what actions and behaviors push the needle in the right direction, then they can make the appropriate choices on their own.