Innovative Design vs. Lean Product Development April 17, 2013 Posted by Tim Rodgers in Management & leadership, Product design, Project management, Quality.
Tags: innovation, management, process, product development, six-sigma, strategy
I’ve been very busy focusing on my job search and some self-improvement projects, and unfortunately it’s been hard to find time to work through my accumulated backlog of topics. I regularly follow several group discussions on LinkedIn related to product development and quality, and lately a popular topic is how to inspire innovation in product design.
See for example Wayne Simmons and Keary Crawford “Innovation versus Product Development” (http://www.innovationexcellence.com/blog/2013/04/12/innovation-versus-product-development/), and Rachel Corn’s blog “Is Process Killing Your Innovation?” (http://blog.cmbinfo.com/bid/87795/South-Street-Strategy-Guest-Blog-Is-Process-Killing-Your-Innovation?goback=%2Egde_2098273_member_229196205). The latter post quotes a former 3M vice president who says that Six Sigma killed innovation at 3M, apparently because 3M’s implementation of Six Sigma required “a full blown business case and even a 5-year business plan to get a new idea off the ground and into production.” The VP wonders: how do you institutionalize innovation without stifling it?
The conventional wisdom seems to be that product design is inherently a creative, right-brain activity that will fail or at least fall short if constrained by process. You can’t make art on a schedule.
I think this is a false conflict. I don’t see any reason why teams shouldn’t be able to conceive new designs within a structured and disciplined product development environment. Obviously the ultimate objective is to get a product to market, so at some point the experimentation must end, doesn’t it?
Six Sigma is about reducing variation. The lean movement is about eliminating waste. I understand that the early stages of product development may be wildly unpredictable and seemingly inefficient. Shouldn’t the latter stages focus on predictable outcomes, standardized processes, fast time-to-market, defect prevention, and efficient production?
Tags: leadership, management, power, process
I’ve been puzzling over this one for some time: Why is it so hard for companies to leverage best practices developed internally? At HP we used to think the problem was poor knowledge-sharing mechanisms within the corporation, especially across geographically-dispersed and independent business units, but I think it goes deeper than that. You can tell people to document and archive their processes on SharePoint, and you can host internal conferences to provide a forum for learning, but unless people are open to the possibility that there’s a better way, you’re going to waste money reinventing the wheel.
The “not invented here” syndrome leads to bias against ideas that come from the outside. “They don’t understand our unique environment,” and, “Just because it works there doesn’t mean it will work here.” Even when compelled to use the new process, there’s often passive-aggressive undermining or outright sabotage. Unfortunately, these internal antibodies are often even more antagonistic toward ideas from elsewhere within the same company. If we use someone else’s ideas, doesn’t that imply that they’re smarter than we are? We don’t want them to get the credit, do we?
Sorry, but the smarter one (and the more valuable one to the organization) is the person who focuses their attention on the unsolved problems instead of those that have already been solved. We all build on the foundations of engineering and process development that came before. Of course the local environment may indeed be different, and that may require some tweaking of the imported process. However, senior leadership should encourage leveraging internal processes as another example of maximizing return on assets, and both the exporter and importer should be recognized as efficient collaborators. Also, when teams insist on using their own process, they should bear the burden of proof to explain why the company should incur the additional expense of maintaining more than one means to accomplish the same goal.
Changing the Tires While Driving the Car December 13, 2012 Posted by Tim Rodgers in Management & leadership, Process engineering, strategy.
Tags: 30-60-90 day plans, change management, leadership, management, process, strategy
That’s a phrase we often use to describe a chaotic work environment, but what if anything can be done when you’re faced with this situation? How should we manage when the current processes are incomplete, insufficient, ineffective, or even missing? How do you evaluate and implement process improvements without jeopardizing commitments to deliverables and performance metrics? Is there a logical way of managing these changes, or do we muddle through it, and later smile sympathetically when we hear about another manager’s struggles?
Obviously the whole point of introducing a new process or making a process change is to gain some improvement in performance, output, and/or cost. However, there’s no getting around the fact that any process change will be accompanied by at least a short-term loss of productivity until you’re past the learning curve.
Will the current activities or projects continue long enough to benefit from an immediate change? If the benefit doesn’t outweigh the “distraction cost,” then it’s probably better to wait for scheduled downtime or a natural break between projects (in other words, wait until the car is stopped before changing the tires). If there is no natural break, then at least one project will have to pay the price so that future projects can realize the advantage. Which project can best tolerate the cost, or the risk of failure to meet scope or schedule requirements?
One practical question is whether it’s even possible to switch processes in mid-stream. If you’ve already started with the old process, can you finish the job with a new one? Starting over again from the beginning should be considered a last resort, only practical if the existing process is so unsatisfactory that you’re willing to sacrifice time for better results.
Another key concern is whether or not the organization as a whole is aligned with the need for a process change. It may be politically useful to roll out the new process on a small scale in order to build support for broader implementation. On the other hand, if there’s enough critical mass, it can be highly effective to “burn the boats,” essentially making it impossible to return to the old process.
If it’s the right thing to do, it’s just a question of when. If the benefits can’t be clearly articulated, it will never be the right time.
Not So Fast: Baseline That Process Before Changing It November 29, 2012 Posted by Tim Rodgers in Process engineering, Quality.
Tags: change management, process, quality engineering, six-sigma
Teams are usually in a big hurry to make an improvement in an under-performing process because there’s some degree of unhappiness or pain (usually financial) associated with that process. The sooner the process improves, the sooner the pain goes away. However, in the rush to move the needle a little in the right direction many teams fail to establish a performance baseline for their current process.
On the surface this is bad because without an understanding of the current state you won’t know if you’ve actually made any improvement. If your process is unstable and subject to special causes of variation, it’s impossible to tell whether the process has improved because of your deliberate action, or because of the influence of those special causes. In fact, those special causes may overwhelm and mask any positive effect of the intended process improvement.
I realize it may not always be practical to fully characterize a process and establish stability before trying to improve it, but if you can’t isolate and eliminate special causes then you can’t draw any conclusions about the success of your efforts.
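To make the baseline idea concrete, here is a minimal sketch of an individuals control chart in Python. The data and the simple 3-sigma rule are illustrative only; real SPC practice adds run rules and more data, but the point stands: without limits estimated from a baseline, you can’t separate special causes from your deliberate improvement.

```python
import statistics

def control_limits(samples):
    """Estimate individuals-chart control limits from baseline data.

    Short-term variation is estimated from the average moving range
    (divided by the d2 constant 1.128 for subgroups of size 2), the
    standard Shewhart approach for individuals charts.
    """
    center = statistics.mean(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma_est = statistics.mean(moving_ranges) / 1.128
    return center - 3 * sigma_est, center, center + 3 * sigma_est

def special_causes(samples):
    """Return indices of points outside the 3-sigma limits."""
    lcl, _, ucl = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

# Invented baseline data: one reading (14.5) is a likely special cause.
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 14.5, 10.1, 9.7, 10.0]
print(special_causes(baseline))  # flags index 6, the 14.5 reading
```

Until points like that are explained and eliminated, a before/after comparison of the process average tells you very little.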
Business Processes: Configuration, Customization and Convergence November 27, 2012 Posted by Tim Rodgers in Management & leadership, Organizational dynamics, Process engineering.
Tags: change management, leadership, management, process, product development
In the mid-00s, after Mark Hurd’s accession to the CEO office, HP’s senior leadership aggressively pursued overhead cost reduction to improve the firm’s profitability. HP’s CIO used this opportunity to consolidate IT functions, simplify processes, and eliminate redundancies due in part to “shadow IT” groups that had grown up during the previous era of independent business units and local autonomy.
One of the early targets was the collection of defect tracking systems used by product development teams for reporting, managing, and dispositioning defects found during pre-release testing. I don’t remember the exact count, but apparently the number of active defect tracking systems was in double figures. The IT team determined that convergence on a single, common, corporate-wide DTS hosted on corporate servers would lead to significant savings: reduced headcount and other support resources, plus harder-to-quantify improvements in efficiency, productivity, and communication.
The DTS convergence project did not go smoothly. Many product development teams objected strongly to the plan. They claimed that they would not be able to meet schedule and quality commitments for projects already in-progress if they were forced to switch to a different DTS in mid-stream. Unfortunately it didn’t help that the corporate IT team seriously underestimated the effort required to manage the transitions. Some product development teams appealed to their business unit executive, looking for an exemption or at least a delay. Others adopted a passive-aggressive position that only hardened the resolve of the IT team.
The corporate IT team objected strongly to the idea of local customization of the DTS and supporting processes, arguing that the overall cost savings and other benefits would be significantly reduced if every product development team were allowed to run their own system. I think the IT folks would have had a better chance of success if they had acknowledged the value of local solutions and introduced a system that enabled local configuration instead of customization.
The core functionality of the DTS could be retained, and the support resources minimized, while allowing the local product development team to define or select options to match their familiar and preferred style. Certainly this would not have eliminated all resistance to the change, but it would have enabled a more balanced discussion of the benefits of a common DTS that recognizes and values the needs of product development.
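The configuration-over-customization idea can be sketched in a few lines. This is a hypothetical illustration, not HP’s actual system; every name and option here is invented. The shared core enforces one workflow, while each site selects from a bounded set of local options:

```python
# Hypothetical sketch: a common defect-tracking core whose behavior is
# adjusted through local configuration rather than forked code.

DEFAULT_CONFIG = {
    "severity_levels": ["critical", "major", "minor"],
    "require_root_cause": False,  # some sites demand a root cause before closing
    "custom_fields": [],          # e.g. ["firmware_build", "test_station"]
}

class DefectTracker:
    def __init__(self, local_config=None):
        # The core workflow is shared; a site may only override known options.
        self.config = {**DEFAULT_CONFIG, **(local_config or {})}
        self.defects = []

    def report(self, title, severity, **fields):
        if severity not in self.config["severity_levels"]:
            raise ValueError(f"unknown severity: {severity}")
        extra = set(fields) - set(self.config["custom_fields"])
        if extra:
            raise ValueError(f"fields not enabled in local config: {extra}")
        defect = {"title": title, "severity": severity, "status": "open", **fields}
        self.defects.append(defect)
        return defect

    def close(self, defect, root_cause=None):
        if self.config["require_root_cause"] and not root_cause:
            raise ValueError("this site requires a root cause before closing")
        defect["status"] = "closed"
        defect["root_cause"] = root_cause

# A site enables its own options without touching the shared core.
site = DefectTracker({"require_root_cause": True, "custom_fields": ["test_station"]})
d = site.report("boot hang", "critical", test_station="ts-04")
site.close(d, root_cause="race condition in init")
```

The design choice is that local variation lives in data (the config), not in code, so the support burden of one common system is preserved.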
This happens frequently in business processes of all kinds, especially in multi-site organizations or in the aftermath of a merger or acquisition. One side pushes for convergence and the other side insists that they need a custom solution because of their unique requirements. The trick is to find a way to design a common system that can be locally configured without giving up the expected benefits in support cost and communication.
Six Sigma Without Management Support September 24, 2012 Posted by Tim Rodgers in Management & leadership, Organizational dynamics, Process engineering, Quality.
Tags: change management, leadership, management, performance measures, process, six-sigma
Early in my career I routinely signed up for on-site classes and workshops designed to teach some new methodology to improve our management of people or projects. I remember returning to my work group afterwards, always eager to put my lessons into practice, but often struggling against the real world that turned out to be indifferent or even resistant. I was usually able to integrate some element from the class into my evolving personal philosophy of management, so it wasn’t a complete loss.
However, it never seemed that the company was getting a very good return on the training cost. My manager typically didn’t attend the same class, and neither did most of my peers, so I was usually left on my own to figure out how to implement the new methodology. I don’t recall ever getting any follow-up or support after the class, or any verification that my performance had somehow improved as a result.
I was thinking about all this the other day when I read an article emphasizing the importance of management support for Six Sigma projects. Obviously any change management or process improvement initiative can be undermined by a lack of executive sponsorship, especially if there’s a cost associated with the change. What puzzles me is why some organizations create an army of change agents by investing in green belt and black belt training and certification, and then act surprised when these people actually want to change things.
Certainly implementing change requires management support, but that support should already be secured when the improvement project is first proposed and approved, if not earlier. Black belts and green belts shouldn’t be left alone to figure out where and when to apply their training. Their proposals should be guided by the organization’s business objectives and strategic differentiators. Their performance should be judged by the measurable improvements realized. Their success requires overcoming obstacles, but their managers shouldn’t set them up for failure.
Benchmarking: It’s the Process, Not the Data That Matters September 20, 2012 Posted by Tim Rodgers in Management & leadership, Process engineering, strategy.
Tags: change management, leadership, management, performance measures, process, strategy
It seems not that long ago that benchmarking was another “next big thing,” touted as an invaluable tool for strategic planning and operational improvement. I remember attending several training seminars that cited examples from clever firms that used publicly-available information or found benchmarking partners who were willing to share details about some industry-leading or “best in class” process. Whatever the source, these firms were able to make dramatic improvements in their process by leveraging these best practices, typically after some customizing for their own local ecosystem.
Or at least that’s how it’s supposed to work.
I think one big reason why we don’t hear much about benchmarking any more is that many organizations either misunderstood the concept, or discovered that it’s harder than they expected. I’ve seen a lot of presentations that included “benchmarking data” that showed the performance of our competitors in some key area. Obviously it’s good to know how your competition is doing, but benchmarking isn’t about compiling a table of numbers comparing your business to theirs.
What’s often lost is the reason to do this in the first place. It’s supposed to start with a prioritized need to improve some key process or function, learning how to do it better, and then committing to a change management program to implement best practices. Because it’s pretty unlikely that your competitors are going to help you, benchmarking requires identifying companies (or possibly even other organizations within your own business) who perform that process or function well, regardless of their industry. In addition, the breakthrough ideas are more likely to come from outside your industry, and those folks are much more likely to cooperate with your investigation.
It’s easy to say “that can’t work here,” particularly if there are significant changes required to implement the lessons. This is when it’s important to go back to the start to re-visit the business need and why it was a good idea to try benchmarking in the first place. That’s no different from any other change management initiative that requires high-level support and perseverance.
How Do You Plan to Catch Up? June 12, 2012 Posted by Tim Rodgers in Management & leadership, Project management.
Tags: leadership, management, problem resolution, process, project management, software development, software quality
When a project falls behind schedule — as it almost inevitably does — there’s a tendency for project managers and their teams to enter a state of denial and convince themselves that they can catch up by “working harder.” Maybe they remember an earlier stretch when they weren’t working especially hard because there still seemed to be many weeks or months of budgeted time remaining. Perhaps they encountered a barrier that couldn’t easily be overcome right away, and decided to “work around it” and come back to it later. Senior managers who are responsible for providing oversight to the project teams may accept this “working harder” plan, either because they’re sympathetic or because they don’t want to be bothered with the details.
How often does that actually succeed? Senior management shouldn’t accept an explanation that the project team will simply “work harder.” They should require the project team to explain how they will work differently. Doing things the same way is not going to help the team catch up; there needs to be a change in the scope of the project or in how the work is done, for example adding resources, re-assigning tasks, or re-arranging tasks on the Gantt chart.
When I managed software testing we would track the number of defects found and fixed each week over the course of the project. The goal was always to have zero open defects by the scheduled software release date. From historical data we had a rough idea of how many defects one developer could fix each week, so at any point in the project we could extrapolate an end date based on the number of open defects. If the extrapolated date was later than the scheduled date, we knew we were behind schedule and had to find a way to catch up. In a case like this “working harder” means somehow increasing the rate at which the work (defect fixing) gets done, because continuing at the current rate is not going to reach the goal.
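The extrapolation described above is simple enough to sketch. The numbers here are invented, and the model deliberately assumes no new defects arrive (optimistic, as any test manager knows):

```python
import math
from datetime import date, timedelta

def projected_finish(open_defects, fixes_per_dev_per_week, developers, today):
    """Extrapolate a completion date from the open-defect count and the
    historical fix rate. Assumes no new defects arrive during the burn-down."""
    weekly_rate = fixes_per_dev_per_week * developers
    weeks_needed = math.ceil(open_defects / weekly_rate)
    return today + timedelta(weeks=weeks_needed)

# Hypothetical project: 150 open defects, 5 developers fixing ~4 per week each.
today = date(2012, 6, 12)
release = date(2012, 8, 1)
finish = projected_finish(150, 4, 5, today)
if finish > release:
    print(f"projected finish {finish} is after {release}: the plan must change")
```

When the projected date lands past the release date, continuing at the current rate is arithmetic failure, not pessimism; something about scope, rate, or resources has to change.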
We used a variety of strategies, depending on the software project. Most often we would reduce the scope of the remaining work by de-rating defects and removing them from the fix queue, essentially accepting a lower level of quality. Sometimes we would add resources (developers) to fix defects, but anyone who has read “The Mythical Man-Month” by Fred Brooks knows that adding resources to a late software project just makes it later. We would finesse that problem by changing priorities and re-assigning tasks within the existing team instead of adding people (it didn’t always work).
Regardless, project managers shouldn’t be allowed to wave their hands and offer vague plans when projects fall behind. The catch-up plan must explain how things will be done differently, and it must be explicitly reviewed and approved by senior management to ensure the project is completed without further surprises.
Answers Seeking Questions May 10, 2012 Posted by Tim Rodgers in Management & leadership, Organizational dynamics, Process engineering, strategy.
Tags: leadership, management, problem resolution, process, strategy
My first real job out of college was at a defense contractor during an era of free spending in that segment of the US economy. This particular company decided they wanted to have the best-equipped analytical laboratory in the industry, so they went on a spree and purchased a variety of specialized instruments that were better known in academic settings, then hired a bunch of Ph.D. analytical chemists (like me) to figure out how to use the instruments for applications like incoming material inspection, process control, and failure analysis.
I didn’t give it much thought at the time, so it wasn’t until later that I realized how ridiculous this was. Most of the expensive instruments were ill-suited for that kind of routine testing in an industrial environment, and all of them had capabilities that far exceeded what was required. Obviously it was also overkill to hire Ph.D. chemists to basically qualify the instruments and run the daily tests.
It’s easy to chalk this up as an example of what happens when there’s too much money to spend, but there’s another lesson here about the folly of insisting on an answer without being clear about the question.
I remembered this story the other day when I heard about a quality manager who claimed to be an expert in statistics but couldn’t explain the difference between a t-test and an F-test. The difference is subtle, but isn’t it more important to be able to explain what they’re used for, not just what they are? How often do we switch on the auto-pilot and apply a tool or technique that’s inappropriate for the situation? Even worse, how often do we ignore evidence that contradicts the decision we’ve already made? It’s worth taking a minute to state clearly what problem we’re trying to solve instead of looking for a question that fits the answer.
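The distinction matters precisely because the two tests answer different questions: a two-sample t-test asks whether the means differ, while an F-test asks whether the variances differ. A minimal pure-Python illustration, with invented data, computing just the test statistics (a full test would also look up p-values):

```python
import statistics

def t_statistic(a, b):
    """Two-sample t statistic (pooled variance): did the MEAN shift?"""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

def f_statistic(a, b):
    """F statistic (larger variance on top): did the VARIABILITY change?"""
    va, vb = statistics.variance(a), statistics.variance(b)
    return max(va, vb) / min(va, vb)

# Invented process measurements before and after a change: the mean shifted
# noticeably, but the spread is essentially unchanged.
before = [10.2, 9.9, 10.1, 10.0, 9.8, 10.3]
after = [10.9, 11.2, 10.8, 11.0, 11.1, 10.7]
print(t_statistic(before, after), f_statistic(before, after))
```

On this data the t statistic is large in magnitude (the mean moved) while the F ratio is near 1 (the variation didn’t), which is exactly the kind of “what question am I asking?” reasoning the anecdote is about.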
Three Product Development Models April 19, 2012 Posted by Tim Rodgers in Product design, Project management, strategy.
Tags: early stage companies, management, process, product development, project management, strategy
Product development organizations should always consider how they can reduce the costs associated with designing, prototyping, testing and qualifying new SKUs or software releases. One easy way to do that is to re-use a part, component, or subsystem that has already been qualified, assuming it can still meet the functional requirements of the new product with a minimum of integration effort. Obviously if the initial design cost and tooling cost can be amortized over a larger number of products (or releases), that’s a good thing.
A strategy of leverage and re-use clearly saves money, but it can also prevent the organization from considering alternate designs that may do a better job of meeting performance requirements. Choosing the most appropriate product development model requires a trade-off between delivering the right features and minimizing costs over the long term. Here are three models for consideration:
Single Product: Development of a single product, feature set, and price point; one-at-a-time to respond to the market
- Advantages: Laser-focus, enabling design to achieve minimum cost for that feature set without being constrained by earlier choices
- Limitations: Limited design re-use, possibly lengthening the development time; design costs, tooling and parts not amortized across a larger number of units
NB: Early-stage companies typically develop one product at a time, responding to the unique requirements of each customer; however, the lack of leverage will be debilitating in the long term (see more on this topic in Crossing the Chasm by Geoffrey Moore).
Platform: Development of a fixed core or foundation that becomes the leveraged, common basis for follow-on derivative products
- Advantages: Initial design and tooling investment in the platform is leveraged to subsequent derivatives or product extensions with rapid time-to-market
- Limitations: The initial platform design limits derivative product design options, reducing flexibility and market responsiveness
Architecture: Development of a set of interchangeable modules or assets with well-defined interfaces that are designed to enable forecasted product options to meet anticipated customer needs
- Advantages: High level of design flexibility to respond to market requirements; initial investment in architecture design enables lower product development cost, leveraged tooling and faster time-to-market; can quickly move to Platform or Single Product development models
- Limitations: Modules are optimized for leverage and re-use, not necessarily for absolute minimum cost
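The Architecture model above can be sketched as code. Everything here is invented (module names, interfaces, costs); the point is only the mechanics: modules are swappable behind well-defined interfaces, so different SKUs re-use the same qualified assets.

```python
# Hypothetical sketch of the "Architecture" model: interchangeable modules
# behind well-defined interfaces, composed into different product SKUs.
from dataclasses import dataclass

@dataclass(frozen=True)
class Module:
    name: str
    interface: str    # modules are swappable only within the same interface
    unit_cost: float  # illustrative numbers only

CATALOG = [
    Module("basic-sensor", "sensor", 4.0),
    Module("precision-sensor", "sensor", 11.0),
    Module("std-controller", "controller", 9.0),
    Module("wifi-radio", "radio", 6.0),
]

def build_product(required_interfaces, choices):
    """Assemble a product by picking one qualified module per required interface."""
    by_name = {m.name: m for m in CATALOG}
    selected = [by_name[c] for c in choices]
    missing = set(required_interfaces) - {m.interface for m in selected}
    if missing:
        raise ValueError(f"no module chosen for: {missing}")
    return selected, sum(m.unit_cost for m in selected)

# Two SKUs re-use the same qualified controller; only the sensor differs,
# so the controller's design and tooling cost is amortized across both.
entry, entry_cost = build_product({"sensor", "controller"},
                                  ["basic-sensor", "std-controller"])
pro, pro_cost = build_product({"sensor", "controller", "radio"},
                              ["precision-sensor", "std-controller", "wifi-radio"])
```

This also shows the stated limitation: the shared controller is chosen for re-use across SKUs, not necessarily for the absolute minimum cost of any single product.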