Getting People to Care About Quality
December 4, 2013. Posted by Tim Rodgers in Management & leadership, Quality, strategy.
Tags: change management, factory quality, performance measures, quality engineering, test & inspection
Quality sounds like something that everyone will support on principle, unless you have a saboteur working in your midst. It’s probably safe to assume that no one is deliberately acting to produce a defective product or service. The problem is that everyone makes daily decisions that balance quality against other considerations, whether to save money, meet a committed date, or keep the factory running. We tell ourselves that quality is free, but even in highly-evolved organizations it doesn’t happen without deliberate effort. The challenge for quality professionals is to help people understand how good quality contributes to the business, thereby providing a more useful basis for decision making.
Here’s a little not-so-secret secret: all decisions in for-profit businesses eventually come down to how to bring in more revenue while controlling expenses. If you want people to pay attention to quality, talk about money.
For better or worse, this is a lot easier after the cost has already been incurred. If you have to spend more money or time because of scrap or rework, or you have to repair or replace product at the customer, or you’re liable for warranty or other contractual post-sale costs, everyone will be interested to know how it happened and how it can be prevented in the future. After some investigation you may identify the cause or causes, and you can recommend actions to eliminate them. Of course those corrective actions will have a cost of their own, and you will have to determine if there’s a net gain.
All of that is based on the assumption that there’s a 100% probability of that bad thing happening again if you do not implement the corrective action, and a 0% probability if you do. If you want to get more analytical you can estimate those probabilities based on engineering analysis, historical trends, or just good old-fashioned judgment, and then apply a de-rating factor to the cost. This is where an FMEA is useful, along with early prototyping and testing to check those assumptions about probability and impact.
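The trade-off can be made concrete with a little arithmetic. Here’s a minimal sketch, with all figures hypothetical: the cost of a recurrence is de-rated by its estimated probability, and the expected loss is compared with and without the corrective action.

```python
def expected_loss(p_failure: float, cost_of_failure: float) -> float:
    """De-rate the failure cost by its estimated probability of occurring."""
    return p_failure * cost_of_failure

cost_of_failure = 250_000  # scrap, rework, and warranty if the defect recurs
cost_of_fix = 40_000       # one-time cost of the corrective action
p_without_fix = 0.30       # engineering estimate, not a certainty
p_with_fix = 0.02          # residual risk remaining even after the fix

loss_without = expected_loss(p_without_fix, cost_of_failure)          # 75,000
loss_with = expected_loss(p_with_fix, cost_of_failure) + cost_of_fix  # 45,000
net_gain = loss_without - loss_with

print(f"Expected net gain from corrective action: ${net_gain:,.0f}")
```

If the probabilities were certainties (1.0 without the fix, 0.0 with it), this collapses to the simple cost-versus-cost comparison described above; the de-rating only matters when the outcome is uncertain.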
Here it’s important to note that there are indirect costs of poor quality that are harder to factor into this calculation. For example, even a single incident at a key customer could cause a significant decline in future revenue if it affects brand reputation. Low-probability, high-severity events are also problematic.
Of course it’s generally harder to look ahead and assess the unknown probability and impact of a quality risk that has not yet been encountered. As long as the bad thing hasn’t happened yet, it’s easy to underestimate it. This is what causes organizations to reduce cost by using cheaper parts or removing design safeguards or eliminating quality checks. They’re saving real money today and implicitly accepting the uncertain risk (and cost) of a poor quality event in the future. Again, if you can say with 100% certainty that this bad thing will happen without specific actions being taken, then your choice is clear. Unfortunately there are many choices that are not clear, or even recognizable.
Are you really willing to spend whatever it takes to prevent any quality problem? Of course not. Managing quality is managing risk, and looking for ways to assess and minimize that risk while under pressure to reduce cost now. It’s not very satisfying to say “I told you so.”
Change Management and “Moneyball” (Movie Version)
December 1, 2013. Posted by Tim Rodgers in baseball, Communication, Management & leadership, strategy.
Tags: baseball, change management, leadership, management, power, strategy
The other day I watched the movie “Moneyball” again and was reminded of a few important characteristics of successful change management. Brad Pitt stars as Billy Beane, the general manager of the Oakland A’s baseball team, an organization struggling with a limited budget to develop, attract, and retain players.
At the beginning of the movie we learn that before the 2002 season the A’s have lost three of their best players who have signed more lucrative contracts elsewhere. Beane is trying to figure out how to replace these players, and more generally put together a winning team within the financial constraints imposed by ownership. After a chance encounter with a low-level analyst from a rival organization, Beane realizes that he cannot compete if he builds a team using the traditional ways of assigning value to players. Almost out of desperation, he decides on an unconventional strategy based on the emerging science of sabermetrics. He immediately faces resistance from his experienced staff, specifically the field manager and scouts who are unconvinced and in some cases actively working against the strategy.
Ultimately it’s a fairly happy ending: despite public criticism of Beane’s decisions and early disappointments on the field, the A’s have a successful season. At one point they win 20 straight games, setting a new league record, and they make the playoffs, but lose in the first round. Beane is offered a significant raise to leave the A’s and join the Boston Red Sox, where he would have the opportunity to apply the same principles with a much larger budget. Beane declines the offer, but the unconventional strategy has seemingly been validated.
The movie focuses on Beane’s underdog status and uphill battle during the season, and I’m sure some of the real-life events have been changed for dramatic effect. Regardless of whether they actually happened or not, there are several scenes that illustrate elements of successful change management.
1. A clear explanation of the new direction. In the movie, Beane leads a meeting of his senior staff to discuss plans for acquiring players for the upcoming season. This looks like Beane’s first opportunity to apply his new strategy, but he misses an important chance to align with his team. It’s clear that he’s the boss with the final authority, and it’s not necessary for everyone in the room to agree, but Beane could have taken the time to explain the new direction and acknowledge the objections. In later scenes, Beane acknowledges this mistake to his field manager who has been undermining the strategy through his tactical decisions, and fires a senior staff member who has been especially vocal in opposition.
The lesson: the team may not agree with the change, but they should be very clear about why change is needed. Team members should have the opportunity to raise objections, but once the direction has been set, their only choices are to support the change or leave the team.
2. Removing options to force compliance. Beane is frustrated by opposition from his field manager, who gives more playing time to players whose skills are not highly valued in Beane’s new system. Beane stops short of giving a direct order to the manager to make decisions that are more consistent with the strategy; instead he trades these players to other teams, effectively removing the undesirable options. This is a variation of what is sometimes called “burning the boats,” from the Spanish conquest of the Aztec empire. You can’t go back to the old way of doing things because that way is no longer an option. As Beane replaces players, his manager has fewer opportunities to not follow the strategy.
The lesson: this seems like passive-aggressive behavior from both parties, but I can see how it can be effective. My preference would be to reinforce the desired change rather than take away choices, but if the old way is very well established you need to help people move on and not be tempted to return.
3. Giving it a chance to work. The A’s get off to a slow start and pressure builds on Beane to abandon the new strategy. In one scene he meets with the team’s owner and assures him that the plan is sound and things will get better. Things eventually do, despite all the skepticism and opposition, and the movie audience gets the underdog story they were promised.
The lesson: even the best ideas take time. It’s absolutely critical to set expectations with stakeholders to help them understand how and when they will detect whether the change is working. Impatience is one of the biggest causes of failure when it comes to change management.
Quick Note: Theory of Constraints and Six Sigma
November 24, 2013. Posted by Tim Rodgers in Process engineering, Quality.
Tags: factory quality, performance measures, process, six-sigma
Last week I attended the monthly meeting of the Northern Colorado chapter of the American Society for Quality. The featured speaker was Dr. Russ Johnson, President of Improvement Quest, a local management consulting firm. Dr. Johnson’s talk “Creating a Culture of Harmony by Using the Theory of Constraints Concepts to Focus and Integrate Lean and Six Sigma” included several interesting insights about how to effectively integrate these strategies in a production environment.
Of course the key to successful implementation of the Theory of Constraints is identifying the bottleneck, or constraint, in the production process and then optimizing the rest of the system around the constraint (“exploit, subordinate, elevate”) in order to maximize overall throughput while controlling inventory (including work-in-progress, WIP) and operating expense. At the risk of oversimplifying, Six Sigma can be described as “reduce variability,” and the lean philosophy is essentially “eliminate waste.”
These strategies are not different ways of solving the same problem. They can and should be implemented as elements in an integrated improvement effort. The trick is understanding that not all processes are equally good targets for a six sigma or lean improvement plan. It depends where the process is in relation to the constraint.
Any yield improvement or waste elimination upstream of the bottleneck doesn’t improve throughput, because it merely increases the input to a bottleneck whose capacity is already fully consumed. In fact, it can be detrimental to the operation as a whole if it increases WIP and the associated costs for material and operating expenses. The focus should be on downstream processes, where yield improvement or waste elimination effectively increases the capacity of the constraint. Scrap or rework that occurs after the constraint is especially damaging because it essentially requires another pass through the constraint.
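A toy model makes the point concrete. In the sketch below (station capacities and yields are invented), a line is modeled as a simple sequence of stations; a yield improvement upstream of the constraint leaves good output unchanged, while the same improvement downstream lifts it.

```python
def line_throughput(stations):
    """stations: list of (capacity_per_hour, yield_fraction) in flow order.
    Returns good units per hour out of the line, assuming unlimited raw input."""
    flow = float("inf")
    for capacity, yld in stations:
        flow = min(flow, capacity) * yld
    return flow

baseline = [(120, 0.95), (80, 0.90), (150, 0.98)]  # station 2 is the constraint
print(line_throughput(baseline))        # ~70.6 good units/hr

# Perfecting yield upstream of the constraint (station 1) changes nothing:
upstream_fix = [(120, 1.00), (80, 0.90), (150, 0.98)]
print(line_throughput(upstream_fix))    # still ~70.6

# Perfecting yield downstream of the constraint (station 3) lifts output:
downstream_fix = [(120, 0.95), (80, 0.90), (150, 1.00)]
print(line_throughput(downstream_fix))  # 72.0
```

In this simplified model the upstream fix only adds to the pile of WIP waiting at the constraint; only improvements at or after the constraint show up in overall throughput.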
The point is that you can’t assume that all improvements at the micro level are equally beneficial at the macro level. Yes, generally there’s value in reducing variability and eliminating waste, but when your resources are limited and you have to focus, consider the constraint and whether your improvements are really improving the metrics that matter.
Higher Value for Higher Priced Employees
November 22, 2013. Posted by Tim Rodgers in International management, Product design, strategy, Supply chain.
Tags: job security, management, product development, software development, supply chain
You can complain about it, but offshoring is not going away. Businesses will always look to reduce cost, and wherever there’s a significant difference in labor cost, that difference is going to attract interest. I’ve spent almost my entire career working at companies that have moved their supply chain and production factories to locations that have lower labor cost. For manufactured goods this savings must be weighed against other expenses to determine whether there’s a net gain, such as shipping costs and finished goods in-transit. For knowledge work where there’s virtually zero cost to instantly move the output from one part of the world to another (such as software), the advantage is even greater.
You can complain about it, but if you want to justify a higher cost of labor in one part of the world, you have to demonstrate that this labor provides higher value. The added cost must be offset by some benefit, ideally something that can be quantified. It’s important to distinguish between sources of higher value that are fundamental and relatively stable vs. those that can be eroded over time.
Here are some examples:
1. “We know how to do it here, they don’t know how to do it there.” Your design team, factory, and supply base may be well-established in one location, but you’re wrong if you think that can’t be replicated somewhere else. There are smart, well-educated people all over the world, and it’s easier than ever to access their skills, especially for knowledge work. There will be training, start-up, and switching costs, and those will have to be evaluated against the steady-state labor cost savings, but it’s not impossible.
2. Cost of quality. This is related to #1 above. You may be able to produce output at a different location with lower labor cost, but does the quality of that output lead to additional expenses later, such as rework, field repair, and loss of customer loyalty? These can be addressed with specific improvement plans, depending on the causes of poor quality, and are not necessarily permanent conditions. As above, the costs to improve or maintain quality at any location should be compared with the labor savings.
3. Geography. This is an example of a more fundamental difference that may justify higher labor cost. Many businesses benefit from close physical proximity to their customers, enabling them to respond quickly to changes in market demand and mix without the burden of a long finished goods pipeline from their production sites. A hybrid approach is late-point differentiation where platforms are built ahead at low cost and later customized depending on the specific order. Another benefit of geography is co-design, where frequent, real-time interaction with customers leads to a better fit to their requirements. Some companies will overcome this one by using available technology to communicate with remote teams, or performing rapid prototyping locally to verify the design before shifting volume production elsewhere.
Note that geography can also be an overriding factor when there are political or economic barriers, such as regulatory or “local content” requirements.
My point is that if you insist on doing the work in a location with higher labor cost, you can’t assume that the corresponding value will always be worth the higher cost. Your survival as a business depends on your ability to identify, develop, exploit, and maintain a source of competitive advantage. Your choices about labor cost and geographic location should support your strategy to maintain competitive advantage, and that strategy should be regularly reviewed and updated to make sure you’re getting the value you’re paying for.
What Does Almost Done Really Mean?
November 17, 2013. Posted by Tim Rodgers in Communication, Project management.
Tags: communication, management, performance measures, project management
About a year ago I earned my Project Management Professional (PMP) certification after learning the methodologies and structured processes formalized by the Project Management Institute. Almost all of my experience in project management has been in product development, and the PMP training provided a broader perspective on other types of projects. I was particularly intrigued and somewhat amused by the use of quantitative measures of project status based on Earned Value Management (EVM).
I can see why EVM would appeal to a lot of project managers and their sponsors and stakeholders. Everybody wants to know how the project is going and whether it’s on-track, both in terms of schedule and budget. They want a simple, unambiguous answer, without having to look at all the details. The EVM metrics provide project status and a projection of the future, in terms of the value and expenses of the project’s tasks that are already completed and still remaining.
The problem for many projects is that it requires a lot of planning and discipline to use EVM. Not only do you have to generate a full Gantt chart showing all tasks and dependencies, but you also have to estimate the cost and incremental value-added for each of those tasks. That’s going to be just a guess for projects with little historical reference or leverage. Quantitative metrics are generally less valuable when they’re based on a lot of qualitative assumptions, despite the appearance of analytical precision.
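As a quick illustration of what those metrics look like, here’s a worked example with hypothetical figures, using the standard earned-value quantities and the schedule and cost performance indices:

```python
# Hypothetical mid-project snapshot, all figures in dollars.
BAC = 100_000  # budget at completion
PV = 60_000    # planned value: budgeted cost of work scheduled to date
EV = 50_000    # earned value: budgeted cost of work actually completed
AC = 55_000    # actual cost incurred to date

SPI = EV / PV    # schedule performance index: 0.83 -> behind schedule
CPI = EV / AC    # cost performance index: 0.91 -> over budget
EAC = BAC / CPI  # estimate at completion if the cost trend continues: 110,000

print(f"SPI={SPI:.2f}  CPI={CPI:.2f}  EAC=${EAC:,.0f}")
```

The indices are simple ratios, which is exactly their appeal and their weakness: they look precise, but every input rests on the task-level estimates discussed above.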
Whether or not you use EVM, everybody wants to express project status in terms of a percentage. “We’re about 90% done, just a little bit more to go, and we’re looking good to meet the deadline.” This kind of oversimplification often fails to recognize that the pace of progress in the past is not necessarily the pace of the future, especially when sub-projects and their deliverables are integrated together and tested against the requirements. There’s an old saying in software development that the last 10% of any software project takes 90% of the time, which is one of the reasons why agile development techniques have become popular.
While I applaud the attempts to quantify project status, I would assess a project in terms of tasks and deliverables that are either fully completed or not, not “90% complete.” For large projects it’s useful to report deliverable completion status at checkpoint reviews where stakeholders can confirm that previously-agreed-upon milestone criteria have been met. This binary approach (done or not-done) may seem less quantitative, but it’s also less squishy. The overall status of the project is defined by the phase you’re currently in and the most recent milestone completed, which means that all of the tasks leading up to that milestone have been completed.
That still leaves the problem of assessing the likelihood of future success: will the project finish on-time and on-budget? At some point you’re going to have to use your best judgment as a project manager, but instead of trying to distill your status to a single number isn’t it more useful to talk about the remaining tasks, risks, and alternatives? Sometimes more information really is better.
Does Your Company Need a Quality Department?
November 13, 2013. Posted by Tim Rodgers in Management & leadership, Process engineering, Product design, Project management, Quality, Supply chain.
Tags: early stage companies, factory quality, organizational models, outsourcing, process, product development, quality engineering, six-sigma, supply chain
You already have a quality department, you just don’t realize it. Do you have suppliers or service providers? You have people managing supplier quality when you receive parts or services that don’t meet your specifications. Is your product manufactured? Whether you build it yourself or outsource to a contract manufacturer, you’ve got quality issues. Do your customers have problems with your product or service? Somebody in your team is managing your response. Poor quality is costing you money, whether through internal rework or post-sale costs. The question is whether you want to pull all this activity together into a separate, centralized organization.
Some organizations, particularly early stage companies, may feel they can’t afford a dedicated quality team. After all, quality is fundamentally a non-value-added function. It doesn’t contribute directly to the delivery of a product or service. However, we live in a world of variability, where every step in the delivery process can cause defects. You may be passionate about eliminating defects and saving money, but do you really know how? Quality professionals understand how to determine root cause, and they can investigate from an impartial perspective. They have expertise in sampling and statistics, and that enables them to distinguish between a one-time occurrence and a downward trend that requires focused resources.
Do you care about ISO 9001 certification? If you do, you need someone to develop and maintain a quality management system, monitor process conformance, and host the auditors. If you’re in a regulated industry, you need someone to understand and communicate process and documentation requirements throughout your organization. Other responsibilities that could be assigned to the quality team include environmental, health and safety (EHS), new employee training, equipment calibration, and new supplier qualification.
All of these tasks can theoretically be handled by people in other functional groups, but you have to ask yourself whether you’re getting the results your business requires. Organizational design derives from a logical division of labor. The sales team is separate from product (or service) fulfillment so that one group can focus on the customer and another can focus on meeting customer needs. Fulfillment may require separate teams for development (design) and delivery. As the business grows, other functions are typically created to handle tasks that require specialized skills, such as accounting and human resources.
Quality is another example of a specialized function, one that can help identify and eliminate waste and other costs that reduce profit and productivity. Maybe those costs are tolerable during periods of rapid growth, but at some point your market will mature, growth will slow, and you won’t be able to afford to waste money anywhere in your value stream. That’s when you need quality professionals, and a function that can coordinate all the little quality management activities that are already underway in your organization.
Why We Need Quality Police
November 10, 2013. Posted by Tim Rodgers in Management & leadership, Organizational dynamics, Process engineering, Quality.
Tags: early stage companies, management, power, process, quality engineering
I’ve said it myself many times: the quality department shouldn’t be the quality police. We tell ourselves that everyone is responsible for quality, and we therefore ask people to police their own behavior and make the right choices. This sounds good and noble, and it’s certainly more cost-effective than relying on a separate functional group to keep an eye on things.
And yet: it seems to be the only way. We need quality police.
When we’re left on our own, we tend to look for the fastest and easiest way to complete our assignments. We don’t spend much time thinking about the priorities or needs of other groups, or about how our decisions have future consequences. To eliminate chaos, businesses establish work standards and processes to enable coordinated activities and a smooth flow of information. Certainly we want our work processes to be effective, but what matters most are the consistent results that are achieved when everyone follows the process.
Somebody has to keep an eye on all this, to check for process conformance and process improvement opportunities. Managers can monitor the performance of their assigned teams, but a manager will tend to optimize within their team according to their own objectives. Second-level or higher managers have a broader (and possibly cross-functional) perspective, but they probably lack a deep understanding of the work processes themselves.
If you have a quality team, this is their job. They’re the ones who pull together all the processes into a corporate quality management system (QMS). They’re the ones who train and audit the QMS, not just to make sure it’s being followed, but also to make sure it’s meeting the needs of the business. They’re the ones who monitor the performance of the processes to identify opportunities for improvement. And, if you care about ISO 9001 certification, they’re the ones who make sure you “document what you do, and do what you’ve documented.”
This isn’t the quality police looking for “process offenders” and punishing them. This is standardizing processes, reducing variability, and eliminating waste. Doesn’t every business want that?
Firing Customers For Profit
November 7, 2013. Posted by Tim Rodgers in Management & leadership, strategy, Supply chain.
Tags: business development, early stage companies, management, outsourcing, project management, strategy, supply chain
Businesses large and small generally work diligently to satisfy customers, and they’re frequently reminded that the cost of acquiring a new customer is much greater than the cost of retaining an existing one. Unfortunately many of those businesses fail to appreciate that each customer has an incremental cost, not just to acquire, but also to manage. It’s possible that an organization can spend more money to support a customer than what they get in return, which is obviously an undesirable situation.
Early stage companies are particularly susceptible to this kind of trap. In their eagerness to turn their ideas into revenue, they will often incur hidden costs in order to customize products and services for each potential customer. Any customer who is willing to pay looks like a good customer. Geoffrey Moore writes about this in his excellent book “Crossing the Chasm” (HarperBusiness, 1991). The danger is that the company loses economies of scale, leverage and re-use efficiencies, and ultimately the focus that defined the unique profit opportunity in the first place.
Unprofitable customers or segments can be hard to detect. It’s easy to add up the direct material cost of a single product configuration, but you also need to understand how much time your sales and support staff spend with a customer. Does your purchasing team have to manage unique suppliers? Does your quality team perform special tests or inspections? Your indirect labor may be spending a disproportionate amount of time dealing with requirements and requests from customers who squeak.
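A back-of-the-envelope “cost to serve” calculation makes the problem visible. In this sketch (all figures hypothetical), two customers with identical gross margin diverge sharply once indirect support labor is loaded in:

```python
def customer_net_profit(revenue, direct_cost, support_hours, hourly_rate):
    """Net profit after loading in the indirect labor spent serving the customer."""
    return revenue - direct_cost - support_hours * hourly_rate

# Two customers with the same revenue and direct material cost:
a = customer_net_profit(revenue=50_000, direct_cost=30_000,
                        support_hours=40, hourly_rate=75)   # 17,000
b = customer_net_profit(revenue=50_000, direct_cost=30_000,
                        support_hours=300, hourly_rate=75)  # -2,500

print(a, b)  # customer B is unprofitable once support time is counted
```

The hard part in practice isn’t the arithmetic; it’s capturing the support hours at all, since they’re usually spread across sales, purchasing, and quality teams.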
Unprofitable customers are not necessarily bad for the business. Moore writes about segments with “bowling pin potential” that may be a net loss today, but enable the firm to establish foundational processes, move up the learning curve, and leverage and grow in the future. These loss-leaders have long-term strategic value, but it’s important to understand and assess the investment in order to ensure the expected return.
Actually refusing to do business with a customer is extreme and could hurt your reputation, but consider ways to reduce the cost of managing a customer that isn’t currently providing a net profit or enabling future profitability. The firm that fails to understand its “cost to serve” may find itself out of business despite many happy customers.
Are Your Suppliers Really Committed to Quality?
November 6, 2013. Posted by Tim Rodgers in Management & leadership, Process engineering, Quality, Supply chain.
Tags: factory quality, leadership, management, outsourcing, performance measures, process, quality engineering, six-sigma, supply chain, test & inspection, training
Suppliers always declare their commitment to the highest standards of quality as a core value, but many have trouble living up to that promise. I can’t tell you how many times I’ve visited suppliers who proudly display their framed ISO certificates in the lobby yet suffer from persistent quality problems that lead to higher cost and schedule delays. Here’s how you can tell if they’re really serious:
1. Do they have an on-going program of quality improvement, or do they wait until you complain? Do they have an understanding of the sources of variability in their value stream, and can they explain what they’re doing to reduce variability without being asked to do so? Look for any testing and measurements that occur before outgoing inspection. Award extra credit if the supplier can show process capability studies and control charts. Ask what they’re doing to analyze and reduce the internal cost of quality (scrap and rework).
2. Do they accept responsibility for misunderstandings regarding specifications and requirements? Or, do they make a guess at what you want, and later insist they just did what they were told? Quality means meeting or exceeding customer expectations, and a supplier who is truly committed to quality will ensure those expectations are clear before they start production.
3. Do you find defects when you inspect their first articles, or samples from their first shipment? If the supplier can’t get these right when there’s no schedule pressure, you should have serious concerns about their ability to ramp up to your production levels. By the way, if you’re not inspecting a small sample of first articles, you’ll have to accept at least half of the blame for any subsequent quality problems.
4. Has the supplier ever warned you of a potential quality problem discovered on their side, or do they just hope that you won’t notice? I realize this is a sign of a more mature relationship between supplier and customer, but a true commitment to quality means that the supplier understands their role in your value stream, and upholds your quality standards without being asked.
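As a reference point for item 1, a basic capability calculation is short. The sketch below (measurements and specification limits are hypothetical) computes Cpk, the distance from the process mean to the nearer spec limit in units of three standard deviations:

```python
from statistics import mean, pstdev

def cpk(samples, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearer specification limit, in units of three standard deviations."""
    mu, sigma = mean(samples), pstdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical part measurements against spec limits of 9.0-11.0 mm:
samples = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
print(f"Cpk = {cpk(samples, lsl=9.0, usl=11.0):.2f}")
```

A supplier who can produce numbers like this for their critical processes, without being asked, is demonstrating exactly the on-going awareness of variability described above.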
Ultimately, you will get the level of quality you deserve, depending on what suppliers you select and the messages you give them. You may be willing to trade quality for lower unit cost, shorter lead time, or assurance of supply. The real question is: What level of quality do you need? What level of poor quality can you tolerate?
Natural Variation, Outliers, and Quality
November 2, 2013. Posted by Tim Rodgers in Product design, Quality, Supply chain.
Tags: factory quality, performance measures, quality engineering, six-sigma, test & inspection
When you work in quality, people want to tell you when bad things happen. A product has failed in the field. A customer is unhappy. The factory has received a lot of bad parts. You’ve got to figure out the scope of the problem, what to do about it, and how this could have possibly happened in the first place. Is this the beginning of a string of catastrophes derived from the same cause, or is this a one-time event? And, by the way, isn’t it “your job” to prevent anything terrible from happening?
People who haven’t been trained in quality may have a hard time understanding the concept of natural variation. Sometimes bad things happen, even when the underlying processes are well-characterized and generally under control. A six-sigma process does not guarantee a zero percent probability of failure, and of course you probably have very few processes that truly perform at a six-sigma level, especially when humans have the opportunity to intervene and influence the outcome. Every process is subject to variation, even at the sub-atomic level.
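The arithmetic behind that claim is easy to check. The sketch below uses the conventional 1.5-sigma long-term shift and counts only the dominant one-sided tail; even a genuine six-sigma process still produces defects, roughly 3.4 per million opportunities:

```python
from statistics import NormalDist

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities at a given sigma level, applying
    the conventional 1.5-sigma long-term shift and counting only the
    dominant one-sided tail of the normal distribution."""
    z = sigma_level - shift
    return NormalDist().cdf(-z) * 1_000_000

print(f"{dpmo(6):.1f} DPMO at six sigma")   # ~3.4
print(f"{dpmo(4):.0f} DPMO at four sigma")  # ~6210
```

The point isn’t the exact numbers; it’s that the failure probability is small but never zero, and most real processes sit well below six sigma.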
So, how can you tell whether this bad thing is just an outlier, or an indicator of something more serious? And, how will you convince your colleagues that this is not the first sign of disaster? Or, maybe it is. How would you know?
You may possess process capability studies and control charts that show that all processes that contribute to this result are stable and deliver outcomes that are within specifications. That would allow you to show that this incident is a low probability event, unlikely, but not impossible. However, I’m not sure that any organization can honestly say they have that level of understanding of all the processes that influence the quality of their ultimate product or service.
In the absence of hard data to establish process capability, you’re left with judgment. Could this happen again? Was it a fluke occurrence, an accident, an oversight, something that happened because controls were temporarily ignored or overridden? Or, are there underlying influences that would make a reoccurrence not just possible, but likely? These are the causes that are hard to address, because they force organizations to decide how important quality really is. Is the organization really prepared to do what is necessary to prevent any quality failure? What level of “escapes” is acceptable, and how much is the organization willing to spend to achieve that level? While it’s easy to find someone to blame for the bad thing, it’s harder to understand how it happened, and how to prevent it from happening again.