
What Is the Quality Team Responsible For? (Part 2) January 11, 2016

Posted by Tim Rodgers in Process engineering, Quality.

If “everyone is responsible for quality,” then what is the quality team responsible for? This isn’t a trick question. If a team or department (or person) doesn’t have a clear, distinct, and ideally unique assigned responsibility, then should they continue to exist as a separate entity in the organization? Shouldn’t they be doing something else instead, as part of another team?

Of course many businesses don’t have a separate quality team or department at all, and others have chosen to eliminate the quality department as an independent function. That doesn’t necessarily mean that they don’t care about quality. Some of these businesses would probably argue that they have a greater commitment to quality because those principles and tools are fully integrated into all of their functions and processes. Why should all of the Six Sigma Green Belts and Black Belts be located in one central organization? Why not build local competencies within the functional groups, whether in new product development or marketing or finance?


Quality Under Constraints: Making the Best of It July 9, 2014

Posted by Tim Rodgers in Management & leadership, Product design, Quality.

Lately I’ve been seeing news reports that illustrate the difficult environment that most quality professionals operate in. Here’s one example: executives from the US Chemical Safety Board (CSB) were recently called to testify before the US House of Representatives Committee on Oversight and Government Reform to address recent, highly-publicized delays and whistle-blower complaints. Former board members and employees have described a dysfunctional culture where criticism of management is considered “disloyal.” Independent investigators have reported a large and growing backlog of unfinished investigations, a situation made worse by employee attrition. The former employees report a failure to prioritize the pending investigations, “nor is there any discussion of the priorities.” The current CSB Chairman cited a lack of resources in his testimony: “We are a very small agency charged with a huge mission of investigating far more accidents than we have the resources to tackle.”

Obviously the report of a dysfunctional culture at the CSB is something that should be seriously investigated and addressed. However, my interest in this story is the struggle to prioritize investigations, do a thorough job, and close them out while operating within a constrained budget and an increasing workload. I think everyone has to deal with this kind of problem in their work: too much to do and not enough time or resources to do it all with the level of completeness and quality that we would like. The old joke is that you can’t have cost, schedule, and quality all at once; you can only pick two.

However, people who work in quality feel this problem more acutely than most. After all, you can directly measure cost and schedule, but it’s a lot harder to measure quality objectively. Quality professionals deal with statistical probabilities and risks, rarely with 100% certainties. In most cases, all you can do is minimize the risk of failure within the given constraints, and make sure everyone understands the inherent assumptions.

A good example is the hardware product development environment. The release schedule and ship dates are often constrained by contractual commitments to channels or customers. If the design work runs longer than planned, as it almost always does when you’re doing something new, the time to fully test and qualify the design before going into production gets squeezed. This is the same problem that happens with teams that use the old waterfall model for software development.

Yes, you shouldn’t wait until the end of the project to start thinking about quality, and there are certainly things you can do to enhance quality while you’re doing the work, and sometimes quality itself can be a constraint (as in highly-regulated environments). However I contend that managing quality will always be about prioritizing; applying good judgment based on experience, and data, and statistical models; and generally doing the best you can within constraints. Ultimately our success in managing quality should be judged by the soundness of our processes and methods, and our commitment to continuous improvement.

For the CSB, these are the questions I would ask if I were on the House Committee looking into their effectiveness: What is the quality requirement for their investigation reports, and how well do they understand it? How much faster could they release reports if those requirements were changed, while still operating under a limited staff and budget? What is their process for prioritizing investigations? CSB management should certainly be changed if the work environment has become dysfunctional, but they should also be changed if they can’t articulate a clear process for managing quality within the constraints they’ve been given.


Quality, Rework, and Throughput March 3, 2014

Posted by Tim Rodgers in Process engineering, Quality, Supply chain.

Some years ago when I was managing a software quality department I got into a heated conversation with one of my colleagues about our testing. “If your team would just stop finding defects, we can wrap up this project.” I had to point out that it wasn’t my team that introduced the defects in the first place. Of course no one deliberately does that. Software engineers want to write new code, not fix their (or anyone else’s) mistakes.

Quality isn’t only important for external customers and end-users. Internal operations should also improve quality as a way of focusing limited resources on value-added activities. In manufacturing, repair and rework are part of the “hidden factory” that reduces throughput and can prevent the plant from running profitably.

This isn’t hypothetical. At the China factory where I worked my goal was to improve end-of-line yield to the point where we could eliminate a single rework station for each production line. With more than 30 lines and 2 shifts that added up to some significant savings as well as an increase in capacity.
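The arithmetic behind that decision is simple enough to sketch. The line and shift counts below come from the post; the labor cost and working days are illustrative assumptions, not the factory's actual figures:

```python
# Back-of-envelope estimate of savings from eliminating one rework
# station per line. Cost figures are assumed for illustration only.

lines = 30             # production lines (from the post)
shifts = 2             # shifts per day (from the post)
stations_per_line = 1  # rework stations eliminated per line

operator_cost_per_shift = 40.0  # assumed fully-loaded cost per operator-shift (USD)
working_days_per_year = 300     # assumed factory calendar

operators_freed = lines * shifts * stations_per_line
annual_savings = operators_freed * operator_cost_per_shift * working_days_per_year

print(operators_freed)  # 60 operator-shifts freed per day
print(annual_savings)   # 720000.0
```

Even with conservative assumptions, removing a single station per line compounds quickly across 30 lines and 2 shifts, before counting the capacity gained.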

We’re human, mistakes will be made, and the complexity of our designs virtually guarantees that there will be unexpected outputs and interactions when we put those designs to the test in a service environment. However, we shouldn’t accept the frictional cost of fixing defects as an inherent inefficiency in the value delivery system. A defect found today is worth more than a defect found tomorrow, but a defect prevented by better design of products and processes has a far greater impact.

Something New: A Personal Web Site January 6, 2014

Posted by Tim Rodgers in job search, Management & leadership, Process engineering, Project management, Quality, strategy, Supply chain.

Over the last few weeks I’ve been busy working on a personal web site that I launched on December 25. My intention is to provide more depth about my expertise and accomplishments than what can be inferred from a two-page resume or a LinkedIn profile. This has been on my to-do list since 2012, and we’ll see how it’s received during my current “in transition” phase.

It’s been an interesting exercise, reviewing my work history and classifying my methods and results in a series of PowerPoint slides. One nice thing about getting older is that you start to figure out what you’re good at, and how to focus on those strengths. I can see common threads running through the projects in my career: an emphasis on cross-functional collaboration, strategic business alignment, and performance measures. I’ve been repeatedly attracted by opportunities to identify improvements and lead change initiatives.

I’ve already written blog posts here about some of the topics, but now they’re illustrated in more detail on the web site thanks to PowerPoint. I’m sure I’ll tinker with it in the coming months, adding some new content and tweaking the slides.

Here’s the link: http://timwrodgers.wix.com/timwrodgers. Let me know what you think.

The Power of Three (Defect Categories) December 5, 2012

Posted by Tim Rodgers in Project management, Quality.

During the last few weeks of software projects at HP we would hold cross-functional discussions about open defects to decide, essentially, whether or not to fix them. We considered the probability or frequency of the defect’s occurrence and the severity or impact of the defect to the end-user, then assigned the defect to a category that was supposed to ensure that the remaining time on the project was spent addressing the most important outstanding issues.

I don’t remember exactly how many different categories we had in those days (at least five), but for some reason we spent hours struggling with the “correct” classification for each defect. I do recall a lot of hair-splitting over the distinction between “critical,” “very high,” and “high” which seemed very important at the time. Regardless, everyone wanted their favorites in a high-priority category to make it more likely that they would get fixed.

I think we could have saved a lot of time if we had used just three categories: (1) must fix, (2) might fix, and (3) won’t fix. That covers it. Nothing else is necessary. The first group are those defects you must fix before release. The second group are the ones that you’ll address if you have time after you run out of the must-fix defects. The third group are the ones you aren’t going to fix regardless of how much time you have. To be fair, the might-fix defects should be ranked in some order of priority, but at that point you’ve already addressed the must-fix defects and it won’t matter much which defects you choose.
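The three-bucket scheme can be sketched in a few lines. The 1-to-5 probability and severity scales and the score thresholds below are illustrative assumptions, not the classification HP actually used:

```python
# Minimal sketch of three-bucket defect triage: must fix / might fix / won't fix.
# Probability and severity are assumed to be rated 1 (low) to 5 (high);
# the thresholds are arbitrary illustrative choices.

def triage(probability, severity, must_threshold=12, wont_threshold=4):
    """Classify a defect by a simple risk score (probability x severity)."""
    score = probability * severity
    if score >= must_threshold:
        return "must fix"
    if score > wont_threshold:
        return "might fix"
    return "won't fix"

# Hypothetical defects: (name, probability, severity)
defects = [("crash on save", 4, 5),    # score 20 -> must fix
           ("slow search", 3, 3),      # score 9  -> might fix
           ("typo in tooltip", 2, 1)]  # score 2  -> won't fix

for name, p, s in defects:
    print(name, "->", triage(p, s))
```

The point isn't the particular thresholds; it's that three buckets force a decision, while five or more categories invite hair-splitting over labels that don't change what gets worked on.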

I’m not a psychologist, but I think there’s a big difference between trying to classify things into three categories vs. trying to classify things into more than three categories. I think the human brain gets a little overwhelmed by too many choices. Two might seem better than three because it forces a binary selection, but I think it’s a good idea to compromise and allow for a “maybe” category rather than endure endless indecision.

Managing Quality Without Data: Don’t Try This At Home November 16, 2012

Posted by Tim Rodgers in Management & leadership, Quality.

I know from experience how hard it is to measure, analyze, improve, and control quality in a high-stakes or high-volume production environment. There’s never enough data, and there’s constant pressure to draw conclusions and make decisions. What can we do to fix this defect? Did the process change work? When can we get the line running again? Sample sizes are usually too small to determine whether differences are statistically significant. One data point on a run chart will be taken as evidence that things are trending in the right direction. One defective product chosen at random “proves” that the whole batch is bad.
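The flip side is just as misleading: even when the true defect rate is unchanged, a small sample will often show no defects at all. A minimal sketch, with an assumed defect rate and sample size:

```python
# How likely is a completely clean sample even when defects are really there?
# The 2% defect rate and sample of 30 below are assumed for illustration.

def p_zero_defects(defect_rate, sample_size):
    """Probability a random sample shows no defects despite a real defect rate."""
    return (1 - defect_rate) ** sample_size

clean = p_zero_defects(0.02, 30)
print(round(clean, 3))  # about 0.55 -- a clean sample of 30 proves very little
```

More than half the time, a 30-unit sample from a process running at 2% defective looks perfect, which is exactly why one clean sample (or one bad unit) shouldn't be treated as proof of anything.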

It’s worse when there’s no data at all, by which I mean the lack of a reliable source of objective, unfiltered, and unbiased data. You can’t run an effective quality system without links to the factory’s internal information systems: ERP, MRP, and other shop floor control and measurement systems. Data that’s automatically collected and reported in the course of normal production is less likely to be manipulated to make the situation look better (or worse) than it is. Data that’s manually collected is acceptable but less trustworthy, unless repeatability and reproducibility have already been established for the operators.

There are too many people and constituencies with a vested interest when it comes to quality, people who want to believe that quality is always good, or at least good enough. It’s not fun for anyone when the production line is down or field failures are up. It’s easy to discount or ignore data as outliers when they don’t fit the desired story. There are also real situations where data may be suppressed or even fabricated. I think you’re better off with no data than with data that’s been compromised, but of course the better solution is to improve data integrity before making any changes to improve quality.

Professionals and Workers August 22, 2012

Posted by Tim Rodgers in Management & leadership, Organizational dynamics.

I’ve been going through some old files now that I have some (unwelcome) time on my hands, and I found a PowerPoint presentation from 1997. I had just taken a new position, managing a software test team that was suffering from low morale. There was a widespread feeling that the work was unimportant and unappreciated by the rest of the organization. That could have been the beginning of a downward spiral, especially if higher performers were able to find more rewarding jobs elsewhere.

I can’t remember where I first saw this contrast between those who are workers and those who are professionals, but it inspired the team and instilled a new sense of pride when I turned it into a presentation for an all-hands meeting:

  • A worker is a robot, operated by a manager under remote control; a professional is an independent human being.
  • A worker is focused on boss, activity, and task; a professional is focused on customer, result, and process.
  • A worker performs tasks and follows instructions; a professional is responsible for performing work and assuring its successful completion.
  • A worker is characterized by obedience and predictability; a professional is characterized by intelligence and autonomy.
  • A worker is trained; a professional learns.
  • A worker has a job; a professional has a career.
  • A worker inhabits a precisely defined job and operates under close supervision; a professional is constrained by neither.

Also:

  • A professional sees themselves as responsible to the customer. Solve the problem and create value. If the problem is not solved or value is not created, the professional has not done their job.
  • Once provided with knowledge and a clear understanding of the goal, a professional can be expected to get there on their own.
  • A professional must be a problem solver able to cope with unanticipated and unusual situations without running to management for guidance.
  • Professionals ignore petty differences and distinctions within an organization. When we are all focused on results, the distinction between my work and your work becomes insignificant.
  • A professional career does not concentrate on position and power, but on knowledge, capability, and influence.
  • The professional’s career goal is to become a better professional and thereby reap the rewards of better performance.

Measuring Test Effectiveness: Three Views August 20, 2012

Posted by Tim Rodgers in Product design, Quality.

I’ve managed testing organizations supporting both hardware and software development teams, and I often had to explain to folks that it’s not possible to “test-in quality.” Quality is determined by the design and the realization of that design, whether through software coding or hardware manufacturing. Testing can measure how closely that realization matches the design intent and customer requirements, but testing can’t improve the quality of a poorly-conceived design (and neither can a flawless execution of that design).

So, how can you measure the effectiveness of a test program? Here are three ways that make sense to me:

1. Testing should verify all design requirements and all possible failure modes. This means there should be at least one test case associated with each functional, cosmetic, reliability, regulatory, and other requirement. Also, each failure mode predicted from an FMEA review should have a corresponding test to determine the design margin.

2. Testing should be designed to eliminate escapes, or at least make their occurrence a statistical improbability that is acceptable to the business. An escape is any defect found by the end-user. It may not be economically feasible to achieve zero defects, but any reported escape is an opportunity to improve test coverage. Is there an existing test case corresponding to this defect? Was the test performed and reported correctly? Does this test have to be run on a larger sample size to improve the confidence level?

3. Testing should report defects that get fixed. Testing is buying information, and if that information has no value to the organization, then it’s not a good use of resources. When I managed software quality I looked at the “signal-to-noise ratio,” or the percentage of all defects reported that were fixed by the development team. Defects that are not fixed are either potential escapes that should be discussed as a business risk, or they’re “noise” that waste money and distract the development team.
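The signal-to-noise ratio in the third point is straightforward to compute from a defect tracker's export; the record format and resolution labels below are illustrative assumptions:

```python
# "Signal-to-noise ratio" as described above: the fraction of all reported
# defects that the development team actually fixed. Field names are
# illustrative, not from any particular defect-tracking tool.

def signal_to_noise(defects):
    """Return the share of reported defects resolved as fixed (0.0 if none reported)."""
    if not defects:
        return 0.0
    fixed = sum(1 for d in defects if d["resolution"] == "fixed")
    return fixed / len(defects)

reports = [
    {"id": 1, "resolution": "fixed"},
    {"id": 2, "resolution": "won't fix"},
    {"id": 3, "resolution": "fixed"},
    {"id": 4, "resolution": "not reproducible"},
]

print(signal_to_noise(reports))  # 0.5
```

A low ratio is a prompt for discussion, not an automatic verdict: the unfixed half is either accepted business risk (potential escapes) or noise that the test design should filter out.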

It may not be possible to test-in quality, but poorly-designed testing will surely frustrate your efforts to achieve the required level of quality.

The Zero Defects Quest July 1, 2012

Posted by Tim Rodgers in Management & leadership, Product design, Quality.

A few years ago I had a conversation with some of the engineers in a quality team I managed. The engineers were struggling to understand the business’s stated goal of “zero defects,” puzzled over what it would take to get there and worried about the consequences of falling short. They wanted the goal changed to something less demanding that could be realistically achieved, or some acknowledgement by senior management that “zero defects” was nothing more than a slogan to inspire employees, customers, and shareholders.

I was sympathetic. I understood that it was probably impossible to achieve zero defects given the complexity of the design and the influence of many inputs and processes that were difficult to control. I believe in the concept of good-enough quality, which weighs the level of quality and reliability of the organization’s output against the cost required to achieve it. Beyond the level of quality required or expected by the customer, I believe there’s an asymptotic relationship between defects and cost, and working to further reduce the number of defects will require ever-increasing expenses that are unlikely to be recovered through higher prices or larger market share.
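That asymptotic relationship can be illustrated with a toy model. The inverse-proportional formula below is an assumption chosen for illustration, not an established cost-of-quality law:

```python
# Toy model of the asymptotic cost/defect relationship: pushing the defect
# rate toward zero requires ever-increasing spending. The inverse formula
# is an illustrative assumption, not a measured cost curve.

def cost_to_achieve(defect_rate, base_cost=1.0):
    """Assumed prevention cost: grows without bound as the defect rate -> 0."""
    return base_cost / defect_rate

for rate in (0.10, 0.01, 0.001):
    print(rate, round(cost_to_achieve(rate), 6))
```

Each extra "nine" of quality costs roughly an order of magnitude more in this model, which is the intuition behind good-enough quality: past the customer's requirement, the added cost is unlikely to be recovered.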

I was, however, concerned about the slippery slope and what would happen if we became accustomed to a lower level of quality as an acceptable deliverable. I wanted to create a culture where people were dissatisfied with any defect, regardless of whether it was economically feasible to address all defects. I wanted to change the goal from “zero defects” to “all defects analyzed, root cause determined, prevention strategies proposed, and resolution (or not) openly agreed to.” It wasn’t an inspiring slogan that fit on a poster, but when people realized that every defect would require discussion as an opportunity for improved quality, quality started to improve without a significant increase in cost. I don’t think we would have gotten there with an unrealistic goal that was cynically ignored.

New Product Introduction (NPI) Transitions June 21, 2012

Posted by Tim Rodgers in Project management, Quality, Supply chain.

Whether you’re working on a hardware product or a software release, it’s obviously a good idea to build some prototypes and evaluate the cost, performance, and reliability of the design before offering it for sale. Many organizations follow some kind of formal development lifecycle, which is essentially a structured process with well-defined and documented phases, each phase having a set of activities assigned to different functional groups that are supposed to be completed, and exit criteria that are supposed to be met before moving on to the next phase. Senior management is naturally interested in monitoring the progress of the development project, and it’s convenient to use the transition from one phase to another as a time for a checkpoint meeting.

A checkpoint meeting can be a simple project update, but organizations with a formal lifecycle will claim that a checkpoint is actually a phase gate, meaning that the project team must present evidence that they have completed all the required tasks and satisfied all the exit criteria. If those criteria are not met, the project team cannot move on to the next phase.

Unfortunately the reality for most organizations is that except for some scolding from the senior managers there are typically few consequences for failing to satisfy the requirements of a phase gate. Schedule commitments tend to overwhelm process fidelity. The project team may seek a waiver for some of the exit criteria, either promising to complete them at a later date TBD, or insisting that their project is special and therefore exempt from those “rules.” The quality sticklers who believe in blind adherence to the documented process get mad, and everybody gets a little more cynical about processes in general.

Phase reviews are not just a date on the calendar. Phase transitions are supposed to signal a change in the behavior of the design team and the supporting functional groups. In the world of hardware development one of the most important transitions is from qualifying the design to qualifying the factory production processes. It’s generally a bad idea to make design changes while trying to stabilize the manufacturing line and reduce variability. It may go by different names, but there should be a checkpoint where the project team demonstrates that they have verified that the design meets the committed customer and business requirements. There’s a later checkpoint where the team demonstrates that they can manufacture that design at the required production volume, with every unit meeting the committed customer and business requirements.

There’s a similar phase transition in the world of software development, specifically for teams that don’t follow an agile/scrum model. It’s often called the functionally-complete checkpoint, where all of the code supporting the required functionality of the software has been checked in and verified. This is an important checkpoint for the software quality team because it doesn’t make sense to run the full suite of system tests beforehand.

Two additional recommendations regarding development lifecycles and phase transitions:

1. Keep it simple. Often the descriptions of phase activities and exit criteria are burdened with a lot of detailed requirements that really aren’t critical to the transition. They are all probably important things that need to happen at some point in the project, but if they’re not truly required to be completed before moving to the next phase, just track them like all other project tasks.

2. I believe in the product development lifecycle because I believe that signaling a change in how the team is supposed to behave is more effective than trying to put all the project tasks on a Gantt chart and track their completion. If the phase transitions are not strictly enforced, then the “checkpoint meeting” is just a project update. Don’t fool yourself that it means anything else.
